Academia Arena




Case 1:

m = m₀ / (1 − v²/c²)^(1/2)

For v = 30 km/s = 3 × 10⁴ m/s:

m = 1.000000005 m₀



Case 2:

m = m₀ exp(v²/2c²)

For v = 30 km/s = 3 × 10⁴ m/s:

m = 1.000000005 m₀

Conclusion: for velocity v = 30 km/s, both equations give a mass of m = 1.000000005 m₀. The equation m = m₀ exp(v²/2c²) therefore also implies that the mass of a non-relativistic particle varies with velocity; however, since m = 1.000000005 m₀, the variation of mass is negligible.
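A minimal numerical check of the two formulas above (a sketch in Python, using the rounded value c = 3 × 10⁸ m/s quoted in the text):

    import math

    c = 3.0e8    # speed of light in vacuum, m/s
    v = 3.0e4    # 30 km/s, the test speed used above, m/s

    beta2 = (v / c) ** 2

    # Case 1: exact relativistic factor m/m0 = 1 / sqrt(1 - v^2/c^2)
    factor_exact = 1.0 / math.sqrt(1.0 - beta2)

    # Case 2: exponential form m/m0 = exp(v^2 / 2c^2)
    factor_exp = math.exp(beta2 / 2.0)

    print(factor_exact, factor_exp)   # both print ~1.000000005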

When Einstein was 26 years old, he calculated precisely how energy must change if the relativity principle was correct, and he discovered the relation E = mc² (which led to the Manhattan Project and ultimately to the bombs that exploded over Hiroshima and Nagasaki in 1945). This is now probably the only equation in physics that even people with no background in physics have at least heard of, and they are aware of its prodigious influence on the world we live in. And since c is constant (the maximum distance light can travel in one second is 3 × 10⁸ meters), this equation tells us that mass and energy are interconvertible: they are two different forms of the same thing and are in fact equivalent. Suppose a mass m is converted into energy E; the resulting energy carries mass m and moves at the speed of light c. Hence the energy E is defined by E = mc². As we know, c² (the speed of light multiplied by itself) is an astronomically large number: 9 × 10¹⁶ m²/s². So if we convert even a small amount of mass, we get a tremendous amount of energy. For example, if we convert 1 kg of mass, we get 9 × 10¹⁶ joules of energy (more than a million times the energy released in a chemical explosion). Perhaps because c is not just a constant (the maximum distance light can travel in one second) but rather a fundamental feature of the way space and time are married to form space-time, one can think that in the presence of unified space and time, mass and energy are equivalent and interchangeable. But WHY? The question lingers, unanswered. Until now.
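As a quick arithmetic check of the figures quoted above (a sketch; the comparison against a chemical explosion uses an assumed scale of roughly 4 × 10⁹ J per ton of TNT, which is not a figure from the text):

    c = 3.0e8              # speed of light, m/s
    m = 1.0                # mass converted, kg

    E = m * c ** 2         # rest energy released, joules
    print(E)               # 9e16 J

    ton_of_tnt = 4.2e9     # assumed energy of one ton of TNT, J
    print(E / ton_of_tnt)  # ~2e7 tons of TNT equivalent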

The black holes of nature are the most perfect macroscopic objects there are in the universe: the only elements in their construction are our concepts of space and time.

Subrahmanyan Chandrasekhar (1910 – 1995)



However, the equation E = mc² has some remarkable consequences. (For example, the conversion of less than 1% of 2 pounds of uranium into energy is what powered the atomic bomb dropped on Hiroshima; and a body at rest still contains energy. When a body is moving, it carries an additional energy of motion called kinetic energy. In chemical and nuclear interactions, kinetic energy can be converted into rest energy, which is equivalent to generating mass; and rest energy can be converted into kinetic energy. In that way, chemical and nuclear interactions can generate kinetic energy, which can then be used to run engines or blow things up.) Because E = mc², the energy which a body possesses due to its motion adds to its rest mass. This effect is only really significant for bodies moving at speeds close to the speed of light. For example, at 10 percent of the speed of light a body’s mass M is only 0.5 percent more than its rest mass m, while at 90 percent of the speed of light it would be more than twice its rest mass. As a body approaches the speed of light, its mass rises ever more quickly, tending towards infinity; and since an infinite mass cannot be accelerated any further by any force, the issue of infinite mass remains an intractable problem. For this reason all bodies are forever confined by relativity to move at speeds slower than the speed of light. Only the tiny packets/particles of light (dubbed “photons” by the chemist Gilbert Lewis) that have no intrinsic mass can move at the speed of light. There is little disagreement on this point. Now, being more advanced, we do not just accept conclusions such as “photons have no intrinsic mass”; we constantly test them, trying to prove or disprove them. So far, relativity has withstood every test. And try as we might, we can measure no mass for the photon; we can only put upper limits on what mass it can have. These upper limits are determined by the sensitivity of the experiment we are using to try to weigh the photon. The latest number shows that a photon, if it has any mass at all, must have a mass of less than 4 × 10⁻⁴⁸ grams; for comparison, the electron has a mass of 9 × 10⁻²⁸ grams. Moreover, if the mass of the photon were not taken to be zero, quantum mechanics would be in trouble, and it is also an uphill task to conduct an experiment that proves the photon mass to be exactly zero.

Tachyons, a putative class of hypothetical particles (with negative mass squared, m² < 0), are believed to travel faster than the speed of light. But the existence of tachyons is still in question, and if they exist, how they could be detected remains an open question. However, on one thing most physicists agree: just because we haven’t found anything yet that can go faster than light doesn’t mean that we won’t one day have to eat our words. We should be more open-minded to other possibilities that just may not have occurred to us. Moreover, in expanding space the recession velocity keeps increasing with distance. Beyond a certain distance, known as the Hubble distance, it exceeds the speed of light in vacuum. But this is not a violation of relativity, because recession velocity is caused not by motion through space but by the expansion of space.
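The two percentages quoted above follow directly from the relativistic mass factor; a minimal sketch:

    import math

    def mass_factor(beta):
        # M/m = 1 / sqrt(1 - v^2/c^2), with beta = v/c
        return 1.0 / math.sqrt(1.0 - beta ** 2)

    print(mass_factor(0.1))   # ~1.005 -> about 0.5 percent more than the rest mass
    print(mass_factor(0.9))   # ~2.29  -> more than twice the rest mass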

“His work has given one of the most powerful of all impulses to the progress of science. His ideas will be effective as long as physical science lasts,” Einstein wrote about Max Planck (1858—1947).

The first step toward quantum theory had come in 1900, when the German scientist Max Planck, in Berlin, discovered that the radiation from a body glowing red-hot was explainable only if light could be emitted or absorbed in indivisible discrete pieces, called quanta, each quantum behaving very much like a point particle of energy E = hν. In one of his groundbreaking papers, written in 1905 when he was at the patent office, Einstein showed that Planck's quantum hypothesis could explain what is called the photoelectric effect − the way certain metals give off electrons when light falls on them − discovered by the German physicist Heinrich Hertz in 1887. He attributed a particle nature to the photon (which amounted to a crisis for classical physics around the turn of the 20th century and provided proof of the quantization of light), considered the photon as a particle of mass m = hν/c², and said that the photoelectric effect is the result of an elastic collision between a photon of the incident radiation and a free electron inside the photosensitive metal. During the collision the electron absorbs the energy of the photon completely. A part of the absorbed energy hν of the photon is used by the electron in doing work against the surface forces of the metal; this part of the energy (hν₁) represents the work function W of the metal. The other part (hν₂) of the absorbed energy hν manifests as the kinetic energy (KE) of the emitted electron, i.e.,

hν₂ = KE

But hν₂ = p₂c (where p₂ is the corresponding momentum and c is the speed of light in vacuum) and KE = pv/2, where p is the momentum and v is the velocity of the ejected electron. Therefore p₂c = pv/2. If we assume that p₂ = p, i.e., that the momentum p₂ manifests completely as the momentum p of the ejected electron, then

v = 2c


Nothing can travel faster than the speed of light in vacuum; this is the central principle of Albert Einstein’s special theory of relativity (which resolved the conflict with James Clerk Maxwell’s laws of electromagnetism, which imply that one cannot catch up with a departing beam of light, by overturning our understanding of space and time). If the electron, with rest mass 9.1 × 10⁻³¹ kg, travelled with velocity v = 2c, the fundamental rules of physics would have to be rewritten. However, v = 2c is meaningless, since the non-relativistic electron can only travel with velocity v << c. Hence p₂ ≠ p. This means that only a part (p₂A) of the momentum p₂ manifests as the momentum p of the ejected electron.

p₂ = p₂A + p₂B

p₂ = p + ?
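A rough numerical illustration of why p₂ ≠ p for a non-relativistic photoelectron (a sketch; the 1 eV kinetic energy is an assumed, typical value, not a figure from the text):

    import math

    c  = 3.0e8        # speed of light, m/s
    me = 9.1e-31      # electron rest mass, kg
    KE = 1.602e-19    # assumed kinetic energy of the ejected electron: 1 eV, in joules

    p  = math.sqrt(2.0 * me * KE)   # non-relativistic momentum of the ejected electron
    p2 = KE / c                     # momentum corresponding to the photon energy h*nu2 = KE

    print(p, p2, p / p2)   # p exceeds p2 by roughly three orders of magnitude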

The stopping potential Vₛ required to stop an electron of charge e (= −1.602 × 10⁻¹⁹ coulombs) and kinetic energy KE emitted from a metal surface is calculated using the equation:

KE = e × Vₛ

If the kinetic energy of the emitted electron is zero, i.e., KE = 0, then the stopping potential Vₛ required to stop it is also zero. Under this condition e = KE / Vₛ = 0/0, i.e., the charge of the electron becomes UNDEFINED. There can be no bigger limitation than this: the electron charge cannot be undefined, because e = −1.602 × 10⁻¹⁹ coulombs.
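For a nonzero kinetic energy, the stopping potential follows directly from KE = e × Vₛ; a minimal sketch with an assumed 2 eV photoelectron:

    e  = 1.602e-19    # magnitude of the electron charge, coulombs
    KE = 3.204e-19    # assumed kinetic energy of the emitted electron, joules (2 eV)

    V_stop = KE / e   # stopping potential, volts
    print(V_stop)     # 2.0 V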

E = hν implies that the energy a photon can have is proportional to its frequency: a larger frequency (shorter wavelength) means a larger photon energy, and a smaller frequency (longer wavelength) means a smaller photon energy. Because h is constant, the energy and frequency of the photon are equivalent and are different forms of the same thing. And h − one of the most fundamental numbers in physics, ranking alongside the speed of light c, and the number that confines most of these radical departures from life-as-usual to the microscopic realm − is incredibly small: about 6 × 10⁻³⁴ joule seconds (a decimal point followed by 33 zeros and a 6). Numerically, then, the frequency of a photon is always far greater than its energy, and it takes an enormous number of quanta to radiate even ten thousand megawatts. Some say the only thing that quantum mechanics (the great intellectual achievement of the first half of the twentieth century) has going for it, in fact, is that it is unquestionably correct. Because Planck's constant is almost infinitesimally small, quantum mechanics is a theory of little things. Suppose instead that this number had been enormous, say h = 6.625 × 10³⁴ J·s; then the wavelength of a photon would be very large. Since the area of a photon is proportional to the square of its wavelength, the photon would be large enough to be considered macroscopic, and quantum mechanical effects would be noticeable for macroscopic objects. For example, the de Broglie wavelength of a 100 kg man walking at 1 m/s would then be h/mv = (6.625 × 10³⁴ J·s) / (100 kg × 1 m/s) = 6.625 × 10³² m, far too large to go unnoticed.

The work on atomic science in the first thirty-five years of the twentieth century took our understanding down to lengths of a millionth of a millimeter. Then we discovered that protons and neutrons are made of even smaller particles called quarks (named by the Caltech physicist Murray Gell-Mann, who won the Nobel Prize in 1969 for his work on them). We might indeed expect to find several new layers of structure more basic than the quarks and leptons that we now regard as elementary particles. Are there elementary particles that have not yet been observed and, if so, which ones are they and what are their properties? What lies beyond the quarks and the leptons? If we find answers to these questions, the entire picture of particle physics could be quite different.
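A sketch of the de Broglie calculation above, comparing the real Planck constant with the hypothetical enormous value the paragraph imagines:

    h_real = 6.626e-34   # Planck's constant, J s
    h_huge = 6.625e34    # hypothetical enormous value imagined above, J s

    m = 100.0            # mass of the walking man, kg
    v = 1.0              # walking speed, m/s

    wavelength_real = h_real / (m * v)   # ~6.6e-36 m: far too small to notice
    wavelength_huge = h_huge / (m * v)   # ~6.6e32 m: absurdly macroscopic

    print(wavelength_real, wavelength_huge)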

“Another very good test some readers may want to look up, which we do not have space to describe here, is the Casimir effect, where forces between metal plates in empty space are modified by the presence of virtual particles. Thus virtual particles are indeed real and have observable effects that physicists have devised ways of measuring. Their properties and consequences are well established and well understood consequences of quantum mechanics.”

― Gordon L. Kane

Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of Nature, a publication that caused an explosion in biochemistry and transformed the science. Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method, which partially supported the Watson and Crick model; the same issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis likewise supported the double-helix molecular model of DNA. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. From each gene's point of view, the 'background' genes are those with which it shares bodies in its journey down the generations. DNA (deoxyribonucleic acid), one of the main families of polynucleotides in living cells, is known to occur in the chromosomes of all cells, and its coded characters spell out specific instructions, for example for building willow trees that will shed a new generation of downy seeds. Most forms of life, including vertebrates, reptiles, craniates, suckling pigs, chimps, dogs, crocodiles, bats, cockroaches, humans, worms, and dandelions, carry in each cell of their body so much information, encoded in a replicator molecule called DNA, that a live reading of that code at a rate of one letter per second would take thirty-one years, even if reading continued day and night. Just as protein molecules are chains of amino acids, so DNA molecules are chains of nucleotides. Linking the two chains of the DNA are pairs of nitrogenous bases (purines + pyrimidines). There are four such bases: adenine "A", cytosine "C", guanine "G", and thymine "T". An adenine (purine) on one chain is always matched with a thymine (pyrimidine) on the other chain, and a guanine (purine) with a cytosine (pyrimidine). Thus DNA exhibits all the properties of genetic material, such as replication, mutation, and recombination; hence it is called the molecule of life. We need DNA to create enzymes in the cell, but we need enzymes to unzip the DNA. Which came first, proteins or protein synthesis? If proteins are needed to make proteins, how did the whole thing get started? We need precision genetic experiments to know for sure.
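The base-pairing rule just described (A with T, G with C) can be expressed as a tiny sketch:

    # Complementary strand under Watson-Crick pairing: A-T and G-C
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand: str) -> str:
        return "".join(PAIR[base] for base in strand)

    print(complement("ATGC"))   # TACG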

The backwards-moving electron when viewed with time moving forwards appears the same as an ordinary electron, except that it is attracted to normal electrons - we say it has a positive charge. For this reason it's called a positron. The positron is a sister particle to the electron, and is an example of an anti-particle...This phenomena is general. Every particle in Nature has an amplitude to move backwards in time, and therefore has an anti-particle. (Feynman, 1985)

For many years after Newton, partial reflection by two surfaces was happily explained by a theory of waves, but when experiments were made with very weak light hitting photomultipliers, the wave theory collapsed: as the light got dimmer and dimmer, the photomultipliers kept making full-sized clicks - there were just fewer of them. Light behaves as particles. The wave idea made use of the fact that waves can combine or cancel out, and the calculations based on this model matched the results of Newton's experiments, as well as those done for hundreds of years afterwards. But when experiments were developed that were sensitive enough to detect a single photon, the wave theory predicted that the clicks of a photomultiplier would get softer and softer, whereas they stayed at full strength - they just occurred less and less often. No reasonable model could explain this fact.

This state of confusion was called the wave - particle duality of light. (Feynman, 1985)

Albert Einstein’s theory of general relativity (a theoretical framework for understanding the universe on the largest of scales) predicts that massive bodies that are accelerated will emit gravity waves: ripples in the curvature of the four-dimensional fabric of space-time that travel away in all directions, like waves on a lake, at a specific speed, the speed of light (and they are not something we can see with the naked eye). These are similar to light waves, which are ripples of the electromagnetic field, but they have not yet been observed, even though a number of powerful gravity-wave detectors are being built, in space and on the ground in the United States, Europe, and Japan, to detect them with an accuracy of one part in a billion trillion (corresponding to a shift that is one hundredth the width of a single atom). They are a decades-old dream: a way of probing the mysteries of the universe and the fossils from the very instant of creation, since no other signals have survived from that era. Like light, gravity waves carry energy away from the bodies that emit them. One would therefore expect a system of massive bodies to settle down eventually to a stationary state, because the energy in any movement would be carried away by the emission of gravity waves. (It is rather like dropping a tennis ball into water: at first it bobs up and down a great deal, but as the ripples carry away its energy, it eventually settles down to a stationary state.) For example, the movement of the earth in its orbit round the sun produces gravitational waves. The effect of the energy loss is to change the orbit of the earth so that gradually it gets nearer and nearer to the sun, at a rate −dr/dt = 64 G³ M_sun m_earth (M_sun + m_earth) / (5c⁵r³), eventually collides with it, and settles down to a stationary state. The rate of energy loss into space in the form of gravity waves is, in the case of the earth and the sun, very low (about enough to run a small electric heater): −dE/dt = 32 G⁴ (M_sun m_earth)² (M_sun + m_earth) / (5c⁵r⁵).

Dividing −dE/dt by −dr/dt and rearranging, we get: 2 × (−dE/dt) = [G M_sun m_earth / r²] × (−dr/dt)

Since G M_sun m_earth / r² = F_gravitation (the force of gravitation between the earth and the sun), therefore:

2 × (−dE/dt) = F_gravitation × (−dr/dt)

Suppose no gravity waves were emitted by the earth–sun system; then

(−dE/dt) = 0 and (−dr/dt) = 0

F_gravitation = 2 × {(−dE/dt) / (−dr/dt)} = 2 × (0/0) = 0/0, i.e., the force of gravitation between the earth and the sun becomes UNDEFINED. The earth–sun system should therefore lose energy in the form of weak gravity waves in order to maintain a well-defined force of gravitation between the two bodies. We can test this prediction against precise observations to measure the accuracy of general relativity itself. If it proves correct, we will find that general relativity is at least 99.7 percent accurate, and this would represent the crowning achievement of the last two thousand years of research in physics, ever since the Greeks first began the search for a single coherent and comprehensive theory of the universe.
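A numerical sketch of the earth–sun figures quoted above (standard values for G, c, the two masses, and the orbital radius are assumed):

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 3.0e8       # speed of light, m/s
    M = 1.989e30    # mass of the sun, kg
    m = 5.972e24    # mass of the earth, kg
    r = 1.496e11    # earth-sun distance, m

    # gravity-wave energy-loss rate and orbital decay rate from the formulas above
    dE_dt = 32 * G**4 * (M * m)**2 * (M + m) / (5 * c**5 * r**5)   # ~200 W (a small heater)
    dr_dt = 64 * G**3 * (M * m) * (M + m) / (5 * c**5 * r**3)      # ~1e-20 m/s

    F_grav = G * M * m / r**2                                      # ~3.5e22 N

    print(2 * dE_dt, F_grav * dr_dt)   # the two agree: 2(-dE/dt) = F_gravitation * (-dr/dt)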

Gravity waves are vibrations in the four-dimensional fabric of space-time; gravitons are their quanta.

The lifetime of the earth–sun orbit is given by the equation:

t_life = 5c⁵r⁴ / [256 G³ M_sun m_earth (M_sun + m_earth)]

Now, comparing the above equation with the equation −dr/dt = 64 G³ M_sun m_earth (M_sun + m_earth) / (5c⁵r³), we get:

−dr/dt = r / (4 t_life)

Representing the rate of orbital decay (−dr/dt) by the symbol R₁, we get:

R₁ = r / (4 t_life)

However, the distance between the orbiting masses not only decreases due to the emission of gravity waves but also increases at the same time due to the Hubble expansion of space. The rate of increase of the distance between the earth and the sun due to the expansion of space is given by the equation:

R₂ = dr/dt = H × r, where H is the Hubble parameter.

On dividing R₁ by R₂ we get:

R₁ / R₂ = 1 / (4H t_life)

Since H = 1/t_age (where t_age is the age of the universe), therefore:

R₁ / R₂ = t_age / (4 t_life)

Since the lifetime of the earth–sun orbit is about 3.44 × 10³⁰ s:

R₁ / R₂ = t_age / (4 × 3.44 × 10³⁰ s)

Since t_age ≈ 4.347 × 10¹⁷ s, therefore:

R₁ / R₂ ≈ 3.159 × 10⁻¹⁴

This means R₂ > R₁, i.e., the rate of increase of the distance between the earth and the sun due to the Hubble expansion of space is far greater than the rate of decrease of that distance due to the emission of gravity waves.
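A sketch of the same arithmetic (the orbital lifetime, the decay rate R₁, the expansion rate R₂, and their ratio), reusing the assumed constants above and taking H = 1/t_age as in the text:

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    c = 3.0e8         # speed of light, m/s
    M = 1.989e30      # mass of the sun, kg
    m = 5.972e24      # mass of the earth, kg
    r = 1.496e11      # earth-sun distance, m
    t_age = 4.347e17  # age of the universe used in the text, s

    t_life = 5 * c**5 * r**4 / (256 * G**3 * (M * m) * (M + m))   # ~3.4e30 s

    R1 = r / (4 * t_life)    # decay rate of the orbit from gravity-wave emission, m/s
    R2 = r / t_age           # recession rate from the Hubble expansion, H*r, m/s

    print(t_life, R1, R2)
    print(R1 / R2)           # ~3e-14, so R2 is far greater than R1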

If t_age = 4 t_life = 1.376 × 10³¹ s, then

R₁ = R₂

i.e., when the age of the universe approaches 1.376 × 10³¹ s, the rate of decrease of the earth–sun distance due to the emission of gravity waves becomes exactly equal to the rate of increase of that distance due to the Hubble expansion of space (R₁ = R₂). However, long before t_age approaches 1.376 × 10³¹ s, the earth will have been swallowed by the sun during the red-giant stage of its life, in a few billion years' time.

A theory is a good theory if it satisfies one requirement: it must make definite predictions about the results of future observations. Basically, all scientific theories are statements that predict, explain, and perhaps describe the basic features of reality. Yet however well a theory has been received, discrepancies frequently lead to doubt and discomfort. For example, the most precise estimate of the sun's age based on the linear density model is around 10 million years. But geologists have evidence that the formation of the rocks, and of the fossils in them, must have taken hundreds or thousands of millions of years. This is far longer than the age the linear density model predicts for the sun. Hence the earth would have existed even before the birth of the sun, which makes no sense. The linear density model therefore fails to account for the age of the sun. Any physical theory is always provisional, in the sense that it is only a hypothesis: it can be disproved by finding even a single observation that disagrees with its predictions. Towards the end of the nineteenth century, physicists thought they were close to a complete understanding of the universe. They believed that the entire universe was filled by a hypothetical medium called the ether. Since a material medium is required for the propagation of mechanical waves, it was believed that light waves propagate through the ether as pressure waves propagate through air. Soon, however, inconsistencies with the idea of the ether began to appear: a series of experiments failed to support it. The most careful and accurate experiments were carried out by two Americans, Albert Michelson and Edward Morley (who showed that light always travels at a speed of one hundred and eighty-six thousand miles a second, no matter where it comes from, and whose result disproved Michell and Laplace's idea of light as consisting of particles, rather like cannon balls, that could be slowed down by gravity and made to fall back on the star), at the Case School of Applied Science in Cleveland, Ohio, in 1887 − which proved a severe blow to the existence of the ether.

All the known subatomic particles in the universe belong to one of two groups: fermions or bosons. Fermions are particles with half-integer spin (such as ½), and they make up ordinary matter; their ground state energies are negative. Bosons are particles with integer spin (0, 1, 2); their ground state energies are positive, and they act as the force carriers between fermions (for example, the electromagnetic force of attraction between an electron and a proton is pictured as being caused by the exchange of large numbers of virtual massless bosons of spin 1, called photons).

Positive ground state energy of bosons plus negative ground state energy of fermions = 0

But Why?


Maybe because this cancellation is needed to eliminate the biggest infinity in supergravity theory? Supergravity is the theory that introduced a superpartner (the gravitino, meaning "little graviton," with spin 3/2) to the conjectured spin-2 quantum of gravity, the graviton. It even inspired Stephen Hawking, one of the most brilliant theoretical physicists since Einstein, to speak of "the end of theoretical physics" being in sight when he gave his inaugural lecture upon taking the Lucasian Chair of Mathematics at Cambridge University, the same chair once held by Isaac Newton, who developed the theory of mechanics, which gave us the classical laws governing machines, which in turn greatly accelerated the Industrial Revolution, which unleashed political forces that eventually overthrew the feudal dynasties of Europe.

There is strong evidence that the universe is permeated with dark matter, approximately six times as much as normal visible matter. This invisible matter, whose presence first became apparent in 1933 through the work of the Swiss astronomer Fritz Zwicky, can be considered to have energy too, because E = mc². It exists in huge halos around galaxies, does not participate in the processes of nuclear fusion that power stars, gives off no light and does not interact with light, yet it bends starlight through its gravity, somewhat similar to the way glass bends light. Although we live in a dark-matter-dominated universe (according to the latest data, dark matter makes up 23 percent of the total matter/energy content of the universe), experiments to detect dark matter in the laboratory have been exceedingly difficult to perform, because dark matter particles − such as the neutralino, which would represent higher vibrations of the superstring − interact so weakly with ordinary matter. Although dark matter was discovered almost a century ago, it remains a mystery that everyone yearns to resolve.


