There are two well known phenomena which are due to the finite speed of electromagnetic radiation, but are essentially classical in nature, requiring no other facts of special relativity for their understanding.
A distant object can appear to travel faster than the speed of light relative to us, provided that it has some component of motion towards us as well as perpendicular to our line of sight. Say that on Jan. 1 you make a position measurement of galaxy X. One month later, you measure it again. Assuming you know its distance from us by some independent measurement, you derive its linear speed, and conclude that it is moving faster than the speed of light.
What have you forgotten? Let's say that on Jan. 1 the object is D km from us, and that between Jan. 1 and Feb. 1 it moves d km closer to us. You have assumed that the light you measured on Jan. 1 and the light you measured on Feb. 1 were emitted exactly one month apart. Not so. The first light beam had farther to travel, and was actually emitted (1 + d/c) months before the second beam, if we measure c in km/month. The object has traveled the given angular distance in more time than you thought. Similarly, if the object is moving away from us, the apparent angular velocity will be too slow if you do not correct for this effect, which becomes significant when the object is moving along a line close to our line of sight.
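The correction above is easy to sketch numerically. The numbers below are purely hypothetical, chosen in units of months and light-months so that c = 1:

```python
# Hypothetical observation, in units of months and light-months (c = 1).
c = 1.0      # speed of light: 1 light-month per month
t_obs = 1.0  # apparent time between the two measurements, months
d = 0.8      # how much closer the object moved during that interval
x = 1.5      # transverse distance inferred from the angular motion

v_apparent = x / t_obs   # naive transverse speed: 1.5c -- "faster than light"!
t_true = t_obs + d / c   # the two beams were really emitted this far apart
v_true = x / t_true      # corrected transverse speed: about 0.83c

print(v_apparent, v_true)
```

Note that the naive analysis yields a superluminal speed, while the corrected speed is safely below c.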
Note that most extragalactic objects are moving away from us due to the Hubble expansion. So for most objects, you don't get superluminal apparent velocities. But the effect is still there, and you need to take it into account if you want to measure velocities by this technique.
Consider a cube moving across your field of view with speed near the speed of light. The trailing face of the cube is edge-on to your line of sight as it passes you. However, the light from the back edge of that face (the edge farthest from you) takes longer to reach your eye than the light from the front edge. At any given instant you are seeing light emitted from the front edge at time t and from the back edge at time t - (L/c), where L is the length of an edge. This means you see the back edge where it was some time earlier. This has the effect of *rotating* the *image* of the cube on your retina.
This does not mean that the cube itself rotates. The *image* is rotated. And this depends only on the finite speed of light, not on any other postulate of special relativity. You can calculate the rotation angle by noting that the side face of the cube is Lorentz contracted to L' = L/gamma. This corresponds to a rotation angle of arccos(1/gamma).
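As a quick numerical check, arccos(1/gamma) is the same angle as arcsin(v/c); the value v = 0.8c below is just an example:

```python
import math

beta = 0.8                            # v/c, an example value
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# The side face appears contracted to L/gamma, which matches the look of a
# face rotated by an angle whose cosine is 1/gamma:
theta = math.acos(1.0 / gamma)

# Equivalent form of the same angle: sin(theta) = v/c
assert abs(theta - math.asin(beta)) < 1e-12

print(math.degrees(theta))            # about 53 degrees at 0.8c
```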
It turns out, if you do the math for a sphere, that the amount of apparent rotation exactly cancels the Lorentz contraction. The object itself is flattened, but then you see *behind* it as it flies by just enough to restore it to its original size. So the image of a sphere is unaffected by the Lorentz flattening that it experiences.
Another implication of this is that if the object is moving at nearly the speed of light, although it is contracted into an infinitesimally thin pancake, you see it rotated by almost a full 90 degrees, so you see the complete trailing face of the object, and it doesn't disappear from view. In the case of the sphere, you see the transverse cross-section (which suffers no contraction), so that it still appears to be exactly a sphere.
That it took so long historically to realize this is undoubtedly due to the fact that, although particle beams were already being accelerated to relativistic speeds by 1959 (when the effect was first explained), we still do not have the technology to accelerate any macroscopic object to the speeds necessary to reveal the effect.
You put two pails of water outside on a freezing day. One has hot water (95 degrees C) and the other has an equal amount of colder water (50 degrees C). Which freezes first? The hot water freezes first! Why?
It is commonly argued that the hot water will take some time to reach the initial temperature of the cold water, and then follow the same cooling curve. So it seems at first glance difficult to believe that the hot water freezes first. The answer lies mostly in evaporation. The effect is definitely real and can be duplicated in your own kitchen.
Every "proof" that hot water can't freeze faster assumes that the state of the water can be described by a single number. Remember that temperature is a function of position. There are also other factors besides temperature, such as motion of the water, gas content, etc. With these multiple parameters, any argument based on the hot water having to pass through the initial state of the cold water before reaching the freezing point will fall apart. The most important factor is evaporation.
The cooling of pails without lids is partly Newtonian and partly by evaporation of the contents. The proportions depend on the walls and on temperature. At sufficiently high temperatures evaporation is more important. If equal masses of water are taken at two starting temperatures, more rapid evaporation from the hotter one may diminish its mass enough to compensate for the greater temperature range it must cover to reach freezing. The mass lost when cooling is by evaporation is not negligible. In one experiment, water cooling from 100C lost 16% of its mass by 0C, and lost a further 12% on freezing, for a total loss of 26%.
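The 16% figure above can be checked against textbook constants. This back-of-envelope sketch (latent heat taken at its value near 100 C, itself only an approximation) shows that the evaporated mass carries away most of the heat needed to cool the remainder to 0 C:

```python
c_w = 4190.0     # specific heat of water, J/(kg*K)
L_v = 2.26e6     # latent heat of vaporization near 100 C, J/kg (approximate)
m0  = 1.0        # kg of water starting at 100 C

heat_to_remove = m0 * c_w * 100.0    # cool the water by 100 K: ~4.2e5 J
heat_by_evap   = 0.16 * m0 * L_v     # carried off by the 16% that evaporates

print(heat_by_evap / heat_to_remove)   # roughly 0.86
```

On these rough numbers, evaporation alone accounts for the bulk of the required heat loss, before even counting the smaller mass left to cool.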
The cooling effect of evaporation is twofold. First, mass is carried off so that less needs to be cooled from then on. Also, evaporation carries off the hottest molecules, lowering considerably the average kinetic energy of the molecules remaining. This is why "blowing on your soup" cools it. It encourages evaporation by removing the water vapor above the soup.
Thus experiment and theory agree that hot water freezes faster than cold for sufficiently high starting temperatures, if the cooling is by evaporation. Cooling in a wooden pail or barrel is mostly by evaporation. In fact, a wooden bucket of water starting at 100C would finish freezing in 90% of the time taken by an equal volume starting at room temperature. The folklore on this matter may well have started a century or more ago when wooden pails were usual. Considerable heat is transferred through the sides of metal pails, and evaporation no longer dominates the cooling, so the belief is unlikely to have started from correct observations after metal pails became common.
The dimples, paradoxically, *do* increase drag slightly. But they also increase `Magnus lift', that peculiar lifting force experienced by rotating bodies travelling through a medium. Contrary to Freshman physics, golf balls do not travel in inverted parabolas. They follow an 'impetus trajectory':
            *  *  *  *
 (golfer) *            * *   <-- trajectory
   \O/   *                 *
    |   *                  *
 --/ \-T-------------------*--------------------------- ground
This is because of the combination of drag (which reduces horizontal speed late in the trajectory) and Magnus lift, which supports the ball during the initial part of the trajectory, making it relatively straight. The trajectory can even curve upwards at first, depending on conditions! Here is a cheesy diagram of a golf ball in flight, with some relevant vectors:
           F(magnus)
              ^
              |
 F(drag) <--- O -------> V
               \
                \----> (sense of rotation)
The Magnus force can be thought of as due to the relative drag on the air on the top and bottom portions of the golf ball: the top portion is moving slower relative to the air around it, so there is less drag on the air that goes over the ball. The boundary layer is relatively thin, and air in the not-too-near region moves rapidly relative to the ball. The bottom portion moves fast relative to the air around it; there is more drag on the air passing by the bottom, and the boundary (turbulent) layer is relatively thick; air in the not-too-near region moves more slowly relative to the ball. The Bernoulli force produces lift. (Alternatively, one could say that `the flow lines past the ball are displaced down, so the ball is pushed up.')
The difficulty comes near the transition region between laminar flow and turbulent flow. At low speeds, the flow around the ball is laminar. As speed is increased, the bottom part tends to go turbulent *first*. But turbulent flow can follow a surface much more easily than laminar flow.
As a result, the (laminar) flow lines around the top break away from the surface sooner than otherwise, and there is a net displacement *up* of the flow lines. The Magnus lift goes *negative*.
The dimples aid the rapid formation of a turbulent boundary layer around the golf ball in flight, giving more lift. Without 'em, the ball would travel in more of a parabolic trajectory, hitting the ground sooner (and not coming straight down).
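The qualitative shape of the impetus trajectory can be reproduced with a toy 2-D integration. The drag and Magnus coefficients below are invented for illustration, not measured golf-ball values:

```python
import math

dt = 0.01          # time step, s
g = 9.8            # gravity, m/s^2
kd = 0.005         # drag coefficient per unit mass, 1/m (hypothetical)
km = 0.003         # Magnus lift coefficient per unit mass, 1/m (hypothetical)

x, y = 0.0, 0.0
vx, vy = 60.0, 20.0            # launch velocity, m/s
apex_x, apex_y = 0.0, 0.0

while y >= 0.0:
    v = math.hypot(vx, vy)
    # Drag opposes the velocity; backspin Magnus lift is perpendicular to it.
    ax = -kd * v * vx - km * v * vy
    ay = -g - kd * v * vy + km * v * vx
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    if y > apex_y:
        apex_x, apex_y = x, y

print(x, apex_x / x)
```

Drag kills the horizontal speed late in the flight, so the apex lies past the midpoint of the range and the ball comes down more steeply than it went up -- the flattened-front, steep-back shape sketched above.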
"I've had this idea for making radioactive nuclei decay faster/slower than they normally do. You do [this, that, and the other thing]. Will this work?"
Short Answer: Possibly, but probably not usefully.
"One of the paradigms of nuclear science since the very early days of its study has been the general understanding that the half-life, or decay constant, of a radioactive substance is independent of extranuclear considerations." (Emery, cited below.) Like all paradigms, this one is subject to some interpretation. Normal decay of radioactive stuff proceeds via one of four mechanisms:
Gamma emission often occurs from the daughter of one of the other decay modes. We neglect *very* exotic processes like C-14 emission or double beta decay in this analysis.
"Beta decay" refers most often to a nucleus with a neutron excess, which decays by converting a neutron into a proton:
n ----> p + e- + anti-nu(e),
where n means neutron, p means proton, e- means electron, and anti-nu(e) means an antineutrino of the electron type. The type of beta decay which involves destruction of a proton is not familiar to many people, so deserves a little elaboration. Either of two processes may occur when this kind of decay happens:
p ----> n + e+ + nu(e),
where e+ means positron and nu(e) means electron neutrino; or
p + e- ----> n + nu(e),

where e- means a negatively charged electron, which is captured from the neighborhood of the nucleus undergoing decay. These processes are called "positron emission" and "electron capture," respectively. A given nucleus which has too many protons for stability may undergo beta decay through either, and typically both, of these reactions.
"Conversion electrons" are produced by the process of "internal conversion," whereby the photon that would normally be emitted in gamma decay is *virtual* and its energy is absorbed by an atomic electron. The absorbed energy is sufficient to unbind the electron from the nucleus (ignoring a few exceptional cases), and it is ejected from the atom as a result.
Now for the tie-in to decay rates. Both the electron-capture and internal conversion phenomena require an electron somewhere close to the decaying nucleus. In any normal atom, this requirement is satisfied in spades: the innermost electrons are in states such that their probability of being close to the nucleus is both large and insensitive to things in the environment. The decay rate depends on the electronic wavefunctions, i.e., how much of their time the inner electrons spend very near the nucleus -- but only very weakly. For most nuclides that decay by electron capture or internal conversion, most of the time, the probability of grabbing or converting an electron is also insensitive to the environment, as the innermost electrons are the ones most likely to get grabbed/converted.
However, there are exceptions, the most notable being the astrophysically important isotope beryllium-7. Be-7 decays purely by electron capture (positron emission being impossible because of inadequate decay energy) with a half-life of somewhat over 50 days. It has been shown that differences in chemical environment result in half-life variations of the order of 0.2%, and high pressures produce somewhat similar changes. Other cases where known changes in decay rate occur are Zr-89 and Sr-85, also electron capturers; Tc-99m ("m" implying an excited state), which decays by both beta and gamma emission; and various other "metastable" things that decay by gamma emission with internal conversion. In all of these other cases the magnitude of the effect is smaller than for Be-7.
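To put the 0.2% figure in perspective, here is a sketch (the half-life value is approximate) of how such a shift propagates into the decay constant and into the fraction of Be-7 surviving after a year:

```python
import math

t_half = 53.0 * 86400.0          # Be-7 half-life, ~53 days, in seconds (approx.)
lam = math.log(2.0) / t_half     # decay constant, 1/s

lam_shifted = lam * 1.002        # a 0.2% chemically induced rate change

t = 365.0 * 86400.0              # one year
frac = math.exp(-lam * t)
frac_shifted = math.exp(-lam_shifted * t)

# After ~7 half-lives, the 0.2% rate change shows up as about a 1% difference
# in the surviving fraction -- detectable, but hardly a way to switch decay off.
print(frac, frac_shifted)
```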
What makes these cases special? The answer is that one or more of the usual starting assumptions -- insensitivity of the electron wave function near the nucleus to external forces, or availability of the innermost electrons for capture/conversion -- are not completely valid. Atomic beryllium only has 4 electrons to begin with, so that the "innermost electrons" are also practically the *outermost* ones and therefore much more sensitive to chemical effects than usual. With most of the other cases, there is so little energy available from the decay (as little as a few electron volts; compare most radioactive decays, where hundreds or thousands of *kilo*-electron-volts are released), courtesy of accidents of nuclear structure, that the innermost electrons can't undergo internal conversion. Remember that converting an electron requires dumping enough energy into it to expel it from the atom (more or less); "enough energy," in context, is typically some tens of keV, so they don't get converted at all in these cases. Conversion therefore works only on some of the outer electrons, which again are more sensitive to the environment.
A real anomaly is the beta emitter Re-187. Its decay energy is only about 2.6 keV, practically nothing by nuclear standards. "That this decay occurs at all is an example of the effects of the atomic environment on nuclear decay: the bare nucleus Re-187 [i.e., stripped of all orbital electrons -- MWJ] is stable against beta decay [but not to bound state beta decay, in which the outgoing electron is captured by the daughter nucleus into a tightly bound orbital -SIC] and it is the difference of 15 keV in the total electronic binding energy of osmium [to which it decays -- MWJ] and rhenium ... which makes the decay possible" (Emery). The practical significance of this little peculiarity, of course, is low, as Re-187 already has a half life of over 10^10 years.
Alpha decay and spontaneous fission might also be affected by changes in the electron density near the nucleus, for a different reason. These processes occur as a result of penetration of the "Coulomb barrier" that inhibits emission of charged particles from the nucleus, and their rate is *very* sensitive to the height of the barrier. Changes in the electron density could, in principle, affect the barrier by some tiny amount. However, the magnitude of the effect is *very* small, according to theoretical calculations; for a few alpha emitters, the change has been estimated to be of the order of 1 part in 10^7 (!) or less, which would be unmeasurable in view of the fact that the alpha emitters' half lives aren't known to that degree of accuracy to begin with.
All told, the existence of changes in radioactive decay rates due to the environment of the decaying nuclei is on solid grounds both experimentally and theoretically. But the magnitude of the changes is nothing to get very excited about.
Reference: The best review article on this subject is now 20 years old: G. T. Emery, "Perturbation of Nuclear Decay Rates," Annual Review of Nuclear Science vol. 22, p. 165 (1972). Papers describing specific experiments are cited in that article, which contains considerable arcane math but also gives a reasonable qualitative "feel" for what is involved.
The Anatomy and Habits of a Dippy Bird:
Short answer: Thermodynamics plus Mechanics.
Medium answer (and essential clues): Evaporative cooling on the outside; pV=nRT, evaporation/condensation, and gravity on the inside.
Initially the system is at equilibrium, with T equal in both chambers and pV/n in each compensating for the fluid levels. Evaporation of water outside the head draws heat from inside it; the vapor inside condenses, reducing n (and with it the pressure). This imbalances the pressures, so the vapor in the abdomen pushes down, which pushes fluid up the thorax, which reduces V in the head. Since p is decreasing in the abdomen, evaporation occurs there, increasing n and drawing heat from outside the body.
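A rough feel for the numbers comes straight from pV = nRT plus hydrostatics. Every value below is invented for illustration (though methylene chloride really is the usual working fluid):

```python
R = 8.314          # gas constant, J/(mol*K)
T = 293.0          # K; assume both chambers stay near room temperature
V = 2.0e-5         # m^3 of vapor space in the head (hypothetical)
n = 1.0e-4         # mol of vapor in the head initially (hypothetical)

p0 = n * R * T / V               # ideal-gas pressure before condensation
p1 = 0.95 * n * R * T / V        # after 5% of the vapor condenses

rho = 1330.0       # kg/m^3, roughly the density of methylene chloride
g = 9.8
h = (p0 - p1) / (rho * g)        # liquid column the pressure imbalance supports

print(h)           # a few centimeters -- enough to shift the bird's balance
```

Even a few percent of condensation supports a centimeters-tall column of fluid, which is all the mechanism needs.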
The rising fluid raises the CM above the pivot point; the hips are slightly concave dorsally, so the bird dips forward. Tabs on the legs and the pivot maintain the angle at full dip, for drainage. The amount of fluid is set so that at full dip the lower end of the tube is exposed to the vapor. (The tube reaches almost to the bottom of the abdomen, like a straw in a soda, but flows into the head like the neck of a funnel.) A bubble of vapor rises in the tube and fluid drains into the abdomen.
The rising bubble transfers heat to the head and the falling fluid releases gravitational potential energy as heat into the rising bubble and the abdomen. The CM drops below the pivot point and the bird bobs up. The system is thus reset; it's not quite at equilibrium, but is close enough that the process can repeat this chain of events.
The beak acts as a wick, if allowed to dip into a reservoir of water, to keep the head wet, although it is not necessary for the bird to drink on every dip.
Is that all there is to know about the dippy bird? Of course not. Research continues to unravel these unanswered questions about the amazing dippy bird:
They have real trouble working at all in humid climates (like around the U. of Md., where I owned my first one), but can drive you bats in dry climates (aside from the constant hammering, it's hard to keep the water up to a level where the bird can get at it...). The evaporation of water from the head depends on the diffusibility of water vapor into the atmosphere; high partial pressures of water vapor in the atmosphere translate to low rates of evaporation.
If you handle your bird, clean the glass with alcohol or Windex or Dawn or something; the oil from your hands has a high specific heat and a low thermal conductivity, both of which impede the transfer of heat. Once it's clean, grasp the bird only by the legs or the tube, which are not thermodynamically significant, or wear rubber gloves, just like a real EMT.
The hat is there for show; the dippy bird operates okay with or without it, even though it may reduce the area of evaporation slightly. Ditto the feathers and the eyes.
Questions: What is negative temperature? Can you really make a system which has a temperature below absolute zero? Can you even give any useful meaning to the expression 'negative absolute temperature'?
Answer: Absolutely. :-)
Under certain conditions, a closed system *can* be described by a negative temperature, and, surprisingly, be *hotter* than the same system at any positive temperature. This article describes how it all works.
To get things started, we need a clear definition of "temperature." Our intuitive notion is that two systems in thermal contact should exchange no heat, on average, if and only if they are at the same temperature. Let's call the two systems S1 and S2, and call the combined system, treating S1 and S2 together, S3. The important question, consideration of which will lead us to a useful quantitative definition of temperature, is "How will the energy of S3 be distributed between S1 and S2?" I will briefly explain this below, but I recommend that you read K&K, referenced below, for a careful, simple, and thorough explanation of this important and fundamental result.
With a total energy E, S3 has many possible internal states (microstates). The atoms of S3 can share the total energy in many ways. Let's say there are N different states. Each state corresponds to a particular division of the total energy between the two subsystems S1 and S2. Many microstates can correspond to the same division, E1 in S1 and E2 in S2. A simple counting argument tells you that only one particular division of the energy will occur with any significant probability: the one with the overwhelmingly largest number of microstates for the total system S3. That number, N(E1,E2), is just the product of the number of states allowed in each subsystem, N(E1,E2) = N1(E1)*N2(E2), and, since E1 + E2 = E, N(E1,E2) reaches a maximum when N1*N2 is stationary with respect to variations of E1 and E2 subject to the total energy constraint.
For convenience, physicists prefer to frame the question in terms of the logarithm of the number of microstates N, and call this the entropy, S. You can easily see from the above analysis that two systems are in equilibrium with one another when (dS/dE)_1 = (dS/dE)_2, i.e., the rate of change of entropy, S, per unit change in energy, E, must be the same for both systems. Otherwise, energy will tend to flow from one subsystem to another as S3 bounces randomly from one microstate to another, the total energy E3 being constant, as the combined system moves towards a state of maximal total entropy. We define the temperature, T, by 1/T = dS/dE, so that the equilibrium condition becomes the very simple T_1 = T_2.
This statistical mechanical definition of temperature does in fact correspond to your intuitive notion of temperature for most systems. So long as dS/dE is always positive, T is always positive. For common situations, like a collection of free particles, or particles in a harmonic oscillator potential, adding energy always increases the number of available microstates, increasingly faster with increasing total energy. So temperature increases with increasing energy, from zero, asymptotically approaching positive infinity as the energy increases.
Not all systems have the property that the entropy increases monotonically with energy. In some cases, as energy is added to the system, the number of available microstates, or configurations, actually decreases for some range of energies. For example, imagine an ideal "spin-system", a set of N atoms with spin 1/2 on a one-dimensional wire. The atoms are not free to move from their positions on the wire. The only degree of freedom allowed to them is spin-flip: the spin of a given atom can point up or down. The total energy of the system, in a magnetic field of strength B, pointing down, is (N+ - N-)*uB, where u is the magnetic moment of each atom and N+ and N- are the number of atoms with spin up and down respectively. Notice that with this definition, E is zero when half of the spins are up and half are down. It is negative when the majority are down and positive when the majority are up.
The lowest possible energy state, all the spins pointing down, gives the system a total energy of -NuB, and temperature of absolute zero. There is only one configuration of the system at this energy, i.e., all the spins must point down. The entropy is the log of the number of microstates, so in this case is log(1) = 0. If we now add a quantum of energy, size uB, to the system, one spin is allowed to flip up. There are N possibilities, so the entropy is log(N). If we add another quantum of energy, there are a total of N(N-1)/2 allowable configurations with two spins up. The entropy is increasing quickly, and the temperature is rising as well.
However, for this system, the entropy does not go on increasing forever. There is a maximum energy, +NuB, with all spins up. At this maximal energy, there is again only one microstate, and the entropy is again zero. If we remove one quantum of energy from the system, we allow one spin down. At this energy there are N available microstates. The entropy goes on increasing as the energy is lowered. In fact the maximal entropy occurs for total energy zero, i.e., half of the spins up, half down.
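The entropy curve just described is easy to compute. This sketch (N = 1000 spins, with k_B and uB set to 1 in arbitrary units) shows the temperature coming out positive below half-filling and negative above it:

```python
import math

N = 1000        # number of spin-1/2 atoms on the wire
uB = 1.0        # energy quantum u*B, in units where k_B = 1

def entropy(n_up):
    # S = ln(number of microstates) = ln C(N, n_up), computed via log-gamma
    return (math.lgamma(N + 1) - math.lgamma(n_up + 1)
            - math.lgamma(N - n_up + 1))

def temperature(n_up):
    # 1/T = dS/dE; flipping one more spin up adds dE = 2*uB
    dS = entropy(n_up + 1) - entropy(n_up)
    return 2.0 * uB / dS

print(temperature(100))   # mostly spins down, E < 0: positive temperature
print(temperature(900))   # mostly spins up,  E > 0: negative temperature
```

Note also that entropy(0) and entropy(N) are both zero, matching the single-microstate endpoints of the curve.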
So we have created a system where, as we add more and more energy, the temperature starts off positive and approaches positive infinity as maximum entropy is approached, with half of all spins up. Past that point, the temperature jumps to negative infinity, then comes down in magnitude toward zero, always negative, as the energy increases toward its maximum. When the system has negative temperature, it is *hotter* than when it has positive temperature. If you take two copies of the system, one with positive and one with negative temperature, and put them in thermal contact, heat will flow from the negative-temperature system into the positive-temperature system.
Can this system ever be realized in the real world, or is it just a fantastic invention of sinister theoretical condensed matter physicists? Atoms always have other degrees of freedom in addition to spin, usually making the total energy of the system unbounded upward due to the translational degrees of freedom that the atom has. Thus, only certain degrees of freedom of a particle can have negative temperature. It makes sense to define the "spin-temperature" of a collection of atoms so long as one condition is met: the coupling between the atomic spins and the other degrees of freedom is sufficiently weak, and the coupling between atomic spins sufficiently strong, that the timescale for energy to flow from the spins into other degrees of freedom is very large compared to the timescale for thermalization of the spins among themselves. Then it makes sense to talk about the temperature of the spins separately from the temperature of the atoms as a whole. This condition can easily be met for the case of nuclear spins in a strong external magnetic field.
Nuclear and electron spin systems can be promoted to negative temperatures by suitable radio frequency techniques. Various experiments in the calorimetry of negative temperatures, as well as applications of negative temperature systems as RF amplifiers, etc., can be found in the articles listed below, and the references therein.
Question: Does my bathtub drain differently depending on whether I live in the northern or southern hemisphere?
Answer: No. There is a real effect, but it is far too small to be relevant when you pull the plug in your bathtub.
Because the earth rotates, a fluid that flows along the earth's surface feels a "Coriolis" acceleration perpendicular to its velocity. In the northern hemisphere low pressure storm systems spin counterclockwise. In the southern hemisphere, they spin clockwise because the direction of the Coriolis acceleration is reversed. This effect leads to the speculation that the bathtub vortex that you see when you pull the plug from the drain spins one way in the north and the other way in the south.
But this acceleration is VERY weak for bathtub-scale fluid motions. The order of magnitude of the Coriolis acceleration can be estimated from size of the "Rossby number" (see below). The effect of the Coriolis acceleration on your bathtub vortex is SMALL. To detect its effect on your bathtub, you would have to get out and wait until the motion in the water is far less than one rotation per day. This would require removing thermal currents, vibration, and any other sources of noise. Under such conditions, never occurring in the typical home, you WOULD see an effect. To see what trouble it takes to actually see the effect, see the reference below. Experiments have been done in both the northern and southern hemispheres to verify that under carefully controlled conditions, bathtubs drain in opposite directions due to the Coriolis acceleration from the Earth's rotation.
Coriolis accelerations are significant when the Rossby number is SMALL. So, suppose we want a Rossby number of 0.1 and a bathtub-vortex length scale of 0.1 meter. Since the earth's rotation rate is about 10^(-4)/second, the fluid velocity should be less than or equal to 2*10^(-6) meters/second. This is a very small velocity. How small is it? Well, we can take the analysis a step further and calculate another, more famous dimensionless parameter, the Reynolds number.
The Reynolds number is Re = L*U*density/viscosity. Assuming that physicists bathe in hot water, the viscosity will be about 0.005 poise and the density about 1.0 g/cm^3, so the Reynolds number is about 4*10^(-1).
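The arithmetic above, spelled out in SI units (the Rossby number is taken in the form Ro = U/(2*Omega*L), and 10^-4/s is the FAQ's round number for the rotation rate):

```python
Omega = 1.0e-4        # Earth's rotation rate, 1/s (round number; ~7.3e-5 exactly)
L = 0.1               # bathtub-vortex length scale, m
Ro = 0.1              # target Rossby number

# Ro = U / (2 * Omega * L)  =>  the flow must be this slow:
U = Ro * 2.0 * Omega * L
print(U)              # 2e-6 m/s

# Reynolds number for hot bath water (viscosity ~0.0005 Pa*s, density ~1000 kg/m^3):
mu = 5.0e-4
rho = 1000.0
Re = L * U * rho / mu
print(Re)             # about 0.4 -- deep in the friction-dominated regime
```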
Now, life at low Reynolds numbers is different from life at high Reynolds numbers. In particular, at low Reynolds numbers, fluid physics is dominated by friction and diffusion, rather than by inertia: the time it would take for a particle of fluid to move a significant distance due to an acceleration is greater than the time it takes for the particle to break up due to diffusion.
The same effect has been accused of responsibility for the direction water circulates when you flush a toilet. This is surely nonsense. In this case, the water rotates in the direction in which it is injected into the bowl by the pipe leading from the tank.
Question: Why do mirrors reverse left and right, but not up and down?

Answer: The simple answer is that they don't. Look in a mirror and wave your right hand. On which side of the mirror is the hand that waved? The right side, of course.
Mirrors DO reverse In/Out. Imagine holding an arrow in your hand. If you point it up, it will point up in the mirror. If you point it to the left, it will point to the left in the mirror. But if you point it toward the mirror, it will point right back at you. In and Out are reversed.
If you take a three-dimensional, rectangular, coordinate system, (X,Y,Z), and point the Z axis such that the vector equation X x Y = Z is satisfied, then the coordinate system is said to be right-handed. Imagine Z pointing toward the mirror. X and Y are unchanged (remember the arrows?) but Z will point back at you. In the mirror, X x Y = - Z. The image contains a left-handed coordinate system.
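A small check of this, with an explicit cross product (the mirror is taken to lie in the XY plane, so reflection sends Z to -Z while X and Y are unchanged):

```python
def cross(a, b):
    # Standard 3-D cross product of two tuples
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

X, Y, Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(X, Y) == Z          # (X, Y, Z) is right-handed

# Reflect in the mirror: X and Y are unchanged, Z flips sign.
Z_image = (0, 0, -1)

# In the image, cross(X, Y) points opposite to Z_image,
# i.e. X x Y = -Z_image: the image coordinate system is left-handed.
assert cross(X, Y) == (-Z_image[0], -Z_image[1], -Z_image[2])
```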
This has an important effect, familiar mostly to chemists and physicists. It changes the chirality, or handedness, of objects viewed in the mirror. Your left hand looks like a right hand, while your right hand looks like a left hand. Molecules often come in pairs called stereoisomers, which differ not in the sequence or number of atoms, but only in that one is the mirror image of the other, so that no rotation or stretching can turn one into the other. Your hands make a good laboratory for this effect. They are distinct, even though they both have the same components connected in the same way. They are a stereo pair, identical except for "handedness".
People sometimes think that mirrors *do* reverse left/right, and that the effect is due to the fact that our eyes are aligned horizontally on our faces. This can be easily shown to be untrue by looking in any mirror with one eye closed!
Stars, except for the Sun, although they may be millions of miles in diameter, are very far away. They appear as point sources even when viewed by telescopes. The planets in our solar system, much smaller than stars, are closer and can be resolved as disks with a little bit of magnification (field binoculars, for example).
Since the Earth's atmosphere is turbulent, all images viewed up through it tend to "swim." The result of this is that sometimes a single point in object space gets mapped to two or more points in image space, and also sometimes a single point in object space does not get mapped into any point in image space. When a star's single point in object space fails to map to at least one point in image space, the star seems to disappear temporarily. This does not mean the star's light is lost for that moment. It just means that it didn't get to your eye, it went somewhere else.
Since planets represent several points in object space, it is highly likely that at least one point in the planet's object space gets mapped to a point in image space at any instant, so the planet's image never winks out. Each individual ray twinkles away as badly as any star's, but when all of those individual rays are viewed together, the net effect is averaged out to something considerably steadier.
The result is that stars tend to twinkle, and planets do not. Other extended objects in space, even very far ones like nebulae, do not twinkle if they are sufficiently large that they have non-zero apparent diameter when viewed from the Earth.
We define time travel to mean departure from a certain place and time followed (from the traveller's point of view) by arrival at the same place at an earlier (from the sedentary observer's point of view) time. Time travel paradoxes arise from the fact that departure occurs after arrival according to one observer and before arrival according to another. In the terminology of special relativity time travel implies that the timelike ordering of events is not invariant. This violates our intuitive notions of causality. However, intuition is not an infallible guide, so we must be careful. Is time travel really impossible, or is it merely another phenomenon where "impossible" means "nature is weirder than we think?" The answer is more interesting than you might think.
The B-movie image of the intrepid chrononaut climbing into his time machine and watching the clock outside spin backwards while those outside the time machine watch him revert to callow youth is, according to current theory, impossible. In current theory, the arrow of time flows in only one direction at any particular place. If this were not true, then one could not impose a 4-dimensional coordinate system on space-time, and many nasty consequences would result. Nevertheless, there is a scenario which is not ruled out by present knowledge. This usually requires an unusual spacetime topology (due to wormholes or strings in general relativity) which has not yet been seen, but which may be possible. In this scenario the universe is well behaved in every local region; only by exploring the global properties does one discover time travel.
It is sometimes argued that time travel violates conservation laws. For example, sending mass back in time increases the amount of energy that exists at that time. Doesn't this violate conservation of energy? This argument uses the concept of a global conservation law, whereas relativistically invariant formulations of the equations of physics only imply local conservation. A local conservation law tells us that the amount of stuff inside a small volume changes only when stuff flows in or out through the surface. A global conservation law is derived from this by integrating over all space and assuming that there is no flow in or out at infinity. If this integral cannot be performed, then global conservation does not follow. So, sending mass back in time might be all right, but it implies that something strange is happening. (Why shouldn't we be able to do the integral?)
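The step from a local to a global conservation law can be sketched explicitly. Starting from the continuity equation and integrating over a volume, the divergence theorem turns the local law into a statement about flux through the boundary:

```latex
% Local conservation (continuity equation):
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0
% Integrate over a volume V and apply the divergence theorem:
\frac{d}{dt}\int_V \rho \, dV
  = -\int_V \nabla \cdot \mathbf{j}\, dV
  = -\oint_{\partial V} \mathbf{j} \cdot d\mathbf{A}
```

Only if the surface flux vanishes as V grows to cover all space (and the volume integral converges) does the total quantity become constant, giving the global law. When that integral cannot be performed, as in the spacetimes discussed below, local conservation still holds but the global statement simply does not follow.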
One case where global conservation breaks down is in general relativity. It is well known that global conservation of energy does not make sense in an expanding universe. For example, the universe cools as it expands; where does the energy go? See FAQ article #7 - Energy Conservation in Cosmology, for details.
It is interesting to note that the possibility of time travel in GR has been known at least since 1949 (by Kurt Godel, discussed in , page 168). The GR spacetime found by Godel has what are now called "closed timelike curves" (CTCs). A CTC is a worldline that a particle or a person can follow which ends at the same spacetime point (the same position and time) as it started. A solution to GR which contains CTCs cannot have a spacelike embedding - space must have "holes" (as in donut holes, not holes punched in a sheet of paper). A would-be time traveller must go around or through the holes in a clever way.
The Godel solution is a curiosity, not useful for constructing a time machine. Two recent proposals, one by Morris, et al.  and one by Gott , have the possibility of actually leading to practical devices (if you believe this, I have a bridge to sell you). As with Godel, in these schemes nothing is locally strange; time travel results from the unusual topology of spacetime. The first uses a wormhole (the inner part of a black hole, see fig. 1 of ) which is held open and manipulated by electromagnetic forces. The second uses the conical geometry generated by an infinitely long string of mass. If two strings pass by each other, a clever person can go into the past by traveling a figure-eight path around the strings. In this scenario, if the string has non-zero diameter and finite mass density, there is a CTC without any unusual topology.
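The "conical geometry" of the string scenario can be quantified: spacetime around an infinite straight string is flat everywhere off the string, but with a wedge of angle 8*pi*G*mu/c^2 removed, where mu is the string's linear mass density. The snippet below evaluates this deficit angle; the GUT-scale value of mu is an illustrative assumption, not a measured quantity.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def deficit_angle(mu):
    """Angular deficit (radians) of the cone around an infinite
    straight cosmic string of linear mass density mu (kg/m).
    Spacetime is locally flat off the string; only the global
    geometry (the missing wedge) is unusual."""
    return 8 * math.pi * G * mu / c**2

# Illustrative (assumed) GUT-scale string tension, mu ~ 1e21 kg/m,
# i.e. G*mu/c^2 of order 1e-6:
mu_gut = 1.0e21
delta = deficit_angle(mu_gut)
print(delta)                        # about 1.9e-5 radians
print(math.degrees(delta) * 3600)   # about 4 arcseconds
```

The tiny deficit angle is why such strings, if they exist, would reveal themselves through arcsecond-scale gravitational lensing rather than any locally strange physics.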
With the demonstration that general relativity contains CTCs, people began studying the problem of self-consistency. Basically, the problem is that of the "grandfather paradox": What happens if our time traveller kills her grandmother before her mother was born? In more readily analyzable terms, one can ask what are the implications of the quantum mechanical interference of the particle with its future self. Boulware  shows that there is a problem - unitarity is violated. This is related to the question of when one can do the global conservation integral discussed above. It is an example of the "Cauchy problem" [1, chapter 7].
How does one avoid the paradox that a simple solution to GR has CTCs which QM does not like? This is not a matter of applying a theory in a domain where it is expected to fail. One relevant issue is the construction of the time machine. After all, infinite strings aren't easily obtained. In fact, it has been shown  that Gott's scenario implies that the total 4-momentum of spacetime must be spacelike. This seems to imply that one cannot build a time machine from any collection of non-tachyonic objects, whose 4-momentum must be timelike. There are implementation problems with the wormhole method as well.
Finally, a diversion on a possibly related topic.
If tachyons exist as physical objects, causality is no longer invariant. Different observers will see different causal sequences. This effect requires only special relativity (not GR), and follows from the fact that for any spacelike trajectory, reference frames can be found in which the particle moves backward or forward in time. This is illustrated by the pair of spacetime diagrams below. One must be careful about what is actually observed; a particle moving backward in time is observed to be a forward moving anti-particle, so no observer interprets this as time travel.
          t
          |                One reference frame:
          |                Events A and C are at the same place.
          |     B          C occurs first.
          |
----------A---------- x    Event B lies outside the causal domain
          |                of events A and C.  (The intervals are
          |     C          spacelike.)  In this frame, tachyon
          |                signals travel from A-->B and from
          |                C-->B.  That is, A and C are possible
                           causes of event B.

          t
          |                Another reference frame:
          |                Events A and C are not at the same
          |                place.  C occurs first.
          |
----------A---------- x    Event B lies outside the causal domain
          |                of events A and C.  (The intervals are
          |     C          spacelike.)  In this frame, signals
          |                travel from B-->A and from B-->C.  B is
          |     B          the cause of both of the other two
                           events.
The unusual situation here arises because conventional causality assumes no superluminal motion. This tachyon example is presented to demonstrate that our intuitive notion of causality may be flawed, so one must be careful when appealing to common sense. See FAQ article # 25 - Tachyons, for more about these weird hypothetical particles.
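The frame-dependence of time ordering for spacelike separations follows directly from the Lorentz transformation, and can be checked numerically. The event coordinates below are illustrative numbers (units with c = 1): any boost with v greater than c^2*dt/dx reverses the order of the two events.

```python
def boost_t(t, x, v, c=1.0):
    """Time coordinate of event (t, x) as seen from a frame
    moving at speed v along +x (Lorentz transformation)."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * (t - v * x / c**2)

# A spacelike pair of events: a tachyon signal emitted at the
# origin and received at t = 1, x = 3 (so |dx| > c*|dt|).
t_emit, x_emit = 0.0, 0.0
t_recv, x_recv = 1.0, 3.0

# In the original frame, emission precedes reception.
assert boost_t(t_emit, x_emit, 0.0) < boost_t(t_recv, x_recv, 0.0)

# Any boost with v > c^2*dt/dx (here v > 1/3) reverses the order:
v = 0.5
dt_prime = boost_t(t_recv, x_recv, v) - boost_t(t_emit, x_emit, v)
print(dt_prime)   # negative: reception now precedes emission
```

For timelike separations (|dx| < c*|dt|) no such v < c exists, which is why ordinary causal order is frame-invariant and only superluminal signals raise this problem.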
The possible existence of time machines remains an open question. None of the papers criticizing the two proposals are willing to categorically rule out the possibility. Nevertheless, the notion of time machines seems to carry with it a serious set of problems.
While for the most part a FAQ covers the answers to frequently asked questions whose answers are known, in physics there are also plenty of simple and interesting questions whose answers are not known. Before you set about answering these questions on your own, it's worth noting that while nobody knows what the answers are, there has been at least a little, and sometimes a great deal, of work already done on these subjects. People have said a lot of very intelligent things about many of these questions. So do plenty of research and ask around before you try to cook up a theory that'll answer one of these and win you the Nobel prize! You will need to know physics inside and out before you can expect to make any progress on these.
The following partial list of "open" questions is divided into two groups, Cosmology and Astrophysics, and Particle and Quantum Physics. However, given the implications of particle physics on cosmology, the division is somewhat artificial, and, consequently, the categorization is somewhat arbitrary.
(There are many other interesting and fundamental questions in fields such as condensed matter physics, nonlinear dynamics, etc., which are not part of the set of related questions in cosmology and quantum physics which are discussed below. Their omission is not a judgement about importance, but merely a decision about the scope of this article.)
This last question sits on the fence between the two categories above:
How do you merge Quantum Mechanics and General Relativity to create a quantum theory of gravity? Is Einstein's theory of gravity (classical GR) also correct in the microscopic limit, or are there modifications possible/required which coincide in the observed limit(s)? Is gravity really curvature, or what else -- and why does it then look like curvature? An answer to this question will necessarily rely upon, and at the same time likely be a large part of, the answers to many of the other questions above.