Looking at quantum gravity in a mirror

Einstein’s theory of gravity and quantum physics are expected to merge at the Planck-scale of extremely high energies and extremely short distances. At this scale, new phenomena could arise. However, the Planck-scale is so remote from current experimental capabilities that tests of quantum gravity are widely believed to be nearly impossible. Now an international collaboration between the groups of Prof. C. Brukner and Prof. M. Aspelmeyer at the University of Vienna and Prof. M. S. Kim at Imperial College London has proposed a new quantum experiment using Planck-mass mirrors. Such an experiment could test certain predictions made by quantum gravity proposals in the laboratory. The findings will be published this week in “Nature Physics”.

A long-standing challenge
The search for a theory that unifies quantum mechanics with Einstein’s theory of gravity is one of the main challenges in modern physics. Quantum mechanics describes effects at the scale of single particles, atoms and molecules. Einstein’s theory of gravity, on the other hand, is typically relevant for large masses. It is widely expected that phenomena stemming from a unified theory of quantum gravity will become evident only at the so-called Planck-scale of extremely high energies or extremely small distances. The Planck-length is 1.6 × 10⁻³⁵ meters: this is so small that if one were to take this scale to be 1 meter, then an atom would be as large as the entire visible Universe! Similarly, the Planck-energy is so large that even the Large Hadron Collider at CERN reaches only an insignificantly tiny fraction of it, and a particle accelerator would need to be of astronomical size to get even close to the Planck-energy. This scale is also described by the Planck-mass: a piece of dust weighs about that much, which is truly heavy compared to single atoms, and quantum phenomena are typically considered unobservable for such masses. The Planck-scale is therefore so remote from current experimental capabilities that tests of quantum gravity proposals are widely believed to be nearly impossible. However, physicists have now found a way to probe some predictions of quantum gravity proposals in the laboratory by looking at quantum effects in Planck-mass quantum systems.
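For readers who want the numbers, the Planck units quoted above follow directly from the fundamental constants. A quick sketch in Python (standard CODATA constant values; the LHC figure is its design collision energy of about 14 TeV, used here only to set the scale):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
eV = 1.602176634e-19     # joules per electron-volt

l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
m_p = math.sqrt(hbar * c / G)      # Planck mass, ~2.2e-8 kg (~22 micrograms)
E_p = m_p * c**2                   # Planck energy, ~2.0e9 J

E_p_GeV = E_p / (1e9 * eV)         # ~1.2e19 GeV
lhc_GeV = 1.4e4                    # LHC design collision energy, ~14 TeV

print(f"Planck length: {l_p:.2e} m")
print(f"Planck mass:   {m_p:.2e} kg")
print(f"LHC reaches only ~{lhc_GeV / E_p_GeV:.0e} of the Planck energy")
```

The last line makes the article’s point concrete: the LHC falls short of the Planck energy by roughly fifteen orders of magnitude, while the Planck mass sits comfortably within the reach of laboratory mechanics.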

The sequence makes the difference
In quantum mechanics it is impossible to know where a particle is and how fast it is moving at the same time. Nevertheless, it is possible to make two subsequent measurements: a measurement of the particle’s position followed by a measurement of its momentum, or vice versa. In quantum physics the two different measurement sequences produce different experimental results. According to many theories of quantum gravity, this difference would be altered depending on the mass of the system, since the Planck-length puts a fundamental limit on measurements of distances. The team of physicists has now shown that although such modifications would be very small, they could be verified by using very massive quantum systems in the laboratory. Such an experiment could therefore test some of the proposals for quantum gravity.
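The order-dependence described above is the canonical commutation relation [x, p] = iħ. It can be checked numerically in a truncated harmonic-oscillator (Fock) basis; this is a sketch with ħ = 1 and an arbitrarily chosen dimension of 20, where the relation holds exactly away from the truncation edge:

```python
import numpy as np

N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)           # position quadrature
p = 1j * (a.conj().T - a) / np.sqrt(2)      # momentum quadrature

# Applying x then p differs from p then x:
comm = x @ p - p @ x

# Away from the truncation edge, [x, p] = i * identity
print(np.allclose(comm[:N-1, :N-1], 1j * np.eye(N - 1)))  # True
```

It is exactly this commutator that quantum-gravity proposals modify at the Planck scale, which is why an experiment sensitive to the measurement order can discriminate between them.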

Probing new theories with moving mirrors
The main idea is to use a laser pulse that interacts four times with a moving mirror to probe exactly the difference between the two measurement sequences: measuring position after momentum as compared to measuring momentum after position. By timing and engineering the interactions very precisely, the team has shown that it is possible to map the effect onto the laser pulse and to read it out with quantum optical techniques. “Any deviation from the expected quantum mechanical result would be very exciting”, says Igor Pikovski, the lead author of the work, “but even if no deviation is observed, the results can still help in the search for possible new theories”. Some theoretical approaches to quantum gravity indeed predict different outcomes for the experiment. The scientists thus show how to probe these as-yet unexplored theories in a laboratory without using high-energy particle accelerators and without relying on rare astrophysical events.
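The four-pulse sequence can be written compactly. This is a schematic sketch with ħ = 1, where λ stands for an effective optomechanical coupling strength (in the actual proposal it depends on the photon number); signs and mode conventions vary between presentations of such schemes:

```latex
% For operators A and B whose commutator is a c-number, the
% Baker–Campbell–Hausdorff formula gives
%   e^{A} e^{B} e^{-A} e^{-B} = e^{[A,B]} .
% With A = i\lambda X_m and B = i\lambda P_m (the mirror's position and
% momentum, addressed in turn by the four pulses), the sequence implements
\xi
  = e^{i\lambda X_m}\, e^{i\lambda P_m}\, e^{-i\lambda X_m}\, e^{-i\lambda P_m}
  = e^{-\lambda^{2}\,[X_m,\,P_m]}
  = e^{-i\lambda^{2}} ,
% so the optical phase depends directly on the commutator [X_m, P_m] = i,
% and any Planck-scale modification of that commutator would shift the phase.
```

Reading out this phase with quantum optical techniques is what turns the mirror into a probe of the commutator itself.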

For more details, see “Probing Planck-scale physics with quantum optics”, Nature Physics 8, 393 (2012), and the news pages of C. Brukner’s group and M. Aspelmeyer’s group.


Commentary: Too many authors, too few creators (Physics Today)


A few years ago, Robert Fefferman, dean of physical sciences at the University of Chicago, made an interesting remark. He mentioned that Enrico Fermi, wanting to encourage individual creativity and innovation, required his PhD students to select their problem, solve it, and submit the results for publication in their name alone. Fermi also was aware that a multiauthor paper with one famous author might receive automatic acceptance rather than a thoughtful and thorough review. Many PhD students then and since have published their theses under joint authorship with their advisers. Unfortunately, the need among grant-seeking academics to publish and be cited often grew stronger, especially during federal funding cutbacks, the most recent example being the cuts in science budgets under President George W. Bush. When applying for government grants, an applicant team’s record of many cited publications was important to confirm that the submitted proposal had significant cachet for continuing support. A vicious cycle began.
Over the years, publication lists were increasing. Some colleagues boasted more than 300 publications and one close to 800! The number of authors associated with each published article was also increasing; single-author papers had become relatively rare. Were Albert Einstein, Enrico Fermi, Richard Feynman, and other great scientists just lucky in finding simple ideas that one mind could understand and present? Or was technological creativity becoming so difficult that great teams of scientists were required to recognize and develop it?
Seeking answers, I made a cursory examination of some publication records of the last half century. I selected the eight publications listed in the table to the right and selected the first issue of each from January 1965 and from January 2011. To compare innovation over time, I included data on the first 100 patents issued to US applicants by the US Patent and Trademark Office during the corresponding periods. The data were gleaned from the office’s weekly Official Gazette.
The results seem truly astonishing. Although the data sets selected are relatively small, they show the downward trend of individual creativity. Most of the papers studied were written by authors in, or associated with, academia. A few came from government laboratories and some from industry.
It has long been evident that some professors established groups of graduate students whose main activities were often focused on publishing research results. Authorship of such articles expanded to include all members of the group, even those whose contributions were peripheral or negligible and would historically have been credited in an acknowledgments section. Over the years, nonacademic groups, especially in the life and pharmaceutical sciences, added to the publication proliferation; R&D directors often put their names on every paper leaving their lab. A person even slightly involved with a project would be added as an author. What began as an innovative topic of investigation became an opportunity to be published and thereby increase one’s personal citation numbers. Thus participants who simply made measurements, or converted the measurements into appropriate numbers, or kept the equipment operating were all listed as purportedly creative coauthors. What was actually the creativity of one or two authors became the work of a great many. And each such paper carried the name of the professor whose contract or grant paid for the work. Thus if the paper turned out to be important, the multitude of authors could add impressively to their CVs.
On occasion there even may have been a sinister element to the process of adding authors. For example, the author list might include a friend or colleague of the lead scientist, an indirect financial supporter, or a contractor’s technical representative. No author would ever discuss this matter publicly.
Many problems of irrelevant authorship arise with the journals themselves. Although all journals provide manuscript preparation guidelines that include some type of warning against “double publishing”—that is, repeating significant portions in another paper—only a few ask the contact author to confirm that the listed authors all contributed to the paper. Both Science and Cell, for example, do ask that all authors of an accepted paper “state their contribution to the paper,” but they do not list any criteria for actual authorship, nor whether specific types of contributors should be relegated to an acknowledgement section.
A friend of mine, a former Bell Labs physicist, defended the inclusion of his name to the end of the author queue of each paper published by his students though many of the ideas were entirely his. His reasoning was that “the graduate student should always have top billing so that his career can be advanced.” Each author’s personal list of “first author” publications was certainly increased by my friend’s unselfish generosity. It remained up to the reader to figure out whose ideas were actually being presented.
Whereas in former days, a PhD candidate during graduate school would prepare only a single paper based entirely on his or her work, the trend today is to leave graduate school with a raft of publications, considered essential for a job or postdoctoral appointment. Unfortunately, the time spent getting published often seems to be at the expense of obtaining the greater in-depth knowledge of the science itself. In the hundreds of interviews and CV reviews I have conducted over the past 25 years, I have found the presence of the basic building blocks of the science decreasing with each passing year. When a recent PhD in a physical science said that helium formed diatomic molecules, I knew we were in trouble!
The patent data shown in the table are of particular interest. The percentages for two or three inventors per invention for the most recently issued patents do not vary greatly from the percentages for 46 years earlier. Here’s why: If a listed inventor, or “innovator,” did not actually contribute to the invention, the issued patent will be void if such deception is ever discovered. The patents most easily challenged in court may well be those with extraordinary numbers of inventors.
Here’s a final Fermi-inspired question: How many of today’s tenured faculty members or research directors have never written a single-author paper?


Quantum Cheshire Cat: Even Weirder Than Schrödinger’s


Just when you thought you’d heard every quantum mystery that was possible, out pops another one. Jeff Tollaksen mentioned it in passing during his talk at the recent Foundation Questions Institute conference. Probably Tollaksen assumed we’d all heard it before. After all, his graduate advisor, Yakir Aharonov—who has made an illustrious career of poking the Schrödinger equation to see what wild beasts come scurrying out—first discovered it in the 1990s and discussed it in chapter 17 of his 2005 book, Quantum Paradoxes. But it was new to me.

The situation is an elaboration of Schrödinger’s thought experiment. You have a cat. It is either purring or meowing. It is curled up in one of two boxes. As in Schrödinger’s scenario, you couple the cat to some quantum system, like a radioactive atom, to make its condition ambiguous—a superposition of all possibilities—until you examine one of the boxes. If you reach into box 2, you feel the cat. If you listen to the boxes, you hear purring. But when you listen more closely, you notice that the purring is coming from box 1. The cat is in one box, the purring in the other. Like a Cheshire Cat, the animal has become separated from the properties that constitute a cat. What a cat does and what a cat is no longer coincide.

In practice, you’d pull this stunt on an electron rather than a cat. You’d find the electron in one box, its spin in the other. Even by the standards of quantum mechanics, this is surprising. It requires what quantum physicists call “weak measurement,” whereby you interact with a system so gently that you avoid collapsing it from a quantum state to a classical one. On the face of it, such an interaction scarcely qualifies as a measurement; any results get lost in the noise of Heisenberg’s Uncertainty Principle. What Aharonov realized is that, if you sift through the results, you can find patterns buried within them.

In practice, this means repeating the experiment on a large number of electrons (or cats) and then applying a filter or “postselection.” Only a few particles will pass through this filter, and among them, the result of the softly-softly measurement will stand out.

Because you avoid collapsing the quantum state, quintessentially quantum phenomena such as wave interference still occur. So, for a Cheshire Cat, you apply the following filter: you change the sign of one term in the superposition, causing the location and spin of the electron to interfere constructively in one box and destructively in the other, zeroing out the probability of finding the electron in box 1 and zeroing out the net spin of the electron in box 2. Voilà, the electron is in box 2 and its spin in box 1.
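The arithmetic behind this filter fits in a few lines of numpy. The pre- and post-selected states below are an illustrative choice (the published experiment uses its own conventions): computing weak values ⟨φ|A|ψ⟩/⟨φ|ψ⟩ puts the photon entirely in one box and its circular polarization entirely in the other.

```python
import numpy as np

# Basis: arm ⊗ polarization. Arms L, R; linear polarizations H, V.
L = np.array([1, 0], dtype=complex); R = np.array([0, 1], dtype=complex)
H = np.array([1, 0], dtype=complex); V = np.array([0, 1], dtype=complex)

# Pre-selected state |psi> = (i|L> + |R>)|H> / sqrt(2)
psi = np.kron(1j * L + R, H) / np.sqrt(2)
# Post-selected state |phi> = (|L>|H> + |R>|V>) / sqrt(2)
phi = (np.kron(L, H) + np.kron(R, V)) / np.sqrt(2)

# Projectors onto each arm, and the circular-polarization operator
# (sigma_y in the H/V basis), localized to each arm.
Pi_L = np.kron(np.outer(L, L.conj()), np.eye(2))
Pi_R = np.kron(np.outer(R, R.conj()), np.eye(2))
sigma = np.array([[0, -1j], [1j, 0]])
sig_L = np.kron(np.outer(L, L.conj()), sigma)
sig_R = np.kron(np.outer(R, R.conj()), sigma)

def weak_value(A, psi, phi):
    """Weak value <phi|A|psi> / <phi|psi>."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

# Photon weak value: 1 in L, 0 in R. Polarization: 0 in L, 1 in R.
for name, A in [("photon in L", Pi_L), ("photon in R", Pi_R),
                ("polarization in L", sig_L), ("polarization in R", sig_R)]:
    print(f"{name}: {weak_value(A, psi, phi).real:.0f}")
```

The cat (photon) sits in the left box while its grin (circular polarization) sits in the right one, exactly the separation described above.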

If this leaves your head spinning, it should. The word “weak” describes not only the measurement but also my intuitive grasp of what’s really going on. The best I can do is recommend the article on weak measurement by Aharonov, Tollaksen, and Sandu Popescu in last November’s Physics Today, but be prepared to read it several times before you have the slightest idea of what they’re saying. I’ve commissioned an article about Aharonov’s work for an upcoming issue of Scientific American to collapse some of the uncertainty. In the meantime, try sitting in a different room from where your confusion is.

For more, see “Quantum Cheshire Cats”, arXiv:1202.0631:
“In this paper we present a quantum Cheshire cat. In a pre- and post-selected experiment we find the cat in one place, and the smile in another. The cat is a photon, while the smile is its circular polarisation.”

Quantum theorem shakes foundations

The wavefunction is a real physical object after all, say researchers.

Eugenie Samuel Reich
17 November 2011


At the heart of the weirdness for which the field of quantum mechanics is famous is the wavefunction, a powerful but mysterious entity that is used to determine the probabilities that quantum particles will have certain properties. Now, a preprint posted online on 14 November [1] reopens the question of what the wavefunction represents — with an answer that could rock quantum theory to its core. Whereas many physicists have generally interpreted the wavefunction as a statistical tool that reflects our ignorance of the particles being measured, the authors of the latest paper argue that, instead, it is physically real.

“I don’t like to sound hyperbolic, but I think the word ‘seismic’ is likely to apply to this paper,” says Antony Valentini, a theoretical physicist specializing in quantum foundations at Clemson University in South Carolina.

Valentini believes that this result may be the most important general theorem relating to the foundations of quantum mechanics since Bell’s theorem, the 1964 result in which Northern Irish physicist John Stewart Bell proved that if quantum mechanics describes real entities, it has to include mysterious “action at a distance”.

Action at a distance occurs when pairs of quantum particles interact in such a way that they become entangled. But the new paper, by a trio of physicists led by Matthew Pusey at Imperial College London, presents a theorem showing that if a quantum wavefunction were purely a statistical tool, then even quantum states that are unconnected across space and time would be able to communicate with each other. As that seems very unlikely to be true, the researchers conclude that the wavefunction must be physically real after all.

David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. “This strips away obscurity and shows you can’t have an interpretation of a quantum state as probabilistic,” he says.

Historical debate

The debate over how to understand the wavefunction goes back to the 1920s. In the ‘Copenhagen interpretation’ pioneered by Danish physicist Niels Bohr, the wavefunction was considered a computational tool: it gave correct results when used to calculate the probability of particles having various properties, but physicists were encouraged not to look for a deeper explanation of what the wavefunction is.

Albert Einstein also favoured a statistical interpretation of the wavefunction, although he thought that there had to be some other as-yet-unknown underlying reality. But others, such as Austrian physicist Erwin Schrödinger, considered the wavefunction, at least initially, to be a real physical object.

The Copenhagen interpretation later fell out of popularity, but the idea that the wavefunction reflects what we can know about the world, rather than physical reality, has come back into vogue in the past 15 years with the rise of quantum information theory, Valentini says.

Rudolph and his colleagues may put a stop to that trend. Their theorem effectively says that individual quantum systems must “know” exactly what state they have been prepared in, or the results of measurements on them would lead to results at odds with quantum mechanics. They declined to comment while their preprint is undergoing the journal-submission process, but say in their paper that their finding is similar to the notion that an individual coin being flipped in a biased way — for example, so that it comes up ‘heads’ six out of ten times — has the intrinsic, physical property of being biased, in contrast to the idea that the bias is simply a statistical property of many coin-flip outcomes.

Quantum information

Robert Spekkens, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, who has favoured a statistical interpretation of the wavefunction, says that Pusey’s theorem is correct and a “fantastic” result, but that he disagrees about what conclusion should be drawn from it. He favours an interpretation in which all quantum states, including non-entangled ones, are related after all.

Spekkens adds that he does expect the theorem to have broader consequences for physics, as have Bell’s and other fundamental theorems. No one foresaw in 1964 that Bell’s theorem would sow the seeds for quantum information theory and quantum cryptography — both of which rely on phenomena that aren’t possible in classical physics. Spekkens thinks this theorem may ultimately have a similar impact. “It’s very important and beautiful in its simplicity,” he says.

[1] Pusey, M. F., Barrett, J. & Rudolph, T. Preprint at arXiv:1111.3328 (2011).
