A quantum switch is a setup in which a state in superposition can give rise to an indefinite ordering of events. In the quantum switch, the order in which two quantum operations A and B (considered as "black box" operations) are performed on a target system is coherently controlled by a control quantum system. This can also be seen as a particular case of a "superposition of time evolutions".
An experiment is set up so that we have a particle in an initial state |ψ⟩ passing through a system of gates. The first of these is the control qubit, which determines which path the particle takes through the system. Then we have two operators, A and B, which change the state of the particle in a well-defined way. The two operations don't commute with each other, so AB is not the same as BA. If the control qubit is in one state, |0⟩, then A happens before B. If it is in the other state, |1⟩, then B occurs first, and then A. Finally, we have the detector. So for the qubit in the |0⟩ state, the particle's final state will be BA|ψ⟩, while if the qubit is in the state |1⟩, then the final state will be AB|ψ⟩. These differences can be measured, and thus the order in which the two events occurred can be deduced.
However, if the qubit is in a superposition, (|0⟩ + |1⟩)/√2, then the final state will also be in a superposition, (|0⟩ ⊗ BA|ψ⟩ + |1⟩ ⊗ AB|ψ⟩)/√2. This means that one cannot tell which of the two events occurred first.
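As a minimal numerical sketch of this calculation, one can use the Pauli Z and X matrices as hypothetical stand-ins for the two non-commuting black-box operations A and B (the real operations in the experiment are different; these are chosen only because they are simple and do not commute):

```python
import numpy as np

# Hypothetical stand-ins for the non-commuting "black box" operations:
A = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
B = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X

psi = np.array([1, 0], dtype=complex)            # target system state |0>

# A and B do not commute, so the two orderings act differently.
assert not np.allclose(A @ B, B @ A)

out_A_first = B @ A @ psi    # control |0>: A acts before B  ->  BA|psi>
out_B_first = A @ B @ psi    # control |1>: B acts before A  ->  AB|psi>

# Control qubit in the superposition (|0> + |1>)/sqrt(2): the joint
# final state is a superposition of the two orderings of events.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
final = (np.kron(ket0, out_A_first) + np.kron(ket1, out_B_first)) / np.sqrt(2)
print(final)
```

Because the two orderings give different target states, the final joint state is entangled between the control and the target, and no measurement on the target alone can reveal which ordering "happened".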
This runs against our usual intuition that events happen in a fixed causal order. In particular, it seems to pose a serious difficulty for Aristotelian metaphysics, which relies on a definite idea of causality.
For example, the experimental setup described here sends a photon through a polarising beam splitter. The photon is then routed through either operator A or operator B, depending on the polarisation of the beam. The operators A and B only affect the transverse spatial mode, and thus leave the polarisation unchanged. The two paths are recombined via another beam splitter, reflected via mirrors to pass through the original beam splitter again, and then through the opposite operator and finally into a detector. So although the polarisation state might be unaffected by the beam splitter, the photon's momentum is not. The initial beam splitter can change the state of the incoming particle.
When discussing causality, we have to be particularly careful about which form of causality is under discussion. The article begins by stating:
In daily experience, it is natural to think of events happening in a fixed causal order.
However, my version of Aristotelian causality does not state that events have causes, or that substances are caused by events, but that substances are the efficient cause of other substances. I fully accept that (the more modern notion of) event causality is ruled out by quantum physics. However, that is an argument for Aristotelian causality, since it removes a rival vision of causality. Equally, I don't think it necessary that we know what the efficient causes are of each particular substance. All we have to be certain of is that it does come from a chain of efficient causes; but which chain may be impossible for us to determine.
There is a key way in which I differ from most Aristotelians, and that is in how we ought to parametrise uncertainty. First of all, I believe that this should be done mathematically (which Aristotle himself would have shuddered at). But there is another difference as well. All uncertainty is conditional; that is to say, it is calculated based on certain assumptions. For a physical system, these might include the initial state of the system and the nature of the laws of physics. In entanglement experiments, they might also include the results of measurements on one of the two particles, and we are left to predict the result of the other measurement given what we know. The goal is to express how certain we are that each of the possible final states could occur. The standard measure of uncertainty is probability: a real number between zero and one, with zero meaning that the outcome definitely will not happen, and one meaning that it definitely will. There are other assumptions behind probability theory. Frequency distributions are governed by the same assumptions, which means that probabilities can be used to directly predict frequencies; this gives a nice link between theory (which computes probabilities) and experiment (which measures frequencies).
Among the assumptions of probability theory is that each possible outcome of the system can be expressed in terms of irreducible basis states. One can combine these basis states (i.e. the final state is either one basis state or another), but one cannot split them any further. There is, in short, a fundamental layer to reality. Probability theory further assumes that these basis states don't overlap with each other. However, this is problematic when we come to quantum physics. The outcomes of quantum physics experiments can also be expressed in terms of irreducible basis states (or Aristotelian potentia). But there is no unique basis, meaning that the states can overlap with each other. This means that probability is not valid as a measure of uncertainty in quantum physics. Instead, we need some other measure of uncertainty, one which does allow the irreducible states to overlap, and this is the quantum amplitude. Quantum amplitudes are strange, in that they allow cancellations between different amplitudes. This leads to many unintuitive predictions. But nonetheless, those predictions match experiment.
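The cancellation between amplitudes can be seen in a toy two-path calculation (the numbers here are purely illustrative, not tied to any particular experiment):

```python
import numpy as np

# Two indistinguishable paths with equal-magnitude amplitudes of
# opposite phase (illustrative values only).
amp1 = 1 / np.sqrt(2)
amp2 = -1 / np.sqrt(2)

# If the paths were governed by probabilities, the chances would add:
p_classical = abs(amp1)**2 + abs(amp2)**2    # 0.5 + 0.5 = 1.0

# With amplitudes, we add first and only then square, so the two
# contributions can cancel each other out entirely:
p_quantum = abs(amp1 + amp2)**2              # |0|^2 = 0.0

print(p_classical, p_quantum)
```

A probability calculation would say the outcome is certain; the amplitude calculation says it never happens. This is exactly the kind of unintuitive prediction that interference experiments confirm.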
So now we know how to parametrise uncertainty; we next have to ask what we are uncertain about. The question is "What is the likelihood of this particular outcome given that I know these particular facts about the system?" What we are interested in is predicting the results of experiments, and that is what calculations in quantum physics are designed to do, and they do it very well. What they don't tell us are the details of what happens between measurements. That doesn't mean that nothing happens then, but only that the framework is not sufficient to tell us. Or at least, the framework presents us with a (possibly small) number of options, but cannot select between them. But, as physicists, that doesn't really matter, since what we want to do is to use theory to predict the results of experiment, and that is done very well. Nor is it of special interest to the metaphysicist. The metaphysicist is interested in general principles rather than finer details. The theory reduces the possible paths taken by the real system to a particular class of solutions. The metaphysicist asks "Why is this class of solutions possible and not that one?" From that they draw out the general principles.
Perhaps we know the initial state of our experimental setup, and we need to predict the likelihood of different outcomes which we haven't yet measured. But all we know when we complete the experiment is what the initial state was and what the final state is. If there are different ways in which the system can move from the initial state to the final state (and in a quantum system, there are always many different possible paths from one to the other), we cannot know which one was taken by the system in practice. All we can do is express how likely each option is as a quantum amplitude. Our ignorance does not constrain reality.
The amplitude thus plays two distinct roles in quantum physics. The first is to express that systems are distinguished by being in a particular state in a particular basis, but that these bases aren't orthogonal to each other. This means that for every quantum state, many properties of the system will be indeterminate. (Substances are defined by states rather than properties or predicates.) Each property is only well defined for states in a single basis, but the particle need not be in that basis. The second role the amplitude plays is as a parametrisation of our lack of knowledge of the system. In this role, it is purely epistemic, used to predict the likelihoods of the outcomes of experiments given a particular set of initial knowledge. The theories refuted by (for example) Bell's theorem assume that the hidden variables are classical, with uncertainty parametrised by a probability. However, if uncertainty is instead parametrised by an amplitude, then we recover the standard results of quantum physics.
There are thus two different types of superposition. The first is exemplified by a photon in a superposition between two polarisation states. This superposition is in itself a well-defined state in a given basis. Thus rather than saying that the photon is in an undefined polarisation, we should say that it is in a definite state in which the property we want to measure is undefined. When we measure the polarisation, we force the particle into a different basis where the observable is well defined. Which state it selects is governed by the amplitude that describes the overlap between the initial and final states. This type of superposition is inherent in nature; it is a representation of the actual physical state, independent of us as observers. The second type of superposition arises from our lack of knowledge of the internals of a quantum system. We measure the initial state and the final state, but can't know how it goes from one to the other. This occurs, for example, when the particle can travel down two different routes in space, but we simply don't know which one it took. The difference between quantum physics and a classical system is that we have to parametrise this uncertainty by an amplitude rather than a probability. These two types of superposition, one physical and the other epistemic, are described by the same mathematics. (Of course, we can also be uncertain about a particle's polarisation state.)
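The first type of superposition can be put in concrete terms with a short calculation: a diagonally polarised photon is a definite state in its own basis, yet the horizontal/vertical outcome is indeterminate, with the likelihood of each outcome given by the overlap amplitude:

```python
import numpy as np

# Diagonal polarisation: a definite state in the diagonal basis, for
# which horizontal/vertical polarisation is indeterminate.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Basis states of the observable we choose to measure (H and V).
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

# The likelihood of each outcome is the squared magnitude of the
# overlap (amplitude) between the state and the basis state.
p_H = abs(np.vdot(H, plus))**2
p_V = abs(np.vdot(V, plus))**2
print(p_H, p_V)   # the two outcomes are equally likely
```

Nothing here reflects ignorance of a pre-existing H or V value; the state itself is perfectly definite, and the indeterminacy attaches only to the property being measured.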
In this particular example, we make use of both types of superposition. The particle goes through one route through the system, but we cannot know which one. This gives us a superposition of the second kind (our knowledge rather than something physical). However, what we are measuring are polarisation states, which form a superposition of the first kind (something physical rather than our knowledge). So we have to write the amplitude for each possible path, and add the amplitudes together. At the end of the experiment, we still cannot determine which of the two paths the particle took, so the final result is still written as a state of superposition, meaning that that particular property is indeterminate.
Passing through the qubit does not leave the complete state of the particle unchanged, but only the aspect of it that is being measured.
Thus we have a photon entering in a particular state. The beam splitter throws it into a (knowledge) superposition state. So the efficient cause of the particle's state, if it goes down the first branch, would be the incoming particle combined with whatever interacted with it as it passed through the beam splitter. It then encounters process A, and the efficient cause of the result is the particle that came into A combined with whatever it was that the particle interacted with as it passed through A. And so on. So each state along the path has a definite efficient cause. Similarly, if the particle went down the other branch, then each state would again have a definite efficient cause. So there is always a definite series of efficient causes linking the initial state and the final state. However, we don't know, and can't measure, which set of efficient causes happened in reality. So we have to write the final state of the system as a (knowledge) superposition of the two possibilities. But reality is not undermined by our lack of knowledge. In reality, there is always an efficient cause, even if we cannot know what it is.
So experiments of this sort don't undermine Aristotelian efficient causality, at least not my own form of it. They undermine other visions of causality, but not this one.