Tony's Home

 

Dolphins may be mental/spiritual beings, with each sub-brain acting as a quantum computer, and the two of them together interacting, through interference, to give Dolphins a perspective view of the Many-Worlds by acting like a device designed by Andrew Gray:

 

In the Many-Worlds Quantum Theory of Andrew Gray, in quant-ph/9804008 and quant-ph/9712037, entire Cosmic Histories are Selected over all space and time, with a probability for selection assigned to each possible history. Each entire Cosmic History is selected by calculating both the product of the probabilities for each step in that history and the product of the interference factors, which measure interference with other possible histories, at each time. It is the interference factor that makes the theory intrinsically non-causal at the microscopic level. Gray views things from the point of view of a Massless Lightcone life form that perceives the whole of its space-time world line, as distinct from the local perspective of Material life forms such as humans.

Gray's work is not merely theoretical. He presents, in quant-ph/9804008, a plan for a device that would effectively enable one to see into the future, or perhaps modify the future. As Michael Gibbs has noted, how easily the device can actually be constructed depends, among other things, on how easily a down-converter can be built that has the desired properties; perhaps, he suggests, a down-converter that can maintain coherence could be built of layers no thicker than the photon wavelength.
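Gray's selection rule can be sketched numerically: the weight of an entire history is the product of its per-step branching probabilities times the product of its interference factors at each time. This is only a minimal illustrative sketch, not Gray's actual formalism, and the numbers below are made up:

```python
from math import prod

def history_weight(step_probs, interference_factors):
    """Weight used to select one entire Cosmic History:
    (product of branching probabilities) x (product of interference factors)."""
    return prod(step_probs) * prod(interference_factors)

# Illustrative history: two 50/50 branch points, one certain step,
# and no interference suppression (all factors 1).
w = history_weight([0.5, 0.5, 1.0], [1.0, 1.0, 1.0])
print(w)  # 0.25

# A history whose interference factor is 0 at some time is never selected,
# no matter how probable its individual steps are.
print(history_weight([0.5, 0.5, 1.0], [1.0, 0.0, 1.0]))  # 0.0
```

Note that a single zero interference factor kills an entire history, which is exactly what makes the scheme non-causal: a factor evaluated at a late time suppresses the whole history, early steps included.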

In Gray's device, a source emits a primary photon, which meets a beam splitter. Each route takes the photon into a down-converter, which outputs two photons of half the frequency of the primary photon. One route out of each down-converter sends a photon into the interference apparatus, which consists of a beam splitter that each route will hit, and a pair of detectors labelled A and B. This photon is called the interference photon, as it may interfere with itself. The other route out of each down-converter sends a photon into a delayer, which is any device that can lengthen the flight time of the photon and thus delay the time before it ultimately reaches a detector. In practice it could be a set of mirrors, a coil of fibre-optic cable, or some device that can store a photon for a while before releasing it again. After the photon has been delayed for the desired time, it is released to continue on its journey. This photon is called the measurement photon, as it may be used to measure which route was taken by the primary photon. The measurement photon is also reflected back towards the interference apparatus before anything is done with it, so as to ensure that there is a timelike separation between the relevant events that occur to the two photons.

If both possible paths of the measurement photon are made to converge again at a screen, there will be interference between the two histories corresponding to each route the primary photon can take at the first beam splitter. If all the path lengths are correctly set, it can be arranged that in this case the probability for the interference photon to arrive at detector A is zero, and the probability for it to arrive at detector B is one.

On the other hand, we could place a blocker in the path of one of the routes the measurement photon can take. In this case the histories in which the primary photon goes one way will not reconverge with those in which it goes the other, and there will thus be no interference between the two; this results in the interference photon having equal probability of arriving at A or B. We could potentially send several photons into this apparatus one after the other, and have all of the resultant interference photons arrive at the interference apparatus before the first of the measurement photons arrives at the measurement apparatus. In this case we could see whether there was interference, with an arbitrarily large set of photons, before it is decided whether or not to observe which route each primary photon took. We would thus know in advance the future state of the blocker.

Here are the Possible Histories in the Time Machine:

History    Detector Reading (A/B)    Actual Blocker State
  1             Blocker On                  On
  2             Blocker On                  Off
  3             Blocker Off                 On
  4             Blocker Off                 Off

To calculate the total probability for each of these histories we must calculate the probabilities for the possible branchings within each history, and also any interference factors. We shall consider the general case where n primary photons are used.

Here are the Probabilities for the Histories:

History    Branching Prob.      Branching Prob.    Interference       Total
          (detector reading)    (blocker state)       Factor       Probability
  1          1 - 2^(-n)              Pb                 1         (1 - 2^(-n)) Pb
  2          1 - 2^(-n)             1 - Pb              0                0
  3           2^(-n)                 Pb                 1            2^(-n) Pb
  4           2^(-n)                1 - Pb             2^n             1 - Pb

In the case of Histories 1 and 4 the machine has correctly predicted the future; in cases 2 and 3 it has got it wrong. By adding the probabilities for Histories 1 and 4, namely (1 - 2^(-n)) Pb + (1 - Pb), we find that the accuracy of the machine is 1 - 2^(-n) Pb.

If the blocker is equally likely to be on or off, Pb = 1/2 and the accuracy becomes 1 - 2^(-(n+1)), which corresponds to an error rate of 2^(-(n+1)). In the worst case, with Pb = 1, the error rate is still very small at 2^(-n). So, as we can see, the error rate of the time machine halves with each extra photon used, and the machine can in principle be made as accurate, and see as far into the future, as one desires.
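The table of history probabilities can be checked numerically. This is a minimal sketch assuming the branching probabilities and interference factors given in the table above; the function names are mine, not Gray's:

```python
from math import isclose

def history_probabilities(n, pb):
    """Total probabilities for the four histories, for n primary photons
    and prior probability pb that the blocker is on."""
    h1 = (1 - 2**-n) * pb   # reads "on",  blocker actually on   (correct)
    h2 = 0.0                # reads "on",  blocker actually off  (killed by interference)
    h3 = 2**-n * pb         # reads "off", blocker actually on   (wrong)
    h4 = 1 - pb             # reads "off", blocker actually off  (correct)
    return h1, h2, h3, h4

def accuracy(n, pb):
    """Probability of a correct prediction: Histories 1 and 4."""
    h1, _, _, h4 = history_probabilities(n, pb)
    return h1 + h4          # equals 1 - 2**-n * pb

# The four histories are properly normalized for any n and pb:
for n in (1, 5, 10):
    assert isclose(sum(history_probabilities(n, 0.5)), 1.0)

print(accuracy(10, 0.5))    # 0.99951171875, i.e. 1 - 2**-11
```

The error rate 2**-n * pb halves with each extra photon, matching the worst-case error of 2^(-n) quoted above for Pb = 1.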


Here is an analysis of Gray's machine by Jack Sarfatti:

In the diagram above, label the paths by A1 = (13458), A2 = (13468), A3 = (2759), A4 = (2769).

As Jack Sarfatti notes, in Gray's Entire Cosmic History Quantum Theory the amplitudes are context-dependent, so you must distinguish amplitudes for blocker "on" and "off" for the future delayed choices. For n = 1 EPR pair from the down-converters, you have:

A1(on) = A2(on) = 0

A3(on) = A4(on) = (1/2)^(1/2)

so that p(A)on = p(B)on = 1/2 at the receiver interferometer in the past.

For the sending blocker "off"

A1(off) = - A3(off)

A2(off) = A4(off) = 1/2

so that

p(A)off = |A1(off) + A3(off)|^2 = 0

p(B)off = |A2(off) + A4(off)|^2 = 1
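Sarfatti's n = 1 amplitude bookkeeping can be checked directly. This sketch simply takes the amplitudes as given above (taking A1(off) = -A3(off) with magnitude 1/2 as one consistent choice of sign) and squares the coherent sums:

```python
from math import sqrt

# Blocker "on": which-path information survives, so the paths do not
# interfere and each detector amplitude has magnitude (1/2)^(1/2).
A3_on = A4_on = sqrt(0.5)
p_A_on = abs(A3_on)**2            # approx 1/2
p_B_on = abs(A4_on)**2            # approx 1/2

# Blocker "off": the paths reconverge and the amplitudes add coherently.
A1_off, A3_off = -0.5, 0.5        # A1(off) = -A3(off)
A2_off = A4_off = 0.5
p_A_off = abs(A1_off + A3_off)**2  # destructive: 0
p_B_off = abs(A2_off + A4_off)**2  # constructive: 1
```

So with the blocker off the interference photon always lands at B, while with the blocker on it is 50/50 between A and B, which is exactly the signal the receiver in the past is supposed to read.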

Now let's go to n = 2 EPR pairs from the down-converters. The probability of a correct "on" decoding of the message from the future when Pb = 1 (blocker on) is now 3/4. In all cases, the probability of a correct decoding when Pb = 0 (blocker off) is 1.

For n = 3 pairs, the probability of a correct "on" decoding is 7/8, and in general for n pairs it is 1 - 2^(-n).

A few days after doing the above analysis, Jack Sarfatti received an analysis by Nick Herbert, about which Jack Sarfatti said to Nick Herbert: "... I think you have shown Gray's idea will not work because it is not possible to get destructive interference at receiver [A] and constructive interference at receiver [B] for the interference photon for all screen detections ... for the measurement photon with blocker off. ... The main reason for this is that one needs to integrate over all possible places where the measurement photon can be absorbed on the screen in the future. The only way Gray's scheme could work would be if one could somehow squeeze all the measurement photons only into ... small piece of the screen ... where we have destructive interference for [A] and constructive interference for [B], but this seems to [violate] diffraction constraints? ..."

In my opinion, the analysis of Nick Herbert does not show that the Gray machine will not work, because I think that it might be possible either to use a small enough measurement screen without violating diffraction constraints, or to use a small curved screen (perhaps part of a Hyperboloid of Revolution) on which the interference pattern is constant over the entire screen.

 

