
Wednesday, 1 October 2014

Thermodynamics and Planetary Motion

In an earlier post, I discussed a possible derivation of Kepler's Second Law of Planetary Motion within a quantum gravity setting. Kepler's Second Law states that for a planet in orbit around the Sun under the influence of Newtonian gravity:
the line connecting the planet to the Sun sweeps out equal areas in equal time

The basic idea involves applying Haggard and Rovelli's result about the rate of evolution of a quantum mechanical system, according to which a system in equilibrium evolves in such a way as to pass through an equal number of states in equal time intervals. Quantum Gravity (of the Loop variety) tells us that the fundamental observable in a quantum theory of geometry is the area of a surface embedded within a given spacetime.

The area swept out by a planet during the course of its motion around the Sun is far greater than the basic quantum of area, $A \gg A_p$, where $A_p = l_p^2$ and $l_p$ is the Planck length. However, if classical geometry as described by (classical) general relativity arises from a more fundamental quantum theory - and consistency of any theory of quantum gravity would require this - then it is natural to assume that the macroscopic area $\delta A$ swept out by a planet in a time $\delta t$ emerges from an ensemble of Planck-area quanta. If one could argue that planetary motion corresponds to an "equilibrium" configuration of the gravitational field, then Haggard and Rovelli's result can be applied and Kepler's Second Law follows as a trivial consequence.
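To make the logic explicit, here is a minimal sketch of the argument, under the (unproven) assumption that the macroscopic swept area can be counted in units of the Planck area:
$$\frac{dN}{dt} = \nu = \mathrm{const.} \;\;(\text{equilibrium}), \qquad \delta A \simeq N A_p \quad\Longrightarrow\quad \frac{dA}{dt} \simeq \nu A_p = \mathrm{const.},$$
which is precisely the statement that equal areas are swept out in equal intervals of time.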

Saturday, 12 July 2014

Transcending Bad Sci-Fi

So I was watching "Transcendence", the latest cinematic attempt to generate public hysteria about Artificial Intelligence (AI), or more specifically about strong AI. I stopped watching around the 42 minute mark. Johnny Depp's remarkably good portrayal of a scientist dying a slow and painful death from Polonium poisoning had kept me watching despite the utter nonsense being thrown around in the name of Sci-Fi. Then I got to the point where one of a gang of anti-AI extremists says the following whopper:
If that thing [the AI computer armed with Johnny Depp's consciousness] connects to the internet, the first thing it will do is copy itself to every networked computer in the world and there will be no way to stop it.

Take a moment to ponder this sentence. First of all, if your script, centered on an AI gone rogue, hinges on whether or not said AI successfully copies itself to "every networked computer", then you should think of a different profession than scriptwriting. I mean, seriously? THAT'S the best AI-gone-rogue scenario you could come up with? Second, and more seriously, this sentence suggests that the makers of the film really didn't give a crap about actually understanding the promise and perils of strong AI. Even an AI-powered scriptwriting program could have come up with dozens of fantastic script ideas centered around the theme of a strong AI gone rogue which were not in violent conflict with our basic understanding of the subject.

The problem is with the notion that said AI, which is powered by "the most powerful quantum processors on the planet", could exist independently of the cutting-edge computing hardware on which it was originally conceived. The point is simply that all the examples of "strong" AI (or perhaps just "I") which exist in Nature involve some sort of fleshy matter built out of neurons and neurotransmitters. I am talking about brains, of course. Taking mammalian brains as working examples of biological machines that exhibit strong intelligence, the following observation is crucial:
The software does not have an existence independent of the hardware.

In other words, the "consciousness" aspect of the behavior of these strong-I machines cannot be simulated on a computer which is less complex than the brain-matter itself. Sure, one could always (in principle) take an imprint of the consciousness in brain 'X', store it on some digital memory and at some point in the future restore the consciousness by uploading the stored data into some other brain 'Y'. However, and this is crucial, while the imprint represents the individual aspects of brain 'X's behavior, as long as it is separated from the actual brain itself it cannot exhibit any "conscious" behavior.

Coming back to the film, if the rogue AI copies "itself" (or more properly speaking creates "imprints" of itself) onto other computers of the network, those imprints would simply be fossils of the original consciousness without the advanced hardware required to sustain the computation. In other words, the threat of an AI spreading like a global digital pandemic simply cannot be realized unless and until every networked computer is as advanced as the original hardware. Given that in the film the original hardware is described as consisting of "the most powerful quantum processors on the planet", connecting the AI to the internet would not pose any threat as long as most of the hardware on the planet was not as sophisticated as Depp's original quantum computers.

The filmmakers' inability to grasp this fact is what leads to "Transcendence" being at best a campy sci-fi movie with more in common with "Flash Gordon" than with "Blade Runner". But don't let these deep observations stop you from enjoying the rest of the film. After all, how would the AI feel if you gave up on it halfway through the movie?

Monday, 12 May 2014

Multiverse, multiverse, where art thou?

As I understand it, the multiverse concept arises as a consequence of the standard inflationary scenario which involves one or more scalar fields "rolling down" the side of a potential hill, causing an exponential increase in the "size" of the Universe soon after the Big Bang. Now the form of the potential itself varies from place to place. Though what "place to place" means, when you are talking about the time when the geometric exoskeleton of the Universe is still in the process of formation, is quite unclear to me. And because the potential varies from "place to place", different regions of spacetime inflate at different rates and generically many such exponentially inflating volumes are generated from an original patch of spacetime. This is the origin of the concept of a multiverse.

The basic assumption behind all these scenarios is that the process of growth of geometry can be correctly modeled by the inflaton-potential scenario. This is also the Achilles' heel of the multiverse paradigm. What if the inflaton-potential scenario is only an effective description of the quantum processes which lead to the formation of geometric structure? That this is the case is clear if you have come to terms with the notion that "quantum geometry" underlies "classical geometry" and things such as the metric, scalar fields and potentials arise from something more primitive. The resulting picture of spacetime is that of a fluid, which emerges from the interaction of its microscopic "atomic" constituents in the same way that the fluid (continuous) nature of water is the result of the interactions between many, many discrete $H_2O$ molecules.

In this picture, there are no such things as "fundamental" scalar fields. What appears to be a "fundamental" scalar field is instead an order parameter describing the collective behavior of some underlying (non-scalar) degrees of freedom. Now just as the process of condensation, where water vapor turns into water, corresponds to a phase transition, the quantum geometric picture suggests that classical continuous spacetime arises due to the condensation of a gas of some sort of "atoms of geometry". Such a process should be described in the language of many-body physics, in terms of a transition from one phase of geometry to another. See Sabine's blog for her take on this. Also see my previous papers which talk about such a phase transition.
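As a purely illustrative analogy (the symbols below are not taken from any particular model), recall that a Bose-Einstein condensate of many atoms is described by a single macroscopic wavefunction which acts as an order parameter; an "emergent" scalar field could play the same role for a condensate of quanta of geometry:
$$\psi(x) = \langle \hat{\psi}(x) \rangle \;\;(\text{condensate order parameter}) \qquad \longleftrightarrow \qquad \phi(x) \sim \langle \hat{\Phi}(x) \rangle \;\;(\text{effective scalar field of geometry}),$$
so that what looks like a fundamental scalar in the effective description is really a collective variable of the underlying degrees of freedom.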

Anyways, if this is indeed the real picture, then the "multiverse" is simply a consequence of taking an effective description of quantum geometry far too seriously and ignoring the fact that there is a more fundamental underlying dynamics that has to be taken into account. Of course there is no reason, a priori, to rule out the possibility that the phase transition results in the formation of "domains", where each domain describes a slightly different classical geometry, in a manner similar to the formation of domains in ferromagnetism. However, there is also no reason why these domains would correspond to separate disconnected universes, rather than one universe divided into many regions with different geometric configurations. Whether or not these domains would correspond to the size of the observable Hubble volume today depends on the details of the underlying theory. Regardless of the details, one can see that a phase transition is a finite process in time. Once it has occurred and the new phase of geometry has emerged, the resulting domains will not undergo exponential expansion because, after all, the "exponential expansion" was the phase transition itself!

In short, from this perspective the "Multiverse" is a mirage, a result of sloppy reasoning which ignores the true dynamics of geometry. There is much more to be said, but this seems enough for now!

Friday, 5 July 2013

The Measurement Problem, Part 1

The Problem


The measurement problem becomes a problem only when we neglect to specify the nature of the observer's Hilbert space. Postulates I (Systems are described by vectors in a Hilbert space) and II (Time evolution occurs via some given Hamiltonian for a particular system) are fine in that regard. These two postulates deal only with the description of a quantum system. It is the third postulate (Measurement leads to collapse of state vector to an eigenstate) where there is a problem.

A measurement is said to occur whenever one quantum system - the "observer" - described by a Hilbert space $H_{O}$ interacts with another system described by a Hilbert space $H_{S}$. The complete Hilbert space of the composite system (the "observer" plus the "observed") is given by:
$$ H_{O+S} = H_{O} \otimes H_{S} $$
To actually realize the dichotomy between an "internal" and an "external" observer, the size of the observer's Hilbert space, given by its dimension $\dim(H_O)$, must be comparable to $\dim(H_S)$, the dimension of the Hilbert space corresponding to the system under observation. Instead, what we generally encounter is $\dim(H_O) \gg \dim(H_S)$, as is the case for, say, an apparatus with a vacuum chamber and other paraphernalia which is being used to study an atomic-scale sample.
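As a rough illustration of the disparity (the numbers are invented purely to convey scale), model the apparatus as $N$ two-level degrees of freedom coupled to a single qubit:
$$\dim(H_{O+S}) = \dim(H_O)\,\dim(H_S), \qquad \dim(H_O) = 2^{N} \;\;(N \sim 10^{23}) \;\gg\; \dim(H_S) = 2,$$
which is the sense in which the observer's Hilbert space completely dwarfs that of the system being observed.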

In this case the apparatus is not described by just the three states $\{\ket{ready}, \ket{up}, \ket{down}\}$, but by the larger family of states $\{\ket{ready;\alpha}, \ket{up;\alpha}, \ket{down;\alpha}\}$, where $\alpha$ parametrizes the "helper" degrees of freedom of the apparatus which are not directly involved in generating the final output, but are nevertheless present in any interaction. Examples of such d.o.f. are the states of the electrons in the wiring which transmits data between the apparatus and the system.

The initial state of the complete system is of the form:
$$\ket{\psi_i} = \ket{ready;\alpha} (\mu \ket{1} + \nu \ket{0} )$$
When $H_O$ interacts with $H_S$ in such a way that a measurement is said to have occurred, the final state of the composite system can be written as:
$$\ket{\psi_f} = \ket{up;\alpha} (\mu_{up} \ket{1} + \nu_{up} \ket{0}) + \ket{down;\alpha} (\mu_{down} \ket{1} + \nu_{down} \ket{0})$$
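To see why the two branches can no longer interfere, one can trace out the apparatus. As a sketch, and up to normalization, assume the two apparatus states are orthogonal, $\langle up;\alpha | down;\alpha \rangle = 0$. The reduced density matrix of the observed system is then
$$\rho_S = \mathrm{Tr}_{O} \ket{\psi_f}\bra{\psi_f} = (\mu_{up} \ket{1} + \nu_{up} \ket{0})(\mu^*_{up} \bra{1} + \nu^*_{up} \bra{0}) + (\mu_{down} \ket{1} + \nu_{down} \ket{0})(\mu^*_{down} \bra{1} + \nu^*_{down} \bra{0}),$$
an incoherent sum over the two branches with no cross terms, so no measurement on the system alone can reveal interference between them.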
In a complete, self-consistent theory, one would hope that all paradoxes regarding measurement could be resolved by understanding unitary evolution on the full Hilbert space $H_{O+S}$. This is not quite the case. Consider the case when the system being observed is a spin-1/2 object with a two-dimensional Hilbert space $H_S$, a basis for which can be written as $\{ \ket{0}, \ket{1} \}$. The Hilbert space of the observing apparatus $H_O$ is large enough to describe all the possible positions of dials, meters and probes on the apparatus. Let us assume that $H_O$ can itself be written as a tensor product:
$$H_O = H_{pointer} \otimes H_{res}$$
For some poorly understood reason, when $\dim(H_O) \rightarrow \infty$, an interaction between the two systems - observer and subject - causes the state of the subject to "collapse" to one of the eigenstates of the operator (or "property") of the subject being measured: $\ket{\psi_S} \rightarrow \ket{\phi^i_S}$.

When QM was first invented, it was understood that the measuring apparatus is a classical system requiring an infinite number of degrees of freedom for its complete description. Thus the "collapse" that occurs is due to something that happens at the interface of the classical measuring apparatus and the quantum system being observed. This ad-hoc separation of the classical from the quantum came to be known as the "Heisenberg cut" (or the "Bohr cut", depending on your reading of history). Since the quantum description of systems with even a few degrees of freedom appeared to be a great technical feat in those early days, physicists didn't have much reason to worry about systems whose Hilbert spaces have large dimension ($N \gg 1$).

Mechanisms for State Vector Collapse


To address the lack of understanding of state vector collapse in QM, and to get a grasp on the description of systems with large Hilbert spaces, first the many-worlds interpretation (MWI) and later the consistent histories and decoherence frameworks were constructed.
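As a toy illustration of the decoherence mechanism (a minimal numerical sketch; the model and all numbers here are invented for the example and are not drawn from any of the frameworks above), one can watch the off-diagonal element of a qubit's reduced density matrix shrink as the qubit becomes entangled with more and more environmental degrees of freedom:

    import numpy as np

    # Toy model: a system qubit in the state mu|0> + nu|1> couples to n_env
    # environment qubits. The |0> branch leaves every environment qubit in |0>,
    # while the |1> branch rotates each one to cos(theta)|0> + sin(theta)|1>.
    # The overlap of the two environment branch states is cos(theta)**n_env,
    # and it multiplies the off-diagonal element of the system's reduced
    # density matrix, suppressing interference as n_env grows.

    def reduced_density_matrix(n_env, theta=0.3):
        mu = nu = 1 / np.sqrt(2)                   # equal superposition
        branch_overlap = np.cos(theta) ** n_env    # <E_0|E_1>
        return np.array([[mu ** 2, mu * nu * branch_overlap],
                         [mu * nu * branch_overlap, nu ** 2]])

    for n in (1, 10, 50, 200):
        rho = reduced_density_matrix(n)
        print(f"n_env = {n:4d}   |rho_01| = {abs(rho[0, 1]):.3e}")

The point is only that the interference terms decay exponentially with the number of entangled environmental degrees of freedom; nothing ever "collapses" at the level of the full unitary evolution.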

Monday, 11 February 2013

Thermal Time and Kepler's Second Law

In a fascinating recent paper (arXiv:1302.0724), Haggard and Rovelli (HR) discuss the relationship between the concept of thermal time, the Tolman-Ehrenfest effect and the rate of dynamical evolution of a system - i.e., the number of distinguishable (orthogonal) states a given system transitions through in each unit of time. The last of these is also the subject of the Margolus-Levitin theorem (arXiv:quant-ph/9710043v2), according to which the rate of dynamical evolution of a macroscopic system with fixed average energy $E$ has an upper bound $\nu_{\perp}$ given by:

\begin{equation}
\label{eqn:margolus-levitin}
\nu_{\perp} \leq \frac{2E}{h}
\end{equation}
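For a sense of scale, a system with an average energy of one joule can pass through at most
$$\nu_{\perp} \leq \frac{2 \times 1\ \mathrm{J}}{6.63 \times 10^{-34}\ \mathrm{J\,s}} \approx 3 \times 10^{33}$$
orthogonal states per second.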