Monday, June 17, 2019

Loose associations, binary thinking, time-space, neutrinos, and counting

I was thinking about the tree structure of syntax diagrams (that I discussed in my previous post) and began to wonder about the matter of binary decisions. By binary decision I mean having two choices only, i.e., branch to the left or branch to the right. In the most fundamental expression of such a dualism, mystics must inevitably confront the paradox of God as the spiritual light, but also the cause of nature's black deeps (paraphrasing Carl Jung). You encounter this tension of the pairs of opposites throughout the temporal world, that is to say, the world where we find ourselves conscious and proceeding through time moment by moment. There is no change where there is no time, and anything material, at least anything with mass, experiences time.

For example, the neutrino must have some small mass and must experience time, since it evidently changes. (The Standard Model of physics, incorporating the electromagnetic, weak, and strong nuclear forces or interactions, classifies all the subatomic particles known to date; there are the familiar electron, proton, and neutron, but there are many others, including the electrically neutral and difficult-to-detect neutrino.) The neutrino can be observed to interact in three different "flavors," i.e., electron, muon, and tau (the flavor is simply the type of charged lepton it appears with in an experimental observation), and it in fact oscillates among those three states as it travels; that is, the probability of catching it interacting with a particular lepton changes periodically with distance, because the oscillation involves the propagation of its composite mass states at slightly different rates.

The propagating neutrino mass states (there are three of those, distinct from the interaction flavor "states"; the mass states are the true neutrino, the flavor states simply a convention) are energy-momentum four-vectors, rank-1 tensors really. Before Einstein and Special Relativity we conceived of physical objects as occupying space, e.g., we could bring two objects into contact and that contact would end where the "space" occupied by each met. We had a vague notion of an abstract "space" in which objects existed, but that notion really required a body of reference (following Einstein's 1921 Princeton lectures on Special and General Relativity).

You may recall from high school algebra that we could construct Cartesian coordinate systems where we indicate the position of points (and objects consisting of the loci of such points) in a three-dimensional grid with three axes, x, y, and z. A line connecting two points in this grid could be described by an interval s² = Δx² + Δy² + Δz², where the "Δx" notation means "the difference in position between the two points along the x-axis." Time was thought to be independent of space, a separate concept. We would speak about events occurring at different Cartesian coordinates simultaneously, without much thought about what that implied. Einstein realized that there are no instantaneous effects across distance, the speed at which effects propagate being limited to the speed of light. Accordingly, it became clear that only events had physical reality, the time and the location of an event varying (mathematically, not as a matter of opinion) with the observer's frame of reference.

By 1895, Henri Poincaré commented that the recent results (1887) of the Michelson-Morley experiment, which failed to show any variation in the speed of light with respect to the translational velocity of the Earth, implied a principle of relativity which prevented a moving observer from detecting any difference in the laws of physics compared with another observer at rest (there are a number of qualifications here, but we will leave it in a simple form). You might have expected, on the contrary, that if space were a substance, an aether through which light travelled, like an ocean wave through water, then light might travel a bit slower going upstream in a river as it were, or, in the case of the Earth moving in its orbit around the Sun, light might propagate a little slower moving against the direction of motion of the Earth.

In 1904 Hendrik Lorentz summed up his recent work, including Poincaré's contributions, on transforms from one frame of reference to another, consistent with the Maxwell electromagnetic equations implying the constancy of the speed of light in all inertial systems. He proposed the Lorentz transformation still used today, with its time dilation and contraction of bodies along the direction of motion. Interestingly, the contraction concept was originally introduced by Oliver Heaviside (you can find a copy of this work from The Electrician, 1888, on pg 511 of the Heaviside Electrical Papers, Vol. II, published 1894). Heaviside had analyzed the movement of a charged spherical conductor and found that the charge distribution could remain static only if the sphere shortened in the direction of motion, becoming an oblate spheroid by a factor (1 − v²⁄c²)¹⁄².

For our context, we are interested in the fact that these new ideas required that our old view of a separate three-dimensional space and one-dimensional time be replaced by a four-dimensional space-time continuum, which follows from the unified treatment Einstein presented in his 1905 papers. Our three-dimensional coordinate equation above becomes instead an invariant space-time interval ds² = c²dt² − dx² − dy² − dz², connecting points in a space-time coordinate system (we replaced the Δx² notation with dx², which means about the same thing, a difference, albeit infinitesimal, between two points). Though this space-time interval is measured to be the same by all observers, moving clocks tick more slowly and objects shrink in the direction of their motion, i.e., as we alluded to previously, different observers might not agree about the time and location of events, but they can all agree on the space-time interval. The Lorentz transformation permits, e.g., an observer at rest to calculate the time experienced on a moving object. For example, the proper time τ on a moving object, say a rocket headed away from the Earth (say along the x-axis), would be dτ = dt ∕ γ, where γ = 1 ∕ (1 − β²)¹⁄² and β = v ∕ c (the velocity we observe for the rocket, divided by the speed of light).

ds² = c²dt² − dx² − dy² − dz² = (c dt)² [1 − β²] = (c dt ∕ γ)² = (c dτ)²

That is, we observe time in the moving clock frame as t = γτ; their clock tick intervals appear longer than ours by a factor γ, i.e., their clock appears to be running slowly in comparison with our clock at rest. Though we may not agree on clock times or the lengths of objects, we all calculate ds² to be the same number, just as we may rotate three-dimensional vectors in Euclidean space without changing their length.
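
As a quick numerical sketch (my own illustration, taking a rocket at β = 0.8 and working in meters and seconds), a few lines of Python can check both claims: that a Lorentz boost along x leaves ds² unchanged, and that the moving clock records dτ = dt ∕ γ.

    # A minimal sketch: ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2 comes out the same
    # before and after a Lorentz boost along x, and dtau = dt / gamma.
    import math

    c = 299_792_458.0            # speed of light, m/s
    beta = 0.8                   # assumed rocket speed as a fraction of c
    v = beta * c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)

    # An interval on the rocket's worldline in the Earth frame: dt = 1 s, so dx = v * dt.
    dt, dx, dy, dz = 1.0, v * 1.0, 0.0, 0.0
    ds2 = (c * dt)**2 - dx**2 - dy**2 - dz**2

    # The same interval after the standard Lorentz boost along x (the rocket frame).
    dt_prime = gamma * (dt - v * dx / c**2)
    dx_prime = gamma * (dx - v * dt)
    ds2_prime = (c * dt_prime)**2 - dx_prime**2 - dy**2 - dz**2

    print(gamma)                 # 1.666... for beta = 0.8
    print(ds2, ds2_prime)        # the same number in both frames
    print(dt / gamma, dt_prime)  # the proper time dtau elapsed on the rocket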

The time-space interval example above used the position (or position-time) four-vector, xᵘ = {x⁰, x¹, x², x³}, where the first component is time (multiplied by the speed of light), ct, and the remaining three variables are the usual Cartesian x, y, z axis coordinates (numbered 1, 2, 3 in superscript to indicate a contravariant vector). We wanted to look at another type of four-vector though. Above we mentioned the energy-momentum four-vectors that describe the three mass states that comprise a propagating neutrino particle. They look like pᵘ = {p⁰, p¹, p², p³}. The p⁰ component is now E, the relativistic energy, and the remaining three components are the momenta along the x, y, z axes. What we wanted to note was that the neutrino mass states, ν₁, ν₂, and ν₃, each propagate as an energy-momentum four-vector, together initially, but because their masses differ slightly they travel at slightly different velocities and, amazingly, are believed to separate after a sufficient distance.

We may simplify the analysis by considering only two of the four-momenta, call them states j and k (we are not concerned with relativistic frames of reference here, so the γ factor will be omitted), each described by the relativistic energy relation E = (p² + m²)¹⁄², the square root of the squared momentum plus the squared rest mass (in units where c = 1), where E and m stand for Eⱼ and mⱼ or Eₖ and mₖ in turn. Using the binomial expansion, this can be reduced to Eⱼ ≈ pⱼ + mⱼ² ∕ 2pⱼ, and likewise for Eₖ.

The difference between any two mass state energies becomes Δm² ∕ 2p (p here is half the sum of the two momenta, p = (pⱼ + pₖ)/2), and the relative velocity difference is then Δv = Δm² ∕ 2p², or Δv = Δm² ∕ 2E², p and E being very close for the highly relativistic neutrino (it propagates at very near the speed of light). If the size of a propagating neutrino mass state is roughly σ (the length of a packet), then the distance at which the mass states separate is L = σ ∕ Δv. We estimate that neutrinos arriving from a supernova at the center of our Milky Way galaxy might arrive with their mass components separated by up to 41 meters, a detectable difference of about 137 ns (billionths of a second) apart. As far as we know though, none of the current supernova detection experiments are set up to detect separated packets.
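
To get a feel for the scale, here is a rough back-of-the-envelope sketch. The input values are illustrative assumptions on my part (the smaller "solar" mass splitting of about 7.5 × 10⁻⁵ eV², a supernova-neutrino energy of roughly 15 MeV, and a distance of about 8 kiloparsecs to the galactic center); with those choices the formulas above give a packet separation of a few tens of meters, on the order of the figures quoted.

    # Rough scale estimate for the separation of neutrino mass-state packets.
    dm2_eV2 = 7.5e-5              # assumed mass-squared splitting, eV^2 (the "solar" splitting)
    E_eV    = 15e6                # assumed supernova-neutrino energy, about 15 MeV
    c       = 2.998e8             # speed of light, m/s
    L_m     = 8000 * 3.086e16     # assumed distance to the galactic center, ~8 kpc in meters

    dv_over_c  = dm2_eV2 / (2.0 * E_eV**2)   # relative velocity difference, ~1.7e-19
    separation = L_m * dv_over_c             # meters between the two packets on arrival
    delay_ns   = separation / c * 1e9        # arrival-time difference in nanoseconds

    print(separation, delay_ns)              # roughly 40 m and 140 ns with these inputs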

Resuming our original thread: the neutrino has mass, therefore it cannot travel at the speed of light, by Einstein's equation for relativistic energy, written in a form with the rest mass and γ factor, E = γmc² = mc² ∕ (1 − v² ∕ c²)¹⁄².

You see in that equation that if v, the velocity of the neutrino, were to equal the speed of light, c, in the denominator, then v² divided by c² would equal 1 (since a number divided by itself is 1), and since that is subtracted from 1 beneath the radical, the result of the subtraction heads toward zero. The denominator of a fraction is not allowed to be zero because that operation is not defined: a/b = q implies a = bq, and there is no number q that could be multiplied by zero to get a non-zero number a.

However, if you graph y = 1/x as x approaches zero (becomes very small in absolute value) from the left (from negative infinity, e.g., -3, -2, -1, -0.9, ... -0.000001...) and as x approaches zero from the right (from positive infinity, e.g., 3, 2, 1, 0.9, 0.00001...), y goes to either negative or positive infinity (the red line in the graph heads straight up to infinity or straight down to infinity as it nears the y axis):

[Figure: graph of y = 1/x, the curve running down toward negative infinity as x approaches zero from the left and up toward positive infinity as x approaches zero from the right.]

So that tells you that E, the energy of a relativistic particle (or any particle with mass), would go to infinity as the particle approached the speed of light. Since there isn't infinite energy available (at least not since the moment of Creation at the Big Bang), particles with mass are not allowed (by the laws of this universe) to reach the speed of light (neutrinos get pretty close, though, because their mass is so tiny). If a particle travels at the speed of light, it cannot change, because time for the particle does not exist (a change is an event in time that occurs when something passes in time from one state to another).
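
A few lines of Python make the divergence concrete; the electron mass is used here only as an arbitrary example of a massive particle, my choice rather than anything from the discussion above.

    # The relativistic energy E = gamma * m * c^2 grows without bound as v approaches c.
    import math

    c = 2.998e8          # m/s
    m = 9.109e-31        # kg, electron rest mass (an arbitrary example of a massive particle)

    for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        print(beta, gamma, gamma * m * c**2)   # gamma, and so E, blows up as beta -> 1
    # at beta = 1 the denominator is zero and the expression is simply undefined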

Light, on the other hand, is propagated by photon particles, which have no mass and travel only at the speed of light. So the photons from the Big Bang moment of Creation have not changed in the intervening billions of years separating us from the Big Bang? Well, space has expanded under their feet, as it were. The photons we detect today as the CMB (cosmic microwave background) were emitted at Recombination, about 380,000 years after the Big Bang, when hydrogen and helium atoms were formed, taking up most of the free electrons and making it possible for photons to free-stream without many collisions, at an equivalent temperature of about 4000 K and a wavelength of around 3.60 millionths of a meter. When they are detected today, because the Universe has been expanding over the intervening billions of years, they have been stretched out to a wavelength of about 5279 millionths of a meter, a frequency of about 57 GHz (your cell phone frequency is about 1.9 GHz, and microwaves lie between 0.1 and 1000 GHz, hence the term cosmic "microwave" background).
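
Those numbers hang together: the ratio of the two wavelengths gives the stretch factor, and dividing the speed of light by the observed wavelength gives back the quoted frequency. A quick arithmetic check, using only the figures quoted above:

    # Arithmetic check on the quoted CMB numbers.
    c = 2.998e8                    # m/s
    wavelength_emitted  = 3.60e-6  # m, at recombination (quoted above)
    wavelength_observed = 5279e-6  # m, today (quoted above)

    stretch = wavelength_observed / wavelength_emitted   # ~1466-fold stretching of the wavelength
    freq_GHz = c / wavelength_observed / 1e9             # ~57 GHz, i.e., in the microwave band
    print(stretch, freq_GHz)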

Aside from the stretching, they are beyond time, as it were. Recall from our earlier discussion that we observe time in a moving clock frame as t = γτ, clock tick intervals on a moving object lengthening by a factor γ, where γ = 1 ∕ (1 − v² ∕ c²)¹⁄².

You can see we get the same problem of the fraction as a whole going to infinity as the velocity approaches the speed of light (and photons, having no mass, always travel at the speed of light in vacuum), which you may interpret as time being frozen, more or less, for the thing travelling at the speed of light (if it takes an infinite interval to reach the next clock tick, time has stopped).

GPS satellites move at 14,000 km/hr and have to correct for a relativistic time dilation of 7 μs/day (the satellite clock running slower). (They also have to correct for the opposing General Relativity effect: the gravitational redshift from the Earth's gravitational field makes clocks back on the Earth's surface run slower relative to the satellite clock, i.e., clocks deeper in a "gravitational well" run slower.)
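
The special-relativity part of that correction is easy to check: at 14,000 km/hr the factor γ differs from 1 by about 8 parts in 10¹¹, which accumulates to roughly 7 microseconds over a day. A short sketch:

    # The special-relativistic (velocity) part of the GPS clock correction.
    import math

    c = 2.998e8
    v = 14000 * 1000 / 3600                 # 14,000 km/hr in m/s, about 3,890 m/s
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

    seconds_per_day = 86400
    lag_us = (gamma - 1.0) * seconds_per_day * 1e6   # how far the orbiting clock falls behind per day
    print(lag_us)                                    # about 7 microseconds per day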

If we are able to send people to nearby star systems someday and reach significant percentages of the speed of light, the travellers (astronauts) will find that people back on Earth have aged more than they have. For example, if they travel 10 light years (a round trip, neglecting the year or more spent accelerating at each end) at 80% of the speed of light, they will age 7.5 years and their friends and family back on Earth will have aged 12.5 years (at 0.8 times the speed of light they cover 10 light years in 12.5 years, and their clocks run at 0.6 of "normal," so they age only 7.5 years).
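
The arithmetic of that example fits in a couple of lines; the 0.6 factor is just (1 − 0.8²)¹⁄².

    # The round-trip arithmetic from the example above.
    import math

    distance_ly = 10.0
    beta = 0.8
    earth_years = distance_ly / beta                          # 12.5 years pass on Earth
    traveler_years = earth_years * math.sqrt(1.0 - beta**2)   # clocks aboard run at 0.6 of "normal"
    print(earth_years, traveler_years)                        # 12.5 and 7.5 years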

I have glossed over the idea of a quantity going to infinity, proposing that once we see a quantity headed in that direction we can safely envision a progression conceptually reaching infinity, i.e., one foot placed after the other in a never-ending march, counting each step (as I suggested above when pointing to the graph of 1/x heading up or down to infinity since we could see it was "headed that way").

We assume that we have always had the idea of adding more things to increase the number of things in hand. I've got one apple, and if you give me another, why then of course I have two. In fact, though, it takes more than two years for human children to grasp that the difference between "one thing" and "more than one thing" can be more abstractly understood as the successor function, which generates all of the numbers with which we count, i.e., the natural numbers (1, 2, 3...; some include 0). A child can proudly perform the counting script they have been taught, naming, say, one toy fish, two toy fish, three, and so on. However, typically the 2.5-year-old child who has just "counted" the toys in that way will hand you one fish if asked for one, but give you an arbitrary handful if asked for any number other than one!

After additional months of experience the child slowly, in a stepwise fashion, learns to understand "two," then "three." Sometime after this comes the great leap forward, where the child implicitly grasps the inductive definition of the natural numbers, i.e., that each word in the counting routine actually defines how many things you are considering, that each successive count adds one to the number of things (in your set), and that this can be continued indefinitely, with no upper bound (see Evolutionary and developmental foundations of human knowledge, by Marc Hauser and Elizabeth Spelke).
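
The successor idea itself is almost embarrassingly simple to write down, which is part of what makes the child's leap so striking; here is a toy sketch of my own (the count words are just labels attached to each step):

    # A toy version of the successor idea: each count word names a set one larger than the last.
    def successor(n):
        return n + 1

    count = 1
    for word in ["one", "two", "three", "four", "five"]:
        print(word, "->", count)
        count = successor(count)   # and there is no upper bound; you could continue indefinitely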

Although a chimpanzee can laboriously learn to associate a number symbol with a particular number of objects, it never (at least not in one particular subject after 20 years of training) progresses to an understanding of the successor function, i.e., chimps cannot learn that a new number symbol means that one has been added to the previous set of items. It appears though (from research done by Spelke and others) that humans and some non-human primates both draw on a core neurophysiological basis for (1) representing the approximate cardinal values (about how many items are present) of large groups of objects or events and (2) representing the exact number of objects or events when there are only a small number of individual units.

It appears that the uniquely human capability to construct the natural numbers (i.e., to use the successor function) relies first on the core perception of one versus many, then on mapping other number words to larger numerosities, then on noticing that the progression in the language of the counting routine (the words representing numbers) corresponds to increasing the cardinal value of the set, the number of units in hand. This (and other research) suggests that natural language ability is involved in the human leap from those core perceptions, shared with some non-human species, to the natural number concepts unique to humans.

There has been some controversy about extending the concept of infinity, at least in the context of mathematics, namely, the Brouwer-Hilbert controversy about the foundations of mathematics at the beginning of the twentieth century.

L.E.J. Brouwer did not believe that the rules of classical logic laid out by Aristotle have an absolute validity independent of the subject matter to which they are applied. For example, Aristotle defined the Law of the Excluded Middle, which, reasonably enough from our experience in life, states that any proposition is either true or it is not true, e.g., Socrates is either mortal or he is not mortal; he cannot be something in between the two.

The claim of formal logic is that this law (of the Excluded Middle) applies simply because it is an accepted rule of logic, not because we have seen examples which permit us to infer that it is true in a specific case (e.g., the case of whether Socrates is mortal or not). Brouwer objected to making such an automatic claim via logic when offering a formal proof in mathematics. Brouwer wanted to see a proof that constructed specific examples (actual mathematical entities) rather than simply claiming that one or the other of two contradictory alternatives must necessarily be true.

These may seem like rather abstract contentions among mathematicians, but if you go with Brouwer (and his intuitionist stance) then you are not allowed to extend presumptions to the infinite (which would cramp our style in the discussions earlier). For example, the induction axiom of mathematics states that if a mathematical proposition P(n) is true for n = 0, and if, for all natural numbers n, P(n) being true implies that P(n + 1) is true also, then P(n) is true for all natural numbers n. You will recognize our successor function from the earlier discussion. The so-called "animal instinct" here is that it must be true, since you can conceive of marching forever, one foot placed after another in a never-ending march, thereby defining infinity.

The alternative notion would be Georg Cantor's aleph-null, a completed infinity taken all at once without laying out the steps leading there. Well, strictly speaking, aleph-null or aleph-naught denotes the cardinality of any countably infinite set, for example, the natural numbers N. The real numbers R are also infinite, but not countable. Cantor developed his famous diagonal argument to prove that R, the set of real numbers, is not countable, though it is infinite.

Cantor's diagonal argument for showing a set is uncountable goes like this: try to enumerate (list) all of the members of the set T of infinite sequences of binary digits (ones and zeros). No matter how you list them, there will always be a member of the set that you miss, because you can draw a diagonal slash from the top left corner down toward the bottom right corner at infinity, pull out the string of digits selected by your slash, then complement each of the digits you obtained, i.e., if there is a "1" replace it by a "0" and if a "0" replace it by a "1." The sequence you end up with cannot have been in the list because it differs from the nth string in the list at the nth digit:

[Figure: an attempted list s1, s2, s3, ... of infinite binary sequences, with the digit taken from each sequence along the diagonal highlighted in red.]

In the above example the diagonal slash pulls out the red digits 01000101100... and complements each of those to get s = 10111010011... You can see that s cannot have been in the list because, by complementing the slashed sequence, you have made it differ from each listed sequence in at least one digit. Therefore it is impossible to count, i.e., enumerate or list, all of the numbers in the set (each sequence represents a number in the set), because every time you complete your list a new number not on it pops up along the diagonal slash!
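
The construction is mechanical enough to sketch in a few lines. The five sequences below are made-up finite prefixes of my own (not the ones in the figure), but the diagonal-and-complement step is the same:

    # Diagonal-and-complement on a made-up finite listing of binary sequences.
    listing = [
        "0110101100",
        "1101001010",
        "0010111001",
        "1000010110",
        "0101010101",
    ]

    diagonal   = [listing[n][n] for n in range(len(listing))]         # the slashed digits
    complement = "".join("1" if d == "0" else "0" for d in diagonal)  # flip each one

    print("diagonal digits:", "".join(diagonal))
    print("missing entry:  ", complement)
    # complement differs from listing[n] at position n for every n, so it is not in the listing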

David Hilbert designed a formal definition of mathematics in which no intuitive notion about "reality," or actual examples or objects, was necessary, just rigorous definitions of symbols and the operations you could apply to them. Hilbert believed you could find a rote procedure for manipulating the symbols of his formal mathematics such that you could decide automatically whether a particular theorem expressed in his symbols was valid, in effect putting all mathematicians out of work. This would make use of the Law of the Excluded Middle also, by assuming that such an automatic proof machine could decide whether any arbitrary string of symbols was a correct theorem or was not a correct theorem (either it was or it was not, proof by contradiction accepted).

In 1900 Hilbert presented a number of questions to the international congress of mathematicians. Questions one and two were: (1) was the system of mathematics he offered complete, and (2) was it consistent?

A mathematical proof system, a set of axioms, is complete if any statement within its formal language, or its negation, may be proven using only the axioms. Such a system is consistent if it is impossible to construct a valid argument that is inconsistent, i.e., impossible to derive from its axioms a statement which is both true and false. Questions 1 and 2 were answered by Kurt Gödel in 1930, who proved that undecidable propositions can be constructed in any system containing a minimum of arithmetic.

The third question, whether the system of mathematics was decidable (the so-called Entscheidungsproblem), was answered shortly thereafter by Alan Turing and, independently, by Alonzo Church. Turing created the concept of an automatic computing machine and proved that there cannot be a general process for determining whether a given formula of mathematics, in the symbolic logic of the system, is provable within the system. Turing's conceptual machine, used in the proof, was rapidly developed into the physical digital computers that we use today.
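
The flavor of the argument can be sketched in modern terms using the closely related halting problem (this is a paraphrase, not Turing's original formulation, and the decider "halts" below is purely hypothetical; the whole point is that it cannot exist):

    # A sketch of the diagonal contradiction, assuming (for argument's sake) a general decider.
    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetical: returns True if the given program halts on the given input."""
        raise NotImplementedError("no general implementation can exist")

    def contrary(program_source: str) -> None:
        # Do the opposite of whatever the decider predicts about a program run on itself.
        if halts(program_source, program_source):
            while True:      # predicted to halt, so loop forever instead
                pass
        # predicted to run forever, so halt immediately

    # Handing contrary its own source text would force halts() to be wrong either way,
    # so no such general decision procedure can exist.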

It remains a bizarre paradox that Turing's work, which implied that mathematics really required a mind, something that is not a machine, was soon used to "support" the premise that the human mind is a kind of computer. John von Neumann, by some accounts the most intelligent human who ever lived (where intelligence means the capacity to do the things measured on IQ tests, e.g., use memory and manipulate symbols and concepts, not the same thing as wisdom), helped design several of the initial digital electronic computers in the 1940s. He left notes for an incomplete book setting out his thoughts about human brains and computers.

Von Neumann thought of computers, whether analog (representing numbers by variable physical quantities, like the voltages produced by electronic circuits representing equations) or digital (representing numbers by the presence or absence of a marker, taking an input stream of pulses and operating on it to produce an output stream of pulses), as devices that performed arithmetic operations on numerical data under the control of logic. Oddly, von Neumann, who was well aware of the distinction between the manipulation of symbols (as we mentioned regarding Hilbert and Brouwer above) and the interpretation of those symbols by a human, tacitly assumed that computers manipulated numbers rather than symbols. Von Neumann assumed that brains compute (an assumption that became part of the philosophy of mind), but offered no justification for that assertion.

Computers are designed by humans to manipulate symbols which are subsequently interpreted by humans, but it does not appear (to me) that brains perform arithmetical operations on numerical data. David Berlinski offers the analogy that some people are able to accept without consideration the thesis that the human mind is like a computer but would balk at the suggestion that the human mind is like an abacus, though the fact is that there is no fundamental difference between an abacus and a Turing machine or the digital computers which were developed from Turing's conception. They are all mechanical devices which, when manipulated by humans, produce symbolic output of use when interpreted by a human. However, as Lee Smolin (a theoretical physicist with contributions in the field of quantum gravity) has observed, neuroscience "is a field that is as bedeviled by outdated metaphysical baggage as physics is. In particular, the antiquated idea that any physical system that responds to and processes information is isomorphic to a digital programmable computer is holding back progress."

That is a good transition back to our original thread, the discussion of the pairs of opposites in the temporal versus the eternal (since perhaps the most fundamental dualism, speaking in spiritual terms, is the Light versus the Darkness). I suppose some might propose that the middle is not really excluded, i.e., that things are not really in one state or another (particularly at this moment in history).