Tuesday, April 13, 2021

Fat heads, deadly force incidents, decay of civilization

The latest outrage in the US news (April 2021) is of a policewoman “accidentally” shooting a suspect during a routine outstanding-warrant traffic stop because the young man had the unfortunately common instinct (i.e., a not-well-thought-out action) of people of color that if they decide they do not want to be arrested they can simply leave the scene. The officer said she thought she had a taser instead of her service automatic pistol. She could be heard yelling “Taser! Taser!” on the body-cam audio as she frantically thrust her arm into the suspect’s open car with pistol in hand and pulled the trigger (rather like yelling, “Caress! Caress!” while punching someone in the nose).

Aside from the stupidity involved in actually making such a mistake (or the stupidity in expecting anyone to believe it), why would such an out-of-control action (as shown on the police body-cam) be appropriate in responding to an attempt to flee by an unarmed, otherwise docile suspect with a minor outstanding warrant and no immediate crime in progress? Why not return to the squad car and use the radio to bring in other patrol cars to look for the suspect car and proceed to the location associated with the license plate? Or if you really felt the need to shoot something, why not stand back and shoot out the tires on the suspect car, which was not moving? I think a jacketed slug would penetrate a tire; I don't think they are allowed to use hollow-points (because they mushroom on impact and destroy a huge amount of body tissue and organs). Maybe they should use some kind of quick-acting tranquilizer projectiles, since you often see an officer empty an entire clip (or two) of jacketed 9-mm rounds into a suspect without immediate response. The usual defense in a questionable use of deadly force event is that you can't second-guess the response of an officer in danger, but there clearly was no danger to the officer in this case. This officer was reportedly a 26-year veteran (veteran in what capacity?). What kind of training and management results in this type of officer incompetence?

I will be watching the Chauvin trial defense to see if they now claim that Chauvin actually thought his knee was a stethoscope when he used it to crush Floyd’s neck into the pavement for 9 minutes, strangling him last year, unabashed by a crowd of onlookers who pleaded with him that Floyd appeared not to be able to breathe (and eventually quietly telling Chauvin that Floyd’s eyes had rolled back in his head and he had become completely inert, like a fish on the dock, paraphrasing a martial artist familiar with the effects of choke holds, who had been among the horrified group watching the murder). Like the taser-shot suspect above, Floyd had also decided he did not want to be arrested (after police were called when he passed a counterfeit twenty dollar bill; why no media questions about where the counterfeit bill originated or how it was arranged for Floyd to circulate it?).

Aside from better psychiatric screening, training and management for police, we might want to educate (rigorously, frequently) public school students in every year of attendance that it is a serious health risk, as well as poor citizenship, to resist arrest. You may not respect the man (or woman), but you must respect the badge. To the extent fatalities are associated more with non-white suspects, this might be part of the problem, i.e., the combination of a loose-cannon officer and a group that is more often inclined to resist arrest or flee when ordered to submit to arrest. Would Chauvin have kept the knee applied with full body weight on the neck of a large white suspect who had struggled violently against three or more officers for several minutes during an attempted arrest? I would be surprised if Chauvin's past history of complaints of excessive force (I assume he has such a history, but perhaps I make the usual statistical error in interpreting human behavior) was entirely confined to black suspects.

Public radio (my only other airwaves choice is a fascist evangelical maniac bemoaning the loss of the Anointed One, The Last, hopefully, Trump) this morning interviewed one of the seemingly endless supply of ad hoc experts (I suspect the PBS staff simply take turns being the interviewee) about the suspension of Johnson and Johnson COVID vaccinations after a few cases of a rare type of cerebral blood clot occurred in patients following the shot. By the way, I assert that the vaccinations are not “jabs”, no matter how often the media ineptly and inappropriately attempts to coin a new “word.” One has to wonder if they wanted to say “pricks” but this proposal was immediately nixed by the radical feminists for obvious reasons.

From the first “um...eh...ahem...um” self-conscious self-important vocalizations preceding what should have been simple responses (think, “So Mr. Expert, why is it not dark during the day?”… “um...eh...ahem...um...there is um...eh...ahem...um...a thing called a sun in the sky!”), I had the uncontrollable urge to beat him about the head with a rolled-up newspaper. One of the first things I would do as world dictator/philosopher-king is to have a newspaper-wielding Guardian appointed in every media group (that I could text to have this constructive criticism applied to the person speaking at the time).

While I am at it (i.e., detailing my planned agenda as “Daltamesh, Bringer-of-Order-to-Chaos”), I would have every executive who presided over the transfer of American high-technology to third-world country manufacturing sites in order to avoid paying American workers a proper wage and now complains that those nations have “stolen” our technology (and by “our” they refer to the irreplaceable talented American engineers who created the technology, not the parasites they worked for) dragged out of their offices, de-pantsed and beaten with sticks daily by angry mobs of obese welfare mothers (it is admittedly physiologically difficult for a woman not to become obese under the best of conditions, but in particular while stranded at home with a brood to mother and no way to pass the time) who might have had jobs otherwise.

Daily outrages aside, I have been studying neuroscience in recent weeks, related to a question about the role of a curious structure (looks like a potato chip more or less, one chip in each hemisphere, like Lays chips, you apparently cannot make just one), the claustrum, in the brains of mammals. The claustrum is reciprocally connected with all cortical areas. Analogously, you might wonder why all the telephone lines of a city passed through the local police department, if you are old enough to remember that wires used to connect people to telephone communications and that the communications of citizens of bygone days were not routinely monitored by the government. The latter belief occasionally did occur back then, but the powers-that-be used to put those believers into hospitals and disable them with neuroleptic medication. Thorazine and its descendant chemical compounds of the anti-psychotics had the eventual frequent side effect of causing the patient to involuntarily snap at invisible flies in the air, making loud sucking sounds (there are other symptoms, potentially fatal, but this one caught my attention), so were termed “neuroleptics.”

To be labeled as insane now, the beliefs of a person must be more unrealistic (or more inconvenient). When I read the DSM (Diagnostic and Statistical Manual of Mental Disorders) some years ago there were criteria for diagnoses of Axis-I disorders, e.g., 293.81 psychotic disorder with delusions. A mental health counselor I ran into (no, it was not a clinical appointment, grin) at a Starbucks c. 2008 told me that she thought those DSM formal symptom clusters were too restrictive. That should have come as no surprise to me given the widespread herd-democracy zeitgeist (no standard for truth or understanding other than the instant requirements of one's own narcissism, i.e., free rein to whim). Nowadays, if you exhibit more unacceptable beliefs than merely thinking the government is watching you one way or another (which, of course, it is), the mind-killing medication is still prescribed (the "you are only sick if you believe you are" cognitive behavior approach has not worked out well for the therapists in Axis I severe cases). However, the patients are humanely housed in alleys in cardboard box structures, or they are sent to prisons, rather than confining them in hospitals (thanks to, I kid you not, the graduate thesis of some student in the late 50's or early 60's advocating that approach, labeled "community care" in best Orwellian form).

At any rate, cortex is Latin for “bark,” i.e., the cerebral cortex is a layer of gray matter or neuronal cell bodies seen to be a 3-mm thin bark covering the cerebrum or average 1345 cubic centimeter volume of the brain (the “trunk” covered by this “bark,” which, now that I think about it, probably is indeed a substance more akin to wood than neural tissue in much of the population).                                                                                                   

I recently watched portions of a Ken Burns television production telling of the life of Hemingway. I cannot watch anything but scraps of a Ken Burns work, because my blood pressure is worsened by his habit of distorting facts either directly or through the numerous hand-waving spastic proxies available in academia after the 1960s. In honor of Hemingway, I wanted to try replacing my habitual parentheticals by brief independent sentences (perhaps this mode of speech was required while forced as a child by his eccentric mother to occasionally wear a little girl's dress and serve tea, if we can trust Burns in this datum). However, that (me avoiding parenthetical remarks) is obviously impossible. My prose (and the generating thought) is as convoluted as the human cortex.

The cortex must fold into gyri (hills) and sulci (valleys) in order to fit into the limited volume of the cranium (interesting parenthetical: if your nerve axons were the same diameter as those of the great squid, your brain would not fit through a barn door). Hoffman 1988 (“On the Evolution and Geometry of the Brain in Mammals”) provides an equation (numbered 10 in the cited paper) giving the approximate empirically observed relationship between the total unfolded area of the cortex (you could, like a crumpled blanket, flatten out the hills and valleys of the cortex, at least theoretically) and the inner area of the cranium. For an approximate 1000 square centimeter available cranial inner surface, that equation would suggest the unfolded cerebral cortex would be around 3184 square centimeters:

log (unfolded area) = (1.25)log(inner cranial area) – 0.247        (Equation 10 from Hoffman 1988)
log(1000) = 3 (we use base-10 logs). (3)(1.25) = 3.75. Subtract 0.247 to obtain 3.503. Take 10 to that power to obtain the unfolded area, i.e., 10^3.503 ≈ 3184.
Hoffman gives the typical unfolded surface area of human cortex as 2430 square centimeters, implying that actual average inner cranial area is a little less than the 1000 square centimeter value we used for the sake of approximation in our calculation above (I guess Anatole France was not the only pinhead). What we are calling here “average inner cranial area” is really just the surface area of the main volume of the brain. That is the part you see zombies eating in the zombie-apocalypse movies so ironically popular these days of consciousness-deniers and AI-believers.
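
The arithmetic above is easy to check in a few lines of Python (a quick sketch; the 1000 square centimeter inner cranial area is the same round-number assumption used in the hand calculation):

```python
import math

def unfolded_cortex_area(inner_cranial_area_cm2):
    """Hoffman 1988, Eq. 10: log(unfolded area) = 1.25*log(inner area) - 0.247.
    Areas in square centimeters; logs are base 10."""
    return 10 ** (1.25 * math.log10(inner_cranial_area_cm2) - 0.247)

area = unfolded_cortex_area(1000.0)
print(round(area))  # ~3184 cm^2, matching the hand calculation above
```

Running the same function with smaller inner areas shows how quickly the predicted folding shrinks, since the exponent 1.25 amplifies differences on the log scale.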

I note, regarding “Artificial Intelligence,” that the 2016 most influential computer scientist, Michael I. Jordan, has been attempting for years, apparently with little success, to make people understand that computers have not become intelligent in any human sense (no high-level reasoning or thought, cannot formulate and pursue goals). Rather, computers, particularly with machine-learning technologies (software), can provide pattern-recognition capabilities at massive scale (these are the “AI” processes used by search engines, for example, and best-exemplified by the “Family Feud” game show, where contestants compete to return the most common string of words for a particular cue). I assume Jordan is afraid (of the vicious thought police who run the media and universities now) to publicly admit that current digital computers can never become conscious.

Jordan originally studied psychology (as did I). In 1971 I took Intro to Psychology at the University of Texas in a huge class of 150 students. I inadvertently ended up at the top of the class, but was normally unconcerned with top grades for their own sake (though I had been pleased about my National Merit Scholarship results my senior year of high school and the college offers that came to me as a result). We were told not to use the word "mind." The psychology textbook included a picture of a grotesquely fat cat (it was recognizable as a cat only by the label, being a shapeless blob of white fur, with impatient little eyes) in the section on unrestricted reward-seeking behaviors (that picture would now be censored as "shaming" I assume). I must admit that it is hypocritical of me to criticize lack of control, given that my general philosophy in my younger days was, "if some is good, more is better." That being said, at least I can see my error clearly (one of the few benefits of aging).

Returning to brains, you could obtain the approximate cortex volume if you extracted a brain, discarded the cerebellum and brain stem, then submerged the brain in a bucket of water and measured the volume of water displaced. If the brain did not sink, you would have to attach a low-volume but dense weight to the brain to get it below the surface of the water. I believe the brain probably has fat-like density (some brains more so than others) and fats tend to float. For example, a symptom of pancreatic cancer is production of foul-smelling floating feces consisting of undigested fat. The pancreas normally produces enzymes that help break down fats in the digestive process; I studied for the USMLE (US Medical Licensing Examination) a few years ago, but only worked through sample tests, since I have not attended medical school.

A marginally less-nauseating inference (that a brain might float, minus brain stem and cerebellum) could proceed from considering that paraffin, a waxy hydrocarbon solid at room temperature, is less dense than water, so should float (as does butterfat on the more watery fluid in unhomogenized milk, or so I was told by a milkmaid many years ago, though I did not as a rule date milkmaids because they were well aware that there was no need to buy the cow if you could get the milk for free). Knowing that brain axons (which comprise much of the brain white matter, approximately 35% of the human brain volume being white matter, from Hoffman Fig. 6) are sheathed in myelin, which contains lipid, i.e., insoluble hydrocarbons or colloquially, simply fat, we might reason that a brain should float also.
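
The buoyancy reasoning reduces to a density comparison against water (about 1.0 g/cm^3 at room temperature). A trivial sketch, using paraffin's well-known density; any value you plug in for brain tissue is your own assumption, since the question is left open in the text:

```python
WATER_DENSITY = 1.00  # g/cm^3, at room temperature

def floats_in_water(density_g_per_cm3):
    """An object floats when its bulk density is below that of water."""
    return density_g_per_cm3 < WATER_DENSITY

print(floats_in_water(0.90))  # paraffin wax (~0.88-0.92 g/cm^3) -> True
# Brain density is left as an open question here, as in the text;
# the fraction of lipid-rich white matter would pull the bulk density down.
```
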

It is not so much that humans have larger brains (fat or otherwise) than other mammals. An elephant, for example, has a larger brain (as I recall, but my recollection is not as good as an elephant's, which, as those of us who were educated by the high-quality cartoons of the early 60's know, never forget...it is a pity that in 2021 there are fewer and fewer elephants around to forget anything and more and more misery for them to forget...I identify with them in that sense). As Hoffman points out, large organisms need large brains to manage the increased somatic (e.g., more muscle fibers to control by more motor neurons, larger surface area to sense) and vegetative (e.g., a stomach the size of a bedroom would have more smooth muscle and associated innervation to control mixing wave peristalsis, more sympathetic innervation to operate the huge number of gastric secretion sites) demands of life on a grand scale. Well, a large scale anyway (a grazing brontosaurus might not appear grand to everyone). There is accordingly an allometric power law relationship between animal body weight P and brain weight E, E = bP^a, where b and a are allometric parameters fit to the observed data. In logarithmic form log E = log b + a log P, and a log-log graph of brain weight as a function of body weight is a straight line, more or less. One would therefore want to compare brain size among similarly sized animals if trying to draw conclusions about relative intelligence (I suppose one could skip the work and ask public radio to appoint an imbecile to discuss the matter).

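
The straight-line fit on the log-log scale can be illustrated with ordinary least squares (the body/brain weight pairs below are made-up illustrative numbers, not measured data from Hoffman or anywhere else):

```python
import math

# Hypothetical (body weight kg, brain weight g) pairs, for illustration only.
animals = [(0.02, 0.4), (1.0, 10.0), (70.0, 1350.0), (3000.0, 4500.0)]

# Fit log E = log b + a * log P by ordinary least squares on the log-log scale.
xs = [math.log10(p) for p, e in animals]
ys = [math.log10(e) for p, e in animals]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
log_b = ybar - a * xbar
print(f"exponent a = {a:.2f}, coefficient b = {10 ** log_b:.2f}")
```

With real mammalian data the fitted exponent comes out well below 1, which is the point of the allometric argument: brain weight grows more slowly than body weight.
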
As it turns out though, once (over evolutionary time) the mammalian lines began to increase brain size beyond that required simply to manage their locomotion and viscera at a given body weight, it appears that groups of columnar cortical neurons comprising cortical areas of particular function could be combined or duplicated and modified to introduce new capabilities in the particular organism. You can view this as evolution in a mosaic pattern, with some regions changing dramatically and others being conserved more or less unchanged over evolutionary time. The gene responsible for a particular "circuit" (keeping in mind that brains are not analogous in any respect to our electronic digital computers) could then occasionally be incidentally duplicated (these duplications are not uncommon). Selective pressures might then drive modification of the duplicate in a new direction. For example, there is evidence that birds with increased ability to hear and reproduce sounds, e.g., parrots, evolved through this process and that humans, though on an evolutionary path that diverged from birds long ago, experienced something similar (there being an analogous genetic duplication and modification compared to primates that are less capable of speech, e.g., Republican baboons, or Democratic Capuchins).

I guess some readers might have the feeling that I am simply a prig who believes himself to be smarter than everyone else (and perhaps also, from the Latin se amare sine rivali, "to be fond of one's self without a rival," i.e., to be alone in esteeming one's self). Arrogance can be some kind of compensation for other inferiority or just the same kind of narcissism I complain about in the population at large. That is not my motivation. What angers me (and motivates my biting comments) is seeing a process in progress which is destroying civilization (never mind that the same evil opposed me throughout life). The instant brainwashed response is, "who are you to say what is desirable in that regard? We are all equal." It is difficult to proceed from there, since only a fool would say such a thing.

A desirable world would be one in which none of my complaints would apply, a smaller population, a smarter population (this is the only atmosphere in which true democracy can survive), a culture that did not reward greed, a culture that valued achievements that elevate the race of man (every contribution, no matter how small, works for that good). What are elevated characteristics? Honor, honesty, kindness, generosity, courage, intelligence, pride in working toward what is good for oneself and others; these attributes cannot be taught, they must emerge in the experience of a higher level of existence, one that is freely offered to all conscious beings. The mystery of conscious life, in every instant, is testament enough to a transcendent purpose, but seen only by those who wish it, paraphrasing somewhat from Confessions of St. Augustine, Book X, Chapter VI, 10.

The present level of technology and science was built by a civilization, that of America and the Western tradition that gave birth to America, that recognized these characteristics (and the higher experience from which they sprang) as desiderata. The current decaying culture rejects those aspirations (aside from rejecting any standards) and specifically their metaphysical source.

What will be the future for a densely populated world, one with deteriorating environment from climate change and the poisonous accumulation of a planetary biomass of humans, where the average intellectual capability of men has decreased significantly and the young have been taught that they are merely animals with no prime good other than to acquire what wealth and power they may? The proposed "Humanist Manifesto" as replacement for the true metaphysical inner experience upon which all good is founded does not seem to be working so well, judging by daily events. We find ourselves in a sinking ship culturally, yet continue to haul in more buckets of water as it were (another substance comes to mind, but I will refrain from scatology). The history of mere biological evolution on Earth is a history of the law of death, i.e., the simple but brutal struggle of organisms for limited resources. It is not inner good that is driving our retrogression, but something diametrically opposed.

Thursday, April 1, 2021

Can I write a quick blog post?

I have never succeeded in the past in blogging quickly. I feel the time is right. The wind is sporadically blowing my internet cell towers in the distance, effectively modulating the signal and ruining my bandwidth and consistency, so I must be quick. The powers that be enjoy nothing so much as seeing me carefully write for an hour and then erase all of my work and present a blank page with a note, "something went wrong." Not that I don't provide for this cruel trick by saving externally into gedit notepad occasionally (once burned, twice wise).

I have been thinking about the insanity of homo sapiens, if it is still that species. We discovered that specialization by talent could create better products for societies of men. Then some artisans (originally all work was art, talent and creativity combined) thought they could offer more of their work if they sought out investment. That reasonable idea was perverted, as is always the case with anything human at this point, into a casino where parasites bet (literally) on the value of the certificates of investment in businesses. The parasites chosen to head those now mindless conglomerates are not the artists of the particular trade, but more of those who bleed out the work of others to simply accumulate wealth. They in turn are told their duty is to maximize the certificates of investors, rather than to help improve the products of the company that started the operation, the living it provided to workers, the benefit to society of the well-made offerings.

Then you have the politically correct gestapo controlling the media. I watch a NOVA report on krill death in the oceans (the primary food for many species). Not once do they address the cause---the Earth has become a rotting piece of fruit circling the Sun, overrun by an apparent species of bipedal maggot. Aside from my possible hyperbole, you would have to be brainwashed or an idiot (and often the two occur together in the output of American colleges and universities these days) not to see that 7 billion humanoids and the filth and pollution they generate are no different from the situation in a petri dish when equally stupid bacteria continue multiplying until they end up embedded in their own waste and die.

Oh well, I am old and cannot continue forever (as Google seems to be reminding me occasionally regarding plans for my account should I disappear--for one reason or another). I am pleased about one thing lately. I did look into the NASA operation of space launches and found they somehow carry out successful operations, the actual placement of the project into orbit, based on distributed work across many locations. It is really a testimony to good management and high competence, even though much of the technology was inherited from earlier days of operations.

Got my 2nd Pfizer shot last week and pfeel fpretty good about it, though the second one made me feel like every orthopedic injury and other malady was enraged and trying to kill me by production of pain for about 10 hours. What price survival? Now I am protected...DOH!! Forgot about the busily mutating COVID-19 virus. I am beginning to wonder if God is simply fed up with us and has decided to drop the hammer---half the US assaulted by hurricanes and tornados, blizzards, heat waves and droughts and wildfires on the other parts...and a virus that mutates far more quickly and effectively than anything I have seen before (despite the frightened assurances otherwise).

The idiot media (see earlier comments) trumpets (I still feel nausea when I type that sequence of letters t-r-u-m-p) the $2 trillion cost of President Biden's proposed infrastructure plan while saying nothing about the cost of removing the legitimate taxes on the wealthy and corporations after that imbecile actor Reagan began the plan to destroy government (as other than an assistant to the oligarchy). The state of the infrastructure, living conditions, and wages in America is a disgrace compared to any other Western nation (if they can still be so termed when their core cultures have been obliterated by our exported PC multicultural baloney). I wonder if the serfs in feudal Europe felt equally aligned with their feudal masters.

I am living in a nut house or some kind of experiment in devolution of primates. Welcome to my nightmare. Good night, and good luck.

Friday, February 19, 2021

Congratulations NASA, perseverance pays off it appears; other musing

 Upon successful landing of the Perseverance Mars rover this week I felt slightly guilty about my recent dig (January 21) that NASA “can no longer field its own hardware, but ... have a diverse workforce.” As a former engineering professional, I had an inside view of a large (civilian) technology organization. Perhaps that was a somewhat corrupted view to the extent that GTE Network Systems relocated to El Paso in 1978 primarily to achieve the shutdown of the company without having to fight the union workers in Albuquerque, NM and San Carlos, CA (GTE subsequently broke up into Verizon and Sprint, not sure if they still are the local phone company in California). I gradually became aware of this plan as floor-level supervisor of the electronic quality engineering audit program to assure the telecommunications equipment manufactured there met official specifications before being shipped.

Higher management could not convince me to stop my people from failing out-of-specification equipment (leading to an amusing battle where I for a time erected a 10-foot chain-link fence around the quality engineering audit area in order to prevent raiders from removing sample lots to the shipping area) and finally told me point-blank quality no longer mattered since the plan was to cease operations. My own superiors, legitimate engineers, left the company and I was forced onto the second-shift gulag in another position in order to permit the shipping goals to be met without further argument on the basis of quality audits. 

After a time, I also left GTE Network Systems and eventually worked for design startups rather than manufacturing facilities, enjoying the highly-motivated highly-talented small-team atmosphere without the layers of parasitic management and HR. As for HR, I recall an amusing interchange with an uncle of mine c. 1977, a pleasantly dull fellow with a corporate appearance who naturally ended up in Human Resources; he  had wondered who would hire me as I made the transition from rock and roll musician after 1975. I laughed at him and told him that anyone who actually had to create something or provide a service requiring talent would find me useful. 

Transitioning from telecommunications to software engineering, I worked, for example on the AMBI Voice-Data-Terminal design, which, using a proprietary multitasking operating system base (I worked for the designer of that operating system), sent and received email over telephone lines (I was hired to improve their communications software at AMBI because I had background in telecommunications as well as computer programming at the hardware level), and had an electronic calendar, notepad and terminal interface to mainframes in 1985.

At any rate, it is not clear how much of the work resulting in the apparent success of the Perseverance mission thus far is actually to be attributed to NASA directly. After Reagan initiated the destruction of government in the 1980s (and the transformation of the American middle-class into self-loathing serfs bizarrely allied with their greedy masters), NASA found itself with increasingly reduced funding and, quoting from a 2012 report, the country found itself "living on the innovation funded in the past." NASA had a program executive and scientist in Washington, D.C. for the Mars 2020 mission, but the JPL associated with Caltech very likely did the heavy lifting. The seven instruments and experiments of the rover were each developed by different academic institutions (though one involved Los Alamos National Laboratory).

The Perseverance was launched from Earth using a ULA (United Launch Alliance, a joint venture between Lockheed Martin and Boeing, founding members of the old military-industrial complex) Atlas 541 rocket, with a first stage still using a Russian RD-180 engine burning kerosene (RP-1) and LOX (liquid oxygen). They were hoping the BE-4 engine developed by Jeff Bezos' Blue Origin company could replace the Russian equipment (it must be difficult to talk about sanctioning someone while you are dependent on them for your space launches), but apparently there was some controversy about the government obtaining Amazon free shipping and the replacement has not yet occurred.

The Perseverance was tucked inside a large payload fairing at the top of the long rocket. With the 5 meter diameter payload (first digit "5" in the Atlas 541 designation) sitting on top of a 3.8 meter diameter by 32.5 meter long Common Core Booster, it appears similar to a bacteriophage.

The "4" in Atlas 541 indicates 4 solid rocket boosters were attached to the base of the rocket (a design taken from Warner Brothers Cartoons as I recall, the famous Acme Rocket design used on the Space Shuttle launches). The "1" in Atlas 541 indicates a single engine on the Centaur second stage.

I was surprised to see that the US Department of Energy provided the radioisotope thermoelectric generator (MMRTG) providing power on the Martian surface for the Perseverance (I don't see any photovoltaic cells in the pictures, so maybe they did not bother with that as a secondary supply, the Sun being somewhat dimmer as far away as Mars, though there is not much air pollution, yet), since that uses plutonium-238, which for a long time after the 1964-1979 detente period was hard to obtain in the US, since we curtailed our production of nuclear weapons materials.

I became aware of the shutdown of production reactors in 2020 when I was researching nuclear reactors in connection with physics articles I wrote concerning neutrino physics. The first experiment to really establish the existence of the neutrino was conducted at the Savannah River site (Aiken, South Carolina) nuclear weapons reactors. This Reines and Cowan experiment culminated in a 1956 congratulatory telegram from Reines to Wolfgang Pauli, the physicist who had invented the idea of the neutrino in 1930 to explain the problem of the continuous beta-decay energy spectrum. In beta minus decay a neutron in a radioactive nucleus transforms to a proton, emitting an electron, aka beta particle, which should have been observed with a single energy spike, but instead was observed with a continuous range of energy, as if it was sharing the process energy with an unseen neutral particle, later shown to be the electron anti-neutrino. It had not been thought possible to detect a neutrino, the neutrino largely ignoring most matter, the Sun and the Earth appearing as fine crystal does to light (i.e., crystal is transparent to photons and matter is more or less transparent to neutrinos).

Well, congratulations to NASA, whatever their exact degree of participation. They surely would have taken most of the blame if there had been a problem in what is without a doubt a project with innumerable (even Cantor could not count them) ways to fail. 

NASA and space travel being on my mind, I read a 2012 report earlier today discussing nuclear propulsion. The figure there (below) gives the transit time for various nuclear-electric propulsion velocity-change capabilities:

One way of doing nuclear propulsion is by using the heat from a fission reactor to generate electricity, which then is used to accelerate ions and eject them from the spaceship thrusters (engine) at very high velocity. Recall Newton’s Third Law of Motion: whenever one body exerts a force on a second body, the first body feels a reaction force equal in magnitude and opposite in direction, i.e., the rocket is pushed in the direction opposite to the departing ion stream, just as a bottle rocket is pushed away from the burning propellant shooting out the back. A nuclear reactor is heavy and the thrust is relatively low compared with chemical engines, so it must be operated continuously to achieve the desired delta-V (change in velocity). It therefore takes a long time to get up to speed and a long time to decelerate at the destination. So you see the flight transit times estimated for various length missions (that does not take into account ion propulsion advances in the last 8 years though; I have a NASA facility relatively near my location out in the desert and they have their own private high-power electric lines, so make of that what you will). 
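The thrust-versus-exhaust-velocity trade-off can be illustrated with the Tsiolkovsky rocket equation. A minimal Python sketch, with hypothetical specific-impulse and mass-ratio numbers chosen only for illustration (ion thrusters buy their delta-V with far higher exhaust velocity, at the price of tiny thrust):

```python
import math

def delta_v(isp_s, mass_ratio):
    """Tsiolkovsky rocket equation: delta-V = Isp * g0 * ln(m0/m1)."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(mass_ratio)

# Illustrative values: a chemical engine near Isp ~450 s versus an
# ion thruster near Isp ~3000 s, both with the same 4:1 mass ratio.
chemical = delta_v(450, 4.0)   # ~6.1 km/s
ion = delta_v(3000, 4.0)       # ~40.8 km/s
```

The ion case yields several times the delta-V for the same propellant fraction, which is why it suits long interplanetary transits despite taking so long to accumulate the velocity change.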

What caught my interest in the figure though was the label “Interstellar Precursor.” A tear involuntarily came to my eyes as I read this. Interstellar, that means “between the stars.” That implies a destiny beyond an overpopulated box of bipedal mammals (a specific mammal came to mind but I censor myself for once) with increasingly low intelligence and nasty temperament. Why such a personal connection for me with that idea of interstellar travel? 

I knew a man in 1962, in or out of the body, I cannot say (some may recognize my style here as that of Paul in 2 Corinthians 12:2-3, his Second Epistle to the Corinthians). But he went out to investigate a strange object late one night in the southwest desert and perhaps experienced things which it is not allowed to relate. He began to dream of things to be, to sense the thoughts of others, and to live more or less a life parallel to the one here on this workaday planet. He suffered much though--as usual such men must be reminded of their mortality and human frailty, but gold is tried in the fire, and acceptable men in the furnace of adversity (Ecclesiasticus).

My recent work has shifted to pure mathematics after devoting the last four years mostly to quantum mechanics and then neutrino physics (though that necessarily included a significant amount of applied mathematics). I just published a tutorial on automated theorem proving with the Isabelle proof assistant, the Isabelle Proof Assistant Tutorial. As a friend observed, “well, at least you stay busy.”

Thursday, January 21, 2021

Yesterday, the inauguration of Joe Biden as President brought a feeling of relief and hope, despite my general pessimism about the long-term survival of America. Yes, now we have a man in the office (rather than a parody of a man), but tens of millions prefer a parody of a man, identify with lack of integrity, morality or intelligence (I was appalled to see one supporter of the exiting clown prince yell, "He is one of us"). China is correct to control social media. When you have a population of largely emotion-driven humans without much in the way of higher principle or intelligence, it is madness to connect them instantly to one another and amplify every metaphorical emission of the gas of decay.  Competing (in the race to destruction) we have the grotesque perverts (I do not mean sexually, since adult consensual sex is strictly the business of the participants) who infected our universities from the 1960's onward, apparently preaching that white, Christian, Western Civilization should apologize for its achievements and open the gates to every barbarian horde that demands a more pleasant place to live. Yes, America began with people leaving Great Britain seeking a better life, but they went to an unsettled, wild land and tamed it, displacing the sparse population of natives, but that has always been the way of evolution--until now, where the superior group is being taught to move aside voluntarily for that which could normally not compete.

It could have been so different. America would welcome applicants for citizenship who offered something and who did not overwhelm the present capability to bring in more population. The days of trying to populate a wilderness are long past. There are too many people here now (and everywhere on Earth for that matter). It is a zero-sum game, but that is not the worst of it. Already we begin to see infrastructure failures, more power failures, etc. as the marginally competent are given jobs. NASA can no longer field its own hardware, but hey, they have a diverse workforce. The national lab system, along with the US military, actually employs foreign nationals. I wonder if I am living in a madhouse where the inmates have taken over.

Well, if the essential goodness of a single man in a single office can overcome all of the above, then President Biden has a chance. I think of Lot trying to find smaller and smaller groups of good men...

I am old and tired, but persist in my research and writing. I published my best work in neutrino physics, Solar neutrino parameters, and then returned to formal mathematics, resuming work with the automated proof assistant, Isabelle. I am in the process of writing a tutorial on using Isabelle for classic mathematics, proving that the square root of a prime number cannot be a rational number (the problem was originally that of the Pythagoreans, i.e., how to deal with the diagonal of a unit square, which turns out to be an incommensurable length). There are many ways to go on that. I am following one of the routes taken by the Isabelle community: assume the square root of a prime p is rational, i.e., sqrt(p) = m/n with m and n positive integers, n non-zero and m and n coprime, then derive a contradiction by showing that p must divide both m and n; but gcd(m,n) must be 1 if they are coprime, and p, being prime, cannot be 1.
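The skeleton of that contradiction argument can be written out informally in LaTeX (a sketch of the standard route, not the Isabelle proof script itself):

```latex
\begin{proof}[Sketch]
Assume $\sqrt{p} = m/n$ with $m, n \in \mathbb{Z}^{+}$ and $\gcd(m, n) = 1$.
Squaring gives $p\,n^2 = m^2$, so $p \mid m^2$; since $p$ is prime,
$p \mid m$. Writing $m = pk$ yields $p\,n^2 = p^2 k^2$, so $n^2 = p\,k^2$,
hence $p \mid n^2$ and again $p \mid n$. Then $p \mid \gcd(m, n) = 1$,
contradicting $p > 1$. Therefore $\sqrt{p}$ is irrational.
\end{proof}
```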

I was a little unhappy to see Isabelle using properties of real numbers in the proof, which is required apparently because abstract properties involving division demand a field, and the integers cannot form a field since not every non-zero integer has a multiplicative inverse. Isabelle does take the approach of deriving most properties of numbers from basic principles, e.g., the Peano axioms, rather than merely stating axioms or postulates. In that way, they build a rock-solid structure that rests upon formally proven natural deduction steps and continues in the same manner.

...and so it goes, recalling one of my late wife's (Cheri) common utterances.

Saturday, September 26, 2020

September 26, 2020, fear and loathing on the planet of the apes? Never thought I would see a parallel to Clodius (the Publius Claudius Pulcher of the factions in 59-50 BC Rome when Caesar was away in Gaul) here in the United States of America. United. "United?" About a third of the US population is anti-intellectual, narcissistic, insolent, overweight and with indications of worrisome repressed sexual tensions (I haven't the heart to discuss projection with these folks, many of whom were my colleagues many years ago in the music business, but it is embarrassingly obvious that their vile slander against well-known and respected public servants comes from within themselves rather than without), i.e., a perfect breeding ground for Clodius and his ilk. Another third are anti-Western-civilization indoctrinated and busily gnawing away the foundations of the American experiment, one that produced the most advanced society up to now, technologically and socially (in history, how many times have you seen a nation fight for the peace of all? or generally greet other races and cultures with some acceptance, in a world where genocide was always the rule sooner or later, and still is, that being the natural tendency of animals). 

What about the remaining third of the population? That contains many relatively affable folk who just want to do what they are capable of as best they may, whatever the profession, and enjoy friends and family in peace. Within that group is a smaller percentage of truly noble persons, many of whom I have had the pleasure of meeting and interacting with over the years. I owe my life in fact to many of those unsung heroes, the ones who go against the grain to do what is right, that being the definition of integrity really, i.e., not having oneself contaminated by the crowd, but maintaining an intact core of self, self-guidance and that deeper well-spring of humanity that cannot be explained in evolutionary terms (despite the convolutions of the self-hating types like Dawkins attempting to mislead many).

A small group of intellectuals created the American founding documents (John Adams in particular), for the first time a revolution guided by the lessons of history and by transcendent philosophy and nobility of character. Adams knew well the danger of the population taking direct control of the reins of government and engineered protections that worked well for a time. Those protections relied on a minimum fraction of the population being of higher integrity and capability and somehow reaching the positions in government. We have unfortunately reached a heretofore unseen low in that regard, both in the relevant population and in the individuals that population elects to power, directly or indirectly. One can only hope that a Marcus Aurelius will emerge from the US military ranks if the situation deteriorates drastically in the next few months.

There is of course a basic absurdity in much of the organization of humans. One can surmise that the beings who appear to have been monitoring our progress are at a loss to explain how most of the capability of a society is wasted providing resources for a parasitic few, to the point that even the infrastructure of the civilization begins crumbling (not to mention why humans breed uncontrollably like bacteria in a Petri dish, seeming not to notice they are overgrowing their environment, with the inevitable outcome of widespread disease and large-scale die-off). It is necessary for progress that the level of mankind be increased, but the evolutionary trends are opposite to that now and have been for many centuries past. With CRISPR technology one nation or another will begin to "upgrade" their genome, but without proper guidance (yes of course, I mean men who think and feel as do I, grin) those changes may eliminate much of the genius and nobility of man at his best. 

These considerations make me fear that my work (physics primarily the last four years), passing on my hard-won understanding to other students in various venues, e.g., Neutrino Physics, or Neutrino computer code, articles, is simply "fiddling while Rome burns." Well, we persevere, feeling blessed that we are able to study and write in a subject we had not formerly been able to pursue. We implemented C++ computer code in Python/NumPy lately in order to graph the flavor transition probability of muon neutrinos at a 1300 km baseline in the range of 0.3 to 10 GeV (including the effect of their passage through the Earth's mantle), using the exact equations originated by Zaglauer and Schwarzer c. 1988, and in particular their 2019 implementation in C++ by Fermilab physicist Stephen J. Parke (along with Denton, Barenboim, and Terne). (We wrote software in C back in the 80's and are inclined to agree with many that C++ is too extensive, though useful if you adopt a subset of its features.) I have read much of Parke's work, going all the way back to his 1986 paper "Resonant Solar Neutrino Oscillation Experiments", in which he gave a detailed treatment of the then-recent suggestions by Mikheyev and Smirnov, which turned out to be the likely solution of the solar neutrino deficit problem first exposed by Ray Davis's experimental work, assisted by John Bahcall's development of the Standard Solar Model. We graph the performance of the Parke equations in their C++ form, our implementation in Python/NumPy, and an approximation from the T2K experiment:

The T2K code (green) continues higher at 0.3 GeV (far left of graph), but it is known to be less accurate at that level. We spent many hours yesterday troubleshooting our ZS code in order to obtain pixel-to-pixel agreement with the Parke code. The error turned out to be a single parenthesis that our aging eyesight did not properly register as enclosing the entire denominator of a long equation in fraction form. It was enjoyable to be writing C (well, C++) code again and compiling, after some 33 years. Like bicycling, you never really forget how to code, you just fall down more often, perhaps.
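For readers who want to play with the general shape of such a curve, here is a minimal two-flavor vacuum approximation in Python/NumPy. This is emphatically not the exact three-flavor Zaglauer-Schwarzer matter-effect calculation discussed above, and the parameter values (a splitting of 2.5e-3 eV² and an effective sin²2θ of 0.085) are round illustrative numbers of my choosing:

```python
import numpy as np

def p_osc(E_GeV, L_km=1300.0, dm2_eV2=2.5e-3, sin2_2theta=0.085):
    """Two-flavor vacuum oscillation probability.
    P = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E); the 1.267 factor
    absorbs hbar, c and the eV^2 * km / GeV unit conversions."""
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

E = np.linspace(0.3, 10.0, 500)   # energy range used in the graph above
P = p_osc(E)
# First oscillation maximum, where the phase equals pi/2:
E_peak = 1.267 * 2.5e-3 * 1300.0 / (np.pi / 2.0)   # about 2.6 GeV
```

With these numbers the first oscillation maximum lands near 2.6 GeV, the right neighborhood for a 1300 km baseline; the matter effect shifts and distorts this in the full calculation.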

Monday, August 5, 2019

We came in with a Bang

I was alluding to the Big Bang, the creation event that the atheist scientist community really hated to have to swallow. I happened to be reading the description of the 1978 Nobel Prize awards, which included Arno Penzias and Robert Wilson for their 1965 discovery of the CMB, the cosmic microwave background, that simmering remnant of the explosion of space-time some 13.8 billion years ago (using the 2018 Planck report figure), and found the Nobel site (hopefully written in 1978) describing it as "tempting to assume that the universe was created by a cosmic explosion...though other explanations are possible." In 1978 I considered it well-established that the Big Bang was the origin of the Universe, but as I say, some had to be dragged kicking and screaming to that conclusion. They managed to save their position (atheist, PC, etc.) by creatively proposing that our universe is only a single bubble in a huge froth of universes, the "multiverse." Besides being conveniently unfalsifiable, the concept also provides some justification for the relative lack of progress in theoretical physics for the last half century (or more if you demand equivalence to the discovery of quantum mechanics and relativity at the turn of the twentieth century), in that it can now be claimed (with a straight face) that it is really impossible to discover more about nature, because the remainder of what happened is simply random turns of the dials on each bubble that pops into the Multiverse (this goes well with a string theory diet).

In any case, after my previous post I began to feel guilty about my brief characterization of CMB photons as having a single frequency equivalent to a 2.725 K temperature. The CMB photons are described by a spectrum, a curve showing how much power (equivalently, roughly how many photons) is present at each of a range of frequencies, so it is a little misleading to speak of a single frequency. I accordingly dropped my other work (a physics study I have spent two years on at this point) to add a little more to my discussion.

I was surprised (again, and why is it that most surprises are undesirable? must be my life's experience or the deterioration of this world during my lifetime) to find that the equations and presentations of the data related to blackbody radiation (the CMB radiation is almost exactly that of a blackbody spectrum, equivalent to a perfect radiator at 2.725 kelvin) are ambiguous at best, and possibly flatly wrong on occasion, depending on the author (there are many different forms for the equations and units involved, e.g., intensity at unit area vs. integrated over a hemisphere). It is not that the mathematics and physics involved are new; Max Planck got the ball rolling back in 1901 or so, when he invented what became known as the Planck constant, h, in order to avoid the ultraviolet catastrophe. Scientists up to that time had made very accurate measurements of the frequencies of radiation emitted from hot objects, in particular those constructed to be the equivalent of a blackbody. By blackbody I mean an object that does not reflect any electromagnetic energy, like a mirror reflects, instead absorbing everything that comes its way and in turn emitting a very precise spectrum of energy related to its own temperature. At the time they created hollow cavities, termed "hohlraum", with a tiny hole out of which they could measure the intensity and character of the radiation inside. Almost all the radiation of the hohlraum is trapped inside, bouncing around internally, which has the effect that the radiation escaping the hole reflects solely the cavity's own temperature, rather than exchange processes with the environment.

A few had offered equations to characterize the blackbody radiation, but they had the undesired quality of blowing up at short wavelengths, i.e., creating an ultraviolet catastrophe. Planck laboriously (I have discovered through the accounts of many scientists that scientific work requires a lot of effort, which makes me feel a bit better about my own self-inflicted pain in that area) came up with the idea that the radiation energy at a given frequency ν could not simply be any number on the real number line, but rather would have to come in integer multiples of hν, with h this odd constant, 6.62607004e-34 Joule seconds (I have one of his 1901 papers and see he got very close to this present value of h, he giving 6.55e-27 erg second; an erg is 1e-7 Joule). He hoped for some time that someone would find a way to explain this in some other way, but quantum mechanics took off within a few years, changing everything Planck and every other scientist had known in the way of a world view (along with special and general relativity; most people are aware Einstein created that new area, but less aware that he received the Nobel Prize only for his work in creating quantum mechanics, he having found immediate use for Planck's constant in describing the photoelectric effect).

I'll see if I can somehow type the Planck law equation of interest to me (it is a real pain in the neck to work without LaTeX math or its online version MathJax):
I(ν, T) = (2hν³ ∕ c²) · 1 ∕ (exp(hν ∕ kT) − 1)
Whew, that is ugly. Well anyway. That equation gives you the radiation intensity as a function of frequency (that is the ν, a Greek letter nu) and temperature, in units, depending on what constants you use, of W per m^2 (meter squared) per Hz per steradian (a chunk out of the surface of a sphere; picture the radiation emerging from the eyes of angry Superman stuck in a big ball, say a hohlraum, and cones of death ray hit the surface). The famous Planck's constant is the h, which is privileged to have its own Unicode symbol up here apparently. exp means "exponential", i.e., Euler's number raised to the power of the stuff in parentheses next to it. I would have liked to put a "B" subscript on the k to signal that this is Boltzmann's constant, but that was perversely impossible (every letter other than b had a subscript when I looked at the menu here). T is temperature in kelvin. The frequency ν is in Hz (cycles per second). I wrote some computer code (I work in a Jupyter notebook environment, where I can type perfectly typeset mathematics in one place and active Python computer code in another, with all the scientific software available for that environment, SciPy, NumPy, SymPy, Matplotlib, etc.---first time in years I have felt like computers were actually fulfilling some of their promise, despite Gates' constant attempts to sabotage anyone's efforts to accomplish that...the Borg operating system, as it is known in the world-wide computing community) to graph the result of a spectrum of frequencies from 100 MHz to 1000 GHz input to that equation above:
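For anyone who wants to reproduce the curve, a minimal self-contained version of that calculation follows (my own sketch, not the original notebook cell; np.expm1 keeps the denominator accurate where hν is much smaller than kT):

```python
import numpy as np

# Physical constants (SI)
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck_nu(nu_Hz, T_K):
    """Planck spectral radiance B_nu(T), in W m^-2 Hz^-1 sr^-1."""
    # expm1(x) = exp(x) - 1, numerically accurate for small x
    return (2.0 * h * nu_Hz**3 / c**2) / np.expm1(h * nu_Hz / (kB * T_K))

nu = np.linspace(100e6, 1000e9, 100_000)   # 100 MHz to 1000 GHz
B = planck_nu(nu, 2.725)                   # CMB temperature
peak_GHz = nu[np.argmax(B)] / 1e9          # spectral peak in GHz
```

Plotting B against nu (with Matplotlib, say) reproduces the blackbody curve; the peak falls near 160 GHz, consistent with the CMB literature.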

And there you have it (hoot, there it is?). I encountered so many errors and misuses of the Planck equations in the literature that I intentionally used the same spectrum covered in a figure in a 2011 paper by P.J.E. Peebles (The natural science of cosmology, 2011) as a cross check on my methods. Peebles, by the way, was one of the scientists with whom Penzias and Wilson shared their discovery in 1965; they published a letter in the Astrophysical Journal, vol. 142, pp. 419-421 (1965), and Peebles, along with his collaborators, published "a possible explanation for the observed excess noise temperature [of the antenna]." You have to consider that this was an age when men in science conducted themselves like gentlemen, with integrity and understatement. It is probably difficult to understand for anyone of the current era. These folks knew they had probably found the simmering heat of the Creation, but described it in clear, non-hyperbolic terms as the Bell Telephone scientists that they were (they had used a re-purposed piece of satellite communication equipment, a giant horn, I kid you not, used to communicate with a couple of early telecommunications satellites). Bell Telephone Labs, that is another loss of that age. Amazing discoveries came out of their work, most of it freely shared with the world. I was just reading Nyquist's 1924 paper (published in a Bell technical journal) on Certain Factors Affecting Telegraph Speed the other day. That was the work that in many ways started the field of communications theory, his name still used today in the term for the minimum sample frequency needed to prevent aliasing (the Nyquist criterion). I should add that "the horn" was a microwave horn, not something you might find in the Alps. At high frequencies electromagnetic fields can be guided through suitably configured plumbing, as it were.

In any case, I wanted to make it clear that there were more frequencies involved in the CMB than my June 17, 2019 post discussed. I had mentioned 57 GHz. You can see that the peak of the spectrum is around 160 GHz. Penzias and Wilson's 1965 article specifically dealt with the 7.35 cm flux, 4080 MHz or 4.080 GHz (their horn was tuned to that frequency, like trumpets with air, there are favored notes in electromagnetic radiation for the particular construction), which is pretty far out on the left tail of the distribution I graphed above.

Monday, June 17, 2019

Loose associations, binary thinking, time-space, neutrinos, and counting

I was thinking about the tree structure of syntax diagrams (that I discussed in my previous post) and began to wonder about the matter of binary decisions. By binary decision I mean having two choices only, i.e., branch to the left or branch to the right. In the most fundamental expression of such a dualism, mystics must inevitably confront the paradox of God as the spiritual light, but also the cause of nature's black deeps (paraphrasing Carl Jung). You encounter this tension of the pairs of opposites throughout the temporal world, that is to say, the world where we find ourselves conscious and proceeding through time moment by moment. There is no change where there is no time, and anything material, at least anything with mass, experiences time.

For example, neutrino particles (the Standard Model of physics, incorporating the electromagnetic, weak and strong nuclear forces or interactions, classifies all the subatomic particles known to date; there is the familiar electron, proton and neutron, but there are many others, including the electrically neutral and difficult to detect neutrino) must have some small mass and experience time, since the neutrino apparently does change. The neutrino can be observed to interact in three different "flavors," i.e., electron, muon, tau (the flavor is simply the type of charged lepton it appears with in an experimental observation), and in fact oscillates among those three states (or rather, the probability of catching it interacting with a particular lepton changes periodically with distance) as it travels, which involves the propagation of its composite mass states at slightly different rates.

The propagating neutrino mass states (there are three of those, different from the interaction flavor "state", i.e., the mass states are the true neutrino, the flavor states simply a convention) are energy-momentum four-vectors, rank-1 tensors really. Before Einstein and Special Relativity we conceived of physical objects occupying space, e.g., we could bring two objects into contact and that contact would end where the "space" occupied by each met. We had a vague notion of an abstract "space" in which objects existed, but that notion really required a body of reference (following Einstein's 1921 Princeton lectures on Special and General Relativity).

You may recall from high school algebra that we could construct Cartesian coordinate systems where we indicate the position of points (and objects consisting of the loci of such points) in a three-dimensional grid with three axes, x, y, and z. A line connecting two points in this grid could be described by an interval s² = Δx² + Δy² + Δz², where the "Δx" notation means "the difference in position between the two points in relation to the x-axis." Time was thought to be independent of space, a separate concept. We would speak about events occurring at different Cartesian coordinates simultaneously, without a lot of thought about what that implied. Einstein realized that there are no instantaneous effects across distance, the speed of propagation of effects being limited to the speed of light. Accordingly, it became clear that only events had physical reality, the time and the location of an event varying (mathematically, not as opinion) with the observer's frame of reference.

By 1895, Henri PoincarΓ© commented that the recent results (1887) of the Michelson-Morley experiment, which failed to show any variation in the speed of light with respect to the translational velocity of the Earth, implied a principle of relativity which prevented a moving observer from detecting any difference in the laws of physics compared with another observer at rest (there are a number of qualifications here, but we will leave it in a simple form).  You might have expected, contrarily, that if space was a substance, an aether through which light travelled, like an ocean wave through water, that it might travel a bit slower going upstream in a river as it were, or in the case of the Earth moving in its orbit around the Sun, light might propagate a little slower moving against the direction of motion of the Earth.

In 1904 Hendrik Lorentz summed up his recent work, including Poincaré's contributions, on transforms from one frame of reference to another, consistent with the Maxwell electromagnetic equations implying the constancy of the speed of light in all inertial systems. He proposed the Lorentz transformation still used today, with time dilation and contraction of bodies along the direction of motion. Interestingly, the contraction concept was originally introduced by Oliver Heaviside (you can find a copy of this work from The Electrician, 1888, on pg 511 of the Heaviside Electrical Papers, Vol. II, published 1894). Heaviside had analyzed the movement of a charged spherical conductor and found that the charge distribution could remain static only if the sphere shortened in the direction of motion, becoming an oblate spheroid by a factor (1 − v²∕c²)¹⁄².

For our context, we are interested in the fact that these new ideas required our old view of a separate three-dimensional space and one-dimensional time to be replaced by a four-dimensional space-time continuum, which Einstein presented in a unified way in his 1905 papers. Our three-dimensional coordinate equation above becomes instead an invariant space-time interval ds² = c²dt² − dx² − dy² − dz², connecting points in a space-time coordinate system (we replaced the Δx² notation with dx², which means about the same thing, a difference, albeit infinitesimal, between two points). Though this space-time interval is measured to be the same by all observers, moving clocks tick more slowly and objects shrink in the direction of their motion, i.e., as we alluded to previously, different observers might not agree about the time and location of events, but they can all agree on the space-time interval. The Lorentz transformation permits, e.g., an observer at rest to calculate the time experienced on a moving object. For example, the proper time τ on the moving object, say a rocket headed away from the Earth (say along the x-axis), would be dτ = dt ∕ γ, where γ = 1 ∕ (1 − β²)¹⁄² and β = v ∕ c (the velocity we observe for the rocket, divided by the speed of light).

ds² = c²dt² − dx² − dy² − dz² = (c dt)² [1 − β²] = (c dt ∕ γ)² = (c dτ)²

That is, we observe time in the moving clock frame as t = γτ; their clock tick intervals appear longer than ours by a factor γ, i.e., their clock appears to be running slowly in comparison with our clock at rest. Though we may not agree on clock times or the lengths of objects, we all calculate ds² to be the same number, just as we may rotate three-dimensional vectors in Euclidean space without changing their length.
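A quick numerical check of the time-dilation relation above, as a Python sketch (the 0.8c rocket speed is just an illustrative value):

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

# A rocket at 80% of light speed: while 10 s pass on our clock at rest,
# the proper time tau = t / gamma elapses on the rocket's clock.
g = lorentz_gamma(0.8)   # 1 / sqrt(1 - 0.64) = 1 / 0.6
tau = 10.0 / g           # 6 s of proper time
```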

The time-space interval example above used the position (or position-time) four-vector, xᵘ = {x⁰, x¹, x², x³}, where the first component is time (multiplied by the speed of light), ct, and the remaining three variables are the usual Cartesian x, y, z axis coordinates (numbered 1, 2, 3 as superscripts to indicate a contravariant vector). We wanted to look at another type of four-vector though. Above we mentioned the energy-momentum four-vectors that describe the three mass states that comprise a propagating neutrino particle. They look like pᵘ = {p⁰, p¹, p², p³}. The p⁰ component is now E, the relativistic energy, and the remaining three components are the momenta on the x, y, z axes. What we wanted to note was that the neutrino mass states, ν₁, ν₂, and ν₃, each propagate as an energy-momentum four-vector, together initially, but because their masses differ slightly they travel at slightly different velocities and, amazingly, are believed to separate after a sufficient distance.

We may simplify the analysis by considering only two of the four-momenta (we are not concerned with relativistic frames of reference here, so the γ factor will be omitted), described by the relativistic energy relation E = (p² + m²)¹⁄² (the square root of the squared momentum plus the squared mass, in natural units). Here E is one of E₁ and E₂, and the rest mass m is one of m₁ and m₂, in turn. Using the binomial expansion, that can be reduced to Eⱼ ≈ pⱼ + mⱼ² ∕ 2pⱼ for each mass state j (and likewise for k).
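A quick sanity check of that binomial approximation in Python (the momentum and mass values are arbitrary illustrative numbers, chosen large enough that the truncation error is visible in double precision, not realistic neutrino values):

```python
import math

# E = sqrt(p^2 + m^2) versus the expansion E ≈ p + m^2/(2p),
# valid when m << p (the highly relativistic regime).
p = 1000.0   # momentum (arbitrary units)
m = 10.0     # rest mass (same units)

E_exact = math.sqrt(p**2 + m**2)
E_approx = p + m**2 / (2.0 * p)
err = E_approx - E_exact   # next term in the series is -m^4/(8 p^3)
```

The approximation overshoots by m⁴∕8p³, tiny compared with E itself.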

The difference between any two mass state energies becomes Δm² ∕ 2p (p here is half the sum of the two momenta, p = (pⱼ + pₖ) ∕ 2), and the relative velocity difference is then Δv = Δm² ∕ 2p², or Δv = Δm² ∕ 2E², p and E being very close for the highly relativistic neutrino (it propagates at very near the speed of light). If the size of the neutrino propagating mass states is roughly σ (the length of a packet), then the distance at which the mass states separate is L = σ ∕ Δv. We estimate that neutrinos arriving from a supernova at the center of our Milky Way galaxy might arrive with their mass components separated by up to 41 meters, a detectable difference of about 137 ns (billionths of a second) apart. As far as we know though, none of the current supernova detection experiments are set up to detect separated packets.
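That separation estimate can be sketched numerically. The parameter values below are my own illustrative choices (an atmospheric-scale mass splitting and a 10 MeV supernova neutrino over 8.5 kpc), not necessarily the assumptions behind the 41 m figure above; the result is very sensitive to the assumed energy, since Δv scales as 1∕E²:

```python
# Wave-packet separation of neutrino mass states over a galactic baseline.
dm2_eV2 = 2.5e-3           # assumed mass-squared splitting, eV^2
E_eV = 10e6                # assumed neutrino energy: 10 MeV, in eV
D_m = 8.5 * 3.0857e19      # ~8.5 kpc to the galactic center, in meters
c = 2.99792458e8           # speed of light, m/s

dv_over_c = dm2_eV2 / (2.0 * E_eV**2)   # relative velocity difference
sep_m = dv_over_c * D_m                 # spatial separation at arrival
sep_us = sep_m / c * 1e6                # arrival-time spread, microseconds
```

With these particular inputs the separation comes out in the kilometer range; higher assumed energies shrink it quadratically.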

Resuming our original thread: the neutrino has mass, therefore it cannot travel at the speed of light, by Einstein's equation for relativistic energy, written in a form with the rest mass and γ factor, E = γmc² = mc² ∕ (1 − v²∕c²)¹⁄².

You can see in that equation that if v, the velocity of the neutrino, were to equal the speed of light, c, in the denominator, then v² divided by c² would equal 1 (since a number divided by itself is 1), and since that is subtracted from 1 beneath the radical, the result of the subtraction goes to zero. The denominator of a fraction is not allowed to be zero because that operation is not defined: a/b = c implies a = bc, and there is no number c that could be multiplied by zero to get a non-zero number a.

However, if you graph y = 1/x as x approaches zero (becomes very small in absolute value) from the left (e.g., -3, -2, -1, -0.9, ... -0.000001 ...) and as x approaches zero from the right (e.g., 3, 2, 1, 0.9, 0.00001 ...), y goes to negative or positive infinity respectively (the red line in the graph heads straight down to infinity or straight up to infinity as it nears the y axis):

So that tells you that E, the energy of a relativistic particle (or any particle with mass), would go to infinity as the particle approached the speed of light. Since there isn't infinite energy available (at least not since the moment of Creation at the Big Bang), particles with mass are not allowed (by the laws of this universe) to reach the speed of light (the neutrinos are pretty close though, but have a very tiny mass). If a particle travels at the speed of light, it cannot change because time for the particle does not exist (a change is an event in time that occurs when something passes in time from one state to another).
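You can watch that blow-up numerically; a minimal sketch of γ = 1/(1 − v²/c²)¹ᐟ² for velocities creeping toward the speed of light:

```python
import math

# gamma grows without bound as v/c approaches 1, and the energy
# E = gamma * m * c^2 grows right along with it.
gammas = []
for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):   # beta = v/c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    gammas.append(gamma)
    print(beta, gamma)
```

Each step closer to the speed of light multiplies γ (and hence the required energy) dramatically, with no finite limit.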

Light, on the other hand, is propagated by photon particles, which have no mass and travel only at the speed of light. So the photons from the Big Bang moment of Creation have not changed in the intervening billions of years separating us from the Big Bang? Well, space has expanded under their feet, as it were. The photons we detect today as the CMB (cosmic microwave background) were emitted at Recombination, about 380,000 years after the Big Bang, when hydrogen and helium atoms were formed, taking up most of the free electrons and making it possible for photons to free stream without many collisions, at an equivalent temperature of about 4000 K, with a wavelength of around 3.60 millionths of a meter. When they are detected today, because the Universe has been expanding over the intervening billions of years, they have been stretched out to a wavelength of about 5279 millionths of a meter, a frequency of about 57 GHz (your cell phone frequency is about 1.9 GHz and microwaves lie between about 0.1 and 1000 GHz, hence the term cosmic "microwave" background).
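Plugging in the rounded wavelengths quoted above shows the implied stretch factor and today's frequency (a quick arithmetic check, nothing more):

```python
# The stretch factor implied by the wavelengths in the text:
# emitted at ~3.60 micrometers, observed today at ~5279 micrometers.
c = 3.0e8               # speed of light, m/s
lam_emit = 3.60e-6      # wavelength at Recombination, meters
lam_now = 5279e-6       # wavelength observed today, meters

stretch = lam_now / lam_emit   # how much the wavelength has been stretched
freq_now = c / lam_now         # today's frequency in Hz

print(stretch, freq_now / 1e9)   # ~1466-fold stretch, ~57 GHz
```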

Aside from the stretching, they are beyond time, as it were. Recall from our earlier discussion that we observe time in a moving clock frame as t = γτ, clock tick intervals on a moving object lengthening by the factor γ, where γ = 1/(1 − v²/c²)¹ᐟ².

You can see we get the same problem, the fraction as a whole going to infinity as the velocity approaches the speed of light (and photons, having no mass, always travel at the speed of light in vacuum), which you may interpret as time being frozen, more or less, for the thing travelling at the speed of light (if it takes an infinite interval to reach the next clock tick, time has stopped).

GPS satellites move at about 14,000 km/hr and have to correct for a relativistic time dilation of about 7 μs/day (the satellite clock runs slower). (They also have to correct for the opposing General Relativity effect of gravitational redshift from the Earth's gravitational field, which makes clocks back on the Earth's surface run slower relative to the satellite clock, i.e., clocks deeper in a "gravitational well" run slower.)
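The 7 μs/day figure checks out with the low-velocity approximation γ − 1 ≈ v²/2c² (a back-of-the-envelope sketch that ignores the gravitational term):

```python
# Sanity check on the ~7 microseconds/day of special-relativistic
# satellite-clock lag, using gamma - 1 ~= v^2 / (2 c^2).
c = 3.0e8                       # speed of light, m/s
v = 14000e3 / 3600.0            # 14,000 km/h converted to m/s
seconds_per_day = 86400.0

lag_per_day = (v**2 / (2 * c**2)) * seconds_per_day
print(lag_per_day * 1e6)        # ~7 microseconds per day
```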

If we are able to send people to nearby star systems someday and reach significant fractions of the speed of light, the travellers (astronauts) will find that people back on Earth have aged more than they have. For example, if they travel 10 light years (round trip, neglecting the year or more spent accelerating at each end) at 80% of the speed of light, they will age 7.5 years while their friends and family back on Earth will have aged 12.5 years (at 0.8 times the speed of light they would cover 10 light years in 12.5 years, and their clocks run at 0.6 of "normal," so they would age only 7.5 years).
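That arithmetic, spelled out:

```python
import math

# The round trip from the text: 10 light years total at beta = v/c = 0.8.
beta = 0.8
distance_ly = 10.0

earth_years = distance_ly / beta                        # elapsed on Earth
traveller_years = earth_years * math.sqrt(1 - beta**2)  # elapsed on board

print(earth_years, traveller_years)   # 12.5 and 7.5 years
```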

I have glossed over the idea of a quantity going to infinity, proposing that once we see a quantity headed in that direction we can safely envision a progression conceptually reaching infinity, i.e., one foot placed after the other in a never-ending march, counting each step (as I suggested above when pointing to the graph of 1/x heading up or down to infinity since we could see it was "headed that way").

We assume that we have always had the idea of adding more things to increase the number of things in hand. I've got one apple, and if you give me another, why then of course I have two. In fact, though, it takes more than two years for human children to grasp that the difference between "one thing" and "more than one thing" can be more abstractly understood as the successor function, which generates all of the numbers with which we count, i.e., the natural numbers (1, 2, 3...; some include 0). A child can proudly perform the counting script they have been taught, naming, say, one toy fish, two toy fish, three and so on. However, typically the 2.5-year-old child who has just "counted" the toys in that way will hand you one fish if asked for one, but give you an arbitrary handful if asked for any number other than one!

After additional months of experience the child slowly, in a stepwise fashion, learns to understand "two" then "three." Sometime after this comes the great leap forward, where the child grasps implicitly the induction definition of natural numbers, i.e., that each word in the counting routine actually defines how many things you are considering and that each successive count adds one to the number of things (in your set) and that this can be continued indefinitely, with no upper bound (see Evolutionary and developmental foundations of human knowledge, by Marc Hauser and Elizabeth Spelke).
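The successor-function idea can be made concrete in a few lines of Python. This is a toy Peano-style sketch; the nested-tuple encoding is my own choice for illustration:

```python
# Every natural number is either Zero or the successor of another
# natural number; counting is just repeated application of succ(),
# with no upper bound.
ZERO = ()

def succ(n):
    """Return the successor of n (wrap it in one more tuple)."""
    return (n,)

def to_int(n):
    """Count how many times succ() was applied to reach n."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

three = succ(succ(succ(ZERO)))
print(to_int(three))          # 3
print(to_int(succ(three)))    # 4: one more is always possible
```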

Although a chimpanzee can laboriously learn to associate a number symbol with a particular number of objects, they never progress to an understanding of the successor function (at least one particular subject did not, even after 20 years of training), i.e., chimps cannot learn that a new number symbol means that one has been added to the previous set of items. It appears though (from research done by Spelke and others) that humans and some non-human primates both draw on a core neurophysiological basis for (1) representing the approximate cardinal values (about how many items are present) of large groups of objects or events and (2) representing the exact number of objects or events when there are only a small number of individual units.

It appears that the uniquely human capability to construct the natural numbers (i.e., use the successor function) relies first on the core perception of one versus many, then mapping other number words to larger numerosities, then noticing that the progression in the language (the words representing numbers) of the counting routine corresponds to increasing the cardinal value of the set, the number of units in hand. This (and other research) suggests that natural language ability is involved in the human leap from those core perceptions shared by some non-human species to the natural number concepts unique to humans.

There has been some controversy about extending the concept of infinity, at least in the context of mathematics, namely, the Brouwer-Hilbert controversy about the foundations of mathematics at the beginning of the twentieth century.

L.E.J. Brouwer did not believe that the rules of classical logic laid out by Aristotle have an absolute validity independent of the subject matter to which they are applied. For example, Aristotle defined the Law of the Excluded Middle, which, reasonably enough from our experience of life, states that any proposition is either true or it is not true; e.g., Socrates is either a mortal or he is not a mortal, he cannot be something in between the two.

The claim of formal logic is that this law (of the Excluded Middle) applies simply because it is an accepted rule of logic, not because we have seen examples which permit us to infer that it is true in a specific case (e.g., the case of whether Socrates is mortal or not). Brouwer objected to making such an automatic claim via logic when offering a formal proof in mathematics. Brouwer wanted to see a proof that constructed specific examples (actual mathematical entities) rather than simply claiming one or the other of two contradictions must necessarily be true.

These may seem like rather abstract contentions among mathematicians, but if you go with Brouwer (and his intuitionist stance) then you are not allowed to extend presumptions to the infinite (which would cramp our style in the discussions earlier). For example, the induction axiom of mathematics states that if a mathematical proposition P(n) is true for n = 0, and if, for all natural numbers n, P(n) being true implies that P(n + 1) is true also, then P(n) is true for all natural numbers n. You will recognize our successor function from the earlier discussion. The so-called "animal instinct" here is that it must be true since you can conceive of marching forever, one foot placed after another in a never-ending march, thereby defining infinity.
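Brouwer's complaint can even be felt in code: a machine can only ever construct finitely many cases, never the completed infinity the induction axiom asserts. Here P(n) is the familiar claim that 0 + 1 + ... + n = n(n + 1)/2, checked for the first thousand cases only:

```python
# A finite, constructive check of a claim that induction asserts for
# ALL natural numbers at once.
def P(n):
    """Does the sum 0 + 1 + ... + n equal n(n+1)/2?"""
    return sum(range(n + 1)) == n * (n + 1) // 2

print(all(P(n) for n in range(1000)))   # True, but only a finite check
```

No amount of such checking is a proof over all n; that step requires the induction axiom (or a constructive argument Brouwer would accept).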

The alternative notion would be Georg Cantor's aleph-null, a completed infinity all at once, without laying out the steps leading there. Well, strictly speaking, aleph-null or aleph-naught represents the cardinality of any countably infinite set, for example, the natural numbers N. The real numbers, R, are also infinite, but not countable. Cantor developed his famous diagonal argument to prove that R, the set of real numbers, is infinite but not countable.

Cantor's diagonal argument for showing a set is uncountable goes like this: enumerate (list) all of the members of the set T of infinite sequences of binary digits (ones and zeros). No matter how you list them, there will always be a member of the set that you miss, because you can draw a diagonal slash from the top left corner down toward the bottom right at infinity, pull out the string of digits selected by your slash, then complement each of the digits you obtained, i.e., if there is a "1" replace it by a "0" and if a "0" replace it by a "1." The sequence you end up with cannot have been in the list because it differs from the nth string in the list at the nth digit:

In the above example the diagonal slash pulls the red digits 01000101100... and complements each of those to get s = 10111010011... You can see that s cannot have been in the list because, by complementing the slashed sequence, you have made it differ from each listed sequence in at least one digit. Therefore it is impossible to count, i.e., enumerate or list, all of the numbers in the set (each sequence represents a number in the set), because every time you complete your list, a new unique number pops up in the diagonal slash!
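The diagonal trick is easy to act out on a finite square listing (the five strings below are invented for illustration; the real argument of course uses infinite sequences):

```python
# Five binary strings of length five; the diagonal digits are
# listing[0][0], listing[1][1], ..., and flipping each one produces a
# string that disagrees with row n at position n, so it cannot equal
# any row of the listing.
listing = [
    "01010",
    "11111",
    "00000",
    "10101",
    "00110",
]

diagonal = "".join(s[i] for i, s in enumerate(listing))
flipped = "".join("1" if d == "0" else "0" for d in diagonal)

print(flipped, flipped in listing)   # the flipped string is never listed
```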

David Hilbert designed a formal definition of mathematics in which no intuitive notion about "reality," or actual examples or objects, was necessary, but rather just rigorous definitions of symbols and the operations you could apply to them. Hilbert believed you could find a rote procedure for manipulating the symbols of his formal mathematics such that you could decide automatically whether a particular theorem expressed in his symbols was valid, in effect putting all mathematicians out of work. This would make use of the Law of the Excluded Middle also, by assuming that such an automatic proof machine could decide whether any arbitrary string of symbols was or was not a correct theorem (either it was or it was not, proof by contradiction accepted).

In 1900 Hilbert presented a number of questions to the International Congress of Mathematicians. Questions one and two were: (1) was the system of mathematics he offered complete, and (2) was it consistent?

A mathematical proof system, a set of axioms, is complete if any statement within its formal language may be proven, or its negation proven, using only the axioms. Such a system is consistent if it is impossible to construct a valid argument that is inconsistent, i.e., impossible to derive a statement from its axioms which is both true and false. Questions 1 and 2 were answered by Kurt Gödel (announced in 1930, published in 1931), who proved that undecidable propositions can be constructed in any consistent formal system containing a minimum of arithmetic.

The third question, whether the system of mathematics was decidable (the so-called Entscheidungsproblem), was answered shortly thereafter by Alan Turing and independently by Alonzo Church. Turing created the concept of an automatic computing machine and proved that there cannot be a general process for determining whether a given formula of mathematics in the symbolic logic of the system is provable within the system. Turing's mental concept used in the proof was rapidly developed into the physical digital computers that we use today.
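Turing's machine is simple enough to simulate in a few lines. This sketch runs the classic two-state "busy beaver" program (a standard textbook example, not anything from Turing's own paper): each rule maps a (state, symbol) pair to what to write, which way to move, and the next state, and the machine halts after six steps with four 1s on the tape:

```python
# A minimal Turing-machine simulator.  Rules map (state, symbol) to
# (symbol to write, head movement, next state); "H" is the halt state.
rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}

tape, pos, state, steps = {}, 0, "A", 0   # blank tape reads as 0
while state != "H":
    write, move, state = rules[(state, tape.get(pos, 0))]
    tape[pos] = write
    pos += move
    steps += 1

print(steps, sum(tape.values()))   # 6 steps, four 1s written
```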

It remains a bizarre paradox that Turing's work, which implied that mathematics really required a mind, something not a machine, was soon used to "support" the premise that the human mind is a kind of computer. John von Neumann, by some accounts the most intelligent human who ever lived (where intelligence means the capacity to do the things measured on IQ tests, e.g., use memory and manipulate symbols and concepts, not the same thing as wisdom), helped design several of the initial digital electronic computers in the 1940s. He left notes for an incomplete book setting out his thoughts about human brains and computers.

Von Neumann thought of the computer, whether analog (representing numbers by variable physical quantities, like the voltages produced by electronic circuits representing equations) or digital (representing numbers by the presence or absence of markers, operating on an input stream of pulses to produce an output stream of pulses), as a device that performs arithmetic operations on numerical data under the control of logic. Oddly, von Neumann, who was well aware of the distinction between the manipulation of symbols (as we mentioned regarding Hilbert and Brouwer above) and the interpretation of those symbols by a human, tacitly assumed that computers manipulate numbers rather than symbols. He assumed that brains compute (an assumption that became part of the philosophy of mind), but offered no justification for that assertion.

Computers are designed by humans to manipulate symbols which are subsequently interpreted by humans, but it does not appear (to me) that brains perform arithmetical operations on numerical data. David Berlinski offers the analogy that some people are able to accept without consideration the thesis that the human mind is like a computer but would balk at the suggestion that the human mind is like an abacus, though the fact is that there is no fundamental difference between an abacus and a Turing machine or the digital computers which were developed from Turing's conception. They are all mechanical devices which, when manipulated by humans, produce symbolic output of use when interpreted by a human. However, as Lee Smolin (a theoretical physicist with contributions in the field of quantum gravity) has observed, neuroscience "is a field that is as bedeviled by outdated metaphysical baggage as physics is. In particular, the antiquated idea that any physical system that responds to and processes information is isomorphic to a digital programmable computer is holding back progress."

That is a good transition back to our original thread, the discussion of the pairs of opposites in the temporal versus the eternal (since perhaps the most fundamental dualism is the Light versus the Darkness, speaking in spiritual terms). I suppose some might propose that the middle is not really excluded, i.e., that things are not really in one state or another (particularly at this moment in history).

Tuesday, June 13, 2017

Natural Language Processing

I have been studying natural language processing lately (otherwise known as computational linguistics). I began with NLTK (Natural Language Toolkit), an open source natural language processing toolkit. It is a superb guide to practical computational linguistics, featuring a free comprehensive textbook (frequently used for a single-semester course in natural language processing at the advanced undergraduate or postgraduate level) and a software package running in a Python environment on Windows or Linux. The field covers a wide range, but an example readily available to many people these days is the process by which your smartphone accepts vocal commands from you. This involves segmenting the phonemes (the individual pieces of spoken words, nominally involving a consonant and a vowel), putting breaks in the incoming stream of language sound you make, and then attempting to match those with words from a lexicon (a large list of possible words).
This is no easy task, but it is followed by the even more challenging pursuit of meaning, attempting to map what you have spoken to actions the phone can take, including the object of such an action. For example, if you commanded your phone "search for Mexican restaurants in Las Cruces" the phone would look for a command in that string of sounds, a command it recognizes. If it successfully recognized "search for" then it would branch in its processing logic to objects of such a command, i.e., what you want to search for.

This would require tagging each word in the utterance (what you just said to the phone) to identify the command and its object(s). The phone would have to recognize that "(Mexican) restaurants" is the search object.

Here is a look at the result of a natural language processor tagging the text string of the utterance we are discussing (we will ignore the details of how the sounds you made became this text stream):

>>> grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> parser = nltk.parse.EarleyChartParser(grammar)
>>> text = nltk.word_tokenize("Look for Mexican restaurants in Las Cruces.")
>>> nltk.pos_tag(text)
[('Look', 'NN'), ('for', 'IN'), ('Mexican', 'JJ'), ('restaurants', 'NNS'), ('in', 'IN'), ('Las', 'NNP'), ('Cruces', 'NNP'), ('.', '.')]

Notice how each word in the utterance (what you said to the phone) has now been tagged with a part-of-speech label (which we refer to as simply a "tag"). 'IN' means 'preposition,' 'NN' means 'singular noun,' 'NNS' means 'plural noun,' and 'NNP' means 'proper noun' (typically the name of a person or place). These grammatical tags are taken primarily from the Brown Corpus, a landmark publication by Brown University in 1964, featuring over a million words of running text of edited prose published in the United States in 1961.

Before I ran the parser on our target utterance I had to give it a grammar (you can see that I loaded the atis.cfg grammar in the Python IDLE session above; Python is a computer programming language frequently used in science, and IDLE is an integrated development environment, i.e., a windowed application that makes it easier to write and test code). The ATIS grammar, developed by Microsoft Research, was extracted from a treebank of the DARPA ATIS3 training sentences. Sentences are typically parsed into treelike structures. Well, I will see if a picture is worth a thousand words here and show you a tree parse diagram of the sentence we are working with (from a parse done later to correct mislabelling of the verb):

It does appear somewhat like an upside down tree, where the tree's root is at the top and its branches become developed as it proceeds down the page, the inverse of an oak tree rooted at the ground and branching above. A treebank (in the context of computational linguistics) is a database of such syntactic or parse trees. Such a treebank can be analyzed to discover typical patterns of syntax, i.e., the way the different parts of speech are normally organized in sentences of a particular language. For example, English sentences typically have the subject first (going left to right) and it usually is a noun or noun phrase. The predicate, the part to the right of the subject typically, is formed around a verb. We form sentences without having to think much about it, having brains that are evolved to learn and process language (I will agree with Noam Chomsky on this and may say more about it later), but it is difficult to program a machine to do this. One of the ways to construct a computer program that will parse sentences is to analyze a treebank and produce rules of grammar that describe the frequent patterns in the treebank (like subject/NN-->predicate/VP).

A context free grammar (CFG) is often used to formally present the rules of grammar in a form a computer can process. For example, S --> NP VP, which tells us that the symbol 'S' on the left, symbolizing 'sentence' can be produced by a noun phrase (NP) followed by a verb phrase (VP). There are many different ways to form sentences (understatement). In fact, that is one of the things that distinguishes human speech from animal communications (well, it used to), i.e., that each utterance is potentially unique, never previously said, created by putting together the blocks of language by the rules of grammar to accomplish the communication of a potentially novel thought. So you really want a computer to help grind through huge treebanks of sentences labelled with their parts of speech (POS) tags and generate grammar rules as much as possible (since the rules will be lengthy, i.e., many lines of the kind of production rule I just showed you above).
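To make the production-rule idea concrete, here is a toy CFG and a naive recursive parser in plain Python. The grammar and the sentence are invented for illustration; real systems use far larger grammars and cleverer algorithms:

```python
# A tiny CFG: nonterminals map to lists of productions; any symbol not
# in the grammar is treated as a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["sees"]],
}

def parse(symbol, words):
    """Yield the remainders of `words` left after `symbol` matches a prefix."""
    if symbol not in GRAMMAR:            # terminal: must match the next word
        if words and words[0] == symbol:
            yield words[1:]
        return
    for production in GRAMMAR[symbol]:   # try each production in turn
        remainders = [words]
        for part in production:
            remainders = [rest for r in remainders for rest in parse(part, r)]
        yield from remainders

sentence = "the dog sees the cat".split()
# The sentence is grammatical if S can consume every word, leaving [].
print(any(rest == [] for rest in parse("S", sentence)))
```

This brute-force approach tries every production everywhere, which is exactly the inefficiency that chart parsers like the Earley algorithm are designed to avoid.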

DARPA, the Defense Advanced Research Projects Agency (of the United States government), has done linguistics research among other things. For example, they funded the creation of the TCP/IP protocol that we use to communicate over the Internet (that protocol was designed to ensure an email got to its destination even if a city or two was destroyed by a nuclear attack, TCP/IP being able to try different routes if a particular city disappears). They have also been working on true artificial intelligence (not the chicanery promoted as AI by many software folks and companies, which I won't name since I am blogging on their platform), but they abruptly went "black" on the subject after 2012, now only presenting this effort as using mammalian brain structural hints to create advanced computer chips. Their actual intent is to create true mammalian brain intelligence, which I will prove by reproducing one of their press images from 2012 (which seems to have been removed) describing the SyNAPSE project (to alarm those of you who have watched the Terminator movie series):

DARPA was interested in machine reading and other computational linguistics subjects and produced the ATIS3 training sentences, which Microsoft used to produce the ATIS grammar that I gave to the Earley parser I used to analyze the "look for Mexican restaurants in Las Cruces" sentence above. The Earley parser is a chart parser that uses a set of grammar rules (as just discussed) in a dynamic programming environment, trying to predict which of the grammar rules to use next as it moves from left to right across a sentence, matching rules against the words and POS tags it encounters. It is important to predict which rule to use next because simply scanning the entire CFG grammar file for each word of each sentence might take a prohibitively long time. The ATIS grammar I used above has some 5,235 lines (rules).

Well, some of you who have persisted in the grueling task of reading my entire post may be wondering if I noticed that the Earley parser mislabelled the verb 'look' in the sentence. Yes, I did. So I had to obtain a more robust computational package (I am sure I could have gotten better results with NLTK had I spent more time teaching classifiers, but I was in a hurry), my simple sentence being a somewhat unfair target, being a command to a machine and missing a subject (that being understood by most humans to be 'you,' i.e., the person or thing being commanded).

I got hold of a language processing pipeline from the Natural Language Processing (NLP) group at Stanford University and ran their CoreNLP pipeline as a local server, using HTTP POST to request parsing of the target sentence from my IDLE Python console. (I was assisted in that HTTP POST process by a nice Python package called Requests, advertised as 'the only non-GMO HTTP library for Python, safe for human consumption' by its ingenious author, Kenneth Reitz, and by some interface Python code written by a postgrad AI student at Berkeley, Smitha Milli.) The Stanford NLP software is industrial strength but did churn for a minute or two to produce a correct parse:

c:\stanfordcnlp>java -cp "c:\stanfordcnlp\*" -Xmx1500m edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000 -annotators tokenize,ssplit,pos,depparse,parse
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000

Then I started a Python IDLE session and requested service from the Stanford server locally:
ActivePython (ActiveState Software Inc.) based on
Python 2.7.10 (default, Aug 21 2015, 12:07:58) [MSC v.1500 64 bit (AMD64)] on win32
>>> from pycorenlp import StanfordCoreNLP
>>> nlp = StanfordCoreNLP('http://localhost:9000')
>>> text = ('Look for Mexican restaurants in Las Cruces.')
>>> properties = {'annotators': 'tokenize,ssplit,pos,depparse,parse', 'outputFormat': 'json'}
>>> output = nlp.annotate(text, properties)

...and the server response was:

>>> print(output['sentences'][0]['parse'])
(ROOT
  (S
    (VP (VB Look)
      (PP (IN for)
        (NP
          (NP (JJ Mexican) (NNS restaurants))
          (PP (IN in)
            (NP (NNP Las) (NNP Cruces))))))
    (. .)))

So, we got the proper tagging and parse of our sentence. I wanted to see a tree visual of this so I laboriously manually entered into NLTK the CoNLL2000 IOB tag lines corresponding to the parse from the Stanford NLP parse:

>>> chunkTest = """
look VB B-VP
for IN B-PP
Mexican JJ B-NP
restaurants NNS I-NP
in IN B-PP
Las NNP B-NP
Cruces NNP I-NP
. . O
"""
>>> nltk.chunk.conllstr2tree(chunkTest, chunk_types=['VP', 'NP', 'PP']).draw()

...and obtained the following visual presentation of the Stanford parse:

CoNLL2000 was the 2000 Conference on Computational Natural Language Learning (CoNLL-2000). Chunking captures larger pieces of a sentence, grouping POS tagged words into chunks like VP (verb phrases), NP (noun phrases) and PP (prepositional phrases). The data file format they used was IOB, which you can see above were each line of a sentence with a word, a POS tag and an IOB chunk tag specifying 'B' for beginning of a chunk, 'I' for 'in a chunk', and 'O' for 'out of a chunk, i.e., not a recognized chunk type.' 
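A minimal reader for that format might look like this; it is a sketch of the IOB convention itself, not NLTK's actual implementation, and the sample lines are my own illustrative tagging of our sentence:

```python
# Read CoNLL-2000 style IOB lines ("word POS chunktag") and group the
# words into chunks: B-X starts a chunk of type X, I-X continues it,
# and O falls outside any chunk.
def iob_to_chunks(lines):
    chunks, current = [], None
    for line in lines:
        word, pos, tag = line.split()
        if tag.startswith("B-"):
            current = (tag[2:], [word])      # open a new chunk
            chunks.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)          # continue the open chunk
        else:                                # "O", or a stray I- tag
            current = None
    return [(kind, " ".join(words)) for kind, words in chunks]

sample = [
    "Look VB B-VP",
    "for IN B-PP",
    "Mexican JJ B-NP",
    "restaurants NNS I-NP",
    "in IN B-PP",
    "Las NNP B-NP",
    "Cruces NNP I-NP",
    ". . O",
]
print(iob_to_chunks(sample))
```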

I had better close this post, since I am getting some strange edit behavior and may be exceeding the size limits here. Stay tuned though---I intend to talk more about what it is machines are doing when they process language.