Chaos Manor View, Monday, May 16, 2016
“This is the most transparent administration in history.”
Liberalism is a philosophy of consolation for Western Civilization as it commits suicide.
Under Capitalism, the rich become powerful. Under Socialism, the powerful become rich.
Under Socialism, government employees become powerful.
It’s amusing: The New York Times came out with attacks on Trump based on interviews with his former girlfriend and one of his female executives, whereupon the former girlfriend showed up at Fox News and denied it all. It would be amusing if there were not so much at stake; but it is instructive, and probably illustrates just what America’s newspaper of record will do in the campaign. Journalistic integrity seems to have played a rather small part in this story.
Mitt Romney, the losing Republican candidate against Mr. Obama in 2012, is frantically seeking someone to run against Mr. Trump. He’ll even try it himself. Since he knows quite well that neither party cares for him – after all, he lost against a President with rather high negatives, getting far fewer votes than Mr. Bush got in his reelection campaign – it’s a puzzlement: the more votes he gets, the more likely it is that Hillary will win, saddling America with at least four more years of Obama’s leading from behind, intervening with too little, too late, and smarmy foreign deals, and forty years of a liberal Supreme Court. He knows this. Nelson Rockefeller cut the ticket against Goldwater in 1964; Romney will apparently try to go him one better. We will see how the Republican Establishment behaves in this crucial election.
Some of the smartest people I know think it won’t matter. Romney is demonstrating his irrelevance. I voted for Romney because the alternative was Obama. Some anti-establishment Republicans stayed home in 2012, thus giving four more years of Obama and Depression. Romney wants to double down, which may tell you something about the New Class.
[In case you missed it yesterday]
There is an essay in the May–June issue of The American Conservative that I heartily recommend as an honest assessment by an astute observer and thinker. His invocation of Djilas and The New Class in explaining the mess we have got ourselves into, and his analysis of just what it means to be an American conservative, are worth the time of every thoughtful American, conservative or liberal.
Announcing Hardbound Edition: There Will Be War, Volumes I & II. The first two volumes of the 1980s anthologies, bound together in a hardbound edition. Obviously these are available as eBooks for considerably less, but if you want them as a book, this is your opportunity. From the official description:
“Created by the bestselling SF novelist Jerry Pournelle, THERE WILL BE WAR is a landmark science fiction anthology series that combines top-notch military science fiction with factual essays by various generals and military experts on everything from High Frontier and the Strategic Defense Initiative to the aftermath of the Vietnam War. It features some of the greatest military science fiction ever published, such as Orson Scott Card’s “Ender’s Game” in Volume I and Joel Rosenberg’s “Cincinnatus” in Volume II. Many science fiction greats were featured in the original nine-volume series, which ran from 1982 to 1990, including Robert Heinlein, Arthur C. Clarke, Philip K. Dick, Gordon Dickson, Poul Anderson, John Brunner, Gregory Benford, Robert Silverberg, Harry Turtledove, and Ben Bova.
33 years later, Castalia House has teamed up with Dr. Pournelle to make this classic science fiction series available to the public again. THERE WILL BE WAR is a treasure trove of science fiction and history that will educate and amaze new readers while reminding old ones how much the world has changed over the last three decades. Most of the stories, like war itself, remain entirely relevant today.
This omnibus edition contains THERE WILL BE WAR Volumes I and II. Volume I is edited by Jerry Pournelle and John F. Carr, and features 23 stories, articles, and poems. Of particular note are “Reflex” by Larry Niven and Jerry Pournelle, the original “Ender’s Game” novella by Orson Scott Card, “The Defenders” by Philip K. Dick, and a highly influential pair of essays devoted to the then-revolutionary concept of “High Frontier” by Robert A. Heinlein and Lt. General Daniel Graham. Volume II is edited by Jerry Pournelle and features 19 stories, articles, and poems. Of particular note are “Superiority” by Arthur C. Clarke, “In the Name of the Father” by Edward P. Hughes, “‘Caster” by Eric Vinicoff, “Cincinnatus” by Joel Rosenberg, “On the Shadow of a Phosphor Screen” by William Wu, and “Proud Legions,” an essay on the Korean War by T.R. Fehrenbach.”
‘They resent historical accounts such as those Klehr and I produced that present archival documentation of the CPUSA’s totalitarian character and its devotion to promoting Soviet victory over the United States in the Cold War.’
The Cold War against the USSR is over, but it remains in many American institutions.
I don’t agree with all of this, but it is worth your time reading it:
The Smallest Minority
An interesting last post from a blogger calling it quits for now.
Why Machines Should Learn From Failures
Science is biased toward success. But to build reliable artificial intelligence, looking to scientific failures is important too.
May 6, 2016 8:33 a.m. ET
It’s often said that some of life’s most valuable insights stem from failures. The same might hold true for machines.
Scientists at Purdue University and Haverford College devised an algorithm that can learn to predict new crystal recipes based on its analyses of not just chemical reactions that yielded other crystals, but also chemical reactions gone wrong. They reported their findings in a study published in the journal Nature this week.
Although the study focuses on chemical applications, Alex Norquist, the study’s lead researcher, said in an interview that the approach has the potential to liberate large amounts of potentially powerful information that has traditionally been ignored.
“In science we fail, and we fail a lot. We fail more than we ever really succeed, but the scientific literature is really biased toward success. We pretty much only tell each other about the successes,” the Haverford College chemist said. “But failures contain really valuable information. We wanted to create a mechanism by which we could learn from [that].”
In an age when machines are increasingly being used to help scientists and companies make decisions, looking to forgotten data sources could serve up unexpected wins—and open up new avenues of research.
Here are edited excerpts from the conversation with Dr. Norquist.
WSJ: What effect does biasing data toward successes have on the machine-learning algorithms we’re hoping to use to make new scientific discoveries?
Dr. Norquist: If all or most of the data is success, then the model won’t really know where failures are going to come in. That paints a very different picture from what we see in reality. As we remove this bias, it opens up a lot more information. The approach we’re using can be generalized to a lot of different types of science. The more that we don’t bias the data that we look at, the better our understanding will be.
WSJ: In which other areas might this be useful?
Dr. Norquist: Our approach is designed to help us get to the end stage more quickly by making the materials discovery component faster. A lot of the initial machine-learning work was done by pharmaceutical companies working on drug discovery. We’re always looking for new materials that have better properties, better batteries, better photovoltaic [cells].
WSJ: Why is it important that our machines learn both from successes and failures?
Dr. Norquist: Knowing what to do is just as important as knowing what not to do. It’s only when we look to both that we’re really able to see the boundaries between successes and failures. Really understanding those boundaries and why those boundaries are as they are [is] where the real power in these failures comes from. For example, if nearly all reactions whose temperatures were above 130 degrees Celsius fail, we know to keep the temperature lower than that level. It tells us where we strayed into a bad neighborhood.
WSJ: How difficult is this to do?
Dr. Norquist: The main thing is accessibility. Most failures in science often just exist in lab notebooks on a shelf somewhere. It’s hard to get at them.
WSJ: What steps are you taking to liberate the failures data?
Dr. Norquist: We have made our [data] publicly accessible. We invite anybody who wants to contribute their own to the project.
WSJ: Data is proprietary. Companies guard their information. Why would they make data publicly available?
Dr. Norquist: The way that science works is that we all rely on the experiments of others. The old saying is that you can save a week in the lab by spending an hour in the library.
WSJ: Is there anything particularly difficult about teaching machines using failures data, apart from getting access to that information?
Dr. Norquist: Not really. The algorithms don’t care.
Natural Selection: Dawkins’ Weasel and Martin’s Monkey
Dear Jerry –
Greetings, and I hope you are doing well.
I’ve been doing a bit of exploration which I thought you’d find interesting.
I expect you remember about 30 years ago when Richard Dawkins came up with his Weasel program as a demonstration of the power of random variation and natural selection. While his results were striking, I’ve never come across any sort of systematic exploration of the Weasel’s performance.
Looking into this, I’ve developed my own program, which I call (with all due seriousness and self-importance) Martin’s Monkey. This approach replaces Dawkins’ fecund weasel with A Monkey At A Typewriter. Instead of breeding generations of offspring, the monkey simply tries to copy a line of text. Alas, being a monkey and easily distracted, it makes random errors, with a probability P of a mistake for each keystroke. For this exercise I’ve arbitrarily picked a P of .01, a 1 in 100 chance of making an error. The typewriter has only 27 working keys, 26 caps and a space. After each line of text, the result is compared with a target text, and if the monkey’s output is closer to the target text, it becomes his new standard. “Closer” is simply distance on a 27-element ring. So if C is desired, B and D have an error rating of 1, A and E have a value of 2, space and F are 3, Z and G are 4, etc. The error values are simply summed over the characters of the text to produce an overall error value. The Monkey starts with a random text, and the target text is also randomly selected for each run. Each attempt typed by the Monkey, from start to termination, counts as a generation.
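The procedure above can be sketched in a few lines of Python. This is my own minimal reconstruction from the prose description; the letter does not show the original code, so the function names and the choice of a uniformly random key for each mistyped keystroke are assumptions.

```python
import random

ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # 27 keys: space plus 26 caps
M = len(ALPHABET)
P = 0.01  # per-keystroke error probability

def ring_distance(a, b):
    """Distance between two characters on the 27-element ring."""
    d = abs(ALPHABET.index(a) - ALPHABET.index(b))
    return min(d, M - d)

def error(text, target):
    """Summed ring distance between a typed line and the target."""
    return sum(ring_distance(a, b) for a, b in zip(text, target))

def monkey(target, seed=None):
    """Run the Monkey until it matches target; return the generation count."""
    rng = random.Random(seed)
    # The Monkey starts from a random line of the same length as the target.
    best = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while best != target:
        # The Monkey copies its current standard, mistyping each keystroke
        # with probability P (hitting a uniformly random key instead).
        attempt = "".join(
            rng.choice(ALPHABET) if rng.random() < P else c for c in best
        )
        generations += 1
        # The attempt becomes the new standard only if it is strictly closer.
        if error(attempt, target) < error(best, target):
            best = attempt
    return generations
```

A run such as `monkey("METHINKS IT IS LIKE A WEASEL")` returns the number of generations needed; averaging many runs over random targets reproduces the kind of statistics described below.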
First, of course, the baseline. Let’s say the character set size is M, and the number of characters is N. If the text lines were randomly generated and you terminated the process upon a perfect match, you’d expect a match in ((M/P)^N)/2 generations. In this case, with M = 27 and an error probability of .01, a random process will require (2700^N)/2 generations. This sort of exponential gets crazy very quickly, of course. For N = 20 it’s about 2 x 10^68, for N = 40 it’s 9 x 10^136, for N = 60 it’s 4 x 10^205, and for N = 80 it’s 1.6 x 10^274.
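The quoted figures are easy to check by plugging M = 27 and P = 0.01 into the formula; this one-liner loop is just arithmetic, not part of the original program.

```python
# Baseline: expected generations for a blind random search,
# ((M/P)**N) / 2, i.e. (2700**N) / 2 with M = 27 and P = 0.01.
M, P = 27, 0.01
for N in (20, 40, 60, 80):
    print(N, f"{(M / P) ** N / 2:.1e}")
```

The printed values (about 2.1e68, 9.1e136, 3.8e205, and 1.6e274) match the figures given above.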
So, how does the Monkey do? Well, I’d like you to think about this for a bit, if I may. What would you expect? Just as importantly, how do you think it should respond to increasing N? I only ask this because, unless you’ve invested a bit of energy in thinking about it, the real results won’t mean much. For what it’s worth, I’d expected some sort of exponential with an exponent smaller than the random version, but not that much smaller. Aggregates of random processes often exhibit square root behavior – would that seem a reasonable starting point? Exponent equals N/2? Just suggesting. Think about it.
Ready? If you just got impatient and didn’t spend any time, that’s OK, but I thought I’d give you the chance.
I ran a series of simulations using the Monkey, and the results are included in the attached graph (Weasel.png). Each point represents the mean of 100,000 runs using random start and target sequences. I’ve got a decent machine and the compiler I use produces fast code, but it still took 3 days to produce the data. I also kept track of the minimum and maximum number of generations for each N, and they are pretty consistently within a 25% to 400% band around the mean, particularly for N greater than 15 or so. I’ve attached this as Weasel Full.png, if you’re interested.
And the results are pretty startling. For 80 characters it only takes about 28,000 generations. Compared to 10^274, that’s what you might call decent efficiency (I’m willing to define any efficiency gain greater than a googol as “decent”). Even more interesting is the trend with increasing N. It’s actually not linear (it’s slightly concave upwards, and I can explain part of that if you wish – it’s a variant of the Birthday Paradox), but the contrast with an appreciable exponential is noticeable.
As Dawkins pointed out 30 years ago, this is not a model for biological evolution. Genetic mutations come in all sorts of scales, up to and including chromosome duplications and deletions, and the duplication and elimination of long stretches of DNA within a chromosome is well-known. Changes in regulatory genes will presumably have enormous consequences. The model has no equivalent of neutral mutations. Fitness functions are not anything like as simple as presented here. The point is simply to look at the effects of random variation combined with selection for any beneficial change, no matter how small.
With that said, I find the results fairly remarkable, and food for thought if you ever are tempted to dismiss “mere randomness” as a possible driver of evolution.
On the other hand, natural selection can’t see where it is going; there is no design that it has in mind. As Fred Reed once said, it is not obvious that a random lump of inorganic dancing atoms will evolve to write Shakespeare’s plays, perform Swan Lake, go to the Moon, and build the Trump Towers.
“We did end up with a monopoly.”
Freedom is not free. Free men are not equal. Equal men are not free.