Settled Science and the Munchhausen Trilemma


Chaos Manor View, Wednesday, August 26, 2015

Very hot today in Los Angeles. Pounding away on fiction, but it’s not easy. Typing continues difficult. I have dozens of suggestions regarding Dragon, and one day I’ll implement them and try, but the weather and time pressure both argue against trying a whole new way to “write”; the last time I tried dictation it was a flop; I got so concerned with the way I was composing sentences, and waiting for them to appear on the screen, that after a while I was worrying more about the writing details than about what I dictated. Of course that’s something of what is happening now.

As it happens, yesterday over in another conference (SFWA) where I spend too much time, a new member asked for advice on career management. I answered:

I don’t think there has ever been better advice given than that of Mr. Heinlein:

To be a writer you must write.

I will add, until you are established as a writer, you would do well not to spend a lot of time talking about writing or listening to others talk about writing in the hopes that you will learn some secret formulae. You won’t. Randall Garrett was fond of saying he knew no professional writers who got there through workshops or discussing writing with other beginners. I do, but not many.

To be a writer, you must finish what you write.

I will add that there is something sadly amusing about the “writer” who always has an unfinished manuscript to inflict on his friends.

Do not rewrite unless instructed to do so by someone who is going to buy it.

This was probably the most controversial, and most badly misunderstood, of Heinlein’s dicta. He did not mean write a first draft and never rewrite; he meant that the rewrite is part of finishing, and it should be done and over. Don’t rewrite finished work. You will do much better to work on something new.

Send your work to someone who can buy it, and start on something else. Keep that up. Keep writing, finishing, and sending to editors.

Basically that’s it.

The magic is in doing the writing. For storytellers it takes a while to make writing automatic so you can concentrate on the story, not on how you tell it.

And there are nine and sixty ways of constructing tribal lays…

As to career management, it used to be that you sold to the magazines, got the cover, graduated to a novel, etc. Now there are alternatives, many discussed here. But before you manage a career in writing you have to write, and the best way to learn that is to write, finish what you write, send it to someone who can buy it, and don’t rewrite unless someone who will buy it tells you to. Obviously there are stories that if rewritten can be made better, but a better investment is to do a new story. Then another. Then one more. Finishing each.

After a while the writing comes easier and you can concentrate on what you want to say, not on how to say it.

Of course you may be well past needing that advice.

Jerry Pournelle

The point being that if you have to think about what you are doing, rather than about what you are trying to say, you have a severe handicap; and that’s what I am trying to overcome. I’m getting there but it’s slower than I like. But then it took longer than I like just to feed myself…

For some reason I cannot fathom, the Word grammar program does not like the first sentence in that paragraph. I give up on why it thinks it is bad grammar. If it be, then so be it. Oh. I see. It wants a proper verb. Ah well, it’s clear enough.

Anyway we will continue the discussion of philosophy of science. To summarize my views, which are derived entirely from Sir Karl Popper and St. Thomas Aquinas:

Science has become a very useful way of discovering truth about the world. To most of the world, “reason” and “science” are essentially synonymous.

Science has strict rules. The most fundamental rule is that no theorem or hypothesis is scientific if it cannot be falsified. It does not mean that “I saw a man who wasn’t there” cannot be true, but it is not a scientific truth because there is no conceivable way to falsify it.

We may act as if scientific theories (those which can be falsified) were true, but always with the understanding that they may someday be falsified.

This can lead to conflicts of theories, and sometimes does. An example is the late Petr Beckmann’s theory of entrained aether, as opposed to Einstein’s Theory of Relativity; they both, as I understand it, “explain” all the relevant data; where they make different predictions, falsification of either requires experiments we cannot perform. That leads to wildly different possibilities, but we cannot choose among them given the present state of observations. There is an overwhelming consensus in favor of Einstein, but there is no crucial experiment to choose between them at this time.

When conflicting theories lead reasonably to disparate courses of action the situation becomes critical, in particular if the different actions have high cost; this is the situation in which we find ourselves regarding global warming, with the added problem that there are mutual assertions of falsifications of the different theories, as well as conflicting claims of the validity of certain evidence.

Some statements may be true, but are not scientific because there is no way to falsify them. My prediction that unrestricted capitalism will lead to the sale of human flesh in the market place is “scientific” in that it could be falsified, but it also rests on the non-scientific assumption that the sale of human flesh – or baby parts – is not morally acceptable. “Ethicists” and religious leaders may or may not agree on that assumption, but their disagreements cannot be settled by any scientific process I am aware of. At some point you are faced with “good” and “evil”, and it is meaningless to say that good is better than evil because good’s gooder. There are those (I am among them) who say that certain morality systems lead to a “better” way of life than others, and there are many examples, but this is not science; one reason why education needs to include the liberal arts, but that goes far afield of this discussion.


Regarding philosophy of science


Just now catching up on the latest blog post. Last couple days were busy writing/recording/editing the weekly Osborn Cosmic Weather Report. So I want to respond to some talking points.

1) Astronomers certainly did NOT pounce upon Doppler shift uncritically, after Hubble’s discovery — more like throwing a firecracker into an ant’s nest. I didn’t go into the details of the history because I could have written a book about it. Many books HAVE been written about it. And like it or not, the bulk of the demonstrable evidence that we have today lands on the side of large-scale expansion. Note I said LARGE-SCALE. It’s long been known that localized inhomogeneities were required even to develop the galaxies we see, let alone clusters, superclusters, and the other structures we’re still discovering, like “walls” and “bubbles.” So this is no new thing. I will say that we’re still working on how it all came about, but we know it did, because we see the results.

Hubble’s discovery and subsequent others produced an uproar in the community, with huge infighting about the validity of the results between the “Steady-State-ers” and the “Expansionist Universe-ers.” This in fact closely parallels a similar and more or less concurrent, long-running controversy in geology between the concepts of uniformitarianism and catastrophism, where uniformitarianism can be likened to steady state and catastrophism to big bang/expansion. Geologists now think that the reality seems to be a blending of the two, a kind of uniformitarianism punctuated by episodes of catastrophism; is it then so surprising that cosmology is proving to be the same?

Moreover, in no wise are astronomers/cosmologists/astrophysicists favoring a particular model over another, as evidenced by the large number of theories/models that are put forward. (My friend, physicist Dr. W, and I have discussed the whole “dark matter/dark energy” concepts several times; neither of us is disposed to care for either one, and we are inclined to think they will eventually be disproven. But right now they do seem to explain observations.) My entire point was that these things are indeed being considered, but just because we seem to find a data point that is in conflict with current theory does not mean we automatically throw out the baby with the bath water and start over from scratch.

Also note that I am not saying that any theories would be “knocked out if new theories were accepted.” Obviously Newtonian physics was not “knocked out” by relativity theories, nor quantum mechanics, nor any of the rest. In fact what we find is that Newtonian physics is what the others reduce to in the everyday world. Quantum mechanics devolves to Newtonian physics as the scale increases from subatomic to macro world. Relativity devolves into Newtonian physics at increasingly lower sublight speeds. Et cetera. This is what a proper “new theory” SHOULD do — reduce to the established, observable ways/models when “ordinary world” initial conditions are plugged in. What is happening, however, is that this thrust experiment is contradicting the “ordinary world” model, which has been demonstrably proven correct over centuries (and arguably millennia) of observation. And THAT is what experienced scientists take issue with.
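The point that a proper new theory should reduce to the established one can be made concrete with a standard textbook expansion (not specific to the thrust experiment): expanding the relativistic energy of a body in powers of v/c recovers the Newtonian kinetic energy.

```latex
E = \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
  = m c^{2} + \frac{1}{2} m v^{2} + \frac{3}{8}\,\frac{m v^{4}}{c^{2}} + \cdots
```

For everyday speeds the higher-order terms are negligible, and what remains beyond the constant rest energy is just the familiar Newtonian ½mv².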

2) I think some may be confusing the difference between the universe and the models we have of the universe. When new, unexplained data is discovered, obviously this is coming FROM the universe, and it is the MODELS that must be adjusted to try to see if the new data can be explained. It isn’t that we’re trying to shoehorn the universe to fit our theories. We are looking to see if this new evidence has uncovered something that needs to be added, something we didn’t know about before. It is a MODIFICATION of our theories/models, not changing the universe, that is occurring. This usually requires several iterations, and not infrequently does in fact require the model to be reduced to its basic components and rebuilt, or occasionally thrown out altogether and replaced.

Think about it like this: You want to race cars, and you want to win. You’re on a budget constrained by other factors — house payment, credit card payments, food bill, kid in college, etc. So which is easier and more economical, which fits into your budget better: Take the stock car already in your garage and modify it to juice it up, or throw out the stock car and start building an Indy race car from the pavement up? You’re going to start with your stock car and modify it, then you’re going to race it and see if you win. If you don’t win, you keep modifying the stock car until you’ve reached the limits of what the frame will handle. If you’re still not winning, you scrap the stock car and start work on an Indy car design.

In this analogy, your budget constraints are the body of existing observations. Your stock car is existing science and its models. Winning in this case means your model correctly predicts the observations; juicing up the stock car represents the modifications to existing theory you have to make to try to predict the data. The Indy car design is when you can’t get existing theory to match observation, so you scrap the theory and construct another. But you still have those budget constraints! The new model has to accurately predict, not just the new observations, but all the old ones too. It has to be “drivable on the road,” as it were. Sort of like a Transformer that goes from Indy car to your mom’s sedan and back.

3) String theories: there are in fact five basic string theories. (And while I’m about it, let me point out that there is a difference between a cosmic string and a superstring. Here I refer to superstrings.) Each theory was developed by a different researcher or group of researchers, and each one accurately predicts some of the observable data — but no one superstring theory predicts ALL of the observable data. Nor, so far, can they be made to do so.

This is a case where the scientists dropped back and punted. It wasn’t exactly that they scrapped the stock car, but they definitely were pulling Indy car concepts into the modifications! (To continue my racecar analogy, I’d say they kept the frame but put in a new engine and more aerodynamic body.)

Unable to get their superstring models to wrap around the whole problem, they made a fundamental realization that relates back to that “new theories should reduce to the older forms” comment I made earlier: They realized it was very likely that the five different superstring theories were actually special cases of an overarching theory. So they instead created a new theory/model, called M Theory. And this, so far, DOES accurately predict all of the observable data, though again it may possibly not be the simplest way to do so; Occam’s Razor and all. But it’s the best we’ve come up with so far.

(This is a case where Dr. W might be more up on the latest developments than I am, since it falls more into the realm of particle/quantum physics in which he specializes than the astronomy/astrophysics in which I specialized. I did study M theory and the related stuff in order to write both Extraction Point with Travis S. Taylor, and my Displaced Detective series. And I’ve tried to stay up on what’s going on with the theory — I get asked about it a lot at SF cons. I don’t claim to be an expert in M theory by any means.)

4) Quasars: given that, in recent years, we’ve been able to image the distant galaxies in which quasars are embedded, and we have been able to generate models of the mechanism that predict observational data, it’s going to be rather hard to argue away the notion that they are indeed embedded in galaxies.

As for proper motion, that is still in debate. Proper motion is not, contrary to what you might think, immediately obvious to the observer, especially when we are looking at extragalactic objects. Why? It’s complicated — because the Earth is making a truly spectacular gyration through the universe: it is spinning on its axis, revolving around the Sun, following the Sun in its orbit about the galactic center, and moving with the galaxy as it orbits the center of mass of the local cluster, which is in turn orbiting the center of mass of the local supercluster, which is experiencing linear motion through the universe…and then there is precessional motion of all of that, and more. All those motions have to be determined as accurately as possible, and then SUBTRACTED FROM THE MEASUREMENTS of the apparent proper motion of any given object. Only then can we say that the object MAY be experiencing true proper motion.
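The subtraction described above can be caricatured in a few lines. All numbers below are purely hypothetical, and a real reduction works with full 3-D ephemerides and error models rather than a flat two-component subtraction; this only illustrates the bookkeeping.

```python
# Sketch: removing modeled observer motions from an apparent proper motion.
# Units are milliarcseconds per year (mas/yr); all values are hypothetical.

apparent = (1.20, -0.45)  # measured apparent motion of the object (RA, Dec)

# Modeled contributions of our own motion, projected onto the sky:
observer_terms = [
    (0.90, -0.30),  # Sun's orbit about the galactic center (hypothetical)
    (0.25, -0.10),  # galaxy's motion within the local cluster (hypothetical)
    (0.04, -0.02),  # precession and other residual terms (hypothetical)
]

# Subtract every modeled term, component by component.
residual = tuple(
    a - sum(t[i] for t in observer_terms) for i, a in enumerate(apparent)
)
print(residual)  # what is left MAY be true proper motion, or leftover error
```

The point of the letter is that the residual is tiny compared with the terms being subtracted, so a small systematic error in any one of those models can masquerade as proper motion of the target.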

Current studies of quasar proper motion seem to be indicating that there is an inadvertent systematic error in the reduced measurements (as well as a couple of other things occurring within individual quasars) that, if corrected properly, will remove most if not all of the purported proper motion. Or to put it more simply, we may have an error in our estimate of the motions we ourselves are making, which is causing an apparent motion of the studied objects when there really is little or none. The jury is still out on that, but legitimate research is ongoing.

Also consider that we currently have a nice spectrum of galaxy “types” or morphologies, ranging from “ordinary,” to interacting, to Seyfert/BL Lacertae/radio galaxies, to quasars. These in general range nicely from nearby, to a little farther out, to pretty far out, to way the hell over there. With a whole lot of observational evidence that quasars are embedded in galaxies, and that they have a lifetime that takes them through several morphologies, it’s going to be hard to disprove. Note I didn’t say impossible. There are arguments for other kinds of Doppler shifting, such as relativistic gravitational. But, “With great power comes great resp–” no, sorry, wrong quote. “Extraordinary claims require extraordinary evidence.”

Now, somebody reading all this is bound to be thinking that I’m just one of these “accepted science” conspirators who are trying to stifle anything new. Not so. I have a brain, I use it most days, and I am trained to be a skeptic. (Blonde hair notwithstanding.) If I were into “accepted science” then I would not be posting guest blogs like this.

No, I sit down and look at the data in the light of what I know. I look at the models and decide if they make sense, or if they are off in the weeds someplace. If it all lines up, and if I can take the data, feed it into the model, and predict more data, and that prediction is demonstrably correct by collecting the additional data, then I conclude that the model is correct insofar as we understand the science to this point in time. If it does not, I conclude that the model is wrong, and possibly the theory behind it as well, depending on whether I can determine if it was just a poorly-constructed model or if the problem with it is more fundamental.

This is not simply going along for the ride because someone else says so. And this is the way science is supposed to work. Does it always work like this? No, it doesn’t. Because scientists are human too, and we can get hidebound and attached to our pet theories. (Go read up on William Thomson, Lord Kelvin’s successes, as well as his failed predictions, if you don’t believe me. And he was as “established” as they come.) But it does so more often than not, and especially in my chosen fields, I’m pleased to say.

Stephanie Osborn

“The Interstellar Woman of Mystery”

It is clear that we are at the edge of observational accuracy, and possibly many statements which appear to be falsifiable are in fact not so with present equipment. It would not be the first time.

And I will repeat my own view: the extraordinary claim of reactionless drive needs considerable evidence that it exists, since it falsifies a fundamental principle of Newtonian physics, as well as being incompatible with Relativity.


More on Beckmann and Einstein
<<Jerry P I commend to you Petr Beckmann and his Einstein Plus Two…>>
And I commend to you Tom Bethell’s book, Questioning Einstein: Is Relativity Necessary? (2009), explicating Beckmann’s theory, and putting it into the whole historical context of the development and testing of relativity theory, and the wider and continuing question of the nature and existence of the “ether”.
Bethell has been a contributing columnist and/or editor of National Review, The American Spectator, Harper’s, and other intellectual periodicals, and he is a Hoover fellow. He specializes in whistle-blowing on politically correct orthodoxies, so of course he is persona non grata with the elite establishment, which in my book is one of his strongest credentials.
Among the many thoughtful and trenchant pieces of his I’ve clipped and saved was a two-part swipe (in the June and July/August American Spectator) at the cancer research mafia that has deflected so many tens of billions of dollars of taxpayer money into unproductive reinforcement of the established paradigms that retroviruses (and now faulty genes) cause cancer, while shunting aside the fact that virtually all solid tumors consist of cells with an abnormal number of chromosomes rather than the normal two copies of each. This phenomenon is called aneuploidy, and it has been known since the 1960s, yet practically no research has been done on the replication errors that must lie at the heart of it. How many lives have been cut short and/or blighted because of this waste of funds and scientific talent?
Bethell worked closely with Beckmann and with his colleague and collaborator, physicist Howard Hayden (who wrote the introduction to Bethell’s book), and Bethell did extensive research of his own, drawing on papers of Einstein that have only recently become available, and also on the papers of Nobel Laureate Albert Michelson, who designed the interferometer that was used in the classic Michelson-Morley experiment, and who went on to design and conduct the Michelson-Gale experiment in 1924 that conclusively established that there was indeed an ether – a gravitational ether detectable against the earth’s rotation. That finding has been partially replicated in passing by the Brillet-Hall experiments of 1979, which ironically were focused on finding the same kind of ether (detectable against the frame of the earth’s orbital motion) that the Michelson-Morley experiment failed to find in the first place (Brillet-Hall predictably repeated that original failure).
Einstein himself was one of the chief encouragers to Michelson, then at the University of Chicago, to conduct the Michelson-Gale experiment, which involved constructing an apparatus that spanned an area of some 50 acres, and he traveled to Chicago and met with Michelson for that purpose. Einstein had also begun to recognize as early as 1911 that his General Theory of Relativity REQUIRED a gravitational ether, regardless of the fact that his Special Theory of 1905 had dispensed with it. Bethell quotes Einstein thus {p182}:
“In an article published in 1911, ‘On the Influence of Gravitation on the Propagation of Light,’ Einstein acknowledged that the constancy of the velocity of light is ‘not valid in the formulation which is usually taken as the basis for the ordinary [special] theory of relativity.’ The velocity of light in the gravitational field ‘is a function of the place,’ Einstein said. Light rays ‘propagated across a gravitational field undergo a deflexion.'”
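In modern notation, the 1911 result Bethell quotes is usually written (with \(\Phi\) the Newtonian gravitational potential, negative near a mass, and \(c_0\) the speed of light far from it):

```latex
c(\mathbf{r}) = c_{0} \left( 1 + \frac{\Phi(\mathbf{r})}{c_{0}^{2}} \right)
```

Since \(\Phi < 0\) near a mass, light on this account travels more slowly in a gravitational field; the full 1915 theory famously doubled the light deflection this early formula predicts.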
Einstein may thus be said to have backtracked on his premature discarding in the Special Theory of Relativity of the ether principle that presumes that some medium is necessary for the propagation of waves, whether they are light quanta or gravitational quanta, and to have anticipated, not only Beckmann, but the Michelson-Gale experiment.
None of this casts any shadow of doubt on Einstein’s theory of General Relativity, except that it suggests that it ought to have been called Einstein’s Theory of Gravitation, dropping the relativity moniker altogether. However, it is clear from the body of evidence reviewed by Bethell that Einstein’s Special Theory is both irrelevant to practical modern physics and pernicious in its paradoxical implications. The Special Theory is irrelevant because it applies only to inertial (constant velocity) frames of reference, yet we live in a universe of accelerations. But for that, the Michelson-Gale experiment of 1924 would have falsified special relativity since light was found to travel at different speeds depending on the beam’s orientation with respect to the rotation of the earth.
The 1971 Hafele-Keating experiments, transporting atomic clocks around the world in opposite directions, also appear to contradict Special Relativity. Special Relativity asserts that time slows down for an object moving with respect to the observer, which would mean that the airplane clock would appear to run slower than the Naval Observatory clock on the ground; but the reverse would be true too if the airplane clock were taken to be the fixed observer. The interpretation of the results (which were consistent both with General Relativity and with Beckmann’s theory) required the postulation of an inertial clock at the center of the earth with which the times of the other clocks could be compared.
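For reference, the standard analysis of Hafele-Keating is done in exactly the frame the letter mentions, an Earth-centered (nonrotating) inertial frame. In that frame the predicted offset of a flying clock relative to a ground clock combines a gravitational term and a kinematic term:

```latex
\Delta\tau - \Delta\tau_{\mathrm{ground}} \approx
\int \left[ \frac{g h}{c^{2}}
  - \frac{v^{2} - v_{g}^{2}}{2 c^{2}} \right] dt
```

Here \(h\) is the aircraft’s altitude and \(v\) and \(v_{g}\) are the speeds of the airborne and ground clocks in that inertial frame. Flying east adds the aircraft’s speed to the earth’s rotation (the clock loses time); flying west subtracts from it (the clock gains), which is the asymmetry actually observed.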
Because we live in a universe of accelerative forces such as gravity, the Special Theory may well be unfalsifiable, which would make it a metaphysical, not a scientific hypothesis, in Popperian terms. Certainly no one has ever observed the predicted dilations of space, or the mutual speeding up of clocks from the points of view of two observers moving relative to each other, or of the corresponding relative buildup of masses in both of the relative frames of reference as their relative velocity approached the speed of light. Science fiction has had fun with many of these paradoxes, but the Special Theory of Relativity, properly understood, gives us no reason to suspect that any of these phenomena are features of our universe.
The Special Theory of Relativity is also pernicious, not only because it gives rise to incomprehensible paradoxes that suggest that our whole conception of physics is wrong, but also because the second postulate of the Special Theory, that the speed of light is a constant independent not only of the source but of the observer, permeates the thinking of modern physicists as a dogma, even though it is ignored in practice, and for good reason. For example, if the speed of light were always constant in our universe, the concept of simultaneity would dissolve into meaninglessness and there would be no way to synchronize clocks, nor could the GPS satellite system be made to work: it does work, of course, but only because a fixed temporal frame of reference is presumed.
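For scale, the clock corrections in the conventional relativistic account of GPS can be estimated in a few lines. This is a back-of-envelope sketch under simplifying assumptions (idealized circular orbit, ground clock at mean Earth radius, Earth's rotation ignored), not an engineering calculation:

```python
import math

# Back-of-envelope estimate of the daily relativistic offsets applied to
# GPS satellite clocks in the conventional account.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
R_SAT = 2.6561e7      # GPS orbital radius, m (about 20,200 km altitude)
C = 2.99792458e8      # speed of light, m/s
DAY = 86400.0         # seconds per day

v_sat = math.sqrt(GM / R_SAT)  # circular orbital speed, roughly 3.9 km/s

# Kinematic (special-relativistic) term: the moving clock runs slow.
kinematic = -(v_sat**2) / (2 * C**2) * DAY

# Gravitational (general-relativistic) term: the higher clock runs fast.
gravitational = GM * (1 / R_EARTH - 1 / R_SAT) / C**2 * DAY

net = kinematic + gravitational
print(f"kinematic:     {kinematic * 1e6:+.1f} us/day")
print(f"gravitational: {gravitational * 1e6:+.1f} us/day")
print(f"net:           {net * 1e6:+.1f} us/day")  # roughly +38 us/day
```

The net offset of roughly +38 microseconds per day is the figure usually quoted; whether one interprets that correction in relativistic terms or, as the letter suggests, against a preferred temporal frame is exactly the kind of question at issue here.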
The main problem with the Special Theory is that in order to preserve Einstein’s dogmatic postulate of the constancy of the speed of light independent of both source and observer, and its relativity implications, the mathematics of the General Theory of Relativity had to be unduly complicated. And, as you note, Beckmann’s work has demonstrated that the gravitational phenomena with which the General Theory is concerned can be accounted for in classical Newtonian terms, without all the mystification and paradoxes. However, Hayden notes in his introduction that the actual mathematics of overlapping gravitational fields (e.g. taking into consideration where the balance points lie between the gravitational fields of the earth, the moon, the sun, etc.) can still be quite complex, and Beckmann himself never got around to working those out.
John B. Robb

Thank you for the summary. I have many times recommended Tom Bethell’s book and possibly should have done so again; there are other works on modern aether theory as well. Google “Is Einstein necessary”… Relativity was “confirmed” by the bending of light rays in a gravitational field; there are other explanations which do not require tensor calculus. Whether understanding the universe requires mathematics of that order of difficulty I cannot say; I confess that I hope not. Tom leaves out the math, which is perhaps wise.


Subject: Epistemology and the Münchhausen trilemma

I had an email about this in my drafts. Since you’ll discuss epistemology, where do you stand on the Münchhausen trilemma? How do we overcome the balkanization of epistemology? Why don’t we teach epistemology in high school? I think that and general semantics (per Alfred Korzybski) would solve many problems.

◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

Probably but I have only so much time. What is the Munchhausen trilemma?

Jerry Pournelle

Chaos Manor

Epistemology and the Münchhausen trilemma

I’ll keep this very short, relatively speaking; this is an outlined response:

The Münchhausen trilemma is the crux of epistemology. Anyone who studies epistemology soon becomes aware that every tendency in epistemology has weaknesses — including science — making it fallible. No tendency (e.g. authority, faith, science, empiricism, logic, rationalism, idealism, constructivism) reveals truth, because you come to a point of infinite regress. John Pollock describes it best:

“… to justify a belief one must appeal to a further justified belief. This means that one of two things can be the case. Either there are some beliefs that we can be justified for holding, without being able to justify them on the basis of any other belief, or else for each justified belief there is an infinite regress of (potential) justification [the nebula theory]. On this theory there is no rock bottom of justification. Justification just meanders in and out through our network of beliefs, stopping nowhere.” You find no solid truth that everything is built upon; it becomes more like a ball of ants crossing a river. Now you have to make a choice; you have three options, all of them undesirable. You must face the Münchhausen trilemma.

The trilemma is named after Baron Münchhausen, who pulled himself out of quicksand by his own hair. The Münchhausen trilemma is how we answer the question “How do I know this is true?” When we ask ourselves this, we provide proof, but then we need proof that our proof is true, and so on with subsequent proofs. We can deal with this problem in three ways:

1. We create a circular viz. coherent argument where theory and proof support each other. X is true because of Y; Y is true because of X.

This is dangerous because such arguments can be logically valid, i.e. their conclusions follow from their premises, yet circular reasoning is an informal fallacy. The main problem is that one must already believe the conclusion, and in any case the premises do not prove the conclusion in this way. Therefore, the argument will not persuade — well, it might persuade non-critical thinkers. This is where guys like Hitler do really well.

Coherentism is the approach.

2. We agree on axioms. Consider the conch shell in Lord of the Flies; the group agreed on the axiom that whoever held the conch had the right to speak. In epistemology, we might agree that a series of statements is true even though we cannot verify this. Most social organizations seem to run on this approach. Some more esoteric organizations even mention a “substitute” for something that was “lost”, and this is a reference to the axiomatic tendency of the organization. Foundationalism is the approach.

3. We accept infinite regress. We realize that each proof requires further proof, ad infinitum. This view rejects the fallacies and weaknesses of the previous two choices, while accepting reality. Infinitism is the approach.

While most scientists I speak with dismiss this as an “exercise in abstract philosophy”, I think they’re wrong. I think they’re not comfortable with the weaknesses of their paradigm, and I confirmed this when discussing the weaknesses of empiricism and rationalism — the constituents of science. I normally get the “science is settled” or “if it isn’t science it’s not worth knowing” arguments and we reach an impasse (their axioms). Despite their inability to prove their claims, that is the only criticism of the trilemma I’ve encountered.

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

Having had only an undergraduate training in philosophy with some help from the Jesuits, I may have missed something, but I could have sworn that “the Münchhausen trilemma” was not commonly, or indeed at all, referred to in the late ’40s and early ’50s; it may be well known today by that name, but this has not always been so. As to the identity of the Baron, thank you, but I have had some previous exposure to the nobleman’s exploits. His name is also used in psychological diagnoses, although that is rather modern; before the need for appellations to use in insurance claims, we were content to use less colorful diagnostic terms than Munchhausen by Proxy. But that’s a different story.

I was not aware that a common philosophical problem/criticism had been given the Baron’s name; I suspect this is due to the popularity of certain movies.

I am not ready to undertake the rather long task of providing instruction in philosophical principles; I simply have to make do with Sir Karl Popper, and the rather mundane notion that if two psychiatrists, a nurse, and the ward boys tell you that you are not covered with bees, you might as well stop brushing them off your coat. We can insist that statements about the world are not science if they cannot be falsified. As to what is truth: we can agree that statements that can be falsified but have not been may be acted on as if verified, even though full verification is not possible. Of course this can lead to having to treat two different views of reality as true if they do not generate falsifiable statements that conflict with each other. Rather like Beckmann and Einstein.

Excuse my brevity. It is painful to type while staring at the keyboard.

Jerry Pournelle

Chaos Manor

I appreciate you taking the time to respond at some length; especially considering that it’s not easy to type right now.

I did some research on it, and the term was coined in 1968 in reference to Karl Popper’s trilemma of dogmatism vs. infinite regress vs. psychologism. Popper, in his 1935 publication, attributed the concept to Jakob Fries. However, the trilemma of Fries is slightly different from the Münchhausen trilemma. You can read more here if you’re interested.


I understand that your time is limited; perhaps you could publish our exchange on the Münchhausen trilemma? My understanding of the trilemma is limited but it is important to me. For me, this is the bottom of the pile and it’s something I spent a good part of my life searching for and I want to popularize it as much as possible along with General Semantics, arete, Bloom’s Taxonomy, cheaper energy, and the Classical Trivium. =)

As an aside, under the Jesuits, you may know Agrippa’s trilemma — presented by Sextus Empiricus and attributed to Agrippa the Skeptic by Diogenes Laertius (not the Cynic). However, Agrippa’s trilemma has five — not three — choices.

◊ ◊ ◊ ◊ ◊

Most Respectfully,

Joshua Jordan, KSC

Percussa Resurgo

The map is not the territory. Science can provide us with better maps – if we follow the rules – but they are only maps.


Hello Jerry,

Reader James had a nice comment on your post for 24 August re Settled Science about the uncanny precision of planetary temperature measurements, over century time frames, when an instrumentation calibration lab, under controlled conditions, would be hard put to duplicate it over 24 continuous hours.

Without critiquing every ‘a’, ‘and’, and ‘the’ of his post, it sounded spot on to me.

Coincidentally, the pooh-bahs of climate science from around the world made the following announcement this month:  July 2015 was the hottest month ever, since records began in 1880.

Here is a quote from NOAA’s official announcement:

“The combined average temperature over global land and ocean surfaces for July 2015 was the highest for July in the 136-year period of record, at 0.81°C (1.46°F) above the 20th century average of 15.8°C (60.4°F), surpassing the previous record set in 1998 by 0.08°C (0.14°F). As July is climatologically the warmest month of the year globally, this monthly global temperature of 16.61°C (61.86°F) was also the highest among all 1627 months in the record that began in January 1880. The July temperature is currently increasing at an average rate of 0.65°C (1.17°F) per century.”

In the spirit of James’ comment, are the people producing such drivel stupid enough to believe it themselves?  Do they REALLY believe that we have had a planet-wide instrumentation system in place since 1880 and a 135-year database of its output that would allow us to list the 1627 months since 1880 in rank order of the temperature of the entire planet for each month?  In spite of the fact that there is AFAIK no universally agreed upon method of even CALCULATING the temperature of the planet for a given month?  And if there IS a cookbook procedure, do they really believe that the planetary instrumentation system provided sufficient coverage and precision over the entire 1627 months to justify their proclamation of an anomaly of 0.08°C for a specific month as a ‘record’?

Back when I was a Navy tech and we were faced with some incredible feat of technological wizardry, our typical response was “Modern science knows no limitations!”  That would appear to be especially applicable to ‘Modern Climate Science’.

Bob Ludwick

I am at a loss to explain why they cannot tell us the formula for “the temperature of the Earth”. I suspect they are afraid they would be laughed at.
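For what it is worth, the unit arithmetic inside NOAA’s quoted paragraph is internally consistent; it is the claimed measurement precision, not the conversions, that Mr. Ludwick is questioning. A quick sketch of that check (my own arithmetic, using nothing beyond the quoted numbers):

```python
# Sanity-checking the arithmetic in NOAA's quoted July 2015 figures.
# This verifies only internal consistency, not the underlying measurements.

def c_to_f_delta(dc):
    """Convert a temperature *anomaly* (a difference) from Celsius to Fahrenheit."""
    return dc * 9 / 5

def c_to_f(c):
    """Convert an absolute temperature from Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

# Months from January 1880 through July 2015, inclusive:
months = (2015 - 1880) * 12 + 7
print(months)                         # 1627, matching NOAA's count

print(round(c_to_f_delta(0.81), 2))   # 1.46 degF anomaly, as quoted
print(round(c_to_f(15.8), 1))         # 60.4 degF 20th-century July average
print(round(15.8 + 0.81, 2))          # 16.61 degC for July 2015
```

So the 1627-month figure and the Celsius/Fahrenheit pairs all agree with each other; the question of whether 0.08°C between two planet-wide monthly averages is a meaningful distinction remains exactly where the letter leaves it.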


if you want more of something, subsidize it
Dr. Pournelle,
A case demonstrating your point: How Carbon Credit Program Resulted In Even More Greenhouse Gas Emissions



Turning Atmospheric CO2 into Carbon Nanofibers
Dr. Pournelle,
Regardless of what one believes about Climate Change, an economic process for manufacturing carbon nanofibers from atmospheric CO2 is pretty cool stuff. Projected cost is $1000/ton of nanofibers.


If the system produces a product worth more than the cost of making it, I would assume it will be capitalized soon enough. I’m too lazy to do the numbers. But I suspect the CO2 entering the atmosphere each year far exceeds the amount of carbon fiber you can sell, so if it be actually needful it may have to be subsidized, but that’s better than bankrupting ourselves.


Security Theater, er Theatre

“Toddler’s Minions ‘fart blaster’ not allowed on flight as it has a trigger”

Well don’t we all feel SO much safer now?

“Will there ever again be an England?” -Anon



I dare not answer that…


Celebrating George Orwell’s birthday

A group of Dutch artists celebrated George Orwell’s birthday on June 25th by putting party hats on surveillance cameras around the city of Utrecht.

“If you want any discipline to shape up, first get it laughed at.”

– Paul Harvey




Solar Minimum as Dangerous as Solar Maximum


by Mitch Battros – Earth Changes Media

A new study just published in the scientific journal Geophysical Research reports that charged particles from various sources are amplified near the Earth’s equator. Brett A. Carter, lead author, from the Boston College Institute for Scientific Research, provides evidence indicating that smaller geomagnetic events occurring in equatorial regions are amplified by the equatorial electrojets.

The article is well beyond my expertise, but is very interesting. The climate modelers tend to secrecy about such matters.






Freedom is not free. Free men are not equal. Equal men are not free.




The Science is Settled

Chaos Manor View, Monday, August 24, 2015

“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded—here and there, now and then—are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

“This is known as ‘bad luck’.”

– Robert A. Heinlein


After this great glaciation, a succession of smaller glaciations has followed, each separated by about 100,000 years from its predecessor, in step with changes in the eccentricity of the Earth’s orbit (orbital eccentricity itself was first described by the astronomer Johannes Kepler, 1571-1630; its connection to the glacial cycles is due to Milutin Milanković). These periods of time when large areas of the Earth are covered by ice sheets are called “ice ages.” The last of the ice ages in human experience (often referred to as the Ice Age) reached its maximum roughly 20,000 years ago, and then gave way to warming. Sea level rose in two major steps, one centered near 14,000 years and the other near 11,500 years. However, between these two periods of rapid melting there was a pause in melting and sea level rise, known as the “Younger Dryas” period. During the Younger Dryas the climate system went back into almost fully glacial conditions, after having offered balmy conditions for more than 1000 years. The reasons for these large swings in climate change are not yet well understood.


I have been brooding over the mess with the Hugo Awards all weekend, and you do not need to tell me that thinking about the subject is a waste of time, particularly since I have no stake whatever in it. So the less said about it all, the better. My other excuse is that it’s been hot. And maybe I confess to a bit of laziness.

I’ve also been thinking about more important matters. The problem is that typing is painful. Less so with this Logitech K360 keyboard, but I still must use two-finger typing and stare at the keyboard rather than at what I am typing, and I still look up to discover to my horror a flood of red wavy lines – words I have had to fix. At least I no longer hit the alt key and the spacebar simultaneously (or at least not very often), which sometimes causes me to lose everything I have written.

I need to write an essay on the philosophy of science as I understand it. That’s what they call epistemology in universities nowadays: the study of how we know what we know, and how well we know it. I “took” Philosophy of Science from Gustav Bergmann at the University of Iowa when I was an undergraduate there in its golden days. Bergmann was one of the former members of the Vienna Circle who fled to the United States before WW II, and had worked with Karl Popper.

I never met Karl Popper, although I wish I had. Popper seems to me to have made as concise a statement of how science works as has ever been done. You can never “prove” an empirical scientific (as opposed to a logical) statement or hypothesis. We can never know Truth as science. What we can do is falsify statements, or attempt to; those that have not been falsified may be treated as true, always reserving the possibility that they will some day be falsified. Statements that cannot be falsified by any means whatever are simply not scientific. In some philosophical realm they may be “true” but they are not scientific and are not the business of the scientists.

This does not seem radical today, but when first put forth by Popper it exposed Freudianism and other such “sciences” to the criticism that, since they could explain everything and thus could not be falsified, they in fact explained nothing and were not science.

This is a simplification of a rather complex subject. Those who want to know more will have little difficulty in finding discussions.

The essence is that if you cannot falsify a statement it is not science; and if an experiment gives evidence of the falsification of an hypothesis, the hypothesis is false. You may not diddle with it to make it fit the facts, you must make a new – and falsifiable – hypothesis that covers all the facts. Adding non-falsifiable modifications to your theory in order to cover the new facts is right out. That may seem obvious now, but it was not always accepted, and is not actually universally applied now.

With that introduction we consider the extraordinary evidence situation.

The Extraordinary Evidence Fallacy

Dear Jerry:
You wrote in your View for August 19, 2015:

“Extraordinary claims require extraordinary evidence.”

I’ve read that Carl Sagan popularized the saying. This philosophical claim turns out not to be true.
The criterion of extraordinary evidence is frequently raised by those arguing against the existence of God, or against the reality of miracles such as the resurrection of Christ. William Lane Craig discusses the fallacy regularly in his debates and podcasts.
To get the flavor of the counter-argument, consider Craig’s response to Lawrence Krauss during their debate at North Carolina State University on March 30, 2011. Craig said:

Now he [Kraus] says, “Extraordinary claims require extraordinary evidence. David Hume’s argument against miracles is sound.” Here, what you need to understand is that that claim is demonstrably false. It is not true. Hume didn’t understand the probability calculus. It wasn’t yet developed in his day. His argument neglects the crucial probability that we would have the evidence which we do if the miracle in question had not occurred. And that factor can completely balance out any intrinsic improbability that you think might occur in a miracle. In any case, why think that a miracle like the resurrection is intrinsically improbable? I think what’s improbable is that Jesus rose naturally from the dead. But, of course, that’s not the hypothesis. The hypothesis is that God raised Jesus from the dead. And you can’t show that that’s intrinsically improbable unless you’re prepared to argue that the existence of God is improbable. And Dr. Krauss isn’t doing that tonight. That’s not the debate topic, as he explained. The topic tonight is, “Is there evidence for God?,” and so we’re not assessing the prior probabilities of whether or not God’s existence is intrinsically probable or not. And so I think the approach that I’m taking tonight is right in line with probability theory and does show that, given the facts that I’ve laid out, God’s existence is more probable than it would have been without them.

Read more:

In a podcast on 8/3/2014 Craig elaborated:

So this slogan, I think, is simply demonstrably false. In fact, it is contradicted all the time when we believe highly improbable, perfectly natural events have occurred because we have good evidence for them – not miraculous or extraordinary evidence but ordinary evidence. But it would be very, very improbable that we would have this sort of evidence if the event had not taken place. So this first claim is nothing more than a slogan that the unbeliever can use to dismiss any evidence that you present. He can use it as a slogan and simply say that is not extraordinary enough for me to believe. It really tells us more about his personal psychology and skepticism than it does about the value of the evidence we are presenting.

Read more:

Thus, one man’s extraordinary claim can be another man’s mundane assumption. Much depends on one’s criterion for incredulity.
Carl Sagan frequently claimed that “The Cosmos is all that is or was or ever will be.” To him this was a truism. Yet I consider Sagan’s claim an extraordinary philosophical leap of faith. I wonder “How does he know?” Our two worldviews were far apart.
Those interested in pursuing this issue will find much more on the Web, of course.
Best regards,
–Harry M.

To be clear, this concept was first published by Laplace, who said “The weight of evidence for an extraordinary claim must be proportioned to its strangeness.” Sagan popularized this concept, and it seems intuitively true. If I tell you that the sun will not rise tomorrow, that is a falsifiable statement, and therefore might be said to be scientific; but if I then tell you the sun did not rise, and I seem to be the only person to have noticed that, could I be said to have falsified the hypothesis that the sun will rise tomorrow and every day thereafter per omnia saecula saeculorum? Or would you demand more evidence?

If two psychiatrists, a nurse, and an orderly all tell you that you are not covered with bees, you may as well stop trying to brush them off your coat, to quote some psychiatric book I read fifty years ago.

Similarly for the various reactionless drives: if someone claims he has one, is that sufficient evidence? Obviously if he claims he is married we tend to believe it; if he claims he is happily married, we may accept that on his say-so; but if he claims he is happily married to a talking gorilla, most of us would require somewhat more evidence.

Similarly, the Apostles understood well that their claim to have seen the Master, and fed Him a bit of boiled fish, was extraordinary; and those who recorded it took care to identify the witnesses. It is not a scientific claim, and you may doubt the evidence for the Resurrection; people have done so for a thousand years. But then statements about miracles are not scientific hypotheses, and are not subject to the rules of science. By definition a miracle is exceptional and takes place outside science. That does not mean there are no miracles, or that no one has ever observed one. The keepers of Lourdes claim to have extensive documentation of a very great many of them.

Incidentally I have credentials from a major university that state that I do “understand the probability calculus”, but I’m having trouble understanding Craig’s point. Given the existence of God, all things are possible; but the calculus of probability still cannot predict miracles without a great deal of carefully gathered evidence suitably organized, and even then there is no reason to assume the conditions making your probability estimate possible will prevail.
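Craig’s central move — the probability that we would have this evidence had the event not occurred — is the likelihood term in Bayes’ theorem. A hypothetical numerical sketch (my numbers, chosen only to illustrate the structure of the argument, not to weigh any real claim):

```python
# Illustration of the Bayesian point: ordinary evidence can support an
# intrinsically improbable event if that evidence would be very unlikely
# had the event not occurred. All numbers here are invented.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# An event with a one-in-a-million prior probability...
prior = 1e-6
# ...but evidence (say, a winning lottery number printed in the paper)
# that is near-certain if the event happened, and vanishingly unlikely
# to appear through error or fraud if it did not:
p = posterior(prior, p_e_given_h=0.999, p_e_given_not_h=1e-9)
print(round(p, 3))   # 0.999: the improbable claim is now well supported
```

The dispute, of course, is over what numbers belong in those slots for any given claim — which is Harry’s point that one man’s extraordinary claim is another man’s mundane assumption.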

I once had this discussion with Marvin Minsky. I related an incident that changed my life. Marvin’s graduate student, Danny Hillis, immediately pointed out the probabilities, but ran into the problem that we were approaching the age of the universe in estimating the probable times between such events. It might have been fun to pursue the discussion but we were in a NASA weekend conference and had to go back to the session.

Claims of a reactionless drive are extraordinary. Evidence of the existence of such a drive needs to be “extraordinary” in the sense that the existence of such a working gadget is an improbable event, and thus it needs to be observed to work by a number of people.


The unsettled science of the Big Bang hypothesis

With respect to Stephanie Osborne’s citation of Hubble’s Law:
“Steady State Universe. There were no galaxies, there was only the universe, and it had always been just like it is. Then Edwin Hubble realized that the unusual spectra he was getting from those peculiar stars could be explained if they were regular spectra with extreme redshifts, and he discovered that those peculiar stars are what we now call quasars, and they were far distant galaxies in their own right, speeding away from us at incredible velocities. And then astronomers began to realize that all those “spiral nebulae” and such were also galaxies, and they were also redshifted, but at an amount corresponding to their distance. And lo, Hubble’s law was born.”

Physicist Hilton Ratcliffe in The Static Universe: Exploding the Myth of Cosmic Expansion (2010) points out that what became known as Hubble’s Law started out as a tentative, speculative hypothesis advanced in 1929, based on very limited observational data: the galaxies beyond our own that Hubble was the first to observe had redshifts that correlated inversely, though very roughly, with their magnitudes. Other astronomers pounced eagerly on this hypothesis, adopting it more or less uncritically, and interpreted the redshifts to be due to the Doppler Effect, which, if true, would give them a tool to estimate the distances to extragalactic stellar systems. And in short order other astronomers made a wild leap from this scant and dubious data, and from the unfounded assumption that these redshifts were instances of the Doppler Effect, to the staggering conclusion that the universe was flying apart at an accelerating rate and that therefore its history was analogous to an explosion.

Plots were made of the supposed recessional velocity of Hubble’s distant galaxies versus their distances, but these were analogous to circular arguments, since recession and distance were precisely the phenomena that the data were being invoked to establish. Plotting the same data as redshifts versus magnitude collapsed and scattered the same data to the extent that there no longer seemed to be any clear pattern. The situation only deteriorated from that point on.
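The circularity complaint can be made concrete. Under the Doppler reading, a small redshift z is converted to a velocity v ≈ cz, and Hubble’s law then assigns a distance d = v/H0; that derived distance is what appears on the plot against the very redshift it came from. A hypothetical sketch of the inference chain (the Hubble-constant value is my assumption, roughly the modern ~70 km/s/Mpc figure, not Hubble’s 1929 value):

```python
# Sketch of the Doppler-interpretation inference chain the letter criticizes:
# redshift -> velocity -> distance. The Hubble constant here is an assumed
# modern-style value; Hubble's own 1929 estimate was far larger.

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def recession_velocity(z):
    """Low-redshift Doppler approximation: v ~ c * z (valid for z << 1)."""
    return C_KM_S * z

def hubble_distance_mpc(z):
    """Distance inferred *from* the redshift via Hubble's law, d = v / H0.

    This is the step the letter calls circular: the 'distance' on the
    plot is itself derived from the redshift being plotted."""
    return recession_velocity(z) / H0

print(round(hubble_distance_mpc(0.01)))   # ~43 Mpc for z = 0.01
```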

It was soon discovered that this expanding universe model didn’t apply “locally” (i.e. to observational objects within 100 megaparsecs of us) – this was rationalized away as due to local gravity, although no rationale for considering entire galaxies or clusters of galaxies as point objects was ever advanced. Nor was it explained why more distant galaxies shouldn’t also be included, or what the criteria for inclusion in the local field should be. If the universe were actually expanding like a physico-chemical explosion, every object ought to be accelerating away from every other object, or at least moving away at constant linear velocity. And if there were exceptions to this rule due to gravitational effects, that by itself ought to distort and complicate any inference of distance from redshifts of faint stellar objects.

The exemption of “local” objects from the theory (for which alone other means of determining distance exist, such as interpolation and overlap) was the first ad hoc adjustment to preserve “Hubble’s Law” and the Big Bang Theory, to which astronomers were already heavily committed (by the 1930’s), but it wouldn’t be the last. It also became necessary to postulate that the space between mutually recessionary objects was itself expanding, and then that this space was populated with still undetectable “dark matter”. The other, much weaker, pillar supporting the BBT, the supposed background radiation that’s the residue of the original explosion, has been subjected to such convoluted and ever-changeable modeling as to become virtually a metaphysical substance itself, like the dark matter and the ether that the Michelson-Morley experiment failed to find. (Ratcliffe devotes a chapter to the torturing of background radiation data into evidence for the Big Bang hypothesis.)

Unfortunately, the creation of a local zone exempt from the BBT theory also had the effect of wiping out the data that supposedly undergirded it, as all of Hubble’s original observations, and most of those that had accumulated since, and that were likewise nebulous, fell within the area of localization. By 1935 Hubble and his colleague Richard Tolman were warning of “the possibility that red-shift may be due to some other cause, connected with the long time or distance involved in the passage of the light from the nebula to the observer”, and by 1947 Hubble was writing more broadly “it seems likely that red-shifts may not be due to an expanding universe, and much of the speculation on the structure of the universe may require re-examination.” (note Hubble’s word “speculation”).

Hubble had also argued, as early as 1942, that the fact that the redshift data were related linearly to magnitude argued for their being static, not recessionary, contrary to the assumption of the BBT theorists.

Hubble and Tolman also proposed a method of testing the Hubble hypothesis in their 1936 publication – a method that was applied in two studies published in 2006, both of which failed to confirm the theory that apparent luminosity is related to Doppler redshifting, and in fact the few studies that have purported to confirm this relationship have all been flawed by the same kinds of circular assumptions of the relationships to be demonstrated. Meanwhile, plentiful disconfirming evidence has accumulated, including classical visual astronomical observations, examples of which Ratcliffe reproduces in his book. On page 83, he lists (as I count) 32 alternate hypotheses proposed by physicists and astrophysicists to explain the redshift data. He also devotes a whole chapter to the problems that observations of quasars have caused for the classic redshift theory, ten of which were identified in a 2009 paper of Martin Lopez-Corredoira.

The most damning and disturbing aspects of Ratcliffe’s presentation are the parallels between the cultist BBT orthodoxy and the cultist AGW orthodoxy – both “settled sciences”. One doesn’t have to be a physical scientist to recognize the hallmarks of a cult: excommunication and persecution of heretics; extravagant claims of unanimity and certitude; and (not least) the vested financial and power interests at stake. One of the strengths of Ratcliffe’s book is that it amounts to a tutorial and exemplar in illustration of the Kuhnian analysis of the history of science. It is ever thus: great leaps forward must always await a prolonged period of futile and counterproductive attempts to uphold the old failing, anomaly-ridden paradigm. Thus, we are treated to the following quotation from premier philosopher of science Karl Popper:

“Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem that it was intended to solve”.

And this from Carl Sagan, laying out the two fundamental rules of science:
“First: There are no sacred truths; all assumptions must be critically examined; arguments from authority are worthless.
“Second: Whatever is inconsistent with facts must be discarded or revised. We must understand cosmos as it is and not confuse how it is with how we wish it to be. The obvious is sometimes false; the unexpected is sometimes true.”

Ms. Osborne refers to deep layers of physical scientific understanding that would be disrupted if the EM drive were to turn out to be real, but I think that you and most of your readers would agree that it’s best to keep an open mind about it nonetheless. OTOH the only edifice I see endangered by the final collapse of the BBT and its retinue of ad hoc additions (dark matter, perhaps black holes) would be that of an increasingly rickety and sterile cosmology. Astrophysicists would have to rethink many things no doubt, but I’d say that it was high time that be done anyway.

John B. Robb

I commend to you Petr Beckmann and his Einstein Plus Two which seeks at great and careful length to show that while there is no experiment to falsify Einstein’s Relativity, all the crucial experiments relied on to corroborate that theory can also be explained by Newtonian physics with several assumptions including a finite speed of propagation of the force of gravity. Beckmann proved to his satisfaction that there were no observations that falsified Newton, given that assumption; and that his theory of an aether entailed by planetary motion, consistent with Newton, was sufficient to explain the Michelson Morley experiment that formed the basis for Special Relativity.

Note that he never claimed to falsify Relativity; only that in his theory the math was simpler, and explained all the data. He did note that there was something strange about spectroscopic binaries; I will let you find that for yourself. His premise is summarized on the first page of his book:


The science is settled

Hello Jerry,

That the ‘science is settled’ is NOT confined to what is euphemistically known as ‘climate science’ but is now apparently the position of ALL science.  In particular, physics.

Here are two articles on the subject, both of which point out that nowadays, when observations of the behavior of the universe in action conflict with ‘settled science’ the universe is adjusted to fit theory.

Dark matter and dark energy are cases in point:  when large scale astronomical structures were observed to behave in ways not predicted by ‘settled science’, it was considered to be conclusive evidence that the universe was constructed largely (~95%) of ‘dark matter’ and ‘dark energy’ whose properties, quantities, and distribution could be deduced from the requirement that the universe conform to ‘settled science’.  In other words, since the theory was correct, the universe as observed wasn’t, so the universe was adjusted.

This article was precipitated by the reaction of the experts to the announcement of thrust from what are generically known as EmDrives, but includes references to dark matter and ‘cold fusion’:

I will be the first to admit that the existence of the ‘EmDrive’ effect is far from confirmed, but what the article is bemoaning is the immediate reaction of the experts:  the observations conflict with theory, therefore they are experimental error or deliberate hoax.  They may be right in this case, but is it necessary to trash the reputations of the apostates, personally and professionally, as they did with Pons and Fleischmann when they announced anomalous heat from their experiments, and as they are now doing with Dr. McCulloch with his MiHsC theory, as he describes in his blog posting for 18 August?

McCulloch claims (I certainly don’t have the ‘creds’ to either support or reject his theory) that his theory explains the observations from which the existence of dark matter/dark energy was confirmed (and quite a few other deviations of observations from theory) without requiring either.  The response by ‘settled science’ has not been to point out the error of his ways, but to make him a ‘physics non-person’ and to remove anything about his theory from common reference sources such as Wikipedia (ongoing) and arXiv.

As Dr. McCulloch says:  “It is possible for a paradigm to survive not because it is more successful, but because it deletes the alternatives, and this is what an unscientific minority of dark matter supporters are doing.”

That is the common practice in ‘Climate Science’, by the way.  Note how the reputation of Dr. Judith Curry has changed over the last 5 years or so.  When she was enthusiastically on board with Catastrophic Global Warming driven by anthropogenic CO2 (ACO2), she was the respected climate scientist who chaired the School of Earth and Atmospheric Sciences at Georgia Tech; now that she has merely expressed doubts as to the certainty of the looming catastrophe, she is portrayed by her former comrades-in-arms as an incompetent, anti-science shill of the Republicans and oil companies.

The same goes for anyone with the temerity to engage in research into the existence of low energy nuclear reactions (generic cold fusion).  Even suggesting that research should be conducted in the field, never mind opining that it may be real, is a career killer for budding physicists.

I certainly can’t support OR reject LENR, EmDrives, or theories in conflict with general relativity using theoretical arguments, but as a layman I think that the ex cathedra rejection of experiments and the creation of an unobservable 95% of the universe because of conflict with EXISTING theory bodes ill for the advancement of science.

Bob Ludwick

I do not believe the science is settled. I have enough “creds” to have a right to an opinion on whether they actually have an engine that produces thrust without loss of mass.  I have not seen the device, so I cannot give an opinion; what I have seen goes to show that a number of the more usual explanations are not present; but the observations I have seen have been light on observations of the thrust, both in magnitude and time.  If they have a gadget that will operate for weeks producing thrust all the while, I think it would be simple to make sure there were no hidden means of introducing mass to the apparatus. The observation that there is reactionless drive has apparently not been falsified, but the observers seemed unsure.  I’d love to see this thing in operation.  A reactionless thrust would give us the solar system; we can leave it to future generations to give us the stars.

The science is settled

Hi Jerry,

I read Stephanie’s commentary on the subject and found it reasonable.  As usual for her commentary.

It included the following:

“But when we start looking at cosmology and such like, we are looking at fundamental physics on many levels. And that physics does have many levels, starting with Newtonian physics, then adding special relativity, general relativity, quantum mechanics, the various string theories, M theory, et cetera. So if you encounter something that appears to knock out one of those levels, you have to realize that it doesn’t JUST knock out that level, it knocks out pretty much all the levels above it. The lower the level, the more fundamental and earth-shaking the result. We’re talking, in some cases, about scrapping pretty much the whole of physics and starting from scratch, or nearly so. This is Not A Good Thing, in many ways, because we have used established physics in so many ways in our world. (Engineering is largely physics applied to the real world — imagine if we found, e.g., that quantum mechanical fluctuations could readily occur on a macro scale, and affected a particular structure commonly used in architecture, say. Would you ever feel safe in a high-rise again? In your own house??) Consequently there is a strong urge to try to make the current levels fit observations, rather than immediately going back and saying, “Oh, physics is wrong, drop back and punt.” But this is not a new thing; it is the way it has ALWAYS been.

“Example: Epicycles. An attempt to make the previously-known science fit observations of planetary motion. And then Kepler came along and there was a hullaballoo for awhile, and then it was found that his model fit observations better, and so now we have Kepler’s Laws of Planetary Motion.”

“Would you ever feel safe in a high-rise again?  In your own house?”

Of course I would.  Regardless of the impact of the new theories at the esoteric limits of cosmology/physics, standard old pre-relativity/pre-quantum theory mechanics has been demonstrated to produce perfectly acceptable houses, cars, airplanes, and bridges.  Finding out that (for example) MiHsC replaced General Relativity when applied at extremely low accelerations on cosmological scales and explained galactic rotation without the requirement for either dark matter or dark energy wouldn’t impact my life a whit. Likewise, experimental confirmation that a frustum energized at resonance by microwaves would produce thrust (there is none, yet) would not affect any pre-existing, working technology AT ALL.  Everything that worked would continue to work.  It would certainly allow for activities that are not achievable by current technology, but it wouldn’t impact working devices at all.

She cites various string theories and M Theory as scientific ‘steps forward’.

Well, maybe.

The key is ‘various string theories’. There are apparently lots. AFAIK, no one has yet devised a test of string theories that could confirm OR reject them. Does that make them science? This article is about a press release from a few years ago announcing the discovery of a test for string theory. The article says, in effect, “Not so fast there, kemo sabe! The announced ‘test for string theory’ does no such thing.”

I’m not qualified to critique the theory OR the proposed test, but one of the commenters on the article, Peter Woit, had this to say:

“I do think you’re both right: string theory predicts anything you want, either EP violation or no EP violation.” So does ACO2-driven Catastrophic Anthropogenic Global Climate Change (née Anthropogenic Global Warming): EVERY undesirable climatic event is instantly attributed, as predicted by the experts, to ACO2.

Ditto for ‘M Theory’. There is no agreement within the ranks as to exactly what it is, what it implies, or how to test it. Some think parts of it are at least mathematically consistent, whatever M Theory’s relation to our actual physical reality. I suppose that is progress.

She cites this example of ‘accepted science’ that would be ‘knocked out’ if new theories were accepted:

“Example: Steady State Universe. There were no galaxies, there was only the universe, and it had always been just like it is. Then Edwin Hubble realized that the unusual spectra he was getting from those peculiar stars could be explained if they were regular spectra with extreme blueshifts, and he discovered that those peculiar stars are what we now call quasars, and they were far distant galaxies in their own right, speeding away from us at incredible velocities.”

Funny she should mention the ‘accepted science’ that quasars are far-distant galaxies speeding away from us at incredible velocities. She is correct, of course: it IS accepted science. And, per my original premise, anyone who suggests otherwise is drummed out of the cosmological-physicist corps. It is ‘accepted science’ that, based on their observed energy bursts and their accepted distance computed from their red shift, individual quasars produce energy during a burst at a rate equal to that of most of the rest of the observable universe. Mechanism unknown.

A large number of known quasars exhibit proper motion. This means that if they are at the ‘accepted Hubble red shift distance’ of billions of light years, the component of their velocity perpendicular to our line of sight must be multiples of c.
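The arithmetic behind that claim is straightforward small-angle geometry: transverse velocity is proper motion (converted to radians per year) times distance, and since one light-year per year is exactly c, measuring distance in light-years gives the speed directly in units of c. A minimal sketch, with illustrative numbers (not actual measurements of any quasar):

```python
import math

# Milliarcseconds to radians: 1 arcsec = pi / (180 * 3600) rad.
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def transverse_speed_in_c(proper_motion_mas_per_yr, distance_ly):
    """Transverse velocity, in units of c, implied by an observed proper
    motion for an object at the given distance. Small-angle approximation:
    the object sweeps (angle in radians) * (distance in light-years)
    light-years of sky per year, and 1 ly/yr = c."""
    angle_rad_per_yr = proper_motion_mas_per_yr * MAS_TO_RAD
    return angle_rad_per_yr * distance_ly

# Illustrative: 5 milliarcseconds/year at 3 billion light-years.
v_over_c = transverse_speed_in_c(5.0, 3e9)
```

With those inputs the implied transverse speed is roughly 70 times c, which is the contradiction the letter describes: either the proper-motion measurements are wrong, or the distance is.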

A few formerly respectable astronomers (Halton Arp, for example) have noticed the seeming contradiction and written papers on it, proposing that quasars are NOT at cosmological distances and can therefore be expected to exhibit proper motion. Their relative nearness, according to the alternate theory, removes the requirement for energy output on the scale of the rest of the universe.

Accepted science has (so far) been undeterred by the contradiction and has dutifully marched Dr. Arp, and anyone who has exhibited the slightest sympathy for his ideas, off the ‘cosmological physicist’ plank, with the result that mentioning Arp in a paper, except to deride him as a kook, will instantly relegate the paper to the ash heap of history, whatever its other merits. The following paper addresses the apparent contradiction between the cosmological red shift (Stephanie said blue shift, but that was clearly a typo), which implies vast distance, and the observed proper motion, which implies that quasars are relatively near and that vast energy is not required:

Quasars and the Hubble Law

A few astronomers have argued that quasars are not really that far away, and that the Hubble Law does not apply to them. Astronomer Halton Arp, for example, has spent much of his long and successful career providing evidence of associations between quasars and galaxies, suggesting that they may be at similar distances. He has also amassed a large number of photographs of galaxies with widely different redshifts which appear to be interacting, as if they were near each other. His discoveries, which have taken him out of mainstream astronomy, raise serious questions about the redshifts of galaxies being caused by recessional velocity.

Another persistent voice against cosmological distances for quasars is astronomer Tom Van Flandern, formerly of the U.S. Naval Observatory.

The problem with quasars is that using the Hubble Law to compute their distance leads to extreme distance estimates — to the edge of the universe, in fact. If quasars were not at the distances currently ascribed to them there would be no need for them to have extraordinary energy. Non-cosmological distances would also be consistent with the observed proper motion of many quasars.”

The author notes that Arp’s ‘discoveries’ have taken him out of mainstream astronomy.  Accepted science does not take kindly to non-acceptance of its catechism.

Possibly unrelated is the comment about sympathizer Tom Van Flandern: “…formerly of the U.S. Naval Observatory.” I wonder if his support of theories contrary to ‘accepted science’ has anything to do with his being ‘former’?

The argument may be advanced:  “You’re an idiot; you’re not qualified to critique theoretical cosmology.”

The first is possible; the second undeniable.

I’m not critiquing theories. I am critiquing the observable response of theoretical cosmologists, climate scientists, et al. to those who, based on observations, question the accepted dogma of the field. That response is, rather than QUESTION the now-axiomatic status of what was formerly a theory, to use the conflicting data, together with the no-longer-questionable accepted science, to announce the detection, properties, and distribution of the otherwise undetectable 95% of the universe, for which the only evidence (so far) is the conflict between observations and accepted science. Or, when observations of the climate do not match the accepted projections of the climate models, to adjust the data. And to shun apostates within the field. Accepted science is no longer falsifiable by observations, since the universe can be modified at will to make the observations match theory.

Bob Ludwick

I can only refer you to Beckmann. Orthodox theory keeps inventing new constructs, like dark matter and dark energy, not from observation but from theoretical necessity. The concepts keep multiplying, and apparently the principle of similarity (the principles that govern the solar system apply to the whole universe) needs to be abandoned. As to string theory, I am unaware of any falsifiable hypotheses it has generated.


Cherry Picking, Black Swans and Falsifiability

Jerry –
You might enjoy this by Doug L. Hoffman:
– a sample:
“Whenever a skeptic points out a new paper or journal article refuting some claim made by the theory of anthropogenic global warming, climate change alarmists often shout “cherry picking!” Evidently, most climate change true believers do not understand how science works or how theories are tested. Scientific theories must make predictions by which they can be tested. Providing evidence that AGW has failed in its predictions is not cherry picking, it is refutation. Unfortunately, when confronted with failed predictions the standard alarmist answer is to disavow the predictions. They will say that those are not predictions at all, they are projections—and that means AGW is not a scientific theory at all.”
And this:
“Returning to the subject of proving or disproving the theory of anthropogenic global warming, there are only three possibilities here: AGW makes no predictions and hence is not a scientific theory; AGW depends on vague feedback mechanisms that must be constantly reinterpreted, making AGW a very weak theory and scientifically useless; or the predictions made by climate scientists about the effects of AGW are just that, predictions, and if those predictions can be shown to not be true then AGW is a false theory.”
One of my favorite living historians, Paul Johnson, makes a similar appeal to Popper’s requirements for scientific falsifiability (with unexpected references to Wordsworth and the Venerable Bede – perhaps the first in the post-Climategate AGW debate):

He closes by suggesting that rather than destroy our economies tilting at the AGW windmill, we could better spend what we can on a project dear to all our hearts:
“So vast sums of money will continue to be spent on an unproven and unprovable theory, predicting a global catastrophe from the realms of fantasy. The money could be much more profitably spent on space exploration.”

I very much agree, and so does Bayesian analysis: we should spend money on reducing uncertainties in our predictions, not on preparing for outcomes.



The AGW half-truth

Dr. Pournelle,

Your AGW correspondent made the claim:

“It’s something that non-scientists don’t quite understand: Science is all about models.”

I can’t know if the statement was sloppily imprecise or precisely disingenuous,  but science is all about FALSIFIABLE models, as Karl Popper persuasively argued. “You don’t have a better explanation, so I must be right” is more childish than scientific.

Contrary to the assertion, I think that the average person does understand that a reliable model must be able to make reliable predictions. A reliable prediction would tell us what the climate will be like next year, so we could plan what kind of crops to plant and when. If the predictions turn out to be grossly in error, we would tell the climate modelers to go away and come back when they understood climate prediction better.

Why should we believe that climate modelers can precisely predict the climate in 100 years if they cannot precisely predict next year’s? And if they can predict next year’s, why don’t they set up a publicly accessible global temperature measurement experiment to verify their model predictions? Then everyone could compare the predictions to the data, and the accuracy (or inaccuracy) of the models would become increasingly apparent.
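The verification the writer proposes is easy to state precisely: publish the predictions in advance, then score them against the measurements with an agreed error metric that anyone can recompute. A minimal sketch using root-mean-square error, with invented numbers for both series:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between a model's predictions and the
    corresponding measurements; lower is better, 0 is a perfect score."""
    if len(predicted) != len(observed):
        raise ValueError("series must be the same length")
    total = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    return math.sqrt(total / len(predicted))

# Invented global-mean temperature anomalies (deg C) for five years:
model_prediction = [0.60, 0.63, 0.66, 0.69, 0.72]
measured = [0.61, 0.58, 0.70, 0.62, 0.68]
error = rmse(model_prediction, measured)
```

Because the metric is fixed and the inputs are public, anyone running the same comparison gets the same number, which is exactly what makes the prediction falsifiable rather than a after-the-fact “projection.”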

I think that most “non-scientists” could easily grasp this idea, but I am not so sure about the AGW modelers.

Steve Chu

Global Warming
Having gone through the imminent-ice-age scare back in the ’70s, I have been skeptical about the global warming claims of the past several years. If there is in fact man-made global warming, I don’t see any way to do anything about it. I believe most of the evidence used to proclaim the desperate situation we are in has been produced by cherry-picking data to substantiate the claims, or by just flat out making up data and concealing any evidence that doesn’t agree with the theory.

A large part of the problem is that research grants for studying global warming are easily obtained, and the financial backing tends to make researchers skew their results to keep the money coming in. There is already a huge network of companies, funded by government dollars, based entirely on saving the world from climate change. As long as there is money to be made by defrauding the country, I’m afraid it will only get worse.

I have a background in turbine engine testing and instrumentation, and I know how hard it is to measure temperatures to within one degree, much less 1/10 of a degree. Just maintaining the calibration of measuring equipment is difficult. Many measurements these days are software-adjusted to average out inconsistent or unexpected data, and a model can be created to give you any results you want. When we were correlating instrument readings we threw out the high and the low and averaged the remaining results, and we still couldn’t repeat temperature readings to within a degree. I have no education in statistics, but I can see how you could skew results based on how the data was segregated.
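The averaging rule the writer describes — discard the single highest and lowest readings, average the rest — is a trimmed mean. A minimal sketch, with made-up thermocouple readings:

```python
def trimmed_mean(readings):
    """Average after discarding the single highest and single lowest
    reading, the correlation procedure the letter describes."""
    if len(readings) < 3:
        raise ValueError("need at least three readings to trim both extremes")
    trimmed = sorted(readings)[1:-1]  # drop min and max
    return sum(trimmed) / len(trimmed)

# Made-up readings (deg F) from one test point, six thermocouples:
readings = [1012.4, 1013.1, 1011.8, 1019.6, 1008.2, 1012.9]
avg = trimmed_mean(readings)
```

The design choice illustrates the writer’s point: dropping the extremes makes the average robust to a single bad sensor, but the result still depends on which readings get segregated out, so the procedure itself shapes the reported number.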






Freedom is not free. Free men are not equal. Equal men are not free.