Optimization, suboptimization, and staggering toward education improvement.


View 761 Wednesday, February 06, 2013

I’ll be doing a Triangulation interview with Leo Laporte at 1530 today, so I am not up in the monk’s cell working on Anvil. Actually that is the only reason. I seem to have got over a month and more of pure funk that kept me from doing much work on fiction and for that matter on much else. I’ve been doing a lot of good work lately.

It may be just recovery from a long term winter bronchitis. I used to get that every winter and it was an effort to keep working. Hardly matters. I’ve done a few thousand words in the last few days, and I know where I want to go next and which character I want to develop and how to advance the plot as I do, so I’m sure I’ll be able to start again without problems.

The secret to success in writing is what Elizabeth George calls ‘bum glue’. Ms. George is an American writer of British mysteries – the Inspector Lynley series – who says she got the phrase from Australian fans. My own phrase was butt in chair, but bum glue is pithy and quite exact.


I’m still working on a reaction to Bill Gates’s article on fixing all the world’s problems by measuring them.

Bill Gates: My Plan to Fix The World’s Biggest Problems

From the fight against polio to fixing education, what’s missing is often good measurement and a commitment to follow the data. We can do better. We have the tools at hand.


I’ve mentioned it before, and I thought I’d have been able to write something more about it, but it turns out to be worth more than a few words. A lot more than a few words. There’s nothing much new in what Gates says. The essence of it is

We can learn a lot about improving the 21st-century world from an icon of the industrial era: the steam engine.

Harnessing steam power required many innovations, as William Rosen chronicles in the book "The Most Powerful Idea in the World." Among the most important were a new way to measure the energy output of engines and a micrometer dubbed the "Lord Chancellor" that could gauge tiny distances.

Such measuring tools, Mr. Rosen writes, allowed inventors to see if their incremental design changes led to the improvements—such as higher power and less coal consumption—needed to build better engines. There’s a larger lesson here: Without feedback from precise measurement, Mr. Rosen writes, invention is "doomed to be rare and erratic." With it, invention becomes "commonplace."

In the past year, I have been struck by how important measurement is to improving the human condition. You can achieve incredible progress if you set a clear goal and find a measure that will drive progress toward that goal—in a feedback loop similar to the one Mr. Rosen describes.

This may seem basic, but it is amazing how often it is not done and how hard it is to get right.

In The Strategy of Technology (by Stefan Possony, Jerry Pournelle, and Francis X. Kane) we tried to explain how to create and develop new technology as part of a systematic military strategy. The book was intended for military systems developers and tried to explain a process Herman Kahn called Systems Analysis, which was very similar to what had long been known as operations research. If I can be said to have had a specialty skill in aerospace it would have to be that I was an OR man, as operations research people were known in those days, and for a while when I was at Boeing I was among a very small group whose job title was Systems Analyst. It was said that unlike specialists, who tended to know more and more about less and less until they knew everything about nothing at all, OR people and Systems Analysts knew less and less about more and more until they knew nothing about everything. Think of those statements as vectors rather than quantitative estimates and they’re not far off the mark.

The main tool of the OR people was the ability to tool up to the point where you could understand the experts well enough to come up with models of what they were doing. The idea was to quantify operations, then figure out what moves you might make to maximize results.

And the problem there – particularly before the development of large scale integrated circuit architecture leading to small computers – was that if you couldn’t measure something you couldn’t do much about it. This led to the temptation to study what you could quantify and measure. Often that was a good way to go, but sometimes it led to exactly the opposite result of what you wanted – if you chose to optimize on the wrong objective. This was known in the trade as sub-optimization, and one case of that nearly led to disaster.

In the early days of World War II, the OR boffins were set to work on the problem of the Battle of the Atlantic. England’s survival depended on getting convoys through to the island nation. The Germans rightly believed that England could be blockaded and starved into submission. After all, Britain had done that to France in the Napoleonic wars. Germany had no surface fleet to challenge the British – and later American – fleets, but they did have submarines, and some very effective submarine tactics.

The OR boffins studied the situation and came up with optimum techniques for the escorts to use to sink submarines. In particular the trick was not to attack too early after an air sighting of a surfaced sub. Hang on until you could vector an escort ship to the scene, then mount a coordinated air-sea attack. That gave the best probability of sinking the sub. It worked, too. The number of subs sunk went up. The problem was that the number of cargo ships sunk by the subs went up, too.

The problem was that they had chosen the wrong measure to optimize. After all, the goal was not to sink subs. The real goal was to get cargo ships through the submarine wolf packs.

That, as it turned out, required entirely different tactics. The best tactic to get the convoy through was to attack immediately, and once the enemy sub was submerged, forget about it and look for others. Make them stay under water, because by far the most effective attacks were made from the surface, particularly at night. A sub firing a torpedo from the surface had a far higher chance of hitting the target than it did firing submerged.

And once the boffins figured this out and applied the new strategy, the number of submarines sunk went down and down, but the tonnage of cargo that got through grew. And the battle was won.
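The convoy story can be made concrete with a toy simulation. This is only an illustrative sketch: the two policies and every probability in it are invented for the example, not historical data. The point it demonstrates is the structural one from the story above: the policy that scores best on the proxy measure (subs sunk) can score worst on the real objective (cargo ships that get through).

```python
import random

# Toy model of escort policy choice. All numbers are hypothetical,
# chosen only to reproduce the qualitative trade-off in the story:
# waiting for a coordinated attack sinks more subs, but leaves the
# sub free on the surface longer, where its own attacks are deadliest.
def run_convoy_season(policy, n_sightings=1000, seed=0):
    rng = random.Random(seed)
    subs_sunk = 0
    ships_lost = 0
    for _ in range(n_sightings):
        if policy == "maximize_kills":
            # Patient, coordinated air-sea attack: best kill probability...
            if rng.random() < 0.30:
                subs_sunk += 1
            # ...but the sub gets more time surfaced to attack the convoy.
            if rng.random() < 0.25:
                ships_lost += 1
        elif policy == "protect_cargo":
            # Attack at once: the sub usually escapes by diving...
            if rng.random() < 0.10:
                subs_sunk += 1
            # ...but a submerged sub rarely gets a good firing solution.
            if rng.random() < 0.05:
                ships_lost += 1
        else:
            raise ValueError("unknown policy: " + policy)
    return subs_sunk, ships_lost

if __name__ == "__main__":
    for policy in ("maximize_kills", "protect_cargo"):
        sunk, lost = run_convoy_season(policy)
        print(policy, "subs sunk:", sunk, "cargo ships lost:", lost)
```

Run it and the "protect_cargo" policy sinks fewer submarines while losing far fewer cargo ships. If your measurement system reports only subs sunk, the better policy looks like a failure; that is suboptimization in miniature.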



It’s getting towards time when I have to do the Triangulation interview with Leo so I’ll continue this another time. My point is that we need to choose the proper goals for education before we start changing the system. If the goal is to expose the maximum number of young people to a curriculum you get one result. If the goal is, as Gates once thought, to give every young person in America “a world class university prep education” you get a different result, and indeed, since achieving the goal is demonstrably impossible no matter how many severely challenged children you “mainstream”, you may in fact achieve the result of fewer people receiving a world class university prep education, and fewer receiving a world class college prep education, and fewer learning any skills they can actually use to do jobs they are capable of doing, and — but you get the idea.

The magic of measurement and short feedback loops must not be neglected. It’s terribly important. But what you measure and what you optimize depend on many factors. Taken as a call to find real measures of progress in the education system, Gates’s essay is important; but it is all too easy to suboptimize, and sometimes suboptimization can be disastrous.







