Chaos Manor View, Tuesday, March 03, 2015
Slow day. The physical therapist was here today. I had to tell her about the fall yesterday, and she looked at the places where I have pains, called them sprains, and came up with tortures which made them better, but they are still sore.
Took Roberta out for dinner. Well, sort of. Went to Tony’s, a neighborhood Mexican place we both like: no tablecloths, unlimited quantities of pico de gallo, and everyone friendly. I’ve been to Hugo’s and a pizza place since the stroke, but this was the first time since then that we’ve been to Tony’s. Hasn’t changed.
Nothing unexpected in Netanyahu’s speech to Congress today. The President made a point of telling the world that he didn’t watch it, but he didn’t like it and there were no viable alternatives in it. All of which is true. There are no viable alternatives we don’t know about, and none of them looks good. Over time the alternatives grew fewer and fewer – inevitably – and the ones remaining got more unpleasant and therefore less viable. One of the few remaining ways to stop Iran’s nuclear capability now is with massive military force on the order of the Iraq invasion, and this President isn’t going to do that. Another possibility is massive Israeli airstrikes against all of Iran’s nuclear facilities, and that probably isn’t enough; it might take nuclear weapons, and Israel would hand anti-Semites everything they ever wanted if they did that. Joint Israeli-NATO air strikes might do it, but it would not be quick, and likely would need ground ops as well.
Massive economic warfare has a very low probability of success. It is too late for that. We have delayed far too long; can we now live with an Iran that has nuclear weapons? We’d better learn how. Mr. Obama may delay that day until after the next inauguration, but not much longer.
If there are other alternatives, I would much appreciate hearing them. What won’t work is friendliness. I wish it would.
U.S. millennials post ‘abysmal’ scores in tech skills test, lag behind foreign peers (WP)
By Todd C. Frankel March 2 at 10:21 AM
There was this test. And it was daunting. It was like the SAT or ACT — which many American millennials are no doubt familiar with, as they are on track to be the best educated generation in history — except this test was not about getting into college. This exam, given in 23 countries, assessed the thinking abilities and workplace skills of adults. It focused on literacy, math and technological problem-solving. The goal was to figure out how prepared people are to work in a complex, modern society.
And U.S. millennials performed horribly.
That might even be an understatement, given the extent of the American shortcomings. No matter how you sliced the data – by class, by race, by education – young Americans were laggards compared to their international peers. In every subject, U.S. millennials ranked at the bottom or very close to it, according to a new study by testing company ETS.
“We were taken aback,” said ETS researcher Anita Sands. “We tend to think millennials are really savvy in this area. But that’s not what we are seeing.”
The test is called the PIAAC test. It was developed by the Organization for Economic Co-operation and Development, better known as the OECD. The test was meant to assess adult skill levels. It was administered worldwide to people ages 16 to 65. The results came out two years ago and barely caused a ripple. But recently ETS went back and delved into the data to look at how millennials did as a group. After all, they’re the future – and, in America, they’re poised to claim the title of largest generation from the baby boomers.
U.S. millennials, defined as people 16 to 34 years old, were supposed to be different. They’re digital natives. They get it. High achievement is part of their makeup. But the ETS study found signs of trouble, with its authors warning that the nation was at a crossroads: “We can decide to accept the current levels of mediocrity and inequality or we can decide to address the skills challenge head on.”
The challenge is that, in literacy, U.S. millennials scored higher than only three countries.
In math, Americans ranked last.
In technological problem-solving, they were second from the bottom.
“Abysmal,” noted ETS researcher Madeline Goodman. “There was just no place where we performed well.”
But surely America’s brightest were on top?
Nope. U.S. millennials with master’s degrees and doctorates did better than their peers in only three countries, Ireland, Poland and Spain. Those in Finland, Sweden and Japan seemed to be on a different planet.
Top-scoring U.S. millennials – those at the 90th percentile on the PIAAC test – were at the bottom internationally, ranking higher only than their peers in Spain. Those at the 10th percentile also lagged behind their peers. And the gap between America’s best and worst was greater than the gap in 14 other countries. This, the study authors said, signaled America’s high degree of inequality.
The study called into question America’s educational credentialing system. While few American test-takers lacked a high school degree, the United States didn’t perform any better than countries with relatively high rates of failing to finish high school. And our college graduates didn’t perform well, either.
There is a lot more, but you get the idea. Our high schools are awful. And now the rot has spread to many of our colleges. We have sown the wind for decades; we now reap.
There is much we could do, but we will not do it. We will continue to mandate programs from the District of Columbia with its terrible schools, imposing new theories on Podunk, Iowa and East Misery, Missouri. We will continue to act as if anyone believes that the solution is more money. And the schools will get worse.
Could IBM’s brain-inspired chip change the way computers are built? (WP)
By Amrita Jayakumar March 2 at 7:00 AM
The human brain is a powerful supercomputer, but it consumes very little power.
The brain is also excellent at processing information efficiently — billions of neurons are deeply connected to memory areas — which gives us the ability to access the data we need to make a decision, quickly make sense of it and then resume normal operation.
That fundamental structure is what sets us apart from machines. It’s the reason we can think and feel and process millions of pieces of data in a fraction of a second every day, without our heads exploding.
Computers don’t work this way.
For decades, they’ve been built to perform calculations in a series of steps, while shuttling data between memory storage areas and processors.
That consumes a lot of power, and while computers are good at crunching huge volumes of information, they’re not so good at recognizing patterns in real time.
With funding from the Defense Advanced Research Projects Agency and partnerships from national laboratories, engineers at International Business Machines created a chip last year that could imitate the structure of the human brain, in the hope that it would lead to a more efficient model of computing.
The result has the potential to transform the way computers are built in the future, according to IBM, while consuming as much power as a hearing-aid battery.
IBM’s long-term goal is to build a “brain in a box” that consumes less than 1 kilowatt of power and yet can quickly identify patterns in large data sets, said Dharmendra Modha, IBM’s chief scientist for brain-inspired computing.
Applications for this technology range from national security to disaster response. That’s why IBM’s team and scientists from Lawrence Livermore, Oak Ridge and other national laboratories took a trip to Capitol Hill last week to demonstrate the technology before lawmakers.
Devices powered by the chip could be used to perform biosecurity checks by sifting through biological samples to identify harmful agents, or power autonomous spacecraft, or monitor computer networks for strange behavior, scientists said.
IBM’s flagship supercomputer, Watson, which is built on today’s computer architecture and consumes large amounts of power, exemplifies linear calculation, Modha said.
In contrast, the chip has the ability to recognize or “sense” its environment in real time, similar to what humans do with eyes and ears.
For instance, the chip has been used to play a game of Pong by “looking” at the ball and moving the paddle to meet it.
Again there is much more. Clearly, while the average and even above-average schools continue to deteriorate, there are still sources of well-trained, innovative development scientists.
One of my advisors comments:
Designing and scaling up the hardware is the easy part. Figuring out how to use it is difficult.
It’s been about 14 years since the first GPU with reasonably flexible programmability (NVIDIA’s GeForce 3). It didn’t take long before people started using it for general-purpose computation (I hosted a panel at the first conference on this topic — see the panel slides), but the process of co-evolution continues. Computer scientists influence the evolution of GPU programming models, and GPU designers offer new ways to build programmable hardware.
The same process has actually been underway with neural networks for five times as long, since that concept dates back to 1943. Neural networks were basically all software-based for the first several decades, but hardware entered the picture at least 20 years ago (from IBM!). Progress has been very uneven, but I have to assume that if commercial applications for simple neural networks were forthcoming, we’d have seen them by now.
IBM wants to make very complex neural networks, but I don’t know how they intend to configure them (the equivalent of “programming”), and I don’t know if any of their proposed applications are truly better served by neural networks than they might be by distributed processing (separate small CPU cores spread throughout a robot or vision sensor or whatever). Much of what makes the human brain valuable is encoded in its configuration, the way that its sensors and actuators are pre-wired into the brain’s structure. It took an awful lot of trial and error to work out these elements, and I don’t think anyone would claim the result is particularly optimal; in many ways, it’s barely functional.
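His point about configuration can be made concrete with a toy example. This is a generic single software neuron, nothing to do with IBM’s chip or its tools; the weights and bias below are hand-picked for illustration. The “hardware” is one fixed function, and everything it computes is determined entirely by its configuration:

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: fire (return 1) if the weighted
    sum of inputs plus bias crosses zero, otherwise return 0."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# The same structure computes different things depending entirely
# on its configuration (the weights and bias), hand-picked here.
AND_CONFIG = ([1.0, 1.0], -1.5)   # fires only when both inputs are on
OR_CONFIG = ([1.0, 1.0], -0.5)    # fires when either input is on

for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], *AND_CONFIG), neuron([a, b], *OR_CONFIG))
```

Finding such weights automatically, for networks of millions of neurons rather than one, is the configuring problem he describes: building the structure is the easy part.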
Still, I don’t mind that IBM is working on this problem. It could turn out to be hugely valuable. I think it’s just too early to say.
A sentiment I tend to agree with, but we must understand that while computer power probably follows an S-curve (ogive), we are on the exponential part of it, and probably can expect a thousandfold increase in computing power at least. I tend to believe in more.
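The S-curve point can be sketched numerically: on the early portion of a logistic curve, capacity multiplies by a nearly constant factor each period, and only flattens as it approaches the ceiling. In the sketch below the ceiling, growth rate, and midpoint are purely illustrative assumptions, not measurements of anything:

```python
import math

def logistic(t, ceiling=1e6, rate=0.7, t_mid=20.0):
    """Logistic (S-curve / ogive) growth model; all parameters
    are illustrative assumptions, not measured data."""
    return ceiling / (1.0 + math.exp(-rate * (t - t_mid)))

# Early on the curve, growth looks exponential: each step multiplies
# capacity by roughly the same factor (about e**rate per step).
early_ratios = [logistic(t + 1) / logistic(t) for t in range(5)]

# Near the ceiling, growth stalls: the step-to-step ratio approaches 1.
late_ratio = logistic(41) / logistic(40)

print([round(r, 2) for r in early_ratios], round(late_ratio, 4))
```

Whether we are a factor of a thousand or more below the ceiling is exactly what nobody knows; the model only says that while we remain on the steep part, compounding continues.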
3D Printing Everywhere from Lab to Factory (EE Times)
Cars, lab equipment, DIY nearly anything
3/1/2015 10:04 AM EST
PORTLAND, Ore. — Printers that print three-dimensional (3D) objects were invented as a way to enable kids to make cool toys for themselves. But now dozens of companies are making industrial-sized versions capable of making production-quality products — such as the Local Motors car — and custom parts that laboratories used to have to order from the machine shop.
“The first question we ask when we conceive of a new part for an experiment is whether we can print it ourselves on the 3D printer,” said Alex Millet, a visiting student from Puerto Rico who works with professor Andrew Zwicker, head of the Princeton Plasma Physics Laboratory (PPPL).
According to Zwicker and Millet, 3D printers have become a crucial piece of laboratory equipment, allowing them to make one-offs of practically any piece of laboratory equipment (except lenses and other glass parts). 3D printers build up layers of plastic, metal, ceramic or organic materials. The piece is simply designed in a computer-aided design (CAD) program, which transfers instructions to the 3D printer — telling it when and what to “extrude” to form each layer of an object — with 100-micron accuracy.
The biggest advantage — besides low cost — is speed: experiments can be accelerated because the 3D printer can produce one-off custom parts in a matter of hours — including the CAD programming time — instead of sending the plans off to a machine shop and waiting days to get the part back.
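The “matter of hours” claim is easy to sanity-check: a layer-by-layer printer’s build time scales roughly as part height divided by layer thickness. In this back-of-the-envelope sketch, only the 100-micron figure comes from the article; the part height and per-layer time are made-up illustrative numbers:

```python
LAYER_HEIGHT_MM = 0.1    # 100-micron layers, per the article
part_height_mm = 50.0    # illustrative part height (assumption)
secs_per_layer = 30.0    # illustrative printer speed (assumption)

layers = part_height_mm / LAYER_HEIGHT_MM
hours = layers * secs_per_layer / 3600.0
print(f"{layers:.0f} layers, roughly {hours:.1f} hours of printing")
```

With those assumed numbers a palm-sized part comes in at a few hours, consistent with the article’s claim, versus days of turnaround from a machine shop.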
Before using the 3D printer, Zwicker’s team tested its parts for resilience to heat, pressure, stress and strength, finding them adequate for most laboratory experiments — including dielectric insulators for electrodes. Funding was provided by the U.S. Department of Energy’s Office of Science under its Fusion Energy Sciences program.
Besides labs, even mass production is now being switched to 3D printing, a capability that has not gone unnoticed by Chinese manufacturers, who are investing heavily in the manufacture of 3D printers. But is China’s large, relatively inexpensive workforce working itself out of a job by making 3D printers?
One company trying to short-circuit the exploitation of cheap foreign labor is Local Motors, which is promising to open 100 microfactories to make its vehicles locally in every country where they will be sold, each customized to meet the needs of local residents.
They are also building a Mobi-Factory in the back of a semi-trailer so that vehicles can be produced in-place in remote locations that cannot support the expense of a permanent micro-factory. So far they are planning on three models, the Rally Fighter (pictured), the Racer and the Cruiser, all manufactured by the same 3D printer from different CAD files.
Local Motors’ U.S. factories will be introducing the Rally Fighter to the commercial market later in 2015, using the 3D printer to make both its body and chassis. The electric car will use motors and other drivetrain parts from Renault. The company also will allow engineers and partners — and eventually even consumers — to go online and use its CAD tools to produce customized vehicles with features that fit their particular application. Currently Local Motors has micro-factories in Phoenix, Ariz., and Las Vegas, Nev., with Washington, D.C., next on the list.
— R. Colin Johnson, Advanced Technology Editor, EE Times
The Color Blue
I present comments; I have no expertise in this matter:
I need to research this some more, but the assertion that ancient Hebrew did not have a word for the color blue may not be correct.
The third paragraph of the Shma (daily prayer starting with, “Hear O Israel, the Lord is our God, the Lord is one) makes reference to tassels (called tzitzit) on the prayer shawl (called the tallit). This paragraph of the prayer is a quote from Numbers 15:37-41. The paragraph includes a direction that the tzitzit are to include a blue thread.
This suggests at least one source dating from at least 400 BCE (and perhaps older) referring to the color blue.
Dear Dr. Pournelle,
I read your mail on the color blue with interest. Doing some reading of my own [yesterday], I find it is espousing the ‘relativist’ school of color theory. It is by no means the only one. There is a ‘universalist’ school as well, one that relies on human biology.
It is a fascinating question: can they really not see blue until they have a word for it? Then who first invented the word? Or is it that they can see the difference but literally don’t have the language for it? If you have the words “black” and “white” but not “gray” in your vocabulary, how would you describe gray? As lightish black? What if you’re not given any choices and can only choose one answer, as on a multiple-choice test?
In Exodus 24:10 (English Standard Version): “and they saw the God of Israel. There was under his feet as it were a pavement of sapphire stone, like the very heaven for clearness.” The sapphire stone referenced is Lapis Lazuli, a very beautiful blue stone. No matter what folks may think of what happened to the Elders of Israel in this account, the significant side event is the reference to a pavement that was blue. It was noticed, it was a familiar color like unto Lapis Lazuli and this comes from antiquity. The word blue may be recent, but folks have noticed likeness for quite some time.
Perhaps the Jews learned to see Blue before the rest of the world? I don’t read Hebrew, but per Wikipedia, the description of Tzitzit comes from the book of Numbers 15:38
“Speak to the children of Israel, and say to them, that they shall make themselves fringes on the corners of their garments throughout their generations, and they shall put on the corner fringe a blue (tekhelet) thread.”
Also per Wikipedia, tekhelet appears ~48 times in the Tanakh, was obtained as far back as 1400 BCE, and was described as the color of turquoise. I don’t know how this fits in with “when did we begin to see blue,” but perhaps the Torah and the existence of the Sinai turquoise mines need to be reconciled with the theory?
Being one of countless men stricken with red-green color “blindness,” I see blue as one of the two colors most clear (the other being yellow).
To me, there are shades of red and green which are indistinguishable from each other in natural sunlight, but which are as different as black and white under lighting of different spectra. There are shades of green which are indistinguishable from grey under natural sunlight.
Looking at an aeronautical chart, I can’t tell the difference between blue and magenta lines unless there are lines of the other color close to the one I’m looking at (in which case they are sharply different and identifiable). Due to this, my FAA medical certificate prohibits me from flying at night (when, ironically, color differences are more apparent) or from airports which are controlled by colored lights from the control tower (in other words, “no nights and no lights”).
I’ve never seen “deep blue sea” as being blue. It’s almost black to me.
Shallow water, such as La’ie Bay, is clear with color patches in it, some of which are blue. The dark paint favored by Navy-warbird owners is definitely blue, even against the background of the “non-blue” ocean itself, while people with normal vision say that the paint and the ocean are exactly the same color. Thus, planes which are all but invisible to them are as obvious to me as if they were painted yellow!
Even when colors are seen, color vision is largely a matter of interpretation. At what point does red become pink? Why is there no equivalent of “pink” to describe an equally diluted intensity of green?
Ancient people SAW blue, but it was so pervasive that they couldn’t describe it any more than we can describe the taste of salt. They didn’t have a word for “gravity” either, but they were still fully aware of its existence! Once the Egyptians began creating blue dyes, that color needed its own definition.
Regarding Blue, Words, and Cognition.
The Russian language has two words for blue: синий (navy blue) and голубой (sky blue). An English speaker confined to the nouns would be forced to call both colors “blue.”
Does the ancient lack of a word for “blue” mean the ancients couldn’t “see” blue? Ancient Greek had no word for “velocity,” but their natural philosophers were certainly aware of change of location over time. They just couldn’t discuss it compactly. Sort of like English speakers discussing Gemütlichkeit.
I really have no conclusions, and I am certainly not a Biblical scholar. I am mildly color-blind and my father was more so. We know that adult lactose tolerance is a fairly recent development (25,000 years or so). I am inclined to agree with Mike Flynn, but I don’t know how recent a development color blindness is – or the lack of it.
And that is enough for tonight. My sprains are not painful but they are annoying. More tomorrow.
Freedom is not free. Free men are not equal. Equal men are not free.