Thoughts on intelligent machines.

View 797 Tuesday, November 05, 2013

“Transparency and the rule of law will be the touchstones of this presidency.”

President Barack Obama, January 31, 2009

 

Christians to Beirut. Alawites to the grave.

Syrian Freedom Fighters

 

What we have now is all we will ever have.

Conservationist motto

 

If you like your health plan, you can keep your health plan.

Barack Obama, famously, during the 2012 Presidential Campaign.

Obama Officials In 2010: 93 Million Americans Will Be Unable To Keep Their Health Plans Under Obamacare

Federal Register

http://www.gpo.gov/fdsys/pkg/FR-2010-06-17/pdf/2010-14488.pdf


IT’S ALIVE! IT’S ALIVE! Google’s secretive Omega tech just like LIVING thing

Jerry

Emergent behavior – the crucial piece that turns a computator into a Heinleinian sentience – seems to have arrived:

http://www.theregister.co.uk/2013/11/04/google_living_omega_cloud/

"Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it."

Ed (pshrink)

IT’S ALIVE! IT’S ALIVE! Google’s secretive Omega tech just like LIVING thing

‘Biological’ signals ripple through massive cluster management monster

By Jack Clark, 4th November 2013

Exclusive: One of Google’s most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future.

"Emergent" behaviors have been appearing in prototypes of Google’s Omega cluster management and application scheduling technology since its inception, and similar behaviors are regularly glimpsed in its "Borg" predecessor, sources familiar with the matter confirmed to The Register.

Emergence is a property of large distributed systems. It can lead to unforeseen behavior arising out of sufficiently large groups of basic entities.

Just as biology emerges from the laws of chemistry; ants give rise to ant colonies; and intersections and traffic lights can bring about cascading traffic jams, so too do the ricocheting complications of vast fields of computers allow data centers to take on a life of their own.

The kind of emergent traits Google’s Omega system displays means that the placement and prioritization of some workloads is not entirely predictable by Googlers.

http://www.theregister.co.uk/2013/11/04/google_living_omega_cloud/
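The kind of emergence the article describes, unplanned collective behavior arising out of many simple actors, is easy enough to illustrate in miniature. Here is a toy sketch in Python, purely my own illustration and nothing to do with how Omega actually works: a crowd of independent schedulers each place work on whatever machine looks least loaded in a slightly stale snapshot of the cluster, and the herding that results is behavior nobody programmed in.

    # Toy illustration only; this is not Google's Omega or Borg.
    # Many independent schedulers assign tasks to the "least loaded" machine
    # using a slightly stale snapshot of cluster load, so they all herd onto
    # the same machine: a simple emergent behavior no one scheduler intends.
    import random

    MACHINES = 20
    SCHEDULERS = 50
    ROUNDS = 10

    loads = [random.randint(0, 5) for _ in range(MACHINES)]

    for rnd in range(ROUNDS):
        snapshot = loads[:]                         # everyone sees the same stale view
        for _ in range(SCHEDULERS):
            target = snapshot.index(min(snapshot))  # all pick the apparently idlest box
            loads[target] += 1                      # ...and pile onto it
        loads = [max(0, x - SCHEDULERS // MACHINES) for x in loads]  # work drains off
        print(f"round {rnd:2d}: busiest machine {max(loads):3d}, idlest {min(loads):3d}")

Scale that up by a few orders of magnitude, add feedback between the schedulers, and “not entirely predictable by Googlers” stops sounding like hype.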

In Robert Heinlein’s masterpiece The Moon is a Harsh Mistress, a large computer network on the Moon “comes alive” as the technicians add new systems to the network. And of course science fiction has had intelligent, self-conscious computers since the Golden Age. Poul Anderson had a particularly attractive ship’s computer in some of his post-Flandry stories (see “Starfog,” 1967). Then there’s my own Starswarm, which I am still rather proud of. http://www.amazon.com/Starswarm-Jerry-Pournelle-ebook/dp/B006O1XF6U/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1383680625&sr=1-1&keywords=starswarm

A long time ago there was Eliza, a simple BASIC program that aped a Rogerian psychotherapist and was actually thought by some to be intelligent; it was amplified by someone who sent me a copy of “Analiza” in what I believe was CBASIC. Alas, it was on 8-inch disks, and I have long since ceased to have any way to read those; I suspect any copies I kept are unreadable by now anyway. My fault for not putting them onto CD-ROM when I had the chance.

Analiza was able to accept new scripts, and I played with it for a while: it could do a pretty good job of emulating an analyst, and what I added were scripts to deal with questions taken out of context, a gentle admonition of “This will all go better if you stay to the subject. How do you feel about that?” and such like. It had groups of responses and would call one or another at random when sent to a given group, with the goal of appearing less repetitive. By being more directive it became more like a Freudian analyst and less like the Rogerian model of Eliza, and when I tried it on some undergraduates a number of them thought it might actually be intelligent, if a bit limited. I wish I still had a copy.
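For the curious, the scheme is simple enough that a few lines of Python give the flavor of it. This is my own reconstruction from memory, not the original CBASIC: keywords route the input to a group of canned responses, one response is drawn from the group at random so the program seems less repetitive, and anything unrecognized gets the gentle stay-on-the-subject admonition.

    # A rough sketch of the script-driven "analyst" described above; a
    # reconstruction for illustration, not the original Analiza code.
    import random

    SCRIPT = {
        ("mother", "father", "family"): [
            "Tell me more about your family.",
            "How do you feel about your parents?",
        ],
        ("dream", "dreams"): [
            "What does that dream suggest to you?",
            "Do you dream often?",
        ],
        ("always", "never"): [
            "Can you think of a specific example?",
        ],
    }

    OFF_TOPIC = [
        "This will all go better if you stay to the subject. How do you feel about that?",
        "Let us keep to what we were discussing.",
    ]

    def reply(text: str) -> str:
        words = text.lower().split()
        for keywords, responses in SCRIPT.items():
            if any(w in words for w in keywords):
                return random.choice(responses)   # pick from the group at random
        return random.choice(OFF_TOPIC)           # gentle admonition for off-topic input

    print(reply("I never get along with my mother"))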

Of course since that time psychopharmaceuticals have become far more common, and the old schools of “talk” psychotherapy have vanished or been relegated to organizations outside the control of academic psychology. I suppose some such schools still exist, but I seldom hear of them. At one time the debates among Rogerians, Freudians and breakaway disciples such as Horney and Jung, semanticists such as Wendell Johnson (I studied under him at Iowa a lifetime ago), psychodrama, Primal Scream: these were important. They don’t seem to be any longer.

Incidentally, iPhone users can ask Siri about Eliza, but she doesn’t know about Analiza.


The question of intelligent computers becomes serious when we contemplate the future of armed drones. See

Killer Robots and the Laws of War

Autonomous weapons are coming and can save lives. Let’s make sure they’re used ethically and legally.


By Kenneth Anderson and Matthew Waxman

With each new drone strike by the United States military, anger over the program mounts. On Friday, in one of the most significant U.S. strikes, a drone killed Pakistani Taliban leader Hakimullah Mehsud in the lawless North Waziristan region bordering Afghanistan. Coming as Pakistan is preparing for peace talks with the Taliban, the attack on this major terrorist stirred outrage in Pakistan and was denounced by the country’s interior minister, Chaudhry Nisar Ali Khan, who said the U.S. had “murdered the hope and progress for peace in the region.”

Recent reports from Amnesty International and Human Rights Watch have also challenged the legality of drone strikes. The protests reflect a general unease in many quarters with the increasingly computerized nature of waging war. Looking well beyond today’s drones, a coalition of nongovernmental organizations—the Campaign to Stop Killer Robots—is lobbying for an international treaty to ban the development and use of “fully autonomous weapons.”

http://stream.wsj.com/story/latest-headlines/SS-2-63399/SS-2-371963/

At the moment there is always a human in the loop when a drone launches a Hellfire against a human target, but there are a number of automated weapons on modern warships: there’s no way humans can control the weapons and keep the ship safe against multiple simultaneous attacks. Imagine 250 small autonomous helicopters with a range of 30 miles and a payload of a kilogram of thermite or C4 or both, flying at under 10 meters altitude over land (if the target is in harbor) or sea. Or a dozen Exocet cruise missiles. Or even a thousand model aircraft carrying one-kilogram payloads. You can’t just raster the target area, and they’re closing fast…
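The arithmetic is what makes the case for automation. Here is a back-of-the-envelope sketch; every number in it is an assumption chosen only for illustration and describes no real weapon system:

    # Back-of-the-envelope sketch of a saturation attack. All figures are
    # assumptions for illustration, not data about any actual system.
    N_THREATS = 250      # small autonomous aircraft inbound (assumed)
    SPEED_KTS = 120      # assumed closing speed, knots
    DETECT_NM = 10       # assumed detection range, nautical miles
    ENGAGE_SEC = 8       # assumed seconds per engagement, per defensive mount
    MOUNTS = 4           # assumed number of defensive weapon mounts

    time_to_impact = DETECT_NM / SPEED_KTS * 3600        # seconds from detection
    engagements = MOUNTS * time_to_impact / ENGAGE_SEC   # shots the defense gets

    print(f"Time from detection to impact: {time_to_impact:.0f} seconds")
    print(f"Engagements possible before impact: {engagements:.0f} of {N_THREATS} threats")

Even with every mount firing automatically the defense comes up short against those assumed numbers; put a human decision in front of each shot and it never gets started.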


On the general subject of the future of the Navy – how many Big Carriers do we need, and is that the most effective force we can build and afford? – my son Commander Phillip Pournelle had an article in the May 2013 Proceedings of the U.S. Naval Institute, “The Rise of the Missile Carriers” (http://www.usni.org/magazines/proceedings/2013-05/rise-missile-carriers), which has generated considerable debate among naval strategists. He has replied to some of that in

We Need a Balanced Fleet for Naval Supremacy

The following contribution is by CDR Phillip E. Pournelle, USN. CDR Phillip E. Pournelle is a Surface Warfare Officer and an Operations Analyst. He currently serves as a military advisor to OSD’s Office of Net Assessment.
Lazarus’ essay entitled Naval Supremacy Cannot be ‘Piggybacked’ on Small Ships attempts to rebut essays of Captains Hughes, Kline, Rubel and Admiral Harvey (here and here) advocating the employment of small missile combatants operating as flotillas in the littoral environment.
Technological changes underway today will increasingly challenge the way we conduct business today.  The United States will have to adapt to retain its lead.  In order to adapt, debates such as these must be part of a larger Cycle of Research, an ongoing iteration of wargames, analysis, and fleet exercises.

http://www.informationdissemination.net/2013/11/we-need-balanced-fleet-for-naval.html

The United States has always been a maritime power – our first projection of force beyond our borders was against pirates, and our first foreign war was over naval matters and the laws of war – and the structure of the Navy has been a vital matter. The debate is important.

And developments in self-directed weapons are important. American overseas force projection carries us far from the sea and into the mountains of Pakistan.


 

Visions of a Permanent Underclass

A new book imagines an America of the rich and the ‘shantytown’ dwellers.

By William A. Galston

In 1958, as millions of American high-school students were beginning their long infatuation with George Orwell’s "1984" and Aldous Huxley’s "Brave New World," a major figure in the British Labour Party, Michael Young, published a far more prescient futurist tract. His essay, "The Rise of the Meritocracy," described a year-2034 dystopia in which general intelligence determined the distribution of income and status.

The losers knew that they were failures, and the ideology of meritocracy had eliminated the moral basis of complaint: The losers deserved their subordination and should accept it. In the end, they didn’t, and they revolted against a system that insulted their dignity.

Fifty-five years after Young’s neglected classic, economist Tyler Cowen has entered the fray with his latest book, "Average Is Over," which analyzes the dynamics behind the rise of what he terms the "hyper-meritocracy." As his point of departure, he takes some well-known trends—growing economic inequality, falling male wages, declining labor-force participation and the rising share of the national product flowing to capital rather than to labor.

Citing the work of economists such as David Autor, Mr. Cowen depicts a polarizing labor market, increasingly hollowed out as middle-skill, middle-wage jobs disappear. The Great Recession, he argues, unmasked the fact that U.S. employers had taken on more middle-wage workers than they needed or could afford. That’s why so many displaced workers are being forced to accept new jobs at lower wages—and why so many others have dropped out of the workforce.

The main driver of these disquieting trends is technology—specifically, smart machines that can do (and do better) an ever-rising share of what human beings do to earn their living. As this proceeds, some will win out: people who work with and around smart machines; managers who can organize these people; individuals with high general intelligence who can size up new situations and quickly learn what they need to know; and conscientious subordinates with the key new virtues of reliability and team play. Everyone else will lose out—except the marketers who know how to appeal to the wealthy.

http://online.wsj.com/news/articles/SB10001424052702303918804579107754099736882

The American education system, coupled with the drive for higher and higher minimum wages, seems designed to produce a society which would rather buy robots than hire citizens.

I suppose it is not appropriate to ask, Why wouldn’t it? Robots don’t form unions to demand guns, and they don’t feel entitled. And what do our schools qualify the lower half of the class to do?


 

Don’t buy into the Google ‘Omega’ hype.

It’s just a complex feedback-driven network-/processing-/storage-/workload-distribution system; there are no *cognitive* emergent behaviors involved.

This is a very worthy and admirable achievement within the IT space, but it has nothing to do with ‘AI’. ‘Omega’ is an automated bounded-domain application, nothing more.

——-

Roland Dobbins

Oh, I doubt that just hooking up large networks will spontaneously generate consciousness, and I am familiar with the argument that nothing else ever will; but it is only now that computers are fast enough for the kinds of programs that might make expert systems look sentient to run in real time. When I was writing Starswarm I had to give a lot of thought to the requirements of a real AI entity: not that I could design one, but what would one need for there to be any possibility of it working? I am sure there are a few more iterations of Moore’s law to come, and when I look at what we can do now as compared to what we had …
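The compounding is easy to underestimate. Assuming the traditional doubling every eighteen months or so (an assumption, not a promise), a handful more iterations is not an increment but a multiplication:

    # Compounding of assumed Moore's-law doublings (18-month doubling period).
    DOUBLING_MONTHS = 18

    for years in (3, 6, 9, 12, 15):
        doublings = years * 12 / DOUBLING_MONTHS
        print(f"{years:2d} years from now: roughly {2 ** doublings:5.0f} times today's compute")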

 


Freedom is not free. Free men are not equal. Equal men are not free.

