Can Robots Have Souls; Human flesh in the marketplace; What happens if you give cocaine to eels; thoughts on peer review; and other issues

Chaos Manor View, Monday, May 18, 2015

ISIS has taken another large city. The Caliphate grows, and each success is seen as a confirmation of its right to rule. The latest conquest should be good for another 50,000 recruits who see it as a sign: this is a legitimate state. When I first wrote about this, I said that we could end the Caliphate with an infantry division and the remaining Warthogs. It would be messy, but it could be done quickly. We could give the conquests in the Kurdish districts to the Kurds, who would be loyal allies; and doing so would be a salutary lesson to the corrupt rulership in Baghdad.

Now it becomes more difficult. The problem is not defeating the Caliphate, it is governing the conquered territory. It is not yet too late: there are enough Sunni and Shia Iraqis to form a federation. And it is our responsibility. We broke it. We threw out Saddam Hussein, and we sent in Bremer, who managed to set a much worse record than the worst of the Roman proconsuls.

Obama did not start this, and Bremer was not his man; but his haste to get us out of Iraq, while understandable, showed his deep misunderstanding of Middle East affairs.

The Caliphate is not yet an existential threat to the West, but at its present rate of success, it will be. War feeds war.

clip_image001

More pictures from Niven’s birthday party

clip_image003


clip_image005

That’s Larry in the top picture, and Jim Ransom in the next, along with some of the waiters. This is in the entryway of Niven’s house…

clip_image001[1]

From the blog of Fredrik deBoer, an academic in rhetoric and composition, May 13:

Criticism of today’s progressives tends to use words like toxic, aggressive, sanctimonious, and hypocritical. I would not choose any of those. I would choose lazy. We are lazy as political thinkers and we are lazy as culture writers and we are lazy as movement builders. We ward off criticism of our own bad work by acting like that criticism is inherently anti-feminist or anti-progressive. We seem spoiled, which seems insane because everything is messed up and so many things are getting worse. I guess having a Democratic president just makes people feel complacent. Well, look: as a political movement we are in pathetic shape right now. We not only have no capacity to move people who don’t already share our worldview, we seem to have no interest in doing so. Our stock arguments are lazy stacks of cliches. We seem to want to confirm everything conservatives say about our inability to argue without calling other people racist. We can’t articulate why our vision of the future is better than the other side’s, and in fact many of us will tell you that it’s offensive to think that we have an obligation to educate others on that vision at all. We celebrate grassroots activist movements like Black Lives Matter, but we insult them by treating them as the same thing as hashtag campaigns, and we don’t build a broader left-wing political movement that could increase their likelihood of success. We spend all day, every day, luxuriating in how much better we are than other people, having convinced ourselves that the work of politics is always external, never internal. We have made politics synonymous with social competition. We’re a mess.

So, apparently, I am not the only one to discover that America is losing its mind. We sowed the wind for generations; I said so at the time, and I have said so often in the decades since. I suppose it should be no surprise to discover we are reaping the whirlwind.

clip_image001[2]

Dear Jerry:
You wrote in View for 5/17/2015:

I’ve been a bit depressed all week, not because of this place, but another forum which I had thought was still rational, but which has turned poisonous, everyone looking for verbal errors so they can charge racism or sexism or check your privilege, thus winning whatever they thought was a contest, and ending all discussion before it starts.

Immediately I was reminded that your experience is nothing new, that Paul wrote to Timothy about this phenomenon 2000 years ago:

For the time is coming when people will not endure sound teaching, but having itching ears they will accumulate for themselves teachers to suit their own passions,  and will turn away from listening to the truth and wander off into myths.

2 Timothy 4:3-4 ESV

When entering the kind of forum you described I also thought how Jesus advised his disciples to enter a new town:

And whatever town or village you enter, find out who is worthy in it and stay there until you depart. As you enter the house, greet it.  And if the house is worthy, let your peace come upon it, but if it is not worthy, let your peace return to you. And if anyone will not receive you or listen to your words, shake off the dust from your feet when you leave that house or town.

Matthew 10:11-14 ESV

You and I are of an age when we should not be bothered by the turmoil of internet forums and the shifting notions and passions of the day as nations rage and people plot in vain. I am reminded of what Thomas Jefferson wrote to John Adams:

1812 January 21. “I have given up newspapers in exchange for Tacitus and Thucydides, for Newton and Euclid; and I find myself much the happier.”

You can find this quote in many reliable sources including

http://www.monticello.org/site/jefferson/quotations-reading

http://tjrs.monticello.org/letter/280

and in context at

http://www.let.rug.nl/usa/presidents/thomas-jefferson/letters-of-thomas-jefferson/jefl213.php

http://www.gutenberg.org/files/16784/16784-h/16784-h.htm#link2H_4_0099

Peace, and best regards,
–Harry M.

clip_image001[3]

Unregulated capitalism in Nigeria

Jerry:

What you have often warned about regarding unregulated capitalism has happened in Nigeria:

Nigerian restaurant shut down for serving HUMAN FLESH – and had bags in kitchen containing heads that were still bleeding
Police raided the restaurant after locals reported it was selling human meat
They discovered human heads which were still dripping blood into plastic bags
Weapons including grenades also found during the raid in Anambra region
Ten people have so far been arrested in connection to the various crimes

http://www.dailymail.co.uk/news/article-3084326/Nigerian-restaurant-shut-serving-HUMAN-flesh-bags-containing-human-heads-bleeding.html

Sorry if I spoiled your lunch–

Doug Ely

We have not yet seen full decivilization, but this approaches it. Unregulated laissez-faire leads to human flesh in the marketplace. Unrestrained government leads to the Nomenklatura and the ossified communist state. We have run that experiment; how many repetitions do we need?

“I did not know I had been served human meat, and it was that expensive.”

<http://www.telegraph.co.uk/news/worldnews/africaandindianocean/nigeria/11610908/Nigerian-restaurant-shut-down-for-serving-human-flesh.html>

—————————————

Roland Dobbins

clip_image001[4]

Peer review, evolution/devolution and sci-fi

Hello, Jerry Pournelle! I have been thinking about the fact that, in real life, progress toward more unified theories in physics stagnated as soon as peer-reviewed journals gained the tools to enforce their policies against redundant publication. Peter Higgs and Francois Englert published the same theory independently without being stopped.
In recent decades, the only progress has been in fine-tuning preexisting theories and in wider technical application of the same theories. General theories make many unique predictions, increasing the chances that some of them will be cheaply testable. Fine-tuning of existing theories, on the other hand, yields far fewer predictions and increases the risk that all of them will be expensive to test. So if the stagnation were due to increased research costs (e.g. the Pareto principle), it would have struck the fine-tuning of existing theories even more severely than breakthrough generalized theories. Ergo, it cannot be costs; it must be something else.
I think it is due in part to arbitrary divisions into “fields” in academia preventing ideas and falsifications from spreading, and in part to peer review’s no-redundant-publication policies scaring people with theories into not expressing them. This has given me a plot idea that can be used in science fiction: the road to a theory of everything allowing modified space-time runs through turning the no-redundant-publication policy against itself so that it ceases to work.
The entire idea behind peer review, claiming humans to be unreliable yet relying on rules written by humans and controls enforced by humans, is self-defeating. It is like when Epimenides said that all Cretans are liars, despite being a Cretan himself (the origin of the “everything written on this paper is a lie” paradox). Psychologists do the same self-defeating thing when they claim all humans are unreliable despite being human themselves. This also applies to other types of “control”; see two or more sections below. It does not, however, contradict evolution: while the existence of science requires the existence of science-capable beings today, it does not require the ancestors of science-capable beings to always have been science-capable. So evolution, including the evolution of science-capable beings from ancestors that were not science-capable, is science. Psychology, with its claims of “cognitive bias,” is not a science today, and by the Epimenides principle it can never become one.
This also means that cognitive bias theories can be used to deny anything that conspiracy theories can be used to deny, without technically being conspiracy theories. Just as cognitive bias theorists claim that all humans share the cognitive biases behind common mythological elements, they may just as well use the same type of argument to claim that all agreement on, say, the moon landings or the Holocaust is also due to panhuman cognitive biases. They may say that all observations of things left on the moon by the astronauts are likewise made by humans or by human-made instruments. When it comes to genocides, the cognitive bias theorists may state that since the testimonies agree regardless of the ethnicity of the witnesses, the witnesses are all fully human and their denial is therefore not a hate ideology at all (repeating the common claim that panhuman biases are not malicious). They may explain away the number of people who disappeared by claiming a panhuman glitch in mathematical ability.
Which brings us to the next level of cognitive bias theory: things that conspiracy theories cannot do. Obviously, any cognitive bias theory is a more efficient denial tool than the equivalent conspiracy theory: cognitive bias theories rely on “selfish genes” simply being there, with no need to conspire, eliminating the risk of leaks. Cognitive bias theories can then be used to deny not only historical events but also obvious things that no conspiracy could fake, like 1+1=2 and things falling down rather than up. They can even say that the assumption that you would leave the Earth if things fell up is itself a genetic delusion, not objective fact. In summary: cognitive bias theories are incompatible with science and thus are not scientific theories.
I have also thought about the fact that if cavemen had specifically punished individuals with more Homo sapiens characteristics and “excused” the others because “they cannot help” their actions, that would have bred against Homo sapiens characteristics, and modern humans would never have existed. That made me think of some sci-fi possibilities too: maybe space archaeologists discovering ruins of civilizations that destroyed themselves through psychologistic morality somewhat similar to today’s Earth values, breeding themselves into stupidity. Maybe the Fermi paradox being solved by all other proto-intelligent species thwarting their own evolution, with humanity being extremely lucky to be so late in creating psychologistic morality. Maybe the creation of intelligence-positive societies totally devoid of psychologistic morality, cultures in which the same action is never considered any worse just because it was conscious.
This must NOT be conflated with any kind of forcible eugenics against different behaviors. On the contrary, it is a rejection of the entire classification of certain behaviors and preferences as “sick”. Considering the same behavior acceptable in an insect or reptile yet “sick” in a person is “intelligent guilt” psychologistic morality, and the intelligence-positive view rejects all “intelligent guilt” morals.
The intelligence-positive view is applicable without biologism too: forcing people to pretend stupidity is disastrous. It is possible to write stories wherein “justice” causes civilizations to self-destruct by forcing their members to “fake” a lack of conscious choice. For maximum effect, they may be contrasted with other, intelligence-positive civilizations that face peril yet survive precisely because they do not have “intelligent guilt” morality and thus do not force their members into malingering. Whether the civilizations are from different home worlds or are offshoots of a single spacefaring civilization’s colonization that diverged into different societies is not really important to the case.
I would like to hear your thoughts on these ideas.
Greetings,
Martin J Sallberg

I think you have stated your case very well indeed. Any time there is no possibility of dissent from a theory, you will get epicycles: in cosmology we have dark energy, dark matter, and none of it in our local area. So it goes.

I have long said that a reasonable percentage of research grants should go to the opposition, for crucial experiments: most will confirm existing belief, but not all; some will insert worms of doubt. And of course those results must be published. Peer review guarantees they will not be.

Yes, we need mechanisms to weed out barking madness, but even there we need caution. I am not worried about suppression of theories; but suppression of data, case histories, impossible experimental results: that is dangerous.

We are at present running a social science experiment on this; the results are not encouraging. We still sow the wind.

clip_image001[5]

‘Rise of the Robots’ and ‘Shadow Work’

By Barbara Ehrenreich, May 11, 2015 (New York Times)

In the late 20th century, while the blue-collar working class gave way to the forces of globalization and automation, the educated elite looked on with benign condescension. Too bad for those people whose jobs were mindless enough to be taken over by third world teenagers or, more humiliatingly, machines. The solution, pretty much agreed upon across the political spectrum, was education. Americans had to become intellectually nimble enough to keep ahead of the job-destroying trends unleashed by technology, both robotization and the telecommunication systems that make outsourcing possible. Anyone who wanted a spot in the middle class would have to possess a college degree — as well as flexibility, creativity and a continually upgraded skill set.

But, as Martin Ford documents in “Rise of the Robots,” the job-eating maw of technology now threatens even the nimblest and most expensively educated. Lawyers, radiologists and software designers, among others, have seen their work evaporate to India or China. Tasks that would seem to require a distinctively human capacity for nuance are increasingly assigned to algorithms, like the ones currently being introduced to grade essays on college exams. Particularly terrifying to me, computer programs can now write clear, publishable articles, and, as Ford reports, Wired magazine quotes an expert’s prediction that within about a decade 90 percent of news articles will be computer-generated.

It’s impossible to read “Rise of the Robots” — for review anyway — without thinking about how the business of book reviewing could itself be automated and possibly improved by computers. First, the job of “close reading,” now commonly undertaken with Post-its and a felt-tip red pen, will be handed off to a scanner that will instantly note all recurring words, phrases and themes. Next, where a human reviewer racks her brain for social and historical context, the review-bot will send algorithms out into the ether to scan every other book by the author as well as every other book or article on the subject. Finally, all this information will be synthesized with more fairness and erudition than any wet, carbon-based thinking apparatus could muster. Most of this could be achieved today, though, as Ford notes, if you want more creativity and self-reflexivity from your review-bot, you may have to wait until 2050.

This is both a humbling book and, in the best sense, a humble one. Ford, a software entrepreneur who both understands the technology and has made a thorough study of its economic consequences, never succumbs to the obvious temptation to overdramatize or exaggerate. In fact, he has little to say about one of the most ominous arenas for automation — the military, where not only are pilots being replaced by drones, but robots like the ones that now defuse bombs are being readied for deployment as infantry. Nor does Ford venture much into the spectacular possibilities being opened up by wearable medical devices, which can already monitor just about any kind of biometric data that can be collected in an I.C.U. Human health workers may eventually be cut out of the loop, as tiny devices to sense blood glucose levels, for example, learn how to signal other tiny implanted devices to release insulin. But “Rise of the Robots” doesn’t need any more examples; the human consequences of robotization are already upon us, and skillfully chronicled here. Although the unemployment rate has fallen to officially acceptable levels, long-term unemployment persists, and underemployment — part-time jobs when full-time jobs are needed, or jobs that do not reflect a worker’s education — is on the rise. College-educated people often flounder for years after graduation, finding temp jobs and permanent roommates. Adults of both sexes are drifting out of the work force in despair. All of this has happened by choice, though not the choice of the average citizen and worker. In the wake of the recession, Ford writes, many companies decided that “ever-advancing information technology” allows them to operate successfully without rehiring the people they had laid off. And there should be no doubt that technology is advancing in the direction of full unemployment. Ford quotes the co-founder of a start-up dedicated to the automation of gourmet hamburger production: “Our device isn’t meant to make employees more efficient. It’s meant to completely obviate them.”

Ford offers little hope that emerging technologies will eventually generate new forms of employment, in the way that blacksmiths yielded to autoworkers in the early 20th century. He predicts that new industries will “rarely, if ever, be highly labor-intensive,” pointing to companies like YouTube and Instagram, which are characterized by “tiny workforces and huge valuations and revenues.” On another front, 3-D printing is poised to make a mockery of manufacturing as we knew it. Truck driving may survive for a while — at least until self-driving vehicles start rolling out of Detroit or, perhaps, San Jose.

The disappearance of jobs has not ushered in a new age of leisure, as social theorists predicted uneasily in the 1950s. Would the masses utilize their freedom from labor in productive ways, such as civic participation and the arts, or would they die of boredom in their ranch houses? Somehow, it was usually assumed, they would still manage to eat.

Come to find out, there’s still plenty of work to do, even if no one is willing to pay for it. This is the “shadow work” that Craig Lambert appealingly brings to light in his new book on “the unpaid, unseen jobs that fill your day.” We take it for granted that we’ll have to pump our own gas and bus our own dishes at Panera Bread. Booking travel reservations is now a D.I.Y. task; the travel agents have disappeared. As corporations cut their workforces, managers have to take on the work of support staff (remember secretaries?), and customers can expect to spend many hours of their lives working their way through menus and recorded advertisements in search of “customer service.” At the same time, our underfunded and understaffed schools seem to demand ever more parental participation. Ambitious parents are often expected not only to drive their children to and from school, but to spend hours carrying out science projects and poring over fifth-grade math — although, as Lambert points out, parental involvement in homework has not been shown to improve children’s grades or test scores.

“Shadow Work” is generally a smooth ride, but there are bumps along the way. The definition of the subject sometimes seems to embrace every kind of unpaid work — from the exploitative, as in the use of unpaid interns, to the kind that is freely undertaken, like caring for one’s own family. At times the book gets weighed down by an unwarranted nostalgia for the old days, when most transactions involved human interactions. For example, Lambert grants that home pregnancy tests offer women “more privacy and more control,” while also lamenting — as no woman ever has — that they cut out the doctor and thus transform “what can be a memorable shared event into a solitary encounter with a plastic stick.”

Lambert, formerly an editor at Harvard Magazine, is on firmer ground when he explores all the ways corporations and new technologies fiendishly generate new tasks for us — each of them seemingly insignificant but amounting to many hours of annoyance. Examples include deleting spam from our inboxes, installing software upgrades, creating passwords for every website we seek to enter, and periodically updating those passwords. If nothing else, he gives new meaning to the word “distraction” as an explanation for civic inaction. As the seas rise and the air condenses into toxic smog, many of us will be bent over our laptops, filling out forms and attempting to wade through the “terms and conditions.”

Lambert falls short of calling for the shadow workers of the world to go out on strike. But that’s what it might take to give us the time and the mental bandwidth to confront the dystopian possibilities being unleashed by technology. If middle-class jobs keep disappearing as wealth piles up at the top, Martin Ford predicts, economic mobility will “become nonexistent”: “The plutocracy would shut itself away in gated communities or in elite cities, perhaps guarded by autonomous military robots and drones.” We have seen this movie; in fact, in one form or another — from “Elysium” to “The Hunger Games” — we’ve been seeing it again and again.

In “Rise of the Robots,” Ford argues that a society based on luxury consumption by a tiny elite is not economically viable. More to the point, it is not biologically viable. Humans, unlike robots, need food, health care and the sense of usefulness often supplied by jobs or other forms of work. His solution is blindingly obvious: As both conservatives and liberals have proposed over the years, we need to institute a guaranteed annual minimum income, which he suggests should be set at $10,000 a year. This is probably not enough, and of course no amount of money can compensate for the loss of meaningful engagement. But as a first step toward a solution, Ford’s may be the best that the feeble human mind can come up with at the moment.

RISE OF THE ROBOTS

Technology and the Threat of a Jobless Future

By Martin Ford

334 pp. Basic Books. $28.99.

SHADOW WORK

The Unpaid, Unseen Jobs That Fill Your Day

By Craig Lambert

277 pp. Counterpoint. $26.

clip_image001[6]

Soon They’ll Be Driving It, Too (Wall Street Journal)

Intelligent machines are ousting low-skilled workers now. Next they’ll start encroaching on white-collar livelihoods.

By Sumit Paul-Choudhury, May 15, 2015 4:53 p.m. ET

Should you be worried by the emergence of intelligent machines? To some the answer is clear. “Full artificial intelligence could spell the end of the human race,” Stephen Hawking warned recently. Martin Ford’s “Rise of the Robots” offers a more prosaic reason for concern: Partially intelligent machines might render humans not so much extinct as redundant. “No one doubts that technology has the power to devastate entire industries and upend specific sectors of the economy and job market,” writes Mr. Ford, a Silicon Valley software developer turned futurist. Will machine intelligence, tackling tasks once thought of as humanity’s exclusive preserve, “disrupt our entire system to the point where a fundamental restructuring may be required if prosperity is to continue?”

Mr. Ford invokes Norbert Wiener, who in 1949 prophesied an “industrial revolution of unmitigated cruelty” in which machines would outstrip humans in routine work “at any price.” In Mr. Ford’s view, just such a revolution is under way in blue-collar work. Robots are ousting low-skilled workers everywhere, from fast-food joints to factory floors—a trend that Mr. Ford argues is central to the puzzling “jobless recovery” of the past decade as well as to other anomalous trends in pay and employment.

Now the machines are encroaching on white-collar livelihoods, which is why the intelligentsia have begun to wake up to their advance. To date, most automation has been of routine tasks that are relatively easy to describe in terms of simple instructions. But the combination of ever faster processors, ever smarter algorithms and ever bigger data is yielding supercomputers that are ever more capable of tackling complex challenges. IBM’s Watson, having triumphed over human champion Ken Jennings at “Jeopardy!,” is now turning to medicine and cookery. Other machines are proving their mettle in fields ranging from scientific research to the stock market. Creativity no longer seems an insurmountable obstacle: Computers are starting to compose music or create paintings that could pass for the work of humans.

We are still a long way from all-round human intelligence—smart machines are becoming more flexible but still tend to excel in only a specific area—but Mr. Ford lucidly sets out myriad examples of how focused applications of versatile machines (coupled with human helpers where necessary) could displace or de-skill many jobs. If you are of the professional classes, you will likely read with mounting dismay Mr. Ford’s compelling explanation of how tools that encapsulate “analytic intelligence and institutional knowledge” will enable less qualified rivals to carry out your job proficiently, quite possibly from another country. An intelligent system might mine huge corporate data sets to distill years of experience into simple instructions for an overseas worker—who can then use translation and telepresence to overcome linguistic and geographical barriers. When the tools have become smart enough, those offshore workers may in turn be deemed surplus: In a particularly dastardly move, computers may even acquire those smarts by spying on their human users.

The author is persuasive in his discussion of the business logic that makes this process seem all but inevitable. Machines may be less accomplished than humans, but they are often cheaper, more dependable and more docile. While you might worry about their growing abilities, it is the economic incentives that seem truly problematic. Mr. Ford worries that if this trend runs away it will prove bad for all but the ultra-wealthy capitalists who own the machines. Because workers are consumers too, a declining workforce translates into declining demand, and that threatens the entire edifice of modern capitalism. Continue as we are, he suggests, and we may return to feudalism.

Will we? Why should this time be any different from previous waves of automation, in which displaced workers have moved, after some initial disorientation, to satisfactory new jobs? Machine intelligence, says Mr. Ford, is a general-purpose technology with broad applications: There will be few untouched fields to which workers can turn in their search for employment. Still, his copious examples, striking though they are, add up to no more than strong circumstantial evidence for that claim.

We should always be skeptical about the difficulty of transferring polished theories into unruly reality. And for the moment, there will remain bastions of human exceptionalism. One recent analysis suggests that “highly creative” work (including architecture, design and entertainment), which accounts for around a fifth of U.S. jobs, will prove intransigent. Mr. Ford also dedicates chapters to the ways in which the health-care and educational sectors have resisted automation.

Could we find new jobs in these areas for those put out of work by automation? The author’s short answer is that we can’t. Those at the bottom of the labor pyramid aren’t capable of doing jobs higher up it, and there wouldn’t be enough of those jobs anyway. Rather surprisingly, he gives only passing treatment to the potential deployment of intelligent machines to up-skill workers. “For the majority of people who lose middle-class jobs, access to a smart phone may offer little beyond the ability to play Angry Birds while waiting in the unemployment line,” he writes. Today’s smartphones, yes; but tomorrow’s smarter phones may enhance their owners’ reach and abilities in more productive ways.

The author’s apparent reluctance to engage with technological solutions to a technological problem perhaps reveals where his true object lies. His answer to a sharp decline in employment is a guaranteed basic income, a safety net that he suggests would both cushion the effect on the newly unemployable and encourage entrepreneurship among those creative enough to make a new way for themselves. This is a drastic prescription for the ills of modern industrialization—ills whose severity and very existence are hotly contested. “Rise of the Robots” provides a compelling case that they are real, even if its more dire predictions are harder to accept.

Rise of the Robots

By Martin Ford
Basic, 334 pages, $28.99

— Mr. Paul-Choudhury is the editor of New Scientist.

I have said often: by 2020, half of the jobs of those presently employed can be done by a robot whose cost is not much more than the annual wage paid to the current job-holder. Maintenance and supervision of the robot will be no more than 10% of the robot’s cost. The robot will need neither health care, family leave, vacation, nor a pension. Employers and investors will have decisions to make.

I see no reason to change that observation.
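For concreteness, here is a rough back-of-the-envelope version of that comparison. It is only a sketch: the wage, robot price, service life, and maintenance rate below are illustrative assumptions chosen to match the shape of the claim, not figures from any study.

    # Back-of-the-envelope comparison of a worker's annual cost with an
    # equivalent robot. All numbers are illustrative assumptions.
    annual_wage = 40_000        # assumed fully loaded annual cost of the worker
    robot_price = 45_000        # "not much more than the annual wage"
    service_life_years = 5      # assumed useful life of the robot
    maintenance_rate = 0.10     # "no more than 10% of the robot's cost" per year

    robot_annual_cost = robot_price / service_life_years + maintenance_rate * robot_price
    print(f"Worker: ${annual_wage:,.0f}/yr   Robot: ${robot_annual_cost:,.0f}/yr")
    # With these assumptions the robot works out to roughly a third of the
    # worker's annual cost, before counting health care, leave, or pension.

Even doubling the assumed robot price or halving its service life leaves the robot well below the wage in this toy example, which is the point of the observation.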

In 1982 I stated that by the year 2000, anyone in the Free World would be able in a timely manner to get the answer to any question that has an answer. The Internet made that happen well before the year 2000.

It is not too early to begin considering what happens to Democracy when half the population cannot find employment that cannot be done more cheaply by a robot.

clip_image001[7]

http://www.washingtonpost.com/business/economy/ready-to-lend-a-hand-or-3-in-the-next-disaster/2015/05/16/2ea78a16-fa6c-11e4-9ef4-1bb7ce3b3fb7_story.html?hpid=z1

Military push for emergency robots worries skeptics about lethal uses (WP)

By Christian Davenport May 16 at 10:18 PM

It’s 6-foot-2, with laser eyes and vise-grip hands. It can walk over a mess of jagged cinder blocks, cut a hole in a wall, even drive a car. And soon, Leo, Lockheed Martin’s humanoid robot, will move from the development lab to a boot camp for robots, where a platoon’s worth of the semiautonomous mechanical species will be tested to see if they can be all they can be.

Next month, the Pentagon is hosting a $3.5 million, international competition that will pit robot against robot in an obstacle course designed to test their physical prowess, agility, and even their awareness and cognition.

Galvanized by the Fukushima Daiichi nuclear power disaster in 2011, the Defense Advanced Research Projects Agency — the Pentagon’s band of mad scientists that have developed the “artificial spleen,” bullets that can change course midair and the Internet — has invested nearly $100 million into developing robots that could head into disaster zones off limits to humans.

“We don’t know what the next disaster will be, but we know we have to develop the technology to help us to address these kinds of disaster,” Gill Pratt, DARPA’s program manager, said in a recent call with reporters.

There’s more but you get the idea.

clip_image001[8]

http://www.vice.com/en_uk/read/dont-think-that-you-can-become-free-or-the-master-of-your-life-through-knowledge

Through Flaws in the Machine, Robots May Develop “Souls”: An Interview with John Gray


It wasn’t until after I interviewed John Gray, major British philosopher, public intellectual, and the author, most recently, of The Soul of the Marionette, that I realized he was—in the words of a British friend—”a total hero.” Gray, who recently retired from a storied professorship at the London School of Economics, was not only blazingly smart, with a cracking wit; he also came across as down-to-earth, considerate, and rather even-keeled. Considering that his book makes a fairly damning case against the techno-utopian logic of Silicon Valley and cuts straight into our “self-flattering” ideas of freedom, Gray’s moderate tone was a surprise. Our conversation ranged from ancient Greek warfare to cryogenically-frozen tech tycoons, from the state of the humanities to the works of Philip K. Dick, from robotic souls to the UK’s astonishing general election results earlier this month. Much like Gray’s book, our 75-minute chat flew by and left me electrified.

The Soul of the Marionette offers a mini-education over the span of 20 short chapters, which romp through major and minor works of philosophy, art, history, and science fiction. The book can be disorienting—each of the chapters can be read on its own, Gray notes—but it’s never dull. Gray likens the style of this book to Pascal’s Pensées. (“Though, of course, I’m no Pascal!” Gray laughed, perhaps underselling himself.)

Gray’s ideal reader, in his words, is “a person who is curious, who thinks that there might be something wrong with our modern world, the world in which we expect human progress from science and technology.” If that sounds like you, check out The Soul of the Marionette when it’s released on May 19th in the US from Farrar, Straus, and Giroux.

VICE: The Soul of the Marionette addresses the fundamental question of whether or not human beings have freedom. You seem to say that we don’t.
John Gray:
I guess a different way of posing the question that the book asks is, “What kind of freedom do we think we want, and do we really want it?” The book is not really addressed to traditional philosophical issues of free will and metaphysics. We all think we want to be free. We all feel frustrated and thwarted and powerless when we think we’re not free. But what is it that we want from freedom? Do we really want what we think we want?

Your book also discusses how torture and “hyper-modern techniques of control” are being used today, in the name of human rights and freedom. Do you see this situation improving, or worsening, over the coming decades?
All of these technologies, they’re ambiguous. What they humanly mean, their human values, is always ambiguous. I’m old enough to remember when photocopiers and video machines were thought as bound to bring down tyrannies, back in the 70s and 80s. People said things like, “Well, if massacres can be videoed, no country would dare to commit a massacre!” It happens every day now. It happened with Tiananmen Square. They possibly even use that movie to show other people, in other parts of China, what might happen to them if they rebel.

There is considerably more that is relevant to this discussion.

clip_image001[9]

And now for the burning question:

What happens when you give eels cocaine?

<http://www.hakaimagazine.com/article-short/dr-eelgood>

—————————————

Roland Dobbins

clip_image001[10]

‘We are convinced the machine can do better than human anesthesiologists’ (WP)

By Todd C. Frankel May 15

I wrote recently about Sedasys, a machine that automates anesthesia. It’s a first-of-its-kind device in the United States. Only four hospitals use it for now. It’s restricted to colonoscopies in healthy patients.

[New machine could one day replace anesthesiologists]

But Sedasys, in development for 15 years, is no longer on the true cutting edge of what’s possible with automated anesthesia.

A machine with the clunky name of iControl-RP is. It’s an experimental device that pushes the boundaries of how much responsibility is turned over to technology. It monitors brain wave activity. And it’s even been tested on children.

One of the reasons that Sedasys was approved by U.S. health regulators is that it’s a conservative leap forward. The device is innovative, but it doesn’t decide alone how much anesthesia to give to a patient.

It’s an open-loop system. The initial dose is pre-determined based on a patient’s weight and age. And Sedasys only reduces or stops drug delivery if it detects problems. Only a doctor or nurse can up the dose. That gave regulators a level of comfort.

But the iControl-RP makes its own decisions. It is a closed-loop system.

This new device, being tested by University of British Columbia researchers, monitors a patient’s brain wave activity along with traditional health markers, such as blood oxygen levels, to determine how much anesthesia to deliver.

“We are convinced the machine can do better than human anesthesiologists,” said Mark Ansermino, one of the machine’s co-developers, who works as director of pediatric anesthesia research at the university’s medical school in Vancouver.
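To make the open-loop versus closed-loop distinction in the excerpt concrete, here is a minimal sketch of the two control policies in Python. Everything in it, including the function names, the starting-dose formula, the thresholds, and the gain, is invented for illustration; neither Sedasys nor iControl-RP publishes its algorithm here, and the real devices are far more sophisticated.

    # Illustrative sketch only: contrasts an open-loop and a closed-loop dosing
    # policy. All formulas, thresholds, and gains are made up.

    def open_loop_rate(weight_kg, age_yr, spo2):
        """Sedasys-style open loop: the starting rate is pre-determined from
        weight and age; the machine may only reduce or stop delivery on its
        own, and only a clinician can increase it."""
        rate = 0.5 * weight_kg - 0.1 * age_yr   # made-up starting formula
        if spo2 < 92:                           # problem detected: cut delivery
            rate = 0.0
        return max(rate, 0.0)

    def closed_loop_rate(current_rate, depth_index, target=50, gain=0.02):
        """iControl-RP-style closed loop: the controller raises or lowers the
        rate by itself, driven by a measured depth-of-anesthesia signal."""
        error = depth_index - target            # above target: patient too light
        return max(current_rate + gain * error, 0.0)

    print(open_loop_rate(weight_kg=70, age_yr=40, spo2=97))    # 31.0, fixed policy
    print(closed_loop_rate(current_rate=2.0, depth_index=65))  # about 2.3, dose raised

The regulatory distinction in the article maps directly onto the two functions: the first can only hold or cut its pre-set rate, while the second decides on its own to move the dose in either direction.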

http://www.eetimes.com/document.asp?doc_id=1326592

Is D-Wave a Quantum Computer? (EE Times)

R. Colin Johnson

5/14/2015 08:52 PM EDT


Critics charge it’s not a “real” QC
PORTLAND, Ore.—Recently I had to explain to a reader why critics say that D-Wave’s so-called quantum computer was not a “real” quantum computer, an answer he accepted on my authority. However, the question kept nagging at the back of my mind: why does D-Wave market what it calls a quantum computer if it is not for real? To get to the bottom of it, I asked Jeremy Hilton, vice president of processor development of D-Wave Systems, Inc. (Burnaby, British Columbia, Canada) why critics keep saying its quantum computer is not for real. He also revealed details about D-Wave’s next generation quantum computer.

“The Holy Grail of quantum computing is to build a ‘universal’ quantum computer—one that can solve any computational problem—but at a vastly higher speed than today’s computers,” Hilton told EE Times. “That’s the reason some people say we don’t have a ‘real’ quantum computer—because D-Wave’s is not a ‘universal’ computer.”

D-Wave’s quantum computer, rather, only solves optimization problems, that is, ones that can be expressed as a linear equation with many variables, each with its own weight (the number by which each variable is multiplied). Normally, such equations are very difficult for a conventional ‘universal’ computer to solve, taking many iterations to find the optimal set of values for the variables. However, with D-Wave’s application-specific quantum computer, such problems can be solved in a single cycle.

“We believe that starting with an application-specific quantum processor is the right way to go—as a stepping stone to the Holy Grail—a universal quantum computer,” Hilton told us. “And that’s what D-Wave does—we just do optimization problems using qubits.”

There is considerably more detail.
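As a sketch of what an optimization problem with weighted variables means in practice, here is a tiny classical brute-force version in Python. The article describes these problems as weighted linear equations; D-Wave’s machines are commonly described as minimizing a quadratic function of binary variables (a QUBO), so the sketch below includes pairwise coupling weights as well. The specific numbers are made-up example values, not anything from the article.

    # Tiny illustration of the kind of weighted optimization described above.
    # The per-variable weights and pairwise couplings are made-up examples.
    from itertools import product

    weights = {0: -1.0, 1: 2.0, 2: -0.5}       # each variable's own weight
    couplings = {(0, 1): 1.5, (1, 2): -2.0}    # pairwise interaction weights

    def cost(x):
        """Weighted cost of one assignment of the binary variables."""
        total = sum(w * x[i] for i, w in weights.items())
        total += sum(w * x[i] * x[j] for (i, j), w in couplings.items())
        return total

    # A conventional computer iterates over every assignment (2**n of them),
    # which is what the article means by "lots of iterations"; the annealer
    # is built to settle toward a low-cost assignment directly.
    best = min(product((0, 1), repeat=len(weights)), key=cost)
    print(best, cost(best))                    # (1, 0, 1) -1.5

The brute-force loop is trivial at three variables; at a few hundred binary variables exhaustive search becomes hopeless, which is the niche the annealing hardware is aimed at.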

clip_image001[11]


clip_image001[18]

clip_image007

Freedom is not free. Free men are not equal. Equal men are not free.

clip_image007[1]

clip_image009

clip_image007[2]