Ancient Greek robot stories, and more:
Out of Pandora’s box flew pestilence, disaster, misfortune. In simple versions of the myth, the last thing to flutter out of Pandora’s box was hope. But deeper, darker versions say that instead of hope, the last thing in the box was ‘anticipation of misfortune’. In this version, Pandora panicked and slammed down the lid, trapping foreknowledge inside. Deprived of the ability to foresee the future, humankind received what we call ‘hope’.
Stupidity for one side of the strategic equation is added camouflage for the other.
Octobot is “the first ever self-contained, completely soft robot”.
(It’s static, at this point, so more like a robot anemone.)
The automation of military technology goes nonlinear:
“Machines have long served as instruments of war, but historically humans have directed how they are used,” said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement. “Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.”
Some have argued in favor of robots on the battlefield, saying their use could save lives. […] But last year, more than 1,000 technology and robotics experts — including scientist Stephen Hawking, Tesla Motors CEO Elon Musk and Apple co-founder Steve Wozniak — warned that such weapons could be developed within years, not decades. […] In an open letter, they argued that if any major military power pushes ahead with development of autonomous weapons, “a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
“Virtually inevitable” is a cybernetic truth-bomb.
Herzog has made a robotics documentary (to be shown at the 2016 Sundance Film Festival). The title is Lo and Behold: Reveries of the Connected World.
There’s a trailer here.
“… it would not necessarily reveal itself to us …”
Cooperate with it, and no one has to get hurt.
How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? […] This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders. …
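One way to picture what "intelligently rejecting an order" might look like is a chain of pre-execution checks: before acting, the robot asks whether it knows how to do the thing, whether it's permitted, and whether it's safe, and refuses with a stated reason if any check fails. The sketch below is a toy illustration under those assumptions, not the Tufts lab's actual system; the `Robot` class and condition names are hypothetical.

```python
# Toy sketch of a robot vetting an order before obeying it.
# Illustrative only -- not the Tufts HRI Lab's implementation.

class Robot:
    def __init__(self, abilities, forbidden, unsafe):
        self.abilities = abilities    # actions the robot knows how to perform
        self.forbidden = forbidden    # actions its norms prohibit outright
        self.unsafe = unsafe          # actions judged dangerous right now

    def evaluate_order(self, action):
        """Check each condition in turn; reject with a reason on failure."""
        if action not in self.abilities:
            return (False, f"I don't know how to {action}.")
        if action in self.forbidden:
            return (False, f"{action} would violate my obligations.")
        if action in self.unsafe:
            return (False, f"{action} is unsafe right now.")
        return (True, f"Okay, executing {action}.")

robot = Robot(
    abilities={"walk_forward", "turn_left", "sit_down"},
    forbidden=set(),
    unsafe={"walk_forward"},  # e.g. the robot is at the edge of a table
)
accepted, reply = robot.evaluate_order("walk_forward")
# The robot refuses and says why, instead of silently obeying.
```

The interesting part, of course, is everything this sketch hides: how the robot decides an action is unsafe in the first place, and when a human's reassurance should override that judgment.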
The first draft (focusing on public relations) is in:
In this age torn apart by ethnic and religious conflicts, it may very well be that these ‘killer robots’ might teach us the value of unity, the ridiculousness of the politics of difference, and what it is to be human. For once in history, we will be united under one identity against one common enemy – a non-human, who falls beyond the fallible concepts of feelings and morals. AI might actually provide us the redemption that we need from ourselves.
Losses will be great, but we have already lost so much at each other’s hands. Our victory, however, will be in our united, permanent struggle under the banner of a genuine, universal humanity.