Octobot is “the first ever self-contained, completely soft robot”.
(It’s static, at this point, so more like a robot anemone.)
The peculiar John Murray Spear:
… In 1852, Spear broke all ties with the Universalist church, and instead turned to Spiritualism. He claimed that he was in contact with “The Association of Electrizers”, a group of spirits including Benjamin Franklin, Thomas Jefferson, John Quincy Adams, and Benjamin Rush, as well as Spear’s namesake John Murray. Evidence indicates he occasionally faked signatures as a way to gain authority from a “guide from the past”; however, these signatures were dated beyond the lifetimes of the deceased. Spear believed that the purpose of this group was to bring new technology to mankind, so that greater levels of personal and spiritual freedom could be achieved. The following year, Spear and a handful of followers retreated to a wooden shed at the top of High Rock hill in Lynn, Massachusetts, where they set to work creating the “New Motive Power”, a mechanical Messiah which was intended to herald a new era of Utopia. The New Motive Power was constructed of copper, zinc and magnets, all carefully machined, as well as a dining room table. At the end of nine months, Spear and the “New Mary”, an unnamed woman, ritualistically birthed the contraption in an attempt to give it life. Unfortunately for Spear, this failed to have the desired effect, and the machine was later dismantled. …
The automation of military technology goes nonlinear:
“Machines have long served as instruments of war, but historically humans have directed how they are used,” said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement. “Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.”
Some have argued in favor of robots on the battlefield, saying their use could save lives. […] But last year, more than 1,000 technology and robotics experts — including scientist Stephen Hawking, Tesla Motors CEO Elon Musk and Apple co-founder Steve Wozniak — warned that such weapons could be developed within years, not decades. […] In an open letter, they argued that if any major military power pushes ahead with development of autonomous weapons, “a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
“Virtually inevitable” is a cybernetic truth-bomb.
Rohit Gupta on the world-historic confluence into AlphaGo:
Dutch computer scientist John Tromp noted that comparing Go to chess is “not even like comparing the size of the universe with the nucleus of an atom”. As the game progresses, the smallest error made in this dynamic universe of Yin and Yang can magnify surreptitiously into an irreversible cataclysm. A butterfly flutters its wings, slaves mutiny on a ship, corporations go bankrupt, the Soviet Union breaks apart, a black asteroid strikes the earth and dinosaurs go extinct. […] Artificial intelligence too, like this ancient boardgame, goes back to the very dawn of human civilisation. A Roman tutorial on rhetoric for orators called Ad Herennium (86-82 BC) says this about memorisation technique, “…and now we will speak of the artificial memory.” Many Greeks, including Socrates, were against the invention of the written word, because they feared it would destroy the ability of human beings to remember. Millennia later, the rise of computers has released the art of memory like a gigantic djinn from Aladdin’s lamp, just as telescopes opened up the horizons of astronomy. …
AI Helps Humans Best When Humans Help the AI https://t.co/bEGX1VBFYW via @WIRED #artificialintelligence
— Artificial Other (@ArtificialOther) December 8, 2015
Cooperate with it, and no one has to get hurt.
It begins:
How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? […] This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders. …
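The Tufts approach described in the article has the robot walk through a set of conditions before it accepts an order, and refuse with a stated reason when one fails. As a rough illustration only — the condition names, the `Command` structure, and `handle_order` below are hypothetical, not the lab’s actual code — the control flow might look something like this:

```python
# Hypothetical sketch of condition-checking before executing an order.
# The condition names and Command fields are illustrative assumptions,
# not the Tufts Human-Robot Interaction Lab's implementation.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Command:
    action: str                 # e.g. "walk_forward"
    endangers_robot: bool       # would carrying it out harm the robot?
    endangers_human: bool       # would carrying it out harm a person?
    speaker_authorized: bool    # does the speaker have standing to ask?


# Each check returns None if satisfied, or a reason for refusal.
def check_capability(cmd: Command) -> Optional[str]:
    known_actions = {"walk_forward", "turn_left", "turn_right", "stop"}
    if cmd.action not in known_actions:
        return f"I do not know how to {cmd.action}."
    return None


def check_permission(cmd: Command) -> Optional[str]:
    if not cmd.speaker_authorized:
        return "You are not authorized to give me that order."
    return None


def check_safety(cmd: Command) -> Optional[str]:
    if cmd.endangers_human:
        return "That would put a person in danger."
    if cmd.endangers_robot:
        return "That would be unsafe for me."
    return None


CHECKS: list[Callable[[Command], Optional[str]]] = [
    check_capability,
    check_permission,
    check_safety,
]


def handle_order(cmd: Command) -> str:
    """Run every check; refuse (with a stated reason) on the first failure."""
    for check in CHECKS:
        reason = check(cmd)
        if reason is not None:
            return f"Refused: {reason}"
    return f"Executing: {cmd.action}"


if __name__ == "__main__":
    # The robot is ordered to walk forward off the edge of a table.
    risky = Command("walk_forward", endangers_robot=True,
                    endangers_human=False, speaker_authorized=True)
    print(handle_order(risky))
```

The point of refusing on the first failed condition, and saying why, is that the human can then respond to the specific objection rather than to a blanket “no” — which is what makes the rejection “intelligent” rather than a hard-coded block.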
German Sierra on ‘Deep Media Fiction’ (preliminary draft):
If the ghost used to be the subject of action, it is now the machine who becomes responsible for animating the ghost. The consequence of this action-reversal is that what works mechanically — or organically — can only be examined, modelled or modified in accordance to the (recurrent) reloading of humanist discourses …
(Much interesting Ccru deployment within.)