How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? […] This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders. …
Its contour is already visible:
Lately Facebook is getting a little too intimate with me. “Good morning, Leigh,” it coos. “Thanks for being here. We hope you enjoy Facebook today.” Then, like a slice of dystopian cafeteria lunch, it serves one of its abysmal “memories” into my feed, some forgotten years-old share, and when I tell it I don’t want to see that, Facebook scrapes apologetically: “We know we don’t always get it right.” […] No, Facebook, of course you don’t. Remember how you started serving me wedding ads when I’d only just told one or two people I was engaged? That was creepy. Facebook is absolutely, indisputably creepy, a fungal colony of privacy violations fused helplessly to our human infrastructure. It spies on its employees and it demands pictures of our ID so it can regulate our names. […] Everybody knows Facebook is creepy. Nonetheless, all this time it never occurred to me to delete my account until it began doing this: Trying to act like a person. Pretending we are on a first-name basis. […] We often imagine the inevitable future tech dystopia will be cold, populations marching under the eye of sterile robot overlords, our speech monitored and scrubbed of sentiment and intonation. Increasingly, though, it seems like we’re hurtling toward the opposite: A singularity of smarm, where performative — maybe even excessive — intimacy is the order of the day. […] Of course we don’t want creeper spy colony Facebook to be our friend. But creating the impression of intimacy is becoming increasingly crucial to the content economy today, and it’s happening everywhere. …
(Coldness be my God.)
Peter Galison on Einstein and Poincaré (from Einstein’s Clocks, Poincaré’s Maps: Empires of Time, 2003):
Yet despite their differences, both were grappling with the same extraordinary insight into electrocoordinated time, and in so doing both men stood at the crossing of two great movements. On one side lay the vast modern technological infrastructure of trains, shipping, and telegraphs that joined under the signs of clocks and maps. On the other, a new sense of the mission of knowledge was emerging, one that would define time by pragmatism and conventionality, not by eternal truths and theological sanction. Technological time, metaphysical time, and philosophical time crossed in Einstein’s and Poincaré’s electrically synchronized clocks. Time coordination stood, unequaled, at that intersection: the modern junction of knowledge and power.
(Tracking this techno-cultural lineage forward into Bitcoin is illuminating.)
Peter Sloterdijk on anthropotechnics:
The concept of “anthropotechnics” rests on the hypothesis that the current psychophysical and social constitution of the species Homo sapiens — note the evolutionist emphasis of this classification — is based substantially on autogenic effects. In this context, the term “autogenic” means “brought about by the repercussions of actions on the actor.” The human being — especially in so-called “advanced civilizations” — is the animal that molds itself into its own pet.
A selection of choice sound-bites (although it doesn’t include all of my favorites).