Learning AI From Bacteria

What if Earth’s simplest organisms were also the smartest? What if we could learn new computational methods from them? Biomimicry is the adoption of a biological innovation to solve a human problem; neural networks are an example of this applied to artificial intelligence. Bacteria, for their part, are keystone players in virtually every ecosystem, and they can do almost anything. As a trained biologist with a deep interest in tech innovation, I find this intersection really exciting! I’m launching a literature review in which I look at research with bacteria that could inform the way we pursue artificial intelligence.
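To make the neural network example concrete, here is a minimal Python sketch of a single artificial “neuron,” the unit that neural networks borrow from biology. The code and its names are my own illustration, not drawn from any particular library or paper.

```python
# A minimal sketch of biomimicry in AI: a single artificial "neuron"
# (a perceptron). All names here are illustrative.
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a step activation,
    loosely mimicking a biological neuron firing past a threshold."""
    activation = np.dot(inputs, weights) + bias
    return 1 if activation > 0 else 0

# Example: a neuron wired to behave like a logical AND gate.
weights = np.array([1.0, 1.0])
bias = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron(np.array(x), weights, bias))
```

The weighted sum and threshold loosely mirror a biological neuron integrating incoming signals and firing only when they are strong enough.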

Our Tiny Cousins

Micrograph of E. coli bacteria. E. coli is a commonly used model organism in biology research. Photo credit: NIH.

Bacteria are the simplest organisms on the planet, forming the lowest branch on the shrub of life. Ancestral bacterial cells probably were not the first life on Earth, but they surely represent an early lineage. Bacteria are single-celled and lack organelles (complex internal structures), yet they are ubiquitous: found at the bottom of the oceans, in the hottest thermal pools, in the coldest corners of Antarctica, inside nuclear reactors, and more.

Uniquely Suited

Bacteria have special powers that make them well suited to this type of study. There is tremendous diversity of species, meaning a wide variety of genes to work with. They replicate quickly. Many species can trade genes and acquire genes from their environment (packaged into circular molecules called plasmids), and these capabilities can be exploited to add genes artificially. For these many properties, bacteria are already used industrially (e.g., consuming wastes) and medically (e.g., detecting carcinogens with the Ames test).

My Plan

This series of posts will explore current research in biomimicry and bacteriology as they relate to computers and AI. Posts will go up intermittently, about once every two weeks.

© Peter Roehrich, 2017

Travels Through the Uncanny Valley

Have you ever seen a robot or animation that was pretty lifelike, but not quite there? Did it frighten or repulse you? If so, you found yourself in the uncanny valley.

The uncanny valley describes the range of robot or animation realism that many people find uncomfortable. Photo: Mathur and Reichling.

The uncanny valley describes the phenomenon whereby we seem to dislike humanoids, whether rendered or three-dimensional, that come close to lifelike but fall just short. Our affinity for these objects follows a curve. When an object has no humanlike traits, it fails to charm us. As humanlike attributes are added to the device, perhaps cameras and a speaker in the shape of eyes and a mouth, we are endeared by it. Approximating human form too closely, however, becomes a little much, such that we are not merely uninterested but actively put off by the machine. This is the uncanny valley. Leaping the valley to fully lifelike features (as in an actual person), we are again attracted. Mathur and Reichling recently tested this empirically.
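As a rough visual aid, here is a short Python sketch that plots a curve of this shape: affinity rising with human likeness, then dipping sharply just short of fully human. The function is invented purely for illustration (the dip’s location and depth are my own assumptions), not Mathur and Reichling’s measured data.

```python
# Illustrative sketch of the uncanny valley affinity curve.
# The curve is a hypothetical rendering of the idea, not real data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 200)  # 0 = machinelike, 1 = fully human
# Affinity rises with likeness, with a sharp dip near (but short of)
# full human likeness -- the "valley".
affinity = likeness - 1.5 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Uncanny valley (illustrative)")
plt.show()
```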

Ivan Ivanovich

Ivan Ivanovich, a Soviet space program test dummy, falls well within the uncanny valley. Photo Credit: Smithsonian, edited.

Enter Ivan Ivanovich, one of the top five most upsetting things I’ve ever seen. He falls well within the uncanny valley, down to having eyelashes. Ivan is a mannequin from the Soviet space program used to test Vostok spacecraft systems, including the ejection system, and he carried experiments inside him on two test spaceflights. I imagine a technician walking into a dark room only to turn on the lights and find Ivan sitting there, eyes fixed on him, and being terrified. His uncanny qualities are certainly reasonable: to understand how spaceflight might affect the human body, the test subject must be as close an approximation as possible. When he parachuted to earth, the peasants who found him believed he was a downed American spy plane pilot (Francis Gary Powers, a U-2 pilot, had been shot down and captured shortly before) and attempted to take him prisoner.

But Why

The uncanny valley seems to be a paradox: how is it that we are uncomfortable with more humanlike animations or robots? Many camps have weighed in on this: hypotheses have roots in aesthetics, psychology, and biology.

Having a background in biology, I’m inclined toward the evolutionary explanation of pathogen avoidance. Humans are primed to steer clear of anything that we associate with disease. It’s no accident that we are repelled by the sight and smell of vomit; rather, it’s the product of natural selection. Somewhere in our evolutionary history, some of our ancestors were exposed to vomit; those who kept their distance were more likely to avoid contracting a (possibly deadly) infection, passing the trait on to their progeny, while those who were not repelled were less lucky. To that end, when we see something in a person that doesn’t look right, that resembles a sign of infectious disease, we recoil. I believe the uncanny valley represents our recoiling from figures that are just human enough to trick our instincts into believing they’re real, but diseased. (As an aside, this evolutionary argument is not a justification for shunning or ignoring the humanity of those who have, or appear to have, an illness.) Further, that discomfort with figures in the uncanny valley is automatic lends credibility to its being an instinctual rather than cerebral reaction, in much the same way that pulling one’s hand away from something hot requires no thought about the temperature or the consequences of touching the object. Tybur and Lieberman wrote an excellent piece examining the function of disgust.

Where Are We Headed

What does the visual uncanny valley portend? Its very existence raises the question of whether it is a manifestation of a larger phenomenon. In other words, does the uncanny valley extend beyond the visual realm and into the cognitive? Would a computer’s “thoughts” be off-putting to us if they were a close but not perfect analog of human cognition? If so, this could be a major stumbling block to widespread AI adoption. After all, a robot can be designed away from human appearance, but insofar as AI aims to mimic human thought, there may be no way around it. All of this might be moot, however: it seems a younger generation of digital natives is less bothered by the uncanny valley. Perhaps bridging the uncanny valley is simply a matter of familiarity.

© Peter Roehrich, 2017