Learning AI From Bacteria

What if Earth’s simplest organisms were also the smartest? What if we could learn new computational methods from them? Biomimicry is the adoption of a biological innovation to solve a human problem; neural networks are an example of this as applied to artificial intelligence. Bacteria, for their part, are keystone organisms: they can do virtually anything. As a trained biologist with a deep interest in tech innovation, this intersection really excites me! I’m launching a literature review whereby I look at research with bacteria that could inform the way we pursue artificial intelligence.

Our Tiny Cousins

Micrograph of E. coli bacteria. E. coli is a commonly used model organism in biology research. Photo credit: NIH.

Bacteria are the simplest organisms on the planet. They form the lowest branch on the shrub of life. Ancestral bacterial cells probably were not the first life on the planet, but they are surely an early lineage. They are single-celled and lack organelles (complex internal structures), yet they are ubiquitous: found at the bottom of the oceans, in the hottest thermal pools, in the coldest corners of Antarctica, inside nuclear reactors, and more.

Uniquely Suited

Bacteria have special powers that make them well suited to this type of study. There is tremendous diversity of species, meaning a variety of genes to work with. They replicate quickly. Many species can trade genes and acquire genes from their environment (packed into circular molecules called plasmids), and these capabilities can be used to add genes artificially. For their many properties, bacteria are already used industrially (e.g.: consuming wastes) and medically (e.g.: detecting carcinogens with the Ames test).

My Plan

This series of posts will explore current research in biomimicry and bacteriology as they relate to computers and AI. Posts will go up intermittently, about once every two weeks.

© Peter Roehrich, 2017

Travels Through the Uncanny Valley

Have you ever seen a robot or animation that was pretty lifelike, but not quite there? Did it frighten or repulse you? If so, you found yourself in the uncanny valley.

The uncanny valley describes the area of robot or animation realness that is uncomfortable for many people. Photo: Mathur and Reichling.

The uncanny valley describes the phenomenon whereby we seem to dislike humanoids, either rendered or three dimensional, that are close to lifelike but fall just short. Our affinity for these objects follows a curve. When an object has no humanlike traits, it fails to charm us. As humanlike attributes are added to the device, perhaps cameras and a speaker in the shape of eyes and a mouth, we are endeared by it. Approximating human form too closely, however, becomes a little much, such that we are not merely uninterested but actively put off by the machine. This is the uncanny valley. Leaping the valley to fully lifelike features (as in an actual person), we are again attracted. Mathur and Reichling recently tested this empirically.
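To picture that curve, here is a minimal sketch in Python. The shape is purely illustrative (it is not fit to Mathur and Reichling’s data): affinity rises with human likeness, plunges just short of lifelike, then recovers at full realism.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical affinity curve: a steady rise with human likeness,
# minus a sharp dip ("the valley") centered just short of lifelike.
likeness = np.linspace(0.0, 1.0, 400)
affinity = likeness - 1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("human likeness (0 = mechanical, 1 = real person)")
plt.ylabel("affinity")
plt.title("The uncanny valley (illustrative sketch)")
plt.show()
```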

Ivan Ivanovich

Ivan Ivanovich, a Soviet space program test dummy, falls well within the uncanny valley. Photo Credit: Smithsonian, edited.

Enter Ivan Ivanovich, one of the top five most upsetting things I’ve ever seen. He falls well within the uncanny valley, down to having eyelashes. Ivan is a mannequin from the Soviet space program used to test Vostok spacecraft systems, including the ejection system, and he carried experiments inside him on two test spaceflights. I imagine a technician walking into a dark room only to turn on the lights and find Ivan sitting there, eyes fixed on him, terrified. His uncanny qualities are certainly reasonable: to understand how spaceflight might affect the human body, the test subject must be as close an approximation as possible. When he parachuted to earth, the peasants who found him believed he was a downed American spy plane pilot (Francis Gary Powers, a U-2 pilot, had been shot down and captured shortly before) and attempted to take him prisoner.

But Why

The uncanny valley seems to be a paradox: how is it that we are uncomfortable with more humanlike animations or robots? Many camps have weighed in on this: hypotheses have roots in aesthetics, psychology, and biology.

Having a background in biology, I’m inclined toward the evolutionary explanation of pathogen avoidance. Humans are primed to steer clear of anything that we associate with disease. It’s no accident that we are repelled by the sight and smell of vomit; rather, it’s the product of natural selection. Somewhere in our evolutionary history, some of our ancestors were exposed to vomit; those who kept their distance were more likely to avoid contracting a (possibly deadly) infection, passing the trait on to their progeny, while those who were not repelled were less lucky. To that end, when we see something in a person that doesn’t look right, that resembles a sign of infectious disease, we recoil. I believe that the uncanny valley represents our recoiling from figures that are just human enough to trick our instincts into believing they’re real, but diseased. (As an aside, this evolutionary argument is not a justification for shunning or ignoring the humanity of those who have, or appear to have, an illness.) Further, that discomfort with figures in the uncanny valley is automatic lends credibility to its being an instinctual rather than cerebral reaction, in much the same way that pulling one’s hand away from something hot requires no thought about the temperature or the consequences of touching the object. Tybur and Lieberman wrote an excellent piece examining the function of disgust.

Where Are We Headed

What does the visual uncanny valley portend? Its very existence raises the question of whether it is a manifestation of a larger phenomenon. In other words, does the uncanny valley extend beyond the visual realm and into the cognitive? Would a computer’s “thoughts” be off-putting to us if they were a close but not perfect analog of human cognition? If so, this could be a major stumbling block to widespread AI adoption. After all, a robot can be designed away from human appearance, but insofar as AI aims to mimic human thought, there may be no way around it. All might be moot, however. It seems a younger generation, the digital natives, is less bothered by the uncanny valley. Perhaps bridging the uncanny valley is all a matter of familiarity.

© Peter Roehrich, 2017

What’s in a Name

Is artificial intelligence the new “all natural”? That’s what TechCrunch’s Devin Coldewey thinks.

In the United States, there are no formalized requirements that a food product must meet to be deemed all natural. It means many things to many different people, especially those marketing foods. Throwing an extra, positive-sounding descriptor on a product is a great tactic for boosting its commercial appeal. Artificial intelligence is much the same; in the absence of an authority, ideas about its meaning abound. Coldewey argues that many, if not most, claims of artificial intelligence are mere puffery.

What is Intelligence

We can debate whether a computer has artificial intelligence, but this raises the larger question of the meaning of intelligence. This article is hardly the place to review the theories behind intelligence; you’d be reading forever. I like defining intelligence as the ability to solve complex problems with creativity by gathering information, developing knowledge, and executing ideas. Researchers posit a number of areas of intelligence; without going into all of the proposed types, examples include linguistic, artistic, and numeric, among many others. This raises the interesting question of whether one can be intelligent if he or she excels in some categories but lags in others. Psychologist Charles Spearman’s research in the early 1900s identified g-factor as an underlying general intelligence, a high-level concept driving performance on discrete measures. G-factor manifests as the correlation in performance across discrete intelligence measures; intelligence in one area suggests intelligence in other areas. As an aside, Spearman, having used tens of intelligence metrics, developed factor analysis, whereby several variables are examined to determine whether they move together, suggesting they may be under the control of some other (perhaps unmeasured) driver.
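To make that concrete, here is a minimal sketch in Python. The data are synthetic and the loadings invented for illustration: one latent “g” drives several simulated test scores, every pairwise correlation comes out positive (Spearman’s “positive manifold”), and a crude one-factor extraction roughly recovers the loadings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 test-takers: one latent "g" drives five test scores,
# each with its own noise. (Synthetic data for illustration only.)
n = 500
g = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])  # how strongly g drives each test
scores = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n, 5))

# All pairwise correlations are positive: doing well on one test
# predicts doing well on the others.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# A crude one-factor extraction: the top eigenvector of the
# correlation matrix plays the role of the g loadings.
eigvals, eigvecs = np.linalg.eigh(corr)           # ascending eigenvalues
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
print(np.round(np.abs(g_loadings), 2))            # roughly recovers `loadings`
```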

We run into a problem when considering artificial intelligence in the context of different forms of intelligence. Computers are clearly capable on a mathematics-ability axis when one considers how numeric intelligence is measured (i.e.: solving math problems); however, they fall short with art (screenplays written by computers are more comedy than drama!). Perhaps we need a method of arriving at a computer’s g-factor, if artificial intelligence can even be described with a g-factor.

Defining Artificial Intelligence

Given the complexity of defining intelligence, what can we say of artificial intelligence? I propose that rather than defining artificial intelligence as binary (a system either has it or it does not), we consider a system as having intelligence on continua along multiple axes.

Under such a paradigm, a computer employed to solve hard numerical problems, such as predicting rocket flight trajectories, might score very highly on numeric ability but poorly on self-awareness. Self-aware robots, likewise, may perform well on inter- and intrapersonal intelligence but poorly on mathematical intelligence. Measuring these systems’ intelligence requires a global review of their skills; maybe this is accomplished by scoring each metric (of however many there turn out to be) and taking an average. Maybe achieving this requires accepting that there are too many facets of artificial intelligence to reduce it to a single value.
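As a toy illustration of that multi-axis idea (the axis names and scores below are invented, not a proposed standard), note how much the single averaged number hides compared with the full profile:

```python
from statistics import mean

# Hypothetical intelligence profile for a numeric-specialist system,
# each axis scored 0-100. Axes and values are invented for illustration.
profile = {
    "numeric": 95,
    "linguistic": 60,
    "artistic": 20,
    "interpersonal": 10,
    "self_awareness": 5,
}

# One (lossy) way to summarize: collapse everything to a single average...
print(f"mean score: {mean(profile.values()):.1f}")

# ...but reporting the whole profile preserves the shape of the system's
# strengths and weaknesses, which the average throws away.
for axis, score in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{axis:>15}: {score}")
```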

This is more than an academic exercise. Where artificial intelligence is of great interest to consumers, researchers, product designers, healthcare, industry, government and military, and more, we must have a uniform definition, scoring system, and vocabulary to communicate it.

© Peter Roehrich, 2017

AI Makes Google Plus Photos Sharper

Google Plus Photos is an excellent service for storing, editing, and sharing pics taken with your phone. Unlimited free storage for compressed files, adequate for most smartphone cameras, along with instant upload and in-app editing and sharing, makes using it a no-brainer. If you use a dSLR or otherwise wish to store super-sized files, you can dip into your free storage or purchase more. (I’ve never noticed quality problems with my photos, and I allow my files to be compressed so as to qualify for free, unlimited storage.)

Google’s announcement that it will use AI to enhance compressed photos, cutting their data needs by 75%, is interesting. It’s easy to go from a crisp photo to a grainy, pixelated image, but it’s hard to go the other way. Yet that’s exactly what Google is doing. Unfortunately it’s not yet available to Google Photos users writ large; however, it is offered to select Google Plus users.

Less Bandwidth

Photos are, at least compared to text, large files requiring more data and time to download. Where a user has a poor connection or a limited data plan, compressing photos makes a lot of sense, as smaller images equate to smaller file sizes. But such an approach sacrifices quality for speed and size.

Downsampling

Downsampling is the process through which a large image is compressed. It works by taking several very small pieces of the image and combining them. Imagine a checkerboard where, in full resolution, each cell is rendered either black or white. In downsampling, several squares are combined to yield fewer, larger blocks of some intermediate shade. Through this process, the file shrinks in size as it is called upon to store fewer pieces of information. The cost is blurred lines and muted colors.
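Here is a minimal sketch of that checkerboard example in Python. Block-averaging is just one common downsampling scheme, not necessarily the one Google’s pipeline uses:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Shrink a grayscale image by averaging factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # trim so blocks divide evenly
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# An 8x8 checkerboard of 0s (black) and 255s (white)...
board = (np.indices((8, 8)).sum(axis=0) % 2) * 255

# ...downsampled 2x: every 2x2 block of alternating cells averages to the
# same intermediate gray, so the pattern (and its information) is lost.
print(downsample(board.astype(float), 2))
```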

Crime dramas on TV may make ‘enhancing’ grainy images look easy, but it’s not. Doing so requires figuring out what the underlying pixels were before downsampling.

Upsampling

In a crime drama, an investigator may ‘enhance’ a pixelated license plate image, for example, with ease to yield crisp numbers. This makes for a great show, but in reality, it’s more likely that the human eye interprets the license plate number from a larger picture. Just as downsampling takes fewer ‘samples’ of an image so as to represent it in fewer pixels, upsampling (interpolation) is the process of going from a low quality image to a higher quality rendering.

Example photo compressed and then enhanced through RAISR. Compression reduces the amount of data necessary to transmit the photo by 75%. Photo by Google.

Humans can (somewhat) follow the lines of the image, block by block, to fill in the missing curves and sharpen colors in the mind. Asking a computer to do so is a taller order.
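For contrast with what Google is doing, here is a sketch of two classical upsampling schemes in Python. These are generic textbook methods, not Google’s: both can only repeat or blend existing pixel values, so neither can reinvent detail lost to downsampling.

```python
import numpy as np

def upsample_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Repeat each pixel factor x factor times: blocky, but faithful to the data."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upsample_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Linearly interpolate between neighboring pixels: smoother, still blurry."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    # Interpolate down each column, then across each row.
    tmp = np.array([np.interp(ys, np.arange(h), img[:, j]) for j in range(w)]).T
    return np.array([np.interp(xs, np.arange(w), tmp[i]) for i in range(h * factor)])

low = np.array([[0.0, 255.0], [255.0, 0.0]])   # a tiny 2x2 "checker"
print(upsample_nearest(low, 2))                # hard blocks, no new detail
print(np.round(upsample_bilinear(low, 2)))     # smooth grays, no new detail
```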

RAISR

Computers lack the human intuition to say that a fuzzy figure is a ‘3’ or an ‘8’ in a grainy picture. But what if computers could be trained to recognize the patterns that result from downsampling various shapes? Then could they backfill the missing detail to sharpen up those compressed pictures? Enter machine learning.

Google’s RAISR (Rapid and Accurate Image Super-Resolution) process. The steps of the process are shown on the top and a RAISR processed image below. Photo by Google.

Google is training its brain to recognize just such patterns so that it can fill in detail missing from compressed images. Its process is RAISR, or Rapid and Accurate Image Super-Resolution. Pairs of images, one high resolution and one low resolution, are used to train Google’s computers. The computers search for a function that will, pixel by pixel, convert the low resolution image back to (or close to) the original high resolution image. After training, when their computers see a low resolution photo, they hash it. In hashing, the pieces of information are combined through a mathematical operation to come up with a compact value, the hash value, that can be compared against the hash values of other, known images computed the same way. From this comparison, Google’s computers ascertain which function is required to convert the particular image (or perhaps piece of an image) back to high resolution.
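Here is a toy sketch of that train-then-hash idea in Python. It is my own simplification, not Google’s code: the hash below quantizes only a patch’s dominant gradient angle (the published RAISR hash also uses gradient strength and coherence), and the training data are random patches standing in for real low/high resolution photo pairs.

```python
import numpy as np

PATCH = 5                 # low-res patch size; each filter predicts one hi-res pixel
rng = np.random.default_rng(1)

def patch_hash(p: np.ndarray, n_buckets: int = 8) -> int:
    """Toy stand-in for RAISR's hash: quantize the dominant gradient angle."""
    gy, gx = np.gradient(p)
    angle = np.arctan2(gy.sum(), gx.sum())            # crude dominant direction
    return int((angle + np.pi) / (2 * np.pi) * n_buckets) % n_buckets

def train(lo_patches, hi_pixels, n_buckets=8):
    """Learn one least-squares filter per bucket: filter @ patch ~ hi-res pixel."""
    filters = {}
    buckets = [patch_hash(p, n_buckets) for p in lo_patches]
    for b in range(n_buckets):
        A = np.array([p.ravel() for p, bb in zip(lo_patches, buckets) if bb == b])
        y = np.array([v for v, bb in zip(hi_pixels, buckets) if bb == b])
        if len(A):
            filters[b], *_ = np.linalg.lstsq(A, y, rcond=None)
    return filters

def enhance_pixel(filters, p):
    """At run time: hash the patch, look up its filter, apply it."""
    f = filters.get(patch_hash(p))
    return float(f @ p.ravel()) if f is not None else float(p.mean())

# Fake training pairs: random patches whose "true" hi-res pixel is the center.
patches = [rng.random((PATCH, PATCH)) for _ in range(2000)]
targets = [p[PATCH // 2, PATCH // 2] for p in patches]
filters = train(patches, targets)
print(enhance_pixel(filters, rng.random((PATCH, PATCH))))
```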

We can imagine a schema where a low resolution image is downloaded to a user’s device and hashed locally on the phone. The device could then send the hash value back to the Google mother ship, retrieve the required formulas, and apply them locally, generating a very high quality picture. Google says the process will be something along these lines, cutting file size by 75%.

The Next Step

What could Google have in mind with this technology? Clearly they are deploying it to allow full resolution Google Photos downloads with a lower data burden. But is there anything else? Perhaps they see it used more universally with Chrome, whereby any picture on the web is compressed, downloaded, and then upsampled, making webpages load faster. Or perhaps they will pair it with their unlimited photo storage option, allowing users to store a ‘pseudo’ high resolution photo that exists in the ether as a compressed file but appears on the screen at full size.

Time will tell.

© Peter Roehrich, 2017