What’s in a Name

Is artificial intelligence the new “all natural”? That’s what TechCrunch’s Devin Coldewey thinks.

In the United States, there are no formalized requirements that a food product must meet to be deemed “all natural.” The phrase means many things to many different people, especially those marketing foods. Throwing an extra, positive-sounding descriptor on a product is a great tactic for boosting its commercial appeal. Artificial intelligence is much the same; in the absence of an authority, ideas about its meaning abound. Coldewey argues that many, if not most, claims of artificial intelligence are mere puffery.

What Is Intelligence?

We can debate whether a computer has artificial intelligence, but this raises the larger question of what intelligence means. This article is hardly the place to review the theories of intelligence; you’d be reading forever. I like defining intelligence as the ability to solve complex problems with creativity by gathering information, developing knowledge, and executing ideas. Researchers posit a number of areas of intelligence; without going into all of the proposed types, examples include linguistic, artistic, and numeric intelligence, among many more. This raises the interesting question of whether one can be intelligent if he or she excels in some categories but lags in others. Psychologist Charles Spearman’s research in the early 1900s identified the g factor as an underlying general intelligence, a high-level construct driving performance on discrete measures. The g factor manifests as the correlation in performance across discrete intelligence measures; intelligence in one area suggests intelligence in other areas. As an aside, Spearman, having used dozens of intelligence metrics, developed factor analysis, whereby several variables are examined to determine whether they move together, and thus are possibly under the control of some other (perhaps unmeasured) driver.
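Spearman’s insight can be sketched in a few lines of code. The simulation below is my own toy example, not his data: four hypothetical test scores all load on one hidden ability, so they correlate positively, and the top eigenvalue of the correlation matrix (a crude one-factor analysis) captures much of the total variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
g = rng.normal(size=n)                        # hidden general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])     # how strongly each test reflects g
noise = rng.normal(scale=0.5, size=(n, 4))    # test-specific variation
scores = np.outer(g, loadings) + noise        # simulated scores on four tests

corr = np.corrcoef(scores, rowvar=False)      # off-diagonals come out positive
eigvals = np.linalg.eigvalsh(corr)
g_share = eigvals[-1] / eigvals.sum()         # variance one factor explains
```

Because every test shares the latent `g`, performance on one predicts performance on the others, which is exactly the correlation pattern Spearman observed.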

We run into a problem when considering artificial intelligence in the context of these different forms of intelligence. Computers are clearly capable on a mathematical axis when one considers how numeric intelligence is measured (i.e., solving math problems); however, they fall short with art (screenplays written by computers are more comedy than drama!). Perhaps we need a method of arriving at a computer’s g factor, if artificial intelligence can even be described with one.

Defining Artificial Intelligence

Given the complexity of defining intelligence, what can we say of artificial intelligence? I propose that rather than defining artificial intelligence as binary, with a system either having it or not, we consider a system as having intelligence on continua along multiple axes.

Under such a paradigm, a computer employed to solve Itô calculus problems, such as predicting rocket flight trajectories, might score very highly on numeric ability but poorly on self-awareness. Self-aware robots, likewise, may perform well on inter- and intrapersonal intelligence but poorly on mathematical intelligence. Measuring these systems’ intelligence requires a global review of their skills; maybe this is accomplished by scoring each metric (of how many, to be determined) and taking an average. Or maybe achieving this requires accepting that there are too many facets of artificial intelligence to reduce it to a single value.
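The proposal can be made concrete with a trivial sketch. The axis names and scores below are hypothetical, purely for illustration: rate a system on each axis from 0 to 1, then either collapse the ratings into one average or keep the full profile.

```python
# Hypothetical per-axis scores for one system (names and values invented).
axes = {"numeric": 0.95, "linguistic": 0.60, "self_awareness": 0.05}

overall = sum(axes.values()) / len(axes)               # single-number summary
profile = sorted(axes.items(), key=lambda kv: -kv[1])  # or keep the whole profile
```

The single number hides exactly the tension described above: a 0.53 average says nothing about the gulf between this system’s numeric ability and its self-awareness.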

This is more than an academic exercise. Because artificial intelligence is of great interest to consumers, researchers, product designers, healthcare, industry, government and the military, and more, we must have a uniform definition, scoring system, and vocabulary with which to communicate it.

© Peter Roehrich, 2017

AI Makes Google Plus Photos Sharper

Google Plus Photos is an excellent service for storing, editing, and sharing photos taken with your phone. Unlimited free storage for compressed files, adequate for most smartphone cameras, along with instant upload and in-app editing and sharing, makes using it a no-brainer. If you use a DSLR or otherwise wish to store super-sized files, you can dip into your free storage or purchase more. (I’ve never noticed quality problems with my photos, and I allow my files to be compressed so as to qualify for free, unlimited storage.)

Google’s announcement that it will use AI to cut the data needed for a photo by 75%, then restore its sharpness, is interesting. It’s easy to go from a crisp photo to a grainy, pixelated image, but it’s hard to go the other way. Yet that’s exactly what Google is doing. Unfortunately, the feature is not yet available to Google Photos users writ large; for now it is offered to select Google Plus users.

Less Bandwidth

Photos are, at least compared to text, large files requiring more data and time to download. Where a user has a poor connection or a limited data plan, compressing photos makes a lot of sense, as smaller images equate to smaller file sizes. But such an approach sacrifices quality for speed and size.


Downsampling is the process through which a large image is compressed. It works by taking several very small pieces of the image and combining them. Imagine a checkerboard where, at full resolution, each cell is rendered either black or white. In downsampling, several squares are combined to yield fewer, larger blocks of some intermediate shade. Through this process, the file shrinks in size as it is called upon to store fewer pieces of information. The cost is blurred lines and muted colors.
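The checkerboard example can be sketched directly. This is my illustration of block-average downsampling, not Google’s pipeline: every 2x2 patch collapses into one pixel of the intermediate shade.

```python
import numpy as np

def downsample(img: np.ndarray, block: int = 2) -> np.ndarray:
    """Average each block-by-block patch into a single pixel."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# A 4x4 black-and-white checkerboard (0s and 1s)...
checker = np.indices((4, 4)).sum(axis=0) % 2
# ...downsamples to uniform gray: every 2x2 patch averages to 0.5.
small = downsample(checker, block=2)
```

The 16 stored values become 4, and the crisp black/white boundaries vanish into a uniform 0.5, which is exactly the blurring described above.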

Crime dramas on TV may make ‘enhancing’ grainy images look easy, but it’s not. Doing so requires inferring what the original pixels were before downsampling.


In a crime drama, an investigator may ‘enhance’ a pixelated license plate image with ease, for example, to yield crisp numbers. This makes for a great show, but in reality, it’s more likely that the human eye interprets the license plate number from the larger picture. Just as downsampling takes fewer ‘samples’ of an image so as to represent it with fewer pixels, upsampling (interpolation) is the process of going from a low-quality image to a higher-quality rendering.

Example photo compressed and then enhanced through RAISR. Compression reduces the amount of data necessary to transmit the photo by 75%. Photo by Google.

Humans can (somewhat) follow the lines of the image, block by block, to fill in the missing curves and sharpen colors in the mind. Asking a computer to do so is a taller order.
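The simplest form of upsampling makes the taller order visible. A sketch (mine, not Google’s method): nearest-neighbor interpolation just repeats each pixel, enlarging the image without inventing any of the missing detail; smarter interpolation, and ultimately machine learning, aim to do better.

```python
import numpy as np

def upsample_nearest(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Blow each pixel up into a factor-by-factor block; no new detail appears."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

tiny = np.array([[1, 2],
                 [3, 4]])
big = upsample_nearest(tiny)  # 4x4, but still only the same four values
```

The result is larger but just as blocky, which is why naive upsampling can’t recover a license plate number.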


Computers lack the human intuition to say that a fuzzy figure is a ‘3’ or an ‘8’ in a grainy picture. But what if computers could be trained to recognize the patterns that result from downsampling various shapes? Then could they backfill the missing detail to sharpen up those compressed pictures? Enter machine learning.

Google’s RAISR (Rapid and Accurate Image Super-Resolution) process. The steps of the process are shown on the top and a RAISR processed image below. Photo by Google.

Google is training its computers to recognize just such patterns so that they can fill in detail missing from compressed images. Its process is RAISR, or Rapid and Accurate Image Super-Resolution. Pairs of images, one high resolution and one low resolution, are used to train Google’s computers. The computers search for a function that will, pixel by pixel, convert the low-resolution image back to (or close to) the original high-resolution image. After training, when the computers see a low-resolution photo, they hash it. In hashing, pieces of information are combined through a mathematical operation to come up with a value, the hash value, that can be compared against the hash values of known images computed the same way. From this comparison, Google’s computers ascertain which function is required to convert the particular image (or perhaps piece of an image) back to high resolution.
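The hash-then-choose-a-function idea can be sketched with a toy version. This is my simplification, not Google’s code: real RAISR hashes each patch’s gradient angle, strength, and coherence and learns 2-D filters, while here patches are bucketed by a crude hash and one linear filter per bucket is fit by least squares to map each low-resolution patch to its high-resolution target pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_patch(patch: np.ndarray) -> int:
    # Crude stand-in for RAISR's gradient-based hash: bucket on the sign
    # of the horizontal gradient across the patch.
    return int(patch[:, 1:].sum() >= patch[:, :-1].sum())

def train(lo_patches, hi_pixels):
    """Fit one least-squares linear filter per hash bucket."""
    filters = {}
    for b in (0, 1):
        idx = [i for i, p in enumerate(lo_patches) if hash_patch(p) == b]
        A = np.array([lo_patches[i].ravel() for i in idx])
        y = np.array([hi_pixels[i] for i in idx])
        filters[b], *_ = np.linalg.lstsq(A, y, rcond=None)
    return filters

def restore(patch, filters):
    # Hash the incoming patch, look up its bucket's filter, and apply it.
    return patch.ravel() @ filters[hash_patch(patch)]

# Toy training data: the "high-res" target is a known linear function of the
# patch (here, its mean), which least squares can recover exactly.
patches = [rng.random((3, 3)) for _ in range(200)]
targets = [p.mean() for p in patches]
filters = train(patches, targets)
```

At inference time, `restore` only hashes and applies a small filter, which is what makes the "rapid" in RAISR plausible: the expensive search for functions happens once, during training.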

We can imagine a scheme where a low-resolution image is downloaded to a user’s device and hashed locally on the phone. The device could then send the hash value back to the Google mother ship, retrieve the required formulas, and apply them locally, generating a very high-quality picture. Google says the process will be something along these lines, cutting file size by 75%.

The Next Step

What could Google have in mind with this technology? Clearly they are deploying it to allow full-resolution Google Photos image downloads with a lower data burden. But is there anything else? Perhaps they see it used more universally in Chrome, whereby any picture on the web is compressed, downloaded, and then upsampled, making webpages load faster. Or perhaps they will pair it with their unlimited photo storage option, allowing users to store a ‘pseudo’ high-resolution photo that exists in the ether as a compressed file but appears on the screen at full size.

Time will tell.

© Peter Roehrich, 2017

Alexa and Assistant Square Off

In the past month or so I came to own both Amazon Alexa and Google Assistant devices. I purchased a 7-inch Fire tablet on a Black Friday special to run a home automation system. Then I purchased both a Google Home and a Pixel phone. The Pixel and Home arrived this week, meaning I now have both Alexa and Assistant at my disposal; of course I immediately wanted to do a side-by-side test.

I have tried to trick Google Now before. I mentioned in an earlier post that Google wasn’t stumped when I asked it to show me pictures of Blue Helmets. So, how would Alexa and Assistant do when faced with the same questions?

Experiment Design 

Since I cannot figure out how to wake Alexa on the Fire tablet by voice (whether this is user error or a shortcoming of the system is beyond me, but it is a problem with the device either way), I decided to use Assistant on my Pixel, so that I could summon both by touch, making the experiment as equal as possible.

I set up the test by drafting a set of interactions (commands and questions) and expected outcomes, grouped by theme. I included some informational questions as well as commands to perform tasks. Also, there are some functions (e.g., email) that I wanted to test but couldn’t, because I don’t have the Fire logged into my Google account.

I administered the test by posing each interaction, one by one, alternating between systems, using the same wording for both.


Both systems could satisfy the basic informational interactions, such as providing the date and headlines. They also could perform the task interactions, like setting alarms and timers. They aced the conversational test, where I first asked who the queen of England is, followed by asking her age. Google Assistant did a little better on many of the challenges; for example, while both systems could set a ten-minute timer, Assistant accurately gave the timer a label to boot.

Google Assistant understands the figurative reference to UN troops.

Amazon Alexa uses Bing to power its searches rather than Google, as Assistant does. Assistant had no problem with translations, and it handled transportation questions with ease, even automatically launching Maps. Assistant got the Blue Helmets question, no sweat, but Alexa was flummoxed.

Amazon Alexa tripped up when it came to describing Blue Helmets as UN peacekeeping forces.

Alexa has Amazon’s product catalog behind it, and immediately followed up my battery challenge by asking if I wanted to place an order (when I said “no” to the first product offering, it moved on to another battery listing). Google simply pulled up a link to Amazon.


Of the 25 interactions I drafted, I was able to pose 20 in the test. Alexa got 12 correct, or 60%, a D-. Assistant got 18 correct, earning 90%, an A-. These findings are consistent with other comparisons I’ve read, even down to the ratio of the scores (Assistant’s score was 50% higher than Alexa’s).
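For transparency, the scoring arithmetic behind those grades is just a few lines:

```python
total = 20                # interactions actually posed (of 25 drafted)
alexa, assistant = 12, 18  # correct responses

alexa_pct = 100 * alexa / total           # 60.0 -> D-
assistant_pct = 100 * assistant / total   # 90.0 -> A-
lead = 100 * (assistant - alexa) / alexa  # Assistant scored 50% higher
```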


Assistant has the full force of Google behind it to respond to challenges, a decided advantage. Alexa excelled when it came to placing an order for a product, to be expected considering the company’s core competencies. Also, although conversational capabilities were only recently pushed to Alexa, it did well. As its third-party skills weren’t tested, I cannot speak to how well it leverages them.

All told, Google’s Assistant is superior both in what it can do out of the box and in how it does it.

In the interest of full disclosure, I own both Alexa and Assistant devices.

© Peter Roehrich, 2017

Which is better: Alexa or Assistant?

2016 saw personal assistant technology mushroom. The big developments were natural language capability and device innovation. Although market shares are closely guarded company secrets, the dominant players are finishing the year neck and neck in capability.

Natural Language Grows

Facebook introduced chatbot functionality to Messenger in April 2016, opening it to developers. In doing so, Facebook exposed a wide audience to chatbots. While Google’s and Amazon’s assistants have employed natural language processing to aid users for years, they had not offered conversational support: asking a question of those assistants did not ‘prime’ them for subsequent, related questions. Chatbots brought conversational interaction to the consumer writ large.

Google deployed similar conversational interaction with Google Assistant, announced in May 2016. Whether this release was in direct response to Facebook’s bots (one month prior) is mere speculation. Alexa got very limited conversational functionality in late December 2016, oddly, after the holiday shopping season.

Device Offerings Expand

The natural, conversational interaction with assistants is impressive, but its penetration depends on device availability.

At the start of 2016, consumers had one choice in personal assistant devices: the Amazon Echo. True, there were Google Now and Siri (and Cortana in a very distant fourth place) on phones, but I’m speaking of true assistants capable of performing tasks beyond simply running a search. Amazon made public its Dot and Tap devices in March 2016, the same month that Facebook put chatbots into Messenger. This lowered the price of access to Alexa, helping Amazon secure more users.

The devices supporting true smart assistants reached a tipping point when Google Assistant went live. Google Assistant is something of the smarter child of Google Now, and as such, Google seems to want it everywhere. To that end, Google made it native in its Pixel smartphone, launched in October 2016. Taking a page out of Microsoft’s book (i.e., Cortana is available for download by many smartphone users), Google also made Assistant available through the Play Store. However, rather than offering Assistant as a standalone app, Google packaged it in the (not too popular) messaging app Allo, released in September 2016. We can infer two things from this integration. First, Google reasoned that Assistant would be an irresistible lure to pull users to the new messenger. Second, and more important, it tells us that Google sees, and wants us to see, Assistant as a true virtual being to be communicated with just like a person; this is evidenced by the fact that Assistant can be summoned in text conversations with others. Google’s jab to Amazon’s cross completely eliminates the cost of acquiring Assistant; it is reasonable to assume that those interested in Alexa or Assistant already own capable smartphones, whether Android or iPhone. Amazon responded by rolling out Alexa to its other tablet devices.

Google’s next move in the sparring match was to roll out the smart speaker Google Home. Although initially limited in functionality, while boasting the conversational abilities of Google Assistant, Home saw a tidal wave of new services once Google opened it to developers in December 2016.

The Winner

Having not tested either Amazon’s Alexa or Google’s Assistant, I cannot speak to superiority. I recently asked Google Now “show me pictures of blue helmets” and to my delight I was shown pictures of UN troops; I hope that such perception of nuance will carry over to Assistant. In a December 30, 2016 test by Jay McGregor published in Forbes, Google Home beat out Amazon Echo, answering 50% of questions correctly vs. 35% for Alexa.

The Future

Siri may have run its course. While we will forever be indebted to her for kick-starting the development of virtual assistants, she seems to have failed to keep up with Amazon and Google, which have tremendous advantages over Apple in this regard. Apple is principally a design firm, while Amazon and Google are information companies. (Note that Amazon is often seen as a retailer, and it does sell directly to consumers, but it can make those sales only to the extent that it understands consumers’ queries and then presents them with the products that will best satisfy the demand driving the search.) Similarly, Microsoft is primarily a software company, and we will no doubt see Cortana languish (though maybe it will get a boost from being integrated into new versions of Windows). It is too soon to speculate on how the competition between Google Assistant and Alexa will play out, whether they will continue to compete toe-to-toe at all, or whether they will move to occupy different spaces. That said, we can be reasonably sure that the technology will become increasingly ubiquitous.

In the interest of full disclosure, I just ordered a Google Home device.

© Peter Roehrich, 2016.