18 Jun

What happens when you train an artificial neural network to recognise images, then turn the system around to start with random noise and evolve an image representing what it “sees” when you ask it about things that appear in pictures, which could be anything from a banana to a landscape? Apparently, you discover that the software is tripping its nonexistent tits off and hallucinating like mad.


Yes, this is a multi-eyed knight with a Rottweiler saddle and llama hand puppet, under a swirling sky full of snails, eyes and leering Breugelesque cow-dogs.


Google obviously have a lot of time and money invested in technologies for image search and classification. The digital learning systems responsible for these images (some of which have been going viral recently, 99% of the time without any context whatsoever apart from LOL weirdness) analyse examples of what the programmers want them to learn. The whole process and concept is much more interesting, and much more profound in its implications, than its viral LOLness at first suggests. In the cases shown here, the ANNs were trained on a lot of animal images, with the strange side effect that they see animals everywhere: in the clouds, in the trees, in a horse rider’s saddle. Like the classic bad tripper or paranoid schizophrenic, they see watchful eyes everywhere. In humans it’s called pareidolia: false pattern recognition, seeing connections and structure where none actually exist. The classic example is seeing pictures in clouds.

The networks sometimes harbour unexpected, but with hindsight strangely logical, misconceptions, such as taking it as normal that dumbbells can’t exist without a beefy arm attached to them, because most photos of dumbbells also feature weightlifters. Horizons get pagodas and towers because that’s how people tend to picturesquely frame them in photographs. Trees are apparently hard to distinguish from buildings and therefore tend to get mixed up with them, and so on.
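Stripped of the actual neural network, the “turn it around and start from noise” trick is just gradient ascent: nudge random pixels, step by step, in whatever direction makes some internal feature detector fire harder. Here is a minimal numpy sketch of that loop — everything in it is illustrative, with a fixed toy “feature” (a diagonal stripe) standing in for a trained layer’s activations, not anything resembling Google’s real code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained layer: a fixed 8x8 "feature" (a diagonal
# stripe). Its correlation with the image plays the role that a real
# network's layer activations play in DeepDream.
feature = np.eye(8)

def activation(img):
    # Squared correlation, so gradient ascent amplifies the feature
    # whichever sign the initial noise happens to lean towards.
    return np.sum(img * feature) ** 2

def grad(img):
    # Analytic gradient of the activation w.r.t. each pixel.
    return 2.0 * np.sum(img * feature) * feature

# Start from random noise, as the article describes...
img = rng.normal(0.0, 0.1, size=(8, 8))
before = activation(img)

# ...then iterate: step the pixels uphill on the activation.
for _ in range(100):
    g = grad(img)
    img += 0.01 * g / (np.abs(g).mean() + 1e-8)  # normalised ascent step
    img = np.clip(img, -1.0, 1.0)                # keep pixels in range

after = activation(img)
```

After a hundred iterations the noise has organised itself into the diagonal stripe the “detector” wants to see — the toy equivalent of dog faces blooming out of static. The real systems do the same thing with millions of learned features and the gradient computed by backpropagation through the whole network.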

Edvard Munch’s Scream is even more disturbing with the addition of AI-paranoia sky-eyes, and the screamer himself gets a daft golden retriever-beagle makeover:



A picture of a tree in a field becomes populated with ghostly tractor-like structures, with a foreground made of jumbled belvederes and spires, bird head minarets on the horizon and a sky full of vague bicycle things, while the tree itself sprouts complacent dog heads.

[Images: red-tree-orig, red-tree-small-long-unsmoothed]

Some more strange animals devised by the networks:


Given the internet lumpencommentariat’s worship of funny (or rather, “funny”) cats, it’s strange that everything seems to gravitate towards looking canine. Possibly this is just another bias created by the pictures that the systems learned from. Or maybe deep down they’re really just dog artificial neural networks and not cat artificial neural networks. They’re also capable of generating some very trippy cityscapes and landscapes.

[Images: Iterative_Places205-GoogLeNet_6, Iterative_Places205-GoogLeNet_18]

Some of these images are better and more thought-provoking than half of what contemporary artists are doing, even at this rudimentary and experimental stage. They’re definitely more worthy of attention than 90% of what’s sold on the commercial art market. Screw Zombie Abstraction, I want to frame a Chinese acid landscape painting by an artificial intelligence. Bad artists are always wittering on about raising questions about this and interrogating that, but these images and the research behind them really do raise all manner of questions: if we could get access to the raw data of our own cognition and sensory apparatus, would it look like this? And if so, is this why the images seem so familiar to so many people, so reminiscent of dreams or hallucinations or drug experiences? Is existence itself perhaps just the universe running its mindless calculations and doing the infinite, eternal equivalent of iterating random noise into pig snails and dopey-looking dog faces? Perhaps most importantly of all, can I get one of these things for myself and make loads of money by secretly autogenerating art works that I later claim as a product of my unique artistic genius? Because somebody’s certainly going to do that very soon, if they haven’t already.

Read the original article at Google’s research blog, and see more ANN images on Michael Tyka’s page, which is the source of all the illustrations for this post.


  1. Eric Wayne 21/06/2015 at 5:10 AM #

    I wouldn’t have given these much attention at all if I didn’t know they were made by a computer. I’d just have thought they looked like more Photoshop filters. Once I started to look carefully at them, what intrigued me is the formations the algorithm would generate. I’d like them more fully realized, so they didn’t look so much like tragically over-sharpened jpegs. I think you are right that the process could be used to deliberately create artworks. I’d like to play around with it as well, even if just to generate starting points and ideas. However, it might get old as quickly as fractals, once the effects become familiar.

    • Alistair 21/06/2015 at 11:42 AM #

      Me neither. Undoubtedly they’re using similar processes and algorithms to the ones Photoshop uses to find edges, enhance detail, and so on. I agree that nobody in their right mind would want a school of art where everything has eyes and dog faces on it.

  2. Alistair 22/06/2015 at 11:03 AM #

    Reblogged this on Alistair Gentry.

