Tag Archives: artificial intelligence


22 Jun

An interesting summary in MIT Technology Review of some recent research on creativity in historical art, creativity here being taken to mean novelty in imagery or content that had an influence on other– by definition less creative and more derivative– works by the same artist or by others. A machine vision algorithm analysed “classemes”: visual concepts which “can be low-level features such as color, texture, and so on, simple objects such as a house, a church or a haystack and much higher-level features such as walking, a dead body, and so on.”

Intriguingly, the algorithm is not restricted to figurative art and it can cope with abstraction and pop art, although at this stage they seem to be looking at painting. The software critic also tends to broadly agree with human assessments of the most influential works and artists even though it was not primed or biased in any way; all it did was look at which artists were being creative and which were being derivative in their imagery. Possibly another point for the “yes, good and bad art is quantifiable” side.
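The idea of scoring creativity as “novel at the time, influential afterwards” can be sketched in code. This is emphatically not the paper’s actual algorithm (which works over a network of real classeme features); it is a minimal toy illustration, with made-up feature vectors, of how novelty relative to earlier works and influence on later works could combine into a single score:

```python
# Toy sketch, NOT the paper's algorithm: a work scores highly if it is
# dissimilar to earlier works (novelty) and later works resemble it
# (influence). Feature vectors here stand in for "classemes".
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def creativity_scores(works):
    """works: list of (year, feature_vector), assumed sorted by year."""
    scores = []
    for i, (_, feats) in enumerate(works):
        earlier = [cosine(feats, f) for _, f in works[:i]]
        later = [cosine(feats, f) for _, f in works[i + 1:]]
        novelty = 1 - sum(earlier) / len(earlier) if earlier else 1.0
        influence = sum(later) / len(later) if later else 0.0
        # Works with no successors yet can only be judged on novelty.
        scores.append(novelty * influence if later else novelty)
    return scores

# Three invented "paintings": the second introduces a new direction
# that the third then imitates, so the second should score highest.
works = [(1500, [1.0, 0.0, 0.0]),
         (1510, [0.0, 1.0, 0.0]),
         (1520, [0.0, 1.0, 0.1])]
scores = creativity_scores(works)
```

In this toy run, the derivative third work inherits a low novelty score precisely because it resembles its predecessor, which matches the source’s definition of “derivative”.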

By the way… I must point out that despite MIT supposedly having some of the best logical minds on the planet, nobody seems to have noticed that MIT stands for Massachusetts Institute of Technology, therefore this publication’s name is Massachusetts Institute of Technology Technology Review.

Read the original scientific paper here, and Massachusetts Institute of Technology Technology Review’s review here.

(Previously: Google AI’s hallucinations)


18 Jun

What happens when you train an artificial neural network to recognise images, then turn the system around to start with random noise and evolve an image representing what it “sees” when you ask it about things that appear in pictures, which could be anything from a banana to a landscape? Apparently, you discover that the software is tripping its nonexistent tits off and hallucinating like mad.
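The “turn the system around” trick can be made concrete. Google’s actual code isn’t shown in the post, so the following is only a minimal sketch of the general principle, using a toy one-layer linear “classifier” with frozen, invented weights: instead of adjusting the weights to fit an image, you freeze the weights and adjust the image by gradient ascent so the score for a chosen class keeps climbing, starting from random noise:

```python
# Toy sketch of activation maximisation, NOT Google's implementation:
# freeze a classifier's weights, then nudge a noise "image" so the
# class score rises. For score = w . x, the gradient w.r.t. x is just w.
import random

random.seed(0)
N = 8  # a tiny "image" of 8 pixels

# Frozen weights of an invented linear classifier for one class.
w = [1.0, -1.0, 0.5, 0.0, 2.0, -0.5, 1.5, -2.0]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

x = [random.uniform(-0.1, 0.1) for _ in range(N)]  # start from noise
before = score(x)

lr = 0.1
for _ in range(50):
    # Gradient ascent: move each pixel in the direction that raises
    # the class score. A real network would backpropagate through
    # many layers; here the gradient is simply w.
    x = [xi + lr * wi for wi, xi in zip(w, x)]

after = score(x)
```

A real deep network run this way produces the hallucinatory imagery below, because the pixels drift toward whatever pattern most excites the chosen class.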


Yes, this is a multi-eyed knight with a Rottweiler saddle and llama hand puppet, under a swirling sky full of snails, eyes and leering Breugelesque cow-dogs.


Google obviously have a lot of time and money invested in technologies for image search and classification. The digital learning systems responsible for these images– some of which have been going viral recently, 99% of the time without any context whatsoever apart from LOL weirdness– analyse examples of what the programmers want them to learn. The whole process and concept is much more interesting and much more profound in its implications than its viral LOLness at first suggests.

In the cases shown here, the ANNs were trained on a lot of animal images, with the strange side effect that they see animals everywhere: in the clouds, in the trees, in a horse rider’s saddle. Like the classic bad tripper or paranoid schizophrenic, they see watchful eyes everywhere. In humans it’s called pareidolia: false pattern recognition, seeing connections and structure where none actually exist. The classic example is seeing pictures in clouds. The networks sometimes harbour unexpected– but with hindsight strangely logical– misconceptions, such as taking it as normal that dumbbells can’t exist without a beefy arm attached to them, because most photos of dumbbells also feature weightlifters. Horizons get pagodas and towers because that’s how people tend to picturesquely frame them in photographs. Trees are apparently hard to distinguish from buildings and therefore tend to get mixed up with them, and so on.

Edvard Munch’s Scream is even more disturbing with the addition of AI-paranoia sky-eyes, and the screamer himself gets a daft golden retriever-beagle makeover:


