Artificial intelligence is steadily becoming the norm, and we're now teaching machines everything from scratch, one step at a time. Google has been working on several AI projects and has today decided to share the research behind one that was released to the public earlier this week. Yeah, we're talking about the AutoDraw tool, which lets you scribble and replaces your drawings with corresponding clipart pieces.
Well, you'd be surprised to know that in making this drawing tool available to you, Google used sketches drawn by you, or by other real people like you. These drawings, 5.5 million of them, were fed into a recurrent neural network (RNN) to teach the AI the basics of drawing doodles and let it improve on them. Still can't guess which app collected so many of your drawings?
The application is called Quick, Draw! and Google released it as an experiment which allows you to sketch simple things — such as a snake or a boat — and then the AI tries to guess what you’ve drawn. To put it in simple words, you were playing a game of Pictionary with Google’s AI software and it was learning each time you played a new round.
Now, the results collected from that experiment have given birth to a new program called Sketch-RNN, and it can match any toddler in sketching. In their recently published paper, A Neural Representation of Sketch Drawings, Google Brain researchers David Ha and Douglas Eck describe the whole process behind the AutoDraw tool. And they're extremely proud of their creation.
Let's look at how they managed to teach a neural network to draw doodles, because this development is definitely mind-boggling. It will help you understand how much time and work it takes to train an AI: you feed in new data sets and tweak its behavior based on its responses. The system also continues to learn as it goes, until it can finally recognize that you're trying to draw a lightning streak simply by seeing a zig-zag line on the canvas.
First, the two researchers sorted the drawings collected from Quick, Draw! into 75 different categories of items, containing everything from owls, human faces, and cats to gardens and axes. Each category then included 70,000 drawing samples to help the AI learn the different styles in which people draw the same doodle, using a vector method. This means the AI software learned to draw objects by studying the pencil strokes you drew.
Oh yeah, it's true! Each time you fired up your browser and played the Quick, Draw! experimental Pictionary game, the system wasn't only recording your final image. The AI also read every pencil stroke you used to draw the object in question; everything from the first stroke to the last one drawn within the time limit was saved. This let the AI learn how a human perceives an image before drawing it, and then doodle it the same way. David Ha describes it in the official blog post as follows:
We train our model on a dataset of hand-drawn sketches, each represented as a sequence of motor actions controlling a pen: which direction to move, when to lift the pen up, and when to stop drawing.
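The pen actions in that quote can be sketched in code. Below is a minimal, hypothetical illustration (not Google's implementation) of how a doodle might be turned into such a sequence, loosely following the "stroke-5"-style encoding of (dx, dy, pen down, pen up, end of sketch) that the paper describes:

```python
# Hedged sketch: encode a doodle as a sequence of pen actions.
# Each token is (dx, dy, pen_down, pen_up, end_of_sketch).

def encode_sketch(strokes):
    """strokes: list of strokes, each a list of (x, y) points."""
    seq = []
    prev = (0, 0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            dx, dy = x - prev[0], y - prev[1]
            prev = (x, y)
            # the pen stays down until the last point of a stroke,
            # then lifts up before the next stroke begins
            last = (i == len(stroke) - 1)
            seq.append((dx, dy, 0 if last else 1, 1 if last else 0, 0))
    # a final token marks the end of the whole drawing
    seq.append((0, 0, 0, 0, 1))
    return seq

# A zig-zag "lightning streak" drawn as a single stroke:
lightning = [[(0, 0), (5, 8), (2, 8), (7, 16)]]
print(encode_sketch(lightning))
```

The model never sees pixels at all in this view, only relative pen movements and pen-state flags, which is exactly why it can learn the order in which humans lay down strokes.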
Now, the reservation that strikes your mind at this point is probably that the AI could simply be copying the images submitted by humans. But that's not the case at all. The AI is neither copying the complete doodles made by you nor piecing together bits of them to make the sketch look like what it should be. The AI has actually learned to draw the objects based on the concept of each of them. It now knows what a cat looks like and can draw one, line by line, on its own. The blog post says,
It is important to emphasize that the reconstructed cat sketches are not copies of the input sketches, but are instead new sketches of cats with similar characteristics as the inputs.
But if you doubt the capabilities of Google's Sketch-RNN system, the researchers have a challenging human input for it. What do you think the output sketch would be if you let your imagination go a tad crazy and drew a three-eyed cat? The AI, however, is aware of the concept that cats, like other mammals, usually have two eyes, so it ignores the third eye you sketched. It still spews out a doodle with two eyes, because the system was trained on images where cats had two eyes.
In the blog post, David Ha mentions that adding noise to a doodle also can't fool the AI sketching system. He adds,
When we feed in a sketch of a three-eyed cat, the model generates a similar looking cat that has two eyes instead, suggesting that our model has learned that cats usually only have two eyes.
Also, Sketch-RNN has even more capabilities in store for you. The AI can take in two different latent vectors (encodings of two separate images) and blend them into unique hybrids. In one demo, it was given two sketches, one of just a pig's head and one of a complete pig, and it generated a smooth sequence of in-between sketches, starting from the face and gradually morphing into the whole pig. Isn't this just awesome? The blog post added,
We want to get a sense of how our model learned to represent pigs, and one way to do this is to interpolate between the two different latent vectors, and visualize each generated sketch from each interpolated latent vector.
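The interpolation itself is simple arithmetic on the latent vectors. Here is a hedged, made-up example: the two 3-dimensional "pig" vectors below are invented stand-ins (a real Sketch-RNN latent code comes from the trained encoder and is much larger), but the blending step works the same way:

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Return `steps` latent vectors blending z_a into z_b linearly."""
    return [(1 - t) * z_a + t * z_b
            for t in np.linspace(0.0, 1.0, steps)]

z_pig_head = np.array([0.9, -0.2, 0.1])   # hypothetical latent code
z_full_pig = np.array([0.1,  0.6, 0.8])   # hypothetical latent code

for z in interpolate(z_pig_head, z_full_pig):
    # in the real system, each z would be fed to the decoder
    # to render one intermediate pig sketch
    print(np.round(z, 2))
```

Decoding each intermediate vector is what produces the sequence of sketches that morphs from the pig's face into the complete pig.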
You can also feed the AI system two completely different images, like a cat and a snake, and get unique hybrid outputs. It's one of those features users will love to experiment with to curb their boredom. Similarly, this research has given you the AutoDraw tool, which can detect what you're about to draw and help you complete it with just a click. The AI system doesn't even need a finished sketch to work from; it can complete doodles you started but never finished.
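That completion feature amounts to warming a decoder up on your partial stroke sequence and then letting it keep sampling pen actions until it decides the drawing is done. The sketch below is purely illustrative: `DummyDecoder` is a made-up stand-in for a trained Sketch-RNN decoder, and the stroke tokens reuse the (dx, dy, pen down, pen up, end of sketch) convention from the paper:

```python
import random

class DummyDecoder:
    """Made-up stand-in for a trained decoder: it emits a few
    random pen movements, then the end-of-sketch token."""
    def __init__(self, max_steps=3):
        self.remaining = max_steps

    def step(self, token):
        self.remaining -= 1
        if self.remaining <= 0:
            return (0, 0, 0, 0, 1)          # end of sketch
        return (random.randint(-5, 5), random.randint(-5, 5), 1, 0, 0)

def complete(partial, decoder):
    """Continue an unfinished doodle until end-of-sketch.
    A real decoder would also condition on every partial stroke."""
    out = list(partial)
    token = partial[-1]
    while True:
        token = decoder.step(token)
        out.append(token)
        if token[4] == 1:                   # end-of-sketch flag set
            break
    return out

partial_cat = [(0, 0, 1, 0, 0), (4, 3, 1, 0, 0)]  # a doodle you started
finished = complete(partial_cat, DummyDecoder())
print(len(finished), finished[-1])
```

The key point is that the model itself decides when to stop, via the same end-of-sketch action it learned from millions of human drawings.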
Well, if you're still mulling over the objective of this experiment: it wasn't done just to see what output the AI gives (though it would've been fun simply to watch the AI draw correctly). Instead, the purpose was to teach the machine to draw and to generalize abstract sketches in a manner similar to humans. This technology can now find several applications: it could be useful to advertisers and designers, teach people how to draw, or generate new patterns.