As soon as the internet got its hands on a face depixelizer, a tool that uses state-of-the-art neural network technology to reconstruct super-pixelated photos into high-quality portraits of whoever it thinks is in the picture, some people began testing it with images it was never designed for, resulting in some pretty creepy creations.
And even though people know it doesn't always work as intended, they keep trying it out, because it's exciting to see what other bizarre ideas the AI will decide to make a reality. The results have been very interesting indeed.
Bored Panda invites you to check out some of the results the internet has produced while experimenting with the face depixelizer. While you're there, be sure to upvote and comment on the ones you think are great.
The core of the tool was created by Alex Damian and his colleagues, while Denis Malimonov, a popularizer of machine learning in art, gave it a simple, easy-to-use interface to make it accessible to a wider audience. He posted a tweet about it, and people immediately started engaging with it and showing off their results. We also got in touch with Malimonov for more information.
Malimonov used to be a designer and has recently begun dabbling in programming. Since his target audience consists mostly of people who have never programmed before, he wanted to make sure they could test neural network technologies without running into difficulties.
The tool is based on the study “PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models.” The way the algorithm works is roughly this: the AI takes a pixelated image, then searches through high-quality portraits, pixelating each candidate down to the same low resolution to find the one that most closely matches the original. Once it finds the best match, it applies several filters and adaptations to make it resemble the original even more closely, and voilà, you have a depixelated picture.
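To make the idea concrete, here is a minimal toy sketch of the downscale-and-compare step described above, written in plain NumPy. This is only an illustration of the search principle as the article describes it, not the actual PULSE implementation (which explores the latent space of a generative model rather than looping over a fixed set of portraits); the function names and the use of mean squared error as the similarity measure are assumptions.

```python
import numpy as np

def downscale(img, factor):
    """Pixelate a grayscale image by averaging factor-by-factor blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_match(pixelated, candidates, factor):
    """Return the high-res candidate whose pixelated version is closest
    (lowest mean squared error) to the pixelated input."""
    best, best_err = None, float("inf")
    for cand in candidates:
        err = np.mean((downscale(cand, factor) - pixelated) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best

# Toy demo: three random 64x64 "portraits"; pixelate one down to 8x8
# and check that the search recovers it.
rng = np.random.default_rng(0)
candidates = [rng.random((64, 64)) for _ in range(3)]
pixelated = downscale(candidates[1], 8)
match = best_match(pixelated, candidates, 8)
```

The key point the article makes follows directly from this sketch: many very different high-resolution faces collapse to nearly the same handful of pixels, so the "closest" match can look nothing like the real person.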
“Users immediately began testing the neural network using characters from computer games, and began wondering why the algorithms work so poorly,” explained Malimonov. “Then they began to try on real people's faces, the first of whom was Barack Obama. The result was a dark man of European appearance.”
And, naturally, some discrepancies should have been expected, not just because there is only so much a computer can do with no more than a hundred pixels, but also because the neural network's training data was very limited, as Malimonov explained:
“The reason is very simple: this neural network uses the pre-trained CelebA-HQ model. The model was trained using faces of various celebrities. The majority of faces were of white people, amounting to over 7,000 pictures, while Asian, black, and Indian were around or under 1,000 pictures. As far as I know, there is no such model in which the distribution across races is uniform.”
Now, people have been putting up pictures of practically anything that's effectively a face: Mario, the Creeper, Doom Guy, emojis, memes, anything that technically has a face. And while some resulted in pretty good renditions, others were rather questionable. Given the neural network's limited training sample, oddities were bound to happen.