AI asked to generate images of ‘last selfies ever taken’ produces nightmarish results

Influencers of the Apocalypse! AI is being asked to imagine what the last selfie ever taken on Earth would look like, and it keeps producing nightmarish images.

  • DALL-E AI, developed by OpenAI, is a new system that can generate complete images when fed descriptions in natural language.
  • A TikTok user asked the AI to show what it thinks the last selfie ever taken would look like.
  • This created chilling scenes of bombs falling and catastrophic weather, as well as burning cities and even zombies.
  • Each image shows a man holding a phone in front of his face, and behind him the world comes to an end.

People taking pictures of themselves with melting skin, bloody faces and mutated bodies, standing in front of a burning world, are what the DALL-E AI considers the last selfies taken at the end of time.

DALL-E AI, developed by OpenAI, is a new system that can generate complete images from natural-language descriptions, and TikToker Robot Masters simply asked it to “show me the last selfie taken.”

Each of the nightmarish results features a man holding a phone, with scenes of bombs falling, colossal tornadoes, burning cities and zombies standing in the midst of the destruction behind him.

One of the selfies is an animated image of a man wearing what appears to be protective gear. He slowly moves his head as if his life is flashing before his eyes as bombs fall from the sky around him.

Each of the videos has been viewed hundreds of thousands of times, with users commenting on how horrific each selfie is – one user felt the images would keep him awake at night because they are so frightening.

A TikTok user asked DALL-E to generate images of what it thought would be the last selfies ever taken. In one image, a man in protective gear watches in horror as bombs fall behind him.

Other users joked about taking a selfie at the end of time, with one commenting: “But first, let me take a selfie (if no one gets this reference, I’ll cry).”

TikTok user Nessa shared, “And my boss will still ask if I’ll show up for work.”

However, not everyone was so lighthearted about what the end of time might look like.

A user named Victeur shared: “Imagine hiding in the dark in a war, not seeing your face for years and seeing this when you take your last picture.”

DALL-E is a system that can create images when users simply enter text descriptions. This result shows a frightened person who may have been running from the devastation behind him.

The last selfies also show people with blood-stained faces and burning cities.

Most commentators saw the funny side of the images, but DALL-E has also revealed a darker side – its racial and gender bias.

The system is public, and when OpenAI launched the second version of the AI, it encouraged people to enter descriptions so that it could improve its image creation over time, NBC News reports.

However, people began to notice that the images were biased. For example, if a user typed ‘CEO’, DALL-E would return only images of white men, while ‘flight attendant’ produced only images of women.

OpenAI announced last week that it is launching new mitigation techniques to help DALL-E create more diverse images, and claims the update ensures users are 12 times more likely to see images with more diverse people.

The nightmarish images of zombies standing in front of burning cities were created by DALL-E AI.

Some of the disturbing selfies also show zombie-like figures with missing eyes and skin.

The images are so frightening that some TikTok users have said they will now have nightmares after viewing them.

The initial version of DALL-E, named after Spanish surrealist artist Salvador Dali and Pixar’s WALL-E robot, was released in January 2021 as a limited test of how AI can be used to represent concepts, from mundane descriptions to flights of fancy.

Some of the early AI creations included a mannequin in a flannel shirt, an illustration of a radish walking a dog, and a baby penguin emoji.

Examples of phrases used in the second version to create realistic images include “an astronaut on horseback in a photorealistic style.”

On the DALL-E 2 website, this prompt can be tweaked to create new images on the fly, such as replacing the astronaut with a teddy bear, having the horse play basketball, or rendering the scene as a pencil drawing or an Andy Warhol-style “pop art” painting.
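
For readers who prefer code, a prompt like the astronaut example could also be submitted programmatically. The snippet below is only a sketch: it assumes the older (pre-1.0) openai Python package and a valid API key with DALL-E access, and the article’s images were made through the website rather than this route.

```python
import openai

# Assumes the pre-1.0 `openai` package; newer releases expose a different client API.
openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Ask the image endpoint for one picture matching a DALL-E 2-style prompt.
response = openai.Image.create(
    prompt="an astronaut on horseback in a photorealistic style",
    n=1,                 # number of images to generate
    size="1024x1024",    # an output resolution supported by DALL-E 2
)

print(response["data"][0]["url"])  # temporary URL of the generated image
```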

“DALL·E 2 studied the relationship between images and the text used to describe them,” OpenAI explained.

“It uses a process called ‘diffusion’ that starts with a pattern of random dots and progressively changes that pattern towards an image as it recognizes certain aspects of that image.”
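
To make that quoted description slightly more concrete, here is a minimal, purely illustrative sketch of the reverse-diffusion idea: start from random noise and repeatedly nudge it toward an image. The denoise_step function is a stand-in for the trained network (which is not public), so everything below is a toy assumption rather than OpenAI’s actual method.

```python
import numpy as np

def toy_reverse_diffusion(denoise_step, steps=50, size=(64, 64)):
    """Start from a pattern of random dots and repeatedly nudge it toward an image."""
    image = np.random.randn(*size)      # the initial random-noise "image"
    for t in reversed(range(steps)):    # walk the noise schedule backwards
        image = denoise_step(image, t)  # progressively alter the pattern
    return image

# Dummy denoiser that simply pulls pixel values toward a target pattern;
# a real diffusion model would be a large neural network trained on image data.
target = np.zeros((64, 64))             # hypothetical "clean" image
dummy_denoiser = lambda x, t: x + 0.1 * (target - x)

result = toy_reverse_diffusion(dummy_denoiser)
print(result.std())                     # the noise shrinks as the loop runs
```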

HOW ARTIFICIAL INTELLIGENCE LEARNS WITH THE HELP OF NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs) that try to mimic how the brain works in order to learn.

ANNs can be taught to recognize patterns in information, including speech, textual data, or visual images, and have been at the heart of much of the AI development in recent years.

Conventional AI uses input to “train” an algorithm on a particular subject, passing it vast amounts of information.
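
As a rough illustration of that training loop, the sketch below teaches a tiny two-layer network a toy pattern (XOR) using nothing but NumPy. The architecture, learning rate and data are invented purely for illustration; real systems train far larger networks on vast datasets.

```python
import numpy as np

# Toy pattern recognition: a tiny two-layer network learns XOR from four examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target pattern

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)               # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)               # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                                        # "training" on the data
    h = sigmoid(X @ W1 + b1)                                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                      # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)                       # (squared-error gradient)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)  # weight updates
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach the XOR pattern [0, 1, 1, 0]
```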

AI systems rely on artificial neural networks (ANNs) that try to mimic how the brain works in order to learn. An ANN can be trained to recognize patterns in information, including speech, textual data, or visual images.

Practical applications include Google’s language translation services, Facebook’s facial recognition software, and live filters for changing Snapchat images.

The process of entering this data can be very time consuming and is limited to one type of knowledge.

A new generation of ANNs, called adversarial neural networks, pits two AI bots against each other, allowing them to learn from each other.

This approach is intended to speed up the learning process as well as improve the outcomes generated by AI systems.
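
To show what pitting two AI bots against each other can look like in practice, here is a deliberately tiny adversarial training loop: a one-parameter “generator” tries to fake samples from a real distribution while a logistic-regression “discriminator” tries to spot the fakes. Every number and formula here is a toy assumption chosen for readability, not any production system.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

mu = 0.0            # generator's only parameter: the mean of its fake samples
w, b = 0.1, 0.0     # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.02

for step in range(5000):
    real = rng.normal(4.0, 1.0)          # a sample of "real" data (mean 4)
    fake = rng.normal(0.0, 1.0) + mu     # the generator's forgery

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (-(1 - d_real) * real + d_fake * fake)
    b -= lr * (-(1 - d_real) + d_fake)

    # Generator update: shift its samples so D(fake) moves toward 1.
    d_fake = sigmoid(w * fake + b)
    mu -= lr * (-(1 - d_fake) * w)

print(round(mu, 2))  # should drift toward 4.0 as the two models learn from each other
```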