  • DALL-E, an artificial intelligence system built by OpenAI, generates images from a set of instructions or descriptions.
  • A team of seven researchers spent two years developing the technology behind DALL-E, which is not yet available to the public.
  • While DALL-E offers many opportunities for image creation, the technology also carries risks, including deep fakes and the spread of misinformation.

OpenAI, one of the world’s leading artificial intelligence labs, has built a system that generates images from a set of instructions or descriptions. The system is named DALL-E, in reference to both the surrealist painter Salvador Dalí and WALL-E, the 2008 Disney-Pixar animated film about autonomous robots.

According to The New York Times, OpenAI’s team of seven researchers spent two years developing the technology behind DALL-E. While OpenAI is not yet sharing the technology with the general public, DALL-E will eventually become available as a tool for creating digital images.

DALL-E falls into a branch of artificial intelligence known as neural networks. Neural networks learn skills by combing through vast amounts of data (such as millions of digital images and captions) while performing analysis and pattern recognition. Through this process, the neural network learns to identify images and their associated words or descriptions.

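To make the idea concrete, the short Python sketch below shows how a system can learn to associate images with their captions by training on many example pairs. Every detail in it (the tiny random vectors standing in for photos and captions, the two projection matrices, the training loop) is invented for illustration; this is not OpenAI's code or DALL-E's actual architecture, only the general pattern-recognition idea described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a dataset of (image, caption) pairs: each image and each
# caption is represented by a small feature vector. Real systems extract these
# features with deep networks from millions of photos and text descriptions.
n_pairs, img_dim, txt_dim, shared_dim = 8, 16, 12, 4
images = rng.normal(size=(n_pairs, img_dim))
captions = rng.normal(size=(n_pairs, txt_dim))

# Two linear maps project images and captions into a shared space.
W_img = rng.normal(scale=0.1, size=(img_dim, shared_dim))
W_txt = rng.normal(scale=0.1, size=(txt_dim, shared_dim))

def scores(W_img, W_txt):
    """Similarity of every image against every caption (dot products)."""
    return (images @ W_img) @ (captions @ W_txt).T

def loss_and_grads(W_img, W_txt):
    """Each image should score highest with its own caption
    (softmax cross-entropy over the rows of the score matrix)."""
    s = scores(W_img, W_txt)
    s = s - s.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(n_pairs), np.arange(n_pairs)]).mean()
    d_s = (p - np.eye(n_pairs)) / n_pairs          # gradient w.r.t. the scores
    d_W_img = images.T @ (d_s @ (captions @ W_txt))
    d_W_txt = captions.T @ (d_s.T @ (images @ W_img))
    return loss, d_W_img, d_W_txt

# Plain gradient descent: repeatedly nudge the weights so that matching
# image-caption pairs move closer together in the shared space.
for step in range(200):
    loss, g_img, g_txt = loss_and_grads(W_img, W_txt)
    W_img -= 0.05 * g_img
    W_txt -= 0.05 * g_txt

print("final loss:", round(loss, 4))
# After training, each image should pick out its own caption (ideally 0..7).
print("best caption per image:", scores(W_img, W_txt).argmax(axis=1))
```

In a real system the feature vectors come from deep networks trained on millions of images and captions, and the model has vastly more parameters than these two small matrices, but the underlying loop of comparing, measuring the error, and adjusting the weights is the same.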

Although DALL-E has the potential to broaden artistic and creative horizons, some experts find the technology worrisome. In the wrong hands, it could be used to deceive and spread disinformation. Subbarao Kambhampati, a professor of computer science at Arizona State University, explained, “You could use it for good things, but certainly you could use it for all sorts of other crazy, worrying applications, and that includes deep fakes.”