
Ramping up its lead in the generative AI space, OpenAI today unveiled DALL·E 3, an upgrade to its DALL·E text-to-image model. The most significant change is the use of ChatGPT to refine the text prompts users supply for image generation.

OpenAI has built DALL·E 3 natively on ChatGPT. In practice, that means ChatGPT acts as a “brainstorming partner,” refining your prompts with its GPT-4 large language model. Users can ask ChatGPT for what they want to see in anything from a simple sentence to a detailed paragraph, and it will expand the request into a richer prompt and feed it to DALL·E 3.

If you like a particular image but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.
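For developers, here is a rough sketch of what that two-step flow could look like against the OpenAI Python SDK once API access opens up. This is an illustrative assumption, not the product’s internal mechanism: the `"dall-e-3"` model name, the manual GPT-4 refinement step, and the system prompt are all placeholders for what ChatGPT handles automatically in the native integration.

```python
# Illustrative sketch: refine a short idea with GPT-4, then generate an image.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set; the
# "dall-e-3" model identifier is an assumption made for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_idea = "a cozy cabin in the mountains at dusk"

# Step 1: have GPT-4 expand the short idea into a detailed, descriptive prompt,
# mimicking the "brainstorming partner" role ChatGPT plays in the native integration.
refinement = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Rewrite the user's idea as a single, richly detailed image-generation prompt."},
        {"role": "user", "content": user_idea},
    ],
)
detailed_prompt = refinement.choices[0].message.content

# Step 2: feed the refined prompt to the image model.
image = client.images.generate(
    model="dall-e-3",
    prompt=detailed_prompt,
    n=1,
    size="1024x1024",
)
print(image.data[0].url)
```

In ChatGPT itself, none of this plumbing is exposed: the refinement happens conversationally, and a tweak like “make it warmer” or “add snow” is simply a follow-up message.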

Additionally, OpenAI says that images generated with DALL·E 3 are noticeably more polished than those from DALL·E 2, thanks to the more detailed and descriptive prompts coming in via ChatGPT. Here’s a comparative example showing the difference between the two models:

OpenAI says that DALL·E 3 has new mechanisms to reduce algorithmic bias and improve safety. For example, DALL·E 3 will reject requests that ask for images in the style of living artists or that portray public figures. Additionally, perhaps taking a cue from the lawsuits OpenAI and its partners face, artists can now opt out of having some or all of their artwork used to train future generations of OpenAI text-to-image models.

DALL·E 3 is now in research preview and will be available to ChatGPT Plus and Enterprise customers in October, with availability via the API and in Labs coming later this fall.