I feel the role of a makerspace is to provide the tools and framework that enable creative ideas to come to life. Sometimes this is achieved using computer-controlled machines like a CNC router or a laser cutter, and other times it is achieved using analog tools like screwdrivers, paintbrushes, and hammers. At the end of the day, it is important to be aware of all of the tools available.
We had the opportunity to get access to DALL·E 2 from OpenAI, so we accepted the invitation and have been playing around with the system. It is difficult to imagine exactly where an image generator like this fits into our creative workflow at the moment, but we are experimenting with different prompts and trying to understand the system better.
I won't be getting into a high level of detail about how the model works (since I am not a computer scientist and I don't completely understand it), but at a basic level, image generation is accomplished in two steps.
In the first step, a diffusion prior takes the natural-language prompt and, drawing on what the model has learned about the relationship between captions and images, generates an image embedding: a numerical representation of what the image should contain.
In the second step, a diffusion decoder takes that embedding from step one and generates the actual images from it. The way the system currently works, you give a prompt as input and receive four images as output.
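To make that data flow a little more concrete, here is a minimal, runnable Python sketch of the two-step pipeline. To be clear, the real neural networks are replaced with random stubs, and the function names, embedding size, and image resolution are placeholders I made up rather than anything from OpenAI; only the shape of the process (prompt in, embedding in the middle, four images out) matches the description above.

```python
import numpy as np

# A toy sketch of DALL-E 2's two-stage pipeline. The "models" here
# are random stubs standing in for the real networks; only the data
# flow matches the two steps described above.

EMBED_DIM = 768    # embedding size (assumption, for illustration)
IMAGE_SIZE = 64    # output resolution (assumption, for illustration)

def diffusion_prior(text_prompt: str) -> np.ndarray:
    """Step 1: map a text prompt to an image embedding.

    The real prior is a model trained on caption/image pairs;
    this stub just returns a pseudo-random vector seeded by
    the prompt."""
    rng = np.random.default_rng(abs(hash(text_prompt)) % (2**32))
    return rng.standard_normal(EMBED_DIM)

def diffusion_decoder(image_embedding: np.ndarray, n: int = 4) -> list:
    """Step 2: decode the embedding into n candidate images.

    The real decoder is a diffusion model conditioned on the
    embedding; this stub returns n random RGB arrays."""
    rng = np.random.default_rng()
    return [rng.random((IMAGE_SIZE, IMAGE_SIZE, 3)) for _ in range(n)]

# Prompt in, four images out, mirroring the current interface.
prompt = "a robot painting a mural in a makerspace"
embedding = diffusion_prior(prompt)
images = diffusion_decoder(embedding, n=4)
print(f"{len(images)} images of shape {images[0].shape}")
```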
If you would like a deeper look at how DALL·E 2 works, you can read the research paper, "Hierarchical Text-Conditional Image Generation with CLIP Latents", here.
Below are a couple of the images we have generated with DALL·E so far.