Today, we’re showcasing an exploratory artificial intelligence (AI) research concept called Make-A-Scene that will allow people to bring their visions to life. Imagine creating a digital painting without ever picking up a paintbrush, or instantly generating storybook illustrations to accompany the words.

Prior image-generating AI systems typically used text descriptions as input, but the results could be difficult to predict. For example, the text input “a painting of a zebra riding a bike” might not reflect exactly what you imagined: the bicycle might be facing sideways, or the zebra could be too large or small. With Make-A-Scene, this is no longer the case.

Make-A-Scene empowers people to create images using text prompts and freeform sketches. It demonstrates how people can use both text and simple drawings to convey their visions with greater specificity, using a variety of elements. Make-A-Scene captures the scene layout to enable nuanced sketches as input. It can also generate its own layout with text-only prompts, if that’s what the creator chooses.