Generative AI becoming increasingly visual

  • 13 October 2023
  • 0 replies

Over the last few weeks we’ve seen generative AI become increasingly visual. 

ChatGPT has now been integrated with DALL-E 3, so users can type a prompt into ChatGPT and have a generated image appear alongside the response. This could be great for trying new recipes or creating storybooks for kids on the spur of the moment, and there could be all sorts of opportunities for standalone apps built around these concepts.
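To make the integration concrete, here is a minimal sketch of what a request to OpenAI's image-generation REST endpoint looks like for DALL-E 3. The endpoint and field names follow OpenAI's public API documentation; the prompt, the helper function, and the storybook scenario are illustrative assumptions, and the request is only constructed here, not sent.

```python
import json

# Endpoint per OpenAI's public REST docs (request shown, not sent).
OPENAI_IMAGES_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Build the JSON body for a DALL-E 3 image-generation request.

    Helper name and example prompt are hypothetical, for illustration only.
    """
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "n": 1,        # DALL-E 3 generates one image per request
        "size": size,  # e.g. "1024x1024"
    }

# Example: the kind of prompt a children's storybook app might send.
body = build_image_request(
    "A watercolour illustration of a brave teapot setting off on an adventure"
)
print(json.dumps(body, indent=2))
```

A standalone app would POST this body (with an API key in the `Authorization` header) and display the returned image URL next to the story text.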

ChatGPT can now also see. You can take or upload a photo, and ChatGPT will interpret it and interact with you based on what it understands. On the OpenAI website they use the example of photographing a bike seat and then asking ChatGPT how to lower it; ChatGPT understands the context and talks you through the adjustment. This capability could be really useful for all sorts of handiwork, and it opens up the potential for an OpenAI SDK to be integrated into home and lifestyle apps to further enhance the experience.
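For the bike-seat example above, an app using OpenAI's Chat Completions API would attach the photo to a user message alongside the question. The message structure below follows OpenAI's documented vision format (text and image parts in the `content` list); the model name reflects the vision preview available around this time, the image URL is a placeholder, and the request is only constructed, not sent.

```python
import json

def build_vision_request(question: str, image_url: str) -> dict:
    """Build a Chat Completions body pairing a question with a photo.

    Follows OpenAI's documented multi-part message format; the helper
    name and placeholder URL are illustrative assumptions.
    """
    return {
        "model": "gpt-4-vision-preview",  # vision-capable model at time of writing
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

# Example: the bike-seat question from OpenAI's own demo.
body = build_vision_request(
    "How do I lower this bike seat?",
    "https://example.com/bike-seat.jpg",  # placeholder image URL
)
print(json.dumps(body, indent=2))
```

A home-improvement app could wrap exactly this call: snap a photo, ask the question, and render the model's step-by-step answer.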

While OpenAI and ChatGPT may be getting all the attention, don't forget Adobe. This week at Adobe MAX they showcased generative AI on video (not just images, as we're used to) with Project Fast Fill, demonstrating a clip in which generative AI changes the colour of a man's shirt. By editing one aspect of a single frame, the change is automatically propagated to every other frame, producing a seamless edit across the whole video clip. Editing user-generated content might have just become a whole lot more fun with Adobe's Project Fast Fill. And if media rights holders were to make their content publicly editable, as Amazon Prime Video has experimented with in the past, the can of worms that already existed has only grown bigger with this innovation!
