As a seasoned creative professional who has navigated the dynamic world of digital art and design for decades, I have to say this year’s Adobe MAX left me spellbound. The pace at which Adobe is incorporating AI into its Creative Cloud apps is nothing short of breathtaking.
Once again it’s time for Adobe MAX 2024, and with it come more than a hundred new features across Adobe’s Creative Cloud applications. Adobe also unveiled a new generative video model, Adobe Firefly Video, which is now in public beta. The company is touting it as the world’s first AI-powered video model “specifically designed for commercial use with safety in mind.”
Adobe’s AI-powered Firefly tool debuted last year and, according to figures the company shared, has since produced an impressive 13 billion images; an astounding six billion of those were created in just the past six months. Today Adobe announced the next step for its Firefly AI model with the arrival of Firefly Video. Like its still-image counterpart, Firefly Video is designed to be intuitive and simple to use. Users can engage the model with both text and visual prompts, either describing what they want to create or providing reference content to help it understand their creative vision.
As a video editor, I’m thrilled about another update: Adobe has integrated the new Firefly Video generative AI model into Premiere Pro, my go-to editing software. This integration unlocks a universe of possibilities for professionals and content creators who use the application. During today’s MAX keynote we saw a fantastic demonstration of one such use case, generative scene filling. A clip was shown with some missing frames, and Firefly Video filled in the gaps so seamlessly that the new frames were hard to distinguish from the original footage, at least from my seat in the audience. Given how early we are in the adoption of AI for video, it was truly awe-inspiring.
Adobe also unveiled another captivating development: Neo, a cloud-based 3D art tool that runs in real time on the web. Neo has been in beta for the past few months, and Adobe has now opened it to the public, inviting artists, even those without extensive 3D experience, to explore the medium. Like Adobe’s other new offerings, the application is designed to be extremely user-friendly and easy to navigate, letting users focus on their creative ideas rather than on tedious technical chores.
Alongside the video-related advancements, Adobe introduced several new capabilities for Photoshop. One of these is an improvement to the Remove Tool, an AI brush that lets users highlight and eliminate unwanted elements in a photo. Adobe has renamed the tool Distraction Removal, reflecting its enhanced ability to handle more intricate objects in a scene, such as wires and cables, automatically. Meanwhile, Adobe’s Generative Fill and Generative Expand features are now available to all users in every version of Photoshop, not just the beta.
At the start of his keynote opening Adobe MAX 2024 in Miami Beach today, Adobe CEO Shantanu Narayen delivered a daring (yet recurring) proclamation: “Creativity is being reshaped… yet another time!” Given the swift evolution of AI and Adobe’s rapid adoption of these innovations across its products, it appears that the future, already drastically altered at last year’s Adobe MAX, has undergone yet another transformation.
Read more at TopMob
2024-10-15 00:56