Adobe does not want to lag behind in the artificial intelligence race. The company, which specializes in software for creative work and surprised us this month with the release of Adobe Firefly, has announced an update to Premiere Pro with new AI-powered features.
We are talking about automatic transcription, text-based editing, and automatic color matching for HDR footage shot on different cameras. Some features will initially be available in beta, with a general release for all users later this year. Let's look at what each of these promising features involves.
Automatic transcription
Over the years, technology has made our lives much easier. Tasks that were previously done by hand are now automated, at least for the bulk of the process. One of them is voice-to-text transcription: there are already many applications that handle it for us, so we no longer have to listen to the audio and take notes.
Adobe has taken a step in this direction by building automatic transcription into Premiere Pro, bringing the feature into the professional realm, a leap that demands certain minimum quality standards. The company claims the feature can reach up to 90% accuracy.
Adobe did not specify whether that percentage refers to English only or to other languages as well (the feature works with multiple languages and auto-detects them, which seems to be one of its strong points). Of the functionality, powered by Adobe Sensei, they say: "You can transcribe a 30 minute interview in 3 minutes."
So how does this automatic transcription work? The operation is quite simple: first we import the videos we are interested in and enable automatic transcription; once the system finishes its analysis, it converts the speech to text.
A very interesting detail is that Premiere Pro's transcription looks quite polished: the system recognizes the different speakers and labels them in the text as "Speaker 1", "Speaker 2", and so on, which we can then rename to "interviewer" and "interviewee", for example. This brings us to the next new function.
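As a rough sketch of the idea, a transcript can be modeled as a list of timed segments, each tagged with an auto-generated speaker label that the editor later renames. The `Segment` structure and `rename_speakers` helper below are hypothetical, purely for illustration; they are not Adobe's API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # auto-assigned label, e.g. "Speaker 1"
    start: float   # start time in seconds
    end: float     # end time in seconds
    text: str

def rename_speakers(segments, mapping):
    """Replace auto-generated speaker tags with custom names."""
    return [Segment(mapping.get(s.speaker, s.speaker), s.start, s.end, s.text)
            for s in segments]

transcript = [
    Segment("Speaker 1", 0.0, 4.2, "Thanks for joining us today."),
    Segment("Speaker 2", 4.5, 9.1, "Happy to be here."),
]

renamed = rename_speakers(transcript, {"Speaker 1": "Interviewer",
                                       "Speaker 2": "Interviewee"})
print(renamed[0].speaker)  # Interviewer
```

Keeping the timecodes on every segment is what makes the next feature possible: the text stays linked to the exact portion of video it came from.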
Text-based editing
The script created from the transcription now becomes an element we can use to edit our video clips: if we cut and paste a piece of text, the corresponding video clip is cut and pasted along with it, hence the name "text-based editing". Adobe explains that this can be very useful in a range of scenarios.
Adobe is targeting professional editing workflows, where the various professionals involved in a film project need access to the video clips and the transcribed audio during post-production. Previously this was done manually; now it will be almost automatic.
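The mechanics can be sketched simply: because each piece of transcript text carries its timecodes, rearranging the text can be translated into a rearranged clip timeline. The segment names and `timeline_from_text` function below are invented for illustration; this is the general principle, not Premiere Pro's implementation.

```python
# Each transcript segment maps to a (start, end) range in the source video,
# expressed here in seconds.
segments = {
    "intro":    (0.0, 4.2),
    "question": (4.5, 9.1),
    "answer":   (9.4, 20.0),
}

def timeline_from_text(order):
    """Build a simple edit list from the order of text segments."""
    return [(name, *segments[name]) for name in order]

# Moving the "answer" before the "question" in the text rearranges the
# corresponding clips in the same way:
edl = timeline_from_text(["intro", "answer", "question"])
print(edl)
```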
Automatic tone mapping
High Dynamic Range, better known as HDR, is here to stay. This technology, which arrived on our televisions a few years ago, is increasingly present in video productions. However, filmmakers face various difficulties during shooting, and one of them stems from mixing equipment from different manufacturers.
Each manufacturer has its own HDR parameters, which means that when videos are imported into the editor they do not look homogeneous; there are small differences. Adobe wants to solve this with "Automatic tone mapping". The result? A visually consistent project.
The example scenario they give is this: someone records HDR content with an iPhone and with professional cameras from Sony, Canon and Panasonic. When the video files are imported, Premiere Pro uses automatic tone mapping to adjust the HDR parameters so you can work on a visually consistent project.
Images: Adobe | Wahid Khene
On Xataka: Subscription fatigue is real, but Adobe has run out of reasons to stop offering them