Apple has added to its growing repertoire of artificial intelligence work with a tool that uses large language models (LLMs) to animate static images based on user text prompts. Apple describes the innovation in a new research paper titled “Keyframer: Empowering Animation Design Using Large Language Models.”
“While one-shot prompting interfaces are common in commercial text-to-image systems such as Dall·E and Midjourney, we argue that animation requires more complex user considerations, such as timing and coordination, that are difficult to fully capture in a single prompt. Thus, especially for animations, alternative approaches may be needed that allow users to iteratively create and refine designs.
“We combined emerging design principles for language-driven creation of design artifacts with LLMs’ code generation capabilities to build a new AI-based animation tool called Keyframer. With Keyframer, users can create animated illustrations from static 2D images using natural language prompts. Keyframer generates CSS animation code to animate an input Scalable Vector Graphics (SVG) image using GPT-4.”
To create an animation, the user uploads an SVG image of, say, a space rocket, and then enters a prompt such as “create three designs where the sky fades in different colors and the stars twinkle.” Keyframer then generates the CSS code for the animation, and the user can refine the result either by editing the code directly or by entering additional text prompts.
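As an illustration of what such generated code might look like, here is a minimal CSS sketch for a prompt like the one above. The selectors, colors, and timings are invented for this example and are not taken from the paper; they assume the input SVG contains an element with the id `sky` and elements with the class `star`:

```css
/* Hypothetical Keyframer-style output (not from the paper):
   fade the sky through different colors and make the stars twinkle. */
@keyframes skyFade {
  0%   { fill: #0b1d51; }
  50%  { fill: #5d2a79; }
  100% { fill: #0b1d51; }
}
@keyframes twinkle {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0.2; }
}
#sky  { animation: skyFade 8s ease-in-out infinite; }
.star { animation: twinkle 2s ease-in-out infinite; }
```

Because the output is plain CSS applied to named SVG elements, a user can tweak a duration or color directly in the code, or describe the change in a follow-up prompt, which is the iterative refinement loop the paper emphasizes.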
“Keyframer allowed users to iteratively refine their designs through sequential prompting, rather than having to think through the entire design in advance,” the authors explain. “Through this work, we hope to inspire future animation design tools that combine the powerful generative capabilities of LLMs with dynamic editors that let creators retain creative control while accelerating design prototyping.”
According to the paper, the research was informed by interviews with professional animation designers and engineers. “I think this is a lot faster than a lot of things I’ve done,” one study participant is quoted as saying. “I think it would have taken hours to do something like this before.”
This innovation is just the latest of Apple’s advances in the field of artificial intelligence. Last week, Apple researchers released an AI model that leverages multimodal LLMs to edit images at the pixel level.
In late December, Apple also announced that it had succeeded in deploying LLM on the iPhone and other memory-constrained Apple devices by inventing an innovative technique to use flash memory.
The Information and analyst Jeff Pu have both said Apple will ship some form of generative AI feature on the iPhone and iPad when iOS 18 launches later this year. According to Bloomberg’s Mark Gurman, the next version of Apple’s mobile software will include an improved version of Siri with generative AI functionality similar to ChatGPT, and could be the “biggest” update in iPhone history.