Experiments with Generative AI in video animation

Interactive Content developer Jackie Aim shares her early experiments with Generative AI when creating educational short videos. In particular, she used Adobe's commercially safe Firefly to extend stock video footage and generate bespoke imagery.
Background
Earlier this year (2025) I completed two 7-8 minute video animations. The scripts for these animations were provided by the client. The EDE Media Team filmed the “piece to camera” or talking head segments. The final videos integrated a mix of talking head footage, additional video footage, photographs, and graphics based on a previous video developed by Interactive Content.
Creating the video animations
Balancing the amount of talking head video with the other footage, graphics and animation is key. Creating and editing static graphics in Adobe Illustrator and using Adobe After Effects for animations and video compilation was great fun. A lot of After Effects transitions and built-in effects were used to help the flow between sections and different media types. I also used rotation, scaling and animation along a path, all timed to the audio. For the other video footage and photos I looked to Pixabay and Pexels to keep them free to use, as there was a possibility of the videos being shared with other institutions. For icons I searched the Noun Project.
Using Generative AI with video clips
Occasionally the free footage is just a wee bit short, as most clips are around 6-12 seconds long. There were a couple of parts in the second video where the clips weren't quite long enough, so I looked at a couple of options to help resolve this, but I'm sure there are more:
- Use a different video clip
- Extend the video:
  - Time stretching – adjust the start and/or end frames
  - Generative Extend – use AI to add more frames; a Beta version is now available in Adobe Premiere Pro and After Effects
I used the time stretching option; as it is only a few frames, you don't actually notice that the clip has been slowed down very slightly.
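To see why a stretch of only a few frames goes unnoticed, here is a rough back-of-the-envelope sketch in Python. It is not part of the original workflow; the frame rate, clip length and function name are illustrative assumptions.

```python
FPS = 25  # assumed frame rate of the stock clip

def stretch_speed(original_frames: int, extra_frames: int) -> float:
    """Return the resulting playback speed (1.0 = normal) when a clip
    is stretched to fill `extra_frames` more frames of timeline."""
    return original_frames / (original_frames + extra_frames)

# A 6-second clip (150 frames at 25 fps) stretched by 10 extra frames
# still plays at roughly 94% of normal speed:
speed = stretch_speed(150, 10)
print(f"Playback speed: {speed:.1%}")
```

A change of that size is well below what most viewers can perceive in general-purpose footage, which is why the slowdown does not draw attention.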
The project was completed before I got a chance to try the Generative Extend method, but I have since experimented with it in Premiere Pro on the too-short video clip. It is surprising what it can do, so next time!
Adobe Firefly
I couldn’t find quite the right image for the first video, so I tried Adobe Firefly, and this is the result, which appears in the video:

Image created with Adobe Firefly
My text prompt was ‘Horizon of a road in the middle with heather and mountains on a sunny day’. I like the fact that you can get several different versions.
In the first video a number of portrait photographs were used; one of those was also AI-generated, though you wouldn’t know. Creating AI images of people can have quite varied results.
Edinburgh (access to) Language Models (ELM)
Although ELM is not video related, I attended one of the ‘Generative AI: Introduction to responsible use of Generative AI and ELM’ webinars run by Digital Skills. I had looked at ELM previously and wasn’t really sure how and when to use it. It was a revelation when we were shown how to write your own prompts; I now use it regularly.
Future projects
I have just started working on another video animation, so depending on what is required I will certainly try more AI-created images.