"Love Can Heal" - Peter Gabriel 5050
- Ravi Swami
- May 24
I discovered the "5050" project thanks to a recommendation from a friend - it's a scheme initiated by Peter Gabriel inviting film-makers to submit videos to tracks from his "Bright Side/Dark Side" album, with no stipulation on the techniques used and on a 50/50 profit-share-from-sales basis. The videos currently available to view on the 5050 main site explore the emergent technology of AI, and having explored AI image and video generation since 2022, I decided to apply to the scheme. Gabriel's reputation as someone fascinated by new technologies - such as using digital synthesisers to make music, facing early criticism until their wider acceptance - is very evident in his acceptance of AI-generated video as simply another creative tool, one that enables artists to realise ideas quickly in a medium where time and cost are often a barrier, alongside the possibility of using existing music that is copyrighted by its creators.
My interest centred on seeing whether it was possible to craft a coherent narrative using the currently available AI image-to-video tools, based on a series of Midjourney AI explorations from 2023, when the primary barriers to precise control of output were frame composition and consistency from frame to frame. My background in storyboarding for animation, where you have precise control of what is in frame, was at that time frustrated by the inherent randomness of AI, even though image quality had improved dramatically since I first started exploring AI image-making in 2022.
In 2025 the tools have improved enough that it is possible to create coherent narratives, though precise control is still lacking in some areas.
The point of this post is to mention the tools used to create the 5050 video for "Love Can Heal" and to provide some insight into the process of constructing the visual narrative, speaking as someone with a background in storyboarding - in particular for animation, where everything has to be locked down and make sense narratively before a single frame is drawn and animated.
Firstly, the idea for the short film, before it was ever a music video, came about exactly 2 years ago (I checked my hard-drive) following some explorations in Midjourney V5 - as anyone who uses AI for image-making will know, you generate a ton of images that remain dormant on a hard-drive waiting to be utilised in some way down the line, hopefully - orphan images, if you like, waiting for a story.
While the tools for image-to-video - mostly Runway at the time - were promising, they weren't quite at a stage where I felt I could do justice to the idea as I saw it in my head - they still aren't, in my view, but they are getting there, and anyway, story has to come first.
For the video I used Midjourney V5 for the bulk of the images, alongside V7 using the same prompt with modifications.
For image-to-video I used the full suite of tools currently out there: Runway Gen-4, Luma Ray2, Kling V1.6, Hailuo Minimax, Veo2 and Hunyuan text2video, on a variety of platforms, both native and non-native, like Krea AI - whichever tool failed in one area, another would produce a better result, and for the most part I had a fail rate of 2-3 videos before achieving a satisfactory result.
Getting the plot down was a different matter, and I soon discovered that storyboarding it first was a pointless exercise, due to the inherent randomness of AI and the fact that some shots I imagined were very difficult to get anywhere close to what I had in my mind's eye - this despite great strides in AI in controlling frame composition and prompt adherence.
For the one problematic but necessary shot I used Blender with a CGI drone and wasp to generate frames, which were then post-processed in AI with motion added to achieve a style consistent with the rest of the video.
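For anyone curious about the Blender step, here is a minimal sketch of how a shot like this could be batch-rendered to individual frames via Blender's Python API (bpy), ready for AI post-processing - the frame range, resolution and output path are hypothetical, and this is just one way of doing it rather than a record of my exact setup.

```python
import bpy

# Render the current scene (e.g. the CGI drone-and-wasp shot) to numbered PNG frames
# that can then be fed into an AI video tool for stylisation.
scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.resolution_x = 1920   # hypothetical output resolution
scene.render.resolution_y = 1080

for frame in range(1, 73):         # hypothetical 72-frame (3-second) shot at 24fps
    scene.frame_set(frame)
    scene.render.filepath = f"//frames/drone_wasp_{frame:04d}.png"
    bpy.ops.render.render(write_still=True)
```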
The end result is a hybrid of free-form film-making, using shots that are "close" to what I had in mind, and visual narrative techniques that go back to the earliest cinema, all the time trying to ensure that the core idea and intent remain intact while still allowing a degree of viewer interpretation, and accepting that "what you want to see" may not always be the right or best solution.
I was fortunate that the selected Peter Gabriel track and lyrics were a perfect match for the visuals, though in fact the track came after, not before, the idea behind the video.
The longest AI video I made prior to this (two years ago!) was to a Roy Orbison track ("Blue Bayou"), using Runway's morph tool to combine Midjourney V5 images into a continuous fluid sequence, but for copyright reasons I could never share it anywhere online - it currently lives on my website landing page, so it's nice that I can share the "Love Can Heal" video without those issues.
24/05/25
Ravi Swami