Jason Fletcher
Motion Designer
PACK ⬕ Cloud Computing
- This pack contains 79 VJ loops (63 GB)

Get your head in the clouds. It all started with wanting to visualize an intense lightning storm. So I collected 178 real images of lightning and trained them on SG2. But when I rendered out some latent seed walk videos, something didn't look right, and I couldn't put my finger on what it was. So I put it on the backburner for a few months. Then I randomly had the epiphany: it was correctly visualizing the electric bolts, yet the lightning was constantly visible, without the quick flashes in the dark. So I brought the videos into After Effects, used an expression to randomly key the transparency, and then added some glow on top.
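The random-transparency idea can be sketched in plain JavaScript. This is only an illustration, not the actual expression used in the pack; in After Effects the equivalent logic would live in an Opacity expression built from seedRandom() and random(), but here it's standalone so the per-frame behavior is easy to inspect.

```javascript
// A minimal sketch of randomly keying transparency, NOT the actual
// After Effects expression from the pack. The idea: keep the layer dark
// most of the time, with rare full-brightness frames, restoring the
// strobe-like quality of real lightning.

// Deterministic pseudo-random value in [0, 1) for a given frame index.
function pseudoRandom(frame) {
  const x = Math.sin(frame * 127.1 + 311.7) * 43758.5453;
  return x - Math.floor(x);
}

// Opacity per frame: 0 (dark) most of the time, 100 (flash) rarely.
function lightningOpacity(frame, flashChance = 0.1) {
  return pseudoRandom(frame) < flashChance ? 100 : 0;
}

// Example: opacity values for the first twelve frames.
const opacities = [];
for (let f = 0; f < 12; f++) opacities.push(lightningOpacity(f));
console.log(opacities.join(" "));
```

Because the randomness is seeded per frame, the flicker pattern is repeatable across renders, which matters when layering the same loop with glow passes.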

After that I realized that it would be interesting to create some beautiful flowing clouds. I collected 1,515 real images of clouds and then trained them on SG2. The results were just what I was hoping for. I did some comparisons between the ScaleUp AE plugin and Topaz Labs Video Enhance AI and found that Topaz produced better details in this instance. So it seems that each uprez tool has different strengths according to the context, which I somewhat expected, and which helped confirm my theory that each uprez AI tool has been trained differently. From there I brought the videos into After Effects and did some experiments with Colorama gradients, Time Difference coloring, loads of accenting with Deep Glow, and some good ole Slitscan to play with time. Layering the clouds with the flashing lightning was also delicate to figure out. The laser visuals were captured in NestDrop and then composited with the clouds.

From there the only missing piece was an ethereal rainbow. I had this daydream of a rainbow in the sky twisting and looping over itself in wild ways. So I used the NMKD Stable Diffusion GUI app to experiment with text prompts to create just that, but against a black background. It was tricky to figure out how to make SD consistently output what I had in mind without needing to use IMG2IMG, since I needed SD to generate thousands of images unhindered and with tons of diversity. I used SD to generate 9,447 images and then trained them on SG2. I'm very happy with these results since it's rare that something matches exactly what I had initially imagined. From there I played with Slitscan and dd_GlitchAssist to further juice up the rainbows. Somewhere over the rainbow.

Released November 2022