VJ Loops by
Patreon /// Instagram /// Facebook /// Youtube ///
Open for Commissions
Need some custom visuals? Contact me and let's discuss.
PACK ⬕ Mask Hypno
- This pack contains 174 VJ loops (25 GB)

Masks are great for toning down bright visuals so that the environment can feel more dynamic. They also work as dazzling patterns for LED strips, so it's two birds with one stone. For the last few months I've been brewing techniques that would create some interesting masks.
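For anyone new to masking, the basic idea is that a greyscale loop can act as a luma matte: multiply it over a bright visual and the dark areas tone it down. A minimal numpy sketch, illustrative only and not any particular software's mask implementation:

```python
import numpy as np

def apply_mask(visual, mask):
    """Use a greyscale loop frame as a luma mask: multiplying tones down
    the bright visual wherever the mask is dark."""
    return visual * mask[..., None]

visual = np.ones((4, 4, 3))                        # a blown-out bright visual
mask = np.linspace(0.0, 1.0, 16).reshape(4, 4)     # one frame of a mask loop
toned_down = apply_mask(visual, mask)
# Dark mask pixels zero the visual out; bright ones leave it untouched.
```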

I've always been interested in Moiré patterns, so that was a natural place to begin experimenting. After some initial exploration I realized that the difference blend mode was going to be super useful here. I'd been a bit nervous to start since Moiré work relies on just trying things out and seeing what sticks, but quickly iterating ended up being a lot of fun.
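The difference blend mode just takes the per-pixel absolute difference of two layers. A tiny numpy sketch of why that matters for Moiré: difference two line gratings with slightly mismatched frequencies (shown here as a single scanline) and a slow beat pattern emerges.

```python
import numpy as np

# One scanline across two line gratings at slightly different frequencies.
x = np.arange(512)
grating_a = (np.sin(2 * np.pi * x / 8.0) > 0).astype(float)   # period 8 px
grating_b = (np.sin(2 * np.pi * x / 8.5) > 0).astype(float)   # period 8.5 px

# The "difference" blend mode is simply the absolute per-pixel difference.
moire = np.abs(grating_a - grating_b)

# The gratings drift in and out of phase, so `moire` lights up in bands
# far wider than either grating's period -- that slow beat is the Moire.
```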

The checkerboard scene was one of the first scenes I had in mind to jam with. Checkerboard5 and 6 were such a pain to set up since I wanted a checkerboard to continually halve itself, all the way down to the pixel level. That meant creating loads of stacked precomps within precomps within precomps that relied on the difference blend mode. I'm actually amazed that After Effects was able to handle it, given how many nested precomps were necessary. From there it was very satisfying to create some slitscan variations since I wouldn't be able to create that look any other way.
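The halving-checkerboard structure can be sketched outside After Effects: for binary layers the difference blend mode is equivalent to XOR, so stacking checkerboards whose cells halve at each level (one "precomp" per level) looks roughly like this in numpy. This is an illustration of the structure, not the actual AE setup.

```python
import numpy as np

def checkerboard(size, cell):
    """Binary checkerboard of the given cell size."""
    y, x = np.indices((size, size))
    return (x // cell + y // cell) % 2

size = 256
# Start with the coarsest board, then fold in boards whose cells halve
# each level, combined with the difference blend (== XOR for 0/1 layers).
result = checkerboard(size, size // 2)
cell = size // 4
while cell >= 1:                       # ...all the way to the pixel level
    result = np.abs(result - checkerboard(size, cell))
    cell //= 2
```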

A while back I had wanted to create an Escher themed pack, so I used Stable Diffusion to output 48,907 images inspired by Escher in an abstract style. But the resulting dataset had a large amount of variation and StyleGAN2 wasn't able to stabilize. So I put the dataset on the backburner knowing that it would be useful someday. Months later I stumbled across StyleGAN2-Extended and realized that it more than doubled the size of the model, which might mean it could absorb a dataset with more variations on a theme. It ended up training about as heavily as StyleGAN3, so it took multiple days to fully transfer learn the model. I ended up with something not quite what I was after, but just as interesting. It's curious to see how the interpolations of StyleGAN2-Extended differ somewhat from SG2. The results can be seen within the "EscherSG2ext2" scenes.

Lately I've been tinkering with AnimateDiff in ComfyUI, trying to get smooth animations out of it, which has been annoyingly difficult. As part of these experiments I did some more Escher tests using this tool. I had the best results injecting a frame sequence into the Line Art ControlNet, which allowed me to create videos of unlimited length. The TemporalDiff-v1 model worked best in this context, and I kept the denoise at 1.0 sometimes and 0.9 other times. The results can be seen within the "EscherAD" scenes. I didn't use any motion LoRAs in these experiments but that's something I definitely want to try in the future. I think AnimateDiff will continue to bloom into something groundbreaking and I have so many ideas to try out.

I also made a bunch of very simple masks. But when it comes to masks, even the simplest animations can be the icing on the cake and make your visuals sing. Plus sometimes it's useful to stack multiple masks together. Have you ever seen the Goat Sea? Hope you enjoy the easter egg.

PACK ⬕ Fractal Persp
- This pack contains 40 VJ loops (105 GB)

OMG 3D fractals! For a few years I've been drooling over the Vectron plugin for creating 3D fractals. But since I'm primarily a Maya/Redshift and Blender user, the thought of diving into Cinema 4D and Octane just for this tool wasn't altogether enticing. But then they released Essence and Mecha, which are native to the Unreal Engine and enable real-time exploration of 3D fractals. Years ago I explored Mandelbulber and Mandelbulb3D, and I also designed the prior version of the Mandelbulber website and curated its gallery. So I've long loved fractals, and the idea of actually keyframing 3D fractals was exciting.

The good news is that the Essence and Mecha plugins live up to the hype! With a high-tier GPU I was able to explore each of the 3D fractals in real-time. But the real game changer is being able to easily keyframe attributes and see them play back quickly. I just scratched the surface of exploring the fractals. I found this tutorial to be helpful in laying out the basics of how to work with these two plugins and Unreal itself. The render times, even on the ultra quality setting, are incredible: typically 1 to 2 seconds per frame for the most complex scenes, which is wild considering how computationally intense fractals can be. Other than adding glow in AE, these are renders straight out of Unreal. I could have let Unreal do the glow (bloom) but I wanted manual control over that aspect. I was able to render out an alpha channel for most of the fractal videos.

I've long admired the alien shapes of the quaternion 4D Julia fractal, but it was too basic a 3D fractal for the Unreal plugins. I did some research and stumbled across this amazing tutorial by Capucho3D showing how to use geometry nodes to create a quaternion Julia in Blender without any extra plugins. They even shared their Blender scene, which is awesome! It turns out that keyframing the attributes was very delicate and I had to stay strictly within certain numerical limits or Blender would instantly crash. There wasn't much to explore with the attributes and the "Quaternion Julia" scene basically shows what is visually possible. Also, even after cranking the subdivision up to an insanely high number, I couldn't get enough precision to render it without volume aliasing. So instead I embraced this limitation and added a styrofoam shader, which I think works nicely. Sometime in the future I'll create a fully detailed quaternion.

Prior to this I had only used Unreal for a few assets needed in other projects, so this was the first time sitting down and doing everything in the Unreal Engine: keyframing, shaders, camera work, DOF, render settings, and such. Working within a cutting-edge video game engine is quite satisfying since I can crank up the settings and still get very high quality renders, from anti-aliasing to global illumination. Having worked in Maya for something like 16 years, it's inspiring to see a paradigm shift like this happening. That said, while much of my knowledge transfers over, there are also many aspects of video game design that are kinda bizarre. For instance, Maya and After Effects have scenes where everything is self contained, except for file references. But in Unreal a Project contains all of the scene assets, Levels hold the state of the scene, and a Level Sequence holds any keyframes. Yet the moment you add an asset into a Level Sequence, any changes must be keyframed or it'll revert to the original state. And if you want to save an iteration of a Level Sequence then you have to jump through some weird hoops. Maybe I'm doing something wrong since I learned quickly via trial and error. At one point I wanted to disable motion blur, but even when turned off it was still active. Turns out the anti-aliasing works by averaging together multiple frames, aka temporal anti-aliasing, so you get a super crisp image with the added benefit of motion blur. It's an amazing feature but confusing: instead of disabling the motion blur I should have just zeroed out its attributes. These kinds of hoops are unique because it's not a bug; you just need an understanding of how the core engine works. Also enabling alpha channel output to PNG sequences was a pain, since as a solo artist I can't afford the file space required for EXR sequences.
But all of this was just the learning pains of jumping straight into the deep end of the pool, which is how I best learn things. Even with those quirks, I'm in awe of Unreal. I think that Maya/Redshift still has its place but Unreal opens up some new doors for experimentation. Nodes within nodes within nodes.

PACK ⬕ Potato Face
- This pack contains 50 VJ loops (81 GB)

Call up Frankenstein! Build your own face. Each of the videos in this pack includes an alpha channel so you can collage the eyes, ears, mouth, security camera, and brain together to create the weirdest faces you can imagine. The only videos that don't include alpha are the laser videos, since they look so much better with the screen blend mode anyway.
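For context on why the laser videos can skip alpha: the screen blend mode maps black to "no change," so footage on a black background composites cleanly with no matte at all. A quick numpy sketch of the formula:

```python
import numpy as np

def screen_blend(base, layer):
    """Screen blend for float images in [0, 1]: black layer pixels leave
    the base untouched, so laser footage on black needs no alpha channel."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

base = np.full((4, 4, 3), 0.5)      # mid-grey background visual
laser = np.zeros((4, 4, 3))         # black frame...
laser[2, 2] = [1.0, 0.2, 0.8]       # ...with one bright laser pixel

out = screen_blend(base, laser)
# Black areas pass the base through unchanged; the laser pixel brightens it.
```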

I started off by finding a high quality 3D model of a human eye and then recreating the shaders in Maya/Redshift. I was careful to choose a model with the pupils animated as a blendshape. But the way the iris poly was constructed gave me some trouble: I didn't want the iris to be reflective, yet I needed the eye whites to have strong specular highlights, with highlights also visible on top of the iris area. Eventually I realized I needed to add another, slightly bigger poly and apply the specular highlights everywhere on it. I find the often strange shortcuts in 3D animation to be fascinating.

I found some teeth and gums that I modified to look like dentures. Curating the available models on TurboSquid was challenging because there are a bunch of options that are just too realistic, the quick previews are sometimes not ideal, and I wanted something in the middle ground between real and stylized. Recreating the shaders in Redshift was tricky since teeth really need subsurface scattering to feel like teeth. From there I realized that adding a tongue would be amazing, but I wasn't thrilled about rigging and animating a tentacle-like rig. Then I realized I could use a bunch of Maya deformers to achieve exactly what I had in mind. A human tongue wasn't quite right for the look I wanted, so I ended up using a dog tongue 3D model instead.

After learning some new things about subsurface scattering in Redshift, I thought it would be interesting to animate an ear illuminated by a bright light source. So I found a model set containing both a typical human ear and an ear with large gauges. The subsurface scattering was again tricky to set up but worth the effort, and again Maya deformers were used for rigging. If you need a right ear then just flip the X axis.

I had gone back and forth on whether including a human brain was a good fit, but by now I felt confident I could really nail a translucent bubble gum shader with everything I had learned about subsurface scattering. It ended up being the most delicate to set up since I needed it to have specular highlights and also deep subsurface scattering. I got more experimental with the Maya deformers on this model because it was just asking for wacky distortions.

I tried some slitscan experiments and they looked amazing on the teeth renders, yet I ran into a unique issue. Since the Time Displacement effect in After Effects looks smoothest when given 240fps footage, I typically take my 60fps footage into Topaz and interpolate it to 240fps. Yet Topaz doesn't support alpha channels, which I very much need in this pack! So instead I rendered out the video with a greenscreen background, processed it in Topaz to 240fps, did the slitscan FX, and then keyed out the green background. Messy workflow but it worked pretty well.
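For anyone curious what a slitscan actually does, here's a minimal numpy sketch where each row of the output samples a different moment in time. This is a simplification of AE's Time Displacement, which is driven by an arbitrary greyscale displacement map rather than a fixed vertical gradient:

```python
import numpy as np

def slitscan(frames, max_offset):
    """Basic slitscan: each output row samples a frame offset in time
    proportionally to its vertical position (a vertical-gradient version
    of AE's Time Displacement)."""
    n, h, w = frames.shape[:3]
    out = np.empty_like(frames[0])
    for row in range(h):
        t = int(row / (h - 1) * max_offset)   # lower rows look further back
        out[row] = frames[min(t, n - 1)][row]
    return out

# Higher input frame rates (e.g. 240fps via interpolation) shrink the time
# gap between neighboring rows, which is why the smear looks smoother.
frames = np.random.rand(240, 64, 64, 3)       # 1 second of 240fps footage
image = slitscan(frames, max_offset=120)
```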

After some failed experiments in trying to create a 3D human head made out of circuit boards, I realized that I could use Stable Diffusion. First, in Photoshop I created a rough sketch of a circuit board in the shape of a face with a hot pink background, and then fed it to SD IMG2IMG to generate 10,000 images. From there I curated the best 600 images and used After Effects to key out the hot pink and render it out at 5fps so that it would feel like stop motion. Then I uprezzed it to 2k using Topaz, brought the video into Maya, and had the challenge of making each frame appear to be a unique 3D model. I applied the video as a texture deformer onto a plane, along with another plane placed slightly closer to the camera with a black hole shader (aka holdout shader) so that the black background would be cut out.
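The hot pink keying step boils down to building an alpha channel from color distance. A toy numpy version; the 0.25 tolerance and the exact key color are made-up values for illustration, and AE's keyers are considerably more sophisticated:

```python
import numpy as np

def key_out_color(rgb, key_color, tolerance=0.25):
    """Return an alpha channel that is 0 wherever a pixel sits within
    `tolerance` (Euclidean RGB distance) of the key color, else 1."""
    dist = np.linalg.norm(rgb - np.asarray(key_color), axis=-1)
    return (dist > tolerance).astype(float)

hot_pink = (1.0, 0.0, 0.6)            # assumed key color, illustrative only
frame = np.zeros((8, 8, 3))
frame[:] = hot_pink                   # solid key-colored background
frame[3:5, 3:5] = (0.1, 0.1, 0.1)     # dark "circuit board" subject

alpha = key_out_color(frame, hot_pink)
# Background pixels get alpha 0; the subject keeps alpha 1.
```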

And just when I thought that it was done, I had one more crazy idea... I found a 3D model of a realistic potato and then animated it to make it feel like a head bobbling to the beat. Stop making that big face!

PACK ⬕ Chrome Contort
- This pack contains 192 VJ loops (126 GB)

Shiny polished metal is seemingly anathema to mother nature, which is why it symbolizes digital tech so well.

Recently the ZeroScopeV2 models were released. After some poking around I was able to get them working in A1111 using the 'txt2video' extension. First I experimented with the recommended approach of doing text-to-video using ZeroScope-v2-576w and then uprezzing using ZeroScope-v2-XL. Due to the design of these models, the entire frame sequence must fit within my 16GB of VRAM, so I can only generate 30 frames per video clip! And I can't string together multiple renders using the same seed. Ouch, that's a serious limitation, but luckily there are just enough frames to work with and create something interesting. So I rendered out several hundred video clips using different text prompts and then took them into Topaz Video AI, where I uprezzed them and also did a x3 slomo interpolation. This got me to about 1.5 seconds per video clip at 60fps. Then I brought all of the video clips into AE, did a bit of curation, and lined them up back-to-back. This is exactly the type of limitation that I find creatively refreshing: I would not normally edit together tons of 1.5 second video clips, but this technique forced me to reconsider, and the result is a large amount of short surreal visuals on a given theme with the intense feeling of fast cuts. Sometimes embracing stringent limitations can be a useful creative game.
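The clip-length arithmetic works out like this:

```python
# Back-of-the-envelope for the clip lengths mentioned above.
frames_per_clip = 30        # VRAM limit: whole sequence must fit at once
slomo_factor = 3            # Topaz x3 slomo frame interpolation
target_fps = 60

frames_after_slomo = frames_per_clip * slomo_factor   # 90 frames
seconds_per_clip = frames_after_slomo / target_fps    # 1.5 seconds at 60fps
```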

From there I was curious what would happen if I skipped the text-to-video ZeroScope-v2-576w step and instead just injected my own videos into the ZeroScope-v2-XL model. So I took a few of the SG2 chrome videos, cut them up into 30 frame chunks, and injected them into the ZeroScope-v2-XL model. Holy mackerel, what utter gems this method created; they were exactly what I'd been daydreaming of for years. Just like SG2, it seems that ZeroScope does really well when you inject visuals that already match the given text prompt, likely since it allows the model to focus on a singular task without getting distracted. I'm guessing SG2 videos, with their particular morphing style and black backgrounds, allowed ZeroScope to reimagine the movements in a weirdly different way.

I had been curious about the RunwayML Gen-2 platform, so I gave it a try. I tested out the text-to-video tool and generated a few surreal videos of 4 seconds each. The amount of movement in each video was minimal, though maybe that could be refined with some tinkering. But the main limitation was having to spend credits to first nail down an ideal text prompt and then render out about 100 clips, without any batch function. This limitation really hampers my creative process, which is why I always prefer to run things locally on my own computer. I was also interested in trying out the video-to-video tool, but by my estimations it was going to cost too much to create what I had in mind and be very tedious to manually send each clip to render.

For a while now I've dreamed of being able to make variations of an image so that I could create a highly specific dataset in a tight theme. Last year I had tested out the Stable Diffusion v1.4 unclip model but it didn't produce desirable results for my abstract approach. So I was quite curious when I learned about the Stable Diffusion v2.1 unclip model release and immediately got some interesting results. Amazingly no text prompt is needed! Just input an image and render out tons of variations. Comparing the original image to the image variations, there was a clear resemblance even in regard to the pose, material, and lighting of the subject.

So I selected 192 abstract chrome images that I created for the Nanotech Mutations pack and had SD2.1-unclip create 1,000 variations of each image using A1111. After rendering I ended up with 192,000 images grouped into 192 different pseudo themes. I thought this dataset would have enough self similarity to start training StyleGAN2, but the model had lots of trouble converging into anything interesting in my tests. When I looked at the big picture of the whole dataset, it became clear that there was actually too much variation for SG2 to latch onto. I looked into training a class conditional model using SG2, until I realized that I wouldn't be able to interpolate between the various classes, which was a bummer. But since each of the 1,000 image groups could be manually selected, I went through the dataset and split the images into 6 smaller datasets that definitely shared the same theme. Golly, that was tedious. From there, training 6 different models in SG2 converged nicely since each dataset was concise.

I thought it would be interesting to take the chrome SG2 videos, render them to frames, then inject them into SD v1.5 and experiment with the stop motion technique. It was fun to try out different text prompts and see how they responded to the input images. Then I remembered a very early experiment where I rendered out a 3D animation of a person breakdancing in Unity and tried to inject it into Disco Diffusion, but wasn't thrilled with the result. So I grabbed the breakdancer renders and injected them into SD v1.5, and loved how the output sometimes looked like warped chrome metal and other times like a chrome plated human.

Recently Stable Diffusion XL was released, which I had been excitedly anticipating. But since it's so brand spanking new, A1111 didn't yet support the dual base/refiner model design. I could still load up a single model into A1111 and experiment with it that way... which is where I had a happy accident of an idea: why not try the stop motion technique using the SDXL refiner model directly, especially since this model is purposefully built for uprezzing to 1024x1024? The results were even better than what I could pull off using SD v1.5, likely due to the SDXL refiner model being trained differently. Also the difference between working at 512x512 and 1024x1024 in SD is dramatic; so many more details are included. I have many ideas to explore with this newfound technique. Plus I'm curious to see how Stable WarpFusion looks with this new SDXL model.

There are chrome machines and green vines intermingling on the distant horizon. It's a mirage of the future but some version of it approaches all the same. The golden age of algorithms is at its crest.

PACK ⬕ Cyborg Fomo
- This pack contains 78 VJ loops (61 GB)

The robots aren't coming, they're already here in our pockets. The machine learning revolution has only just begun.

I've long wanted to visualize a robot shaped like a chimpanzee. So for the "Chimpanzee" scenes I nailed down an accurate text prompt in Stable Diffusion v1.5, loaded up x2 instances of A1111, and had both of my GPUs render out a total of 24,955 images. Then I took this dataset and did some transfer learning using StyleGAN2 until 6988kimg, which is quite a bit more training than I normally need, but I think it was required due to the amount of variety in the robotic monkey faces. This converged rather nicely, although there are some strange small blob artifacts that are occasionally visible, which I believe are the result of the model trying to learn the machinery details of the dataset. But I thought these blobs actually looked as if the tech was bubbling underneath and trying to escape. Diego added the Flesh Digression scripts into the StyleGAN3-fun repo, so that was fun to experiment with. I also tried transfer learning SG3 but the variety in the dataset proved too difficult; I wished I had trusted my gut, since SG3 takes double the amount of time to train.

From there I thought it would be interesting to further explore the SD stop motion technique. So I rendered out a few of the SG2 Chimpanzee videos to frames, explored some circuit board monkey related text prompts, then injected them into SD v1.5 and used a Denoising Strength of 0.6. I think the jittery feeling of the stop motion matches the wild technological evolution feeling really well. I'm amazed by how well SD reacts to being fed imagery that is similar to the text input. I then took each of these stop motion videos into Topaz, interpolated from 30 to 60fps, and then uprezzed to 2k.

I'm enamored with the SD stop motion technique, so I grabbed some human faces and eyes videos from my prior packs and used them as fodder for creating some wild cyborgs and chrome covered people. I had some trouble getting the model to equally represent all skin tones within a single text prompt, so I rendered out three different videos for black, brown, and white skin tones instead. I'm amazed by how reflective the chrome metal looks, which is bizarre: what is it reflecting within the imagination of SD? Somewhere in the depths of all of those nodes it has some notion of what reflective metal typically looks like. Baudrillard would no doubt be rolling his eyes right now.

But I think the real gem of this pack is the "Implants" scenes, where people have installed all sorts of cameras, cell phones, wires, and such onto their faces. It was important to me for the faces to be smiling because I often feel the rush of non-stop tech advancements and yet there is so little time to slow down. So for me the smile represents the overwhelming feeling of having to join the wave or be left behind, with the societal expectation to simply accept it and enjoy it. Again for these I injected some SG2 human faces into SD v1.5; with the Denoising Strength tuned just right, it has enough room to imagine the text prompt while also following the input frames somewhat reliably. Sometimes more Denoising Strength is needed, sometimes less. I also rendered out some stop motion videos of human eyes with camera lenses for irises, darting all around. I really enjoy this technique since happy accidents are at its core.

The "Wires" and "Circuits" scenes are Disco Diffusion experiments from before Stable Diffusion was even released. They have a very different flavor, more dreamlike and suggestive, which I find evocative; yet it was very difficult to find a reliable text prompt for them, since I expect those models received very limited training in both time and scope.

Why stop with the human face, eh? I took some videos from the Nature Artificial pack and did some more SD stop motion experiments. I think these were successful for the simple fact that circuit boards look similar to green leaves, and wires are similar to roots. So I'm able to leverage the fact that SD has studied every imaginable word, understands how to visualize it, and can therefore interpolate between anything... but it excels at interpolating between things that already look similar. Adding some heavy glow in AE gave the videos a wonderful electric feeling that I think further enhances the blend of mother nature combining with human science.

I had wanted to visualize a robot with angular plastic forms and bright LEDs but had so much trouble with it. I finally nailed it down after much experimenting and lots of manually added parentheses to tell SD I wanted emphasis on certain words. So I rendered out 2,018 images and then did some transfer learning on SG2 until 3260kimg. We may not look too different from people a few decades ago, but the thoughts in our minds are certainly now digital.

PACK ⬕ Robotics Foundry
- This pack contains 83 VJ loops (71 GB)

The AI software is here and maturing quickly. And I think having it control all sorts of robot hardware is just around the corner. Things gonna get weirder.

I finally made the jump to using the A1111 Stable Diffusion web UI and it renders images so much faster thanks to the xFormers library being compatible with my GPU. Also there are tons of unique extensions that people have shared and I have much spelunking to do. I figured out how to run x2 instances of A1111 so that both of my GPU's can be rendering different jobs, which is hugely beneficial.

For the last few months I've been running backburner experiments inputting various videos into SD to see how it extrapolates upon a frame sequence. The "Stop Motion" scenes are the fruit of these experiments. The main trouble I've had is that the exported frames are very jittery, which I'm typically not a fan of. This is because the input video frames are used as the noise source for the diffusion process, so I have to set the Denoising Strength between 0.6 and 0.8 to give SD enough room to extrapolate on top of the input frames. SD has no temporal awareness and assumes you're exporting a solo frame, not an animated frame sequence, so all of this is a hack. But I found that if I chose a subject matter such as robots then I could embrace the stop motion animation vibe and match the feeling of incessant tech upgrades that we are currently living in.
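For the curious, the way Denoising Strength trades fidelity for freedom can be sketched numerically. In the usual img2img scheme (this mirrors how the diffusers img2img pipeline computes it; I'm assuming A1111 does roughly the same under the hood), strength decides how far along the noise schedule the input frame is injected:

```python
def img2img_steps(num_inference_steps, strength):
    """Number of denoising steps actually run for a given img2img strength:
    the input image is noised up to a fraction of the schedule, then
    denoised from there, so higher strength = less of the input survives."""
    steps_run = min(int(num_inference_steps * strength), num_inference_steps)
    start_step = num_inference_steps - steps_run
    return start_step, steps_run

# At strength 0.6 with 50 steps, the input frame enters at step 20 and
# only the last 30 steps get to reimagine it -- enough room to extrapolate,
# but each frame is reimagined independently, hence the jitter.
start, run = img2img_steps(50, 0.6)
```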

I tried all sorts of different videos, but ultimately my StyleGAN2 videos with a black background were by far the most successful for inputting into SD. I believe this is because my SG2 videos typically feature slowly morphing content. Plus the black background allows SD to focus the given text prompt onto a single object, narrowing its focus and shortcutting SD into strange new territories. But the real key is inputting a video whose content contextually parallels the SD text prompt, at least in overall form, and definitely in the necessary color palette. SD's dreaming is limited in that regard. Also, finding the ideal SD seed to lock down is important since there are many seeds that didn't match the style I was aiming for.

My initial tests at 60fps were far too intense and difficult to watch. So I experimented with limiting the frame rate of the input video and landed on exporting a 30fps frame sequence from After Effects. After processing the AE frames in SD, I passed the SD frames into Topaz Video AI and interpolated every other frame to make it a 60fps video. Typically I don't interpolate footage that moves this fast since it makes things feel too morphy, but in this context I think it gives the stop motion aspect a buttery quality.
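Structurally, "interpolating every other frame" means synthesizing one in-between frame per original pair. A naive crossfade sketch; Topaz uses learned motion interpolation, so this only shows where the new frames land in the sequence:

```python
import numpy as np

def double_fps_naive(frames):
    """Naive 30->60fps: insert a blended midpoint between each frame pair.
    (A real interpolator estimates motion; this crossfade just marks the
    slots the synthesized frames occupy.)"""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append((a + b) / 2.0)   # synthesized in-between frame
    out.append(frames[-1])
    return np.stack(out)

clip_30fps = np.random.rand(30, 16, 16, 3)   # 1 second at 30fps
clip_60fps = double_fps_naive(clip_30fps)    # 59 frames covering that second
```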

For the "Circuit Map" scenes I grabbed the CPU videos from the Machine Hallucinations pack and used them for the stop motion technique described above. From there I jammed with it in After Effects and couldn't resist applying all sorts of slitscan experiments to make it feel as though the circuits are alive in various ways. And of course some liberal use of color saturation and Deep Glow was useful in making it feel electric and pulsing with energy.

For the "Factory Arm" scenes I wanted an industrial robot arm swinging around and insanely distorting. So I started by creating a text prompt in SD and rendering out 13,833 images. For the first time I didn't curate the images of this dataset by hand and just left in any images with strange croppings, which saved tons of time. In the past I've worried that StyleGAN2 would learn the undesired croppings, but I've since learned that with datasets this large these details tend to get averaged out by the gamma, or I can just stay away from the seeds where they become visible. From there I did some transfer learning from the FFHQ-512 model and trained it on my dataset until 1296kimg.

After that I did a new experiment that I've been tinkering with lately. I typically train using FreezeD=4, since I have found that this setting allows the model to remain a bit more free flowing in its interpolations when rendering out video. I reason that the super low resolution layers contain very little detail and it's maybe better to just leave these layers unchanged from the mature original state of the FFHQ model. Maybe this is because I currently rarely train for more than 5000kimg. But I'm just going by intuition here as an amateur, and the devs have shared little about this aspect. Anyway, after the training stabilized using FreezeD=4, I switched over to FreezeD=13 and trained further until 2532kimg. This allowed the training to progress a little faster, about 30 seconds per tick, which adds up to significant time savings... yet I noticed it's dangerous to switch too early since it can introduce undesirable blobby artifacts into the model. Using FreezeD=13 means that only the very last layer of a 512x512 model receives training and all of the smaller resolution layers are frozen. I have found this useful for when it seems I have hit a threshold where the model won't learn any more details, so instead I just focus on the very last layer. I believe this works because the layers are connected such that smaller resolution layers affect the downstream layers during training, so freezing the smaller layers allows it to train differently. But I need to do more testing as I'm not confident about this technique.

From there I had a SG2 model of industrial robot arms moving around in a way that I didn't dig, and I almost discarded it. But as I have often experienced, it's vital to render out about 10,000 seeds, curate through them, and organize a selection into an interpolation video export. Sometimes a SG2 model can look funky when rendering out a freeform interpolation video, since it's moving through the latent space without a human guiding it. After that I jammed with the videos by applying a slitscan and then mirroring it. To be honest, I typically steer clear of the mirror effect since I think it's heavily overused in VJ loops. But it's always good to break your own rules occasionally, and in this context I think it's well deserved: having multiple industrial robot arms move in unison looks appropriate and really cool.

For the "Mecha Mirage" scenes I grabbed a bunch of videos from the Machine Hallucinations pack and applied the SD stop motion technique. These were quite satisfying since they were more in line with how I imagined SD could extrapolate and dream up strange new mutating machines. I think these videos look extra spicy when sped up 400%, but I kept the original speed for VJ-ing purposes. It is so bizarre what these AI tools can visualize, mashing together things that I would never have fathomed. Again I applied an X-axis mirror since the strange tech equipment takes on a new life, although this time I didn't use a traditional mirror effect: I flipped the X axis and then purposefully overlapped the two pieces of footage with a lighten blend mode, so you don't see a strict mirror line and everything blends better. And then the pixel stretch effect was a last minute addition that was some real tasty icing on the cake. I think it's because machines are often symmetrical and so this really drives home that feeling. In the future I want to experiment with Stable WarpFusion, but getting it to run locally is such a pain. Hello my AI friend, what did you have for lunch today?
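The overlapped mirror trick can be sketched as a flip plus a lighten blend, which is just a per-pixel maximum; the overlap region hides the seam. A rough numpy version, where the 0.3 overlap is an arbitrary illustrative value:

```python
import numpy as np

def soft_mirror(frame, overlap=0.3):
    """Mirror a frame on the X axis and recombine with a lighten blend,
    shifting the flipped copy so the overlap hides the mirror seam."""
    h, w = frame.shape[:2]
    shift = int(w * overlap)
    flipped = frame[:, ::-1]
    out = frame.copy()
    # Lighten blend = per-pixel maximum, so the overlap region shows
    # whichever layer is brighter instead of a hard mirror line.
    out[:, :w - shift] = np.maximum(out[:, :w - shift], flipped[:, shift:])
    return out

frame = np.random.rand(32, 32, 3)
mirrored = soft_mirror(frame)
```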

PACK ⬕ Intergalactic Haste
- This pack contains 61 VJ loops (18 GB)

Looking at a field of galaxies fills me with awe. The indescribable distances, the vast proportions, the possibilities of life, the infinite questions; the universe is a time machine. So I thought it was high time to revisit my roots, seeing as I have experience as a Science Visualizer from when I worked at the Charles Hayden Planetarium creating fulldome astronomy shows.

The "Galaxy Field" scenes have a bit of history. About 10 years ago I made a galaxy field in Maya and then open-sourced it. I have long yearned to return to this and try out some intense flight paths. Creating this Maya scene originally involved downloading about 68 real photos of galaxies, photoshopping each image to hide the edges, creating a separate luma image just for the alpha channel, creating a bunch of Maya shaders, creating thousands of 1x1 polygon planes, applying the shaders randomly to the planes, and then randomizing the position/rotation of each plane. Although the duplicates could be instanced in Maya to save tons of RAM, I've never been fond of this technique since it stops me from experimenting with my favorite deformer tools that require actual polygons. So working with this scene has been slow since I'm manipulating anywhere from 10,000 to 30,000 separate objects at once, but Maya is up to the task if I plan each step carefully and move slowly.

I added environmental fog so that the galaxies in the far distance would fade to black. This is not the most ideal implementation but it solved the issue easily and made it feel more realistic. After randomizing the position of each galaxy, I ended up with a literal box of galaxies. I thought it was a weirdly interesting artifact and so I embraced it. I randomized the keys for the XYZ rotation so that each galaxy would spin at a different rate. This doesn't respect the actual axis that a real galaxy would rotate around, but I think it works fine in this abstract context. Sorry astronomers! The Box8 and Box16 scenes refer to the scale of the individual galaxies, so the galaxies in Box8 are approximately x10 larger than in reality and the galaxies in Box16 are approximately x20 larger than in reality. The universe is so big that I have to fake the scale of the galaxies because they are in actuality so far apart from each other, what a brain blender that fact is. I'm not thrilled that these galaxies are paper thin but the aim here was to work at scale with thousands of galaxies and fly among them. In the future I'd be curious to see if I can emit fluid from each galaxy texture so that they have some depth. Overall I'm very satisfied with all of the different camera paths that I was able to create. Working at 60fps allows me to move things wildly fast and so it's been interesting to see how far I could push it, especially with the 3D motion blur.
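
The randomization logic behind the "box of galaxies" can be sketched as plain Python (the function and attribute names here are hypothetical; in the actual scene this was done per-object inside Maya):

```python
import random

def build_galaxy_box(num_galaxies, box_size, max_spin=0.05, seed=7):
    """Scatter galaxy planes uniformly inside a cube and give each one a
    random spin rate per rotation axis -- hence the literal box of galaxies."""
    rng = random.Random(seed)
    galaxies = []
    for _ in range(num_galaxies):
        galaxies.append({
            # Uniform position anywhere inside the cube.
            "position": tuple(rng.uniform(-box_size / 2, box_size / 2) for _ in range(3)),
            # Degrees per frame around X, Y, Z -- each galaxy spins at its own rate.
            "spin_rate": tuple(rng.uniform(-max_spin, max_spin) for _ in range(3)),
        })
    return galaxies

field = build_galaxy_box(num_galaxies=10000, box_size=1000.0)
```

Since every transform comes from one seeded generator, the same field can be rebuilt deterministically whenever the scene needs to be regenerated.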

For the "Astronaut" scenes I had this idea of watching galaxies zoom by as you see them reflected in an astronaut helmet. Originally I was going to set up the astronaut suit with the traditional white fabric, yet the brushed metal material looked so good when it reflected the galaxies. I decided against adding any camera shake since that is an easy effect to add when VJing in real-time. Including the human skull was a last-minute decision. I've tried multiple times to light a scene using just shader textures instead of a light. Due to the way the 'galaxy field' scene was originally set up in Mental Ray, the shader used the incandescence attribute so that everything would be self-lit. Now that I'm relying on Redshift, I noticed that enabling global illumination produced some light scattering on the skull from the galaxies, which was a happy accident that I didn't know was possible. From there I just had to crank up the Color Gain attribute by a factor of 10 for each galaxy texture and then they were emitting enough light. That will be a fun aspect to experiment with more in the future. For the astronaut model, I used the Extravehicular Mobility Unit provided freely by NASA.

The "Galaxy Closeup" scenes were created entirely within After Effects. As a subtle effect, I applied the 'Turbulence Displace' effect to each galaxy to give it the illusion that the star clusters are moving. I tried cutting out some select stars and adding them to a raised plane to make use of the parallax effect, but it just looked kinda janky and so I scrapped it. I'd like to explore this again in the future and see if I can make it work.

The "Black Hole" scenes were one of the few things that I never got around to visualizing when I worked at the planetarium and so it was satisfying to whip them up in my own style. I tried a bunch of different methods but getting the black hole shader functional in Maya/Redshift proved too difficult and I got sick of spinning my wheels. So I moved over to Maya/Arnold instead and was surprised at how efficient Arnold was at rendering on the CPU. This tutorial helped me to set up the main black hole shader in Arnold. I'm not thrilled with the hard falloff at the edge of the black hole but I seemingly couldn't tweak it any further; ah well, I still love the end result.

From there I experimented with having particles orbit around the black hole, but thought it looked too basic and wanted to make it feel as if there were unseen forces at work. It came alive after I added a Turbulence Field to the particles and animated the 'Phase X' attribute so that the noise moved in the direction of the camera, hiding the trick. I then experimented with creating some orbiting gas, which is a Maya fluid with the 'Texture Time' attribute animated to let it evolve over time even though the fluid has been frozen via an initial state. Adding the collection of spaceships orbiting the black hole was a random idea that I thought would be good for a scifi laugh. So I imported the spaceship models that I had prepared for the Warp Factor VJ pack and used my newfound knowledge to light the spaceships using the incandescence of the Maya fluid with global illumination enabled. Also the background star globe is from my old planetarium production blog.

The "Wormhole" scenes were what initially inspired this whole pack. I originally wanted to make a wormhole that was transparent and had 3 sizes of tunnel visible so as to make use of the parallax effect, but the render times were too intense for that and so I ended up with 1 tunnel featuring 4 different noise textures. To skip over any issues with the UV mapping seams of an extremely long cylinder, I instead utilized the 3D noise nodes in Maya since they ignore the UVs. From there I had the idea that I could import the galaxy field into this scene and make it feel as though we were flying through a wormhole in intergalactic space. Then I had a crazy idea and wondered what it would look like if I applied a glass material to the tunnel model so that I could see the galaxies refracting through the glass. Being within the glass material ended up being too abstract, but the magic happened when I duplicated the tunnel x3 and put the camera between the 3 tunnels.

As always, I'm still obsessed with the slitscan effect. Having matured my AE 'Time Displacement' slitscan technique over the last few packs, I had yet another breakthrough. For a while I've known that 240fps footage is ideal for slitscan processing since it hides any time aliasing in the pixel rows/columns, yet that was often much too intensive for my render pipeline. But I realized that I could use the Topaz Video AI app to process the footage to x4 slowmo and therefore achieve 240fps. Normally I wouldn't apply x4 slomo processing to this kind of fast-moving footage since it often introduces interpolation blips and weird transitions where the inter-frame changes are too extreme, but for my slitscan purposes it worked wonderfully. So while that adds an extra step to my already tedious workflow, it allows the slitscan distortions to stretch and warp perfectly. What I love about slitscan is that it's so unpredictable and the results are often incredibly bizarre.
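
The core of a time-displacement slitscan can be sketched in numpy as a simplified stand-in for the AE effect (names hypothetical, linear vertical ramp assumed): each pixel row samples a progressively older frame. Higher source framerates mean smaller time steps between adjacent rows, which is why 240fps footage hides the aliasing.

```python
import numpy as np

def slitscan(frames, max_delay):
    """Minimal time-displacement slitscan with a linear vertical ramp:
    the top row samples the newest frame, the bottom row samples max_delay
    frames into the past. frames has shape (T, H, W) or (T, H, W, C)."""
    T, H = frames.shape[0], frames.shape[1]
    out = np.empty_like(frames[0])
    for y in range(H):
        # Row position maps linearly to a delay into the past.
        delay = int(round(max_delay * y / (H - 1)))
        out[y] = frames[T - 1 - delay, y]
    return out
```

With 240fps input, `max_delay` covers four times as many frames for the same wall-clock stretch, so the per-row time step shrinks and the distortion stays smooth.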

I recently added a second GPU to my computer and it's amazing how much having a dual GPU setup has sped up my creative process. Too often I have a series of ideas and then have to scale back to only the best experiments since the render queue becomes unrealistic. It has also allowed me to enable 3D motion blur for all of these scenes from within Maya/Redshift, which looks amazing and proved absolutely crucial for this type of insanely fast motion. I think that having real motion blur gives these scenes the feeling of being shot with a physical camera. In the past I relied on the RSMB AE plugin, which generates motion blur based on the optical flow of the footage, but it often produces subtle glitches since it can only analyze the video content, so the foreground/background separation can suffer and it can glitch badly when the content moves too fast. So it's exciting to have 3D motion blur that is always correct and doesn't destroy my render times. Ludicrous speed!

PACK ⬕ Graffiti Reset
- This pack contains 99 VJ loops (95 GB)

Graffiti is alive. I've been dreaming up some techniques to create an enhanced version of my graffiti animations using my more mature understanding of StyleGAN2. But to do that I needed a larger dataset of graffiti images so that I wouldn't keep running into overfitting issues. At first I tried just using the NMKD Stable Diffusion app by itself, but creating the perfect text prompt to always output a perpendicular and cropped perspective of graffiti on a wall, without grass or concrete edges visible, proved too difficult, even with the 50% consistency I was aiming for as a baseline. But seeing as how I manually cropped the images in my graffiti dataset, it's unlikely that SD v1.5 was fed much of this specific imagery when originally trained.

So I decided to fine-tune my own SD model using DreamBooth via Google Colab since I don't have 24GB of VRAM on my local GPU. For that I used my dataset of 716 graffiti images that I curated in the past and fine-tuned SD v1.5 for 4000 UNet_Training_Steps and 350 Text_Encoder_Training_Steps onto the 'isoscelesgraffiti' keyword. The results were stunning and better than I had hoped for. I trained all the way to 8000 steps but the 4000-step checkpoint was the sweet spot.

Now I had my own custom SD model fine-tuned on graffiti that I could plug into the NMKD app. But when I rendered out images using the 'isoscelesgraffiti' keyword, all of the images had a very similar feel to them, even after experimenting with adding various keywords. So to help guide it I used the IMG2IMG function, inputting each of the 716 graffiti images at 25% strength. This allowed me to create 100 complex variations of each graffiti image. After setting up a batch queue, I ended up with a dataset of 71,600 images output from my SD graffiti model.
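
The batch bookkeeping behind those 71,600 renders can be sketched like this (hypothetical names; the actual generation ran inside the NMKD app's batch queue):

```python
import itertools

def build_img2img_queue(source_images, variations_per_image, base_seed=0):
    """One IMG2IMG job per (source image, variation): each job reuses the
    source image at low denoising strength but renders with a unique seed,
    so every source yields `variations_per_image` distinct outputs."""
    jobs = []
    pairs = itertools.product(source_images, range(variations_per_image))
    for i, (path, _variation) in enumerate(pairs):
        jobs.append({"init_image": path, "strength": 0.25, "seed": base_seed + i})
    return jobs

# 716 curated graffiti images x 100 variations each = 71,600 outputs.
queue = build_img2img_queue([f"graffiti_{n:03d}.png" for n in range(716)], 100)
```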

Using this dataset of 71,600 images, I started training StyleGAN2 via transfer learning with FFHQ-512 as a starting point and trained for 2340kimg. What an amazing jump in quality when training with a dataset this large! Strangely I'm still seeing some overfitting behavior, which confuses me considering I had X-mirror enabled, making the effective size of the dataset 143,200 images. My best guess is that maybe there are some repeating themes in the dataset that I can't recognize due to the grand scale, or perhaps I should have lowered the gamma below 10 and trained for longer. I'm not sure which is true, maybe both. But the overfitting was a minor quibble and so I moved forward with curating and ordering the seeds and then rendering out the latent walk videos.

The recent updates to Topaz Video AI have brought some excellent improvements to the upscaling quality. I have found the Proteus model to perform amazingly well when upscaling from 512x512 to 2048x2048. Since a 512x512 video only contains so much detail due to its small size, I've found 'Revert Compression: 50' or 'Recover Details: 50' to be the ideal settings. In the past I used the 'Sharpen: 50' attribute, but in hindsight I feel like I can see the sharpening effect being applied.

As always, playing around in After Effects with these graffiti videos is where I feel like I really get to experiment and have some fun after the tedious steps mentioned prior. Often the raw renders are just on the cusp of being interesting and so AE is where I can make them shine and layer on some extra ideas that make them feel finalized. I live for the happy accidents when implementing a complex technique and stumbling across something interesting. I did various Time Displacement slitscan experiments which gave the graffiti a wonderful vitality. It was interesting to apply the Linear Color Key to select only specific colors, use that selection as an alpha track matte to cut out the original video, and then apply FX to only specific parts of the video (Deep Glow, Pixel Encoder, Pixel Sorter). That was a satisfying evolution of similar techniques that I've been trying to refine so as to simplify the execution and let me experiment with ideas quicker. I also did some displacement map experiments in Maya/Redshift, which were surprising since the JPG encoding ended up making the Maya displacement look like a concrete texture. Then I brought these Maya renders back into AE for further compositing; it was bizarre to apply the Difference blend mode between the displacement render and the original video. And then I rendered that out and injected it into NestDrop, recorded the fire results, and brought that video back into AE for yet another application of the Difference blend mode technique. It's a vicious little circle of compositing. Long live graffiti!

PACK ⬕ Body Body Body
- This pack contains 136 VJ loops (122 GB)

Let's have an AI fantasize about the most popular parts of the human body... Feeling uncanny glitchy? Probably NSFW-ish.

Looking back it's clear just how much I've learned about training StyleGAN2/3, since I've trained 10 different models in one month for this pack. It's been so useful being able to locally render out a ~10,000 image dataset from Stable Diffusion in 1-2 days. But the real game changer has been doing the training locally, being able to run it for however long I need, and trying out multiple experimental ideas. It turns out that while Google Colab has top-notch GPUs, the credits end up costing too much, so I don't run experiments there when I'm unsure of what'll happen.

I first started with the idea of a long tongue whipping wildly from a mouth. Interestingly, it proved too difficult to make Stable Diffusion output images of a tongue sticking out. Eventually I got it to work but it looked horrific, with teeth in odd places and the lips merging into the tongue. So I put the idea on the back burner for a few months until I realized that I could pivot and instead focus on just lips wearing lipstick. Once I nailed down a text prompt, I rendered out 6,299 images from SD. Since the lips were quite self-similar I knew it would converge quickly with StyleGAN3. I love how there are sometimes 3 lips that animate into each other. I also did some transfer learning on SG2 but the well-known texture-sticking artifact did not look good on the lip wrinkle areas and so I didn't use that model.

I thought it would be interesting to have some vape or smoke to layer on top of the lips. I experimented with a few SD text prompts and then rendered out 2,599 images. Then I did some transfer learning on SG2 but had issues with the gamma and it took a few tweaks to fix. Even still I wasn't thrilled with the results until I did some compositing experiments in AE and tried doing a slitscan effect with a radial map for the first time. I love experimenting with the Time Displacement effect in AE. The result looked like smoke rings moving outwards, which was perfect.
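
A radial slitscan map can be sketched in numpy as a stand-in for the ramp image fed into AE's Time Displacement (names hypothetical): the time offset grows with distance from the frame center, which is what makes motion ripple outward in rings.

```python
import numpy as np

def radial_time_map(height, width):
    """Radial displacement map for a slitscan: each pixel's time offset is
    proportional to its distance from the frame center, so movement appears
    to travel outward like expanding smoke rings."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx)
    return r / r.max()  # normalized 0..1; scale by the max frame delay later
```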

At this point I realized it would be fun to make some more SG2/3 models of different parts of the human body. I was a little nervous about it since I wanted to represent all skin tones and body types, which can be difficult since I'm dealing with two different AI systems that each react unpredictably in their own ways. I had to carefully guide Stable Diffusion to use specific skin tones and body types, with extra priority added to those keywords. SD really wanted to output hourglass figures and gym bodies, even when I tried to push it hard in other directions. Through the use of wildcards in the NMKD Stable Diffusion app I was able to achieve some decent results, yet even still it was difficult to maintain a balance of images so that the dataset wasn't too heavy in one category, which would affect my SG2/3 training downstream. Another tricky aspect is that GANs can be unpredictable about which patterns they decide to focus on, even with a large dataset. So most of the models have a good spread of skin tones represented, but getting all body types represented was very difficult since for some reason SG2/3 kept converging towards the skinny and mid-weight body types, with large curvy bodies getting less representation. I suspect this is an artifact of the gamma attribute averaging the bodies together. Having AI train another AI has its pitfalls.

Several times in the past I've tried to create an SD text prompt that outputs a bodybuilder, but I was never happy with the results. Finally I experimented with a text prompt telling it to do an extreme closeup of a bicep and that was the ticket. So I rendered out 9,830 images from SD and then trained SG3 on them. I suspect that everyone looks tan and shiny since bodybuilders often go for the oiled and fake-tan look, which makes the skin tones a bit vague, but black skin tones are visualized too. The SG3 model converged nicely but exhibits an interesting aspect that I've seen a few other times, where instead of a smooth easing between some of the seeds it snaps very quickly. This actually looks really interesting in this case since it looks like muscles twitching with unreal speed. I think SG3 perhaps learns some distinct patterns but has trouble interpolating between them, so there are steep transitions between those zones. I also trained a model using SG2 and that has a very different feeling to it, which made for an interesting direct comparison of the two frameworks. When doing some compositing experiments in AE I again experimented with a new type of slitscan, what I call split horizontal or split vertical, which is basically two ramps that meet in the middle. So it looks almost like a mirror effect, except the content seems to be extruded from the center.
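
The split-vertical map can be sketched the same way as any other Time Displacement ramp (a simplified stand-in for the actual AE ramp layer): two linear ramps meeting at the horizontal centerline.

```python
import numpy as np

def split_vertical_map(height, width):
    """'Split vertical' slitscan map: two linear ramps that meet at the
    horizontal centerline, so the content appears extruded from the middle
    rather than strictly mirrored."""
    dist = np.abs(np.arange(height) - (height - 1) / 2.0)
    ramp = 1.0 - dist / dist.max()  # 1.0 at the center row, 0.0 at the edges
    return np.tile(ramp[:, None], (1, width))
```

Swapping the axis gives the split-horizontal variant; because the map is symmetric about the center, the result reads like a mirror even though no pixels are actually flipped.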

Next up on my list was to create some boobs. I suspect that SD was trained on lots of skinny model photos or bra photo shoots because it was difficult to make it output images of large curvy bodies. Once again we're seeing how the cultural bias for skinny bodies has made its way into SD. But I input a bunch of different body type wildcards into the SD text prompt and then output 30,276 images. I rendered more images than usual to try and get a larger spread of body types but it didn't do as well as I had hoped. It seems unlikely, but maybe asking for a glowing neon LED bra limited it to a certain subset of body types where the training data didn't cross over as it should have; still, the neon LED lighting looks so dramatic and at times obscures the skin tones. Due to the smooth nature of skin, I realized I could lower the Steps attribute in SD without much change in image quality, halving the render time per image and getting the dataset rendered out in half the time. From there I trained both SG2 and SG3 separately and they converged without any problems. After that I took the muscles dataset, combined it with the boobs dataset, and did some transfer learning on the boobs SG2 model. I didn't know if it would be able to interpolate between the datasets but it did surprisingly well. Strangely it exhibited the same steep transitions between seeds as the SG3 muscles model, which I had never seen SG2 do before.

Lastly I wanted to create some male/female asses and crotches. It's interesting to note that I didn't have any trouble creating an SD text prompt and rendering out images for male ass (9,043 images) and male crotch (9,748 images) with a diversity of skin tones and various body types. The woman crotch images (10,091 images) had a good diversity of skin tones, but would only output an hourglass body type no matter how much I tweaked the text prompt. And the woman ass images (42,895 images) were the trickiest since SD tended to output tan and brown skin tones, and strictly output an hourglass figure. So as a very rough guess, I'd say the huge dataset that Stable Diffusion was originally trained on has a disproportionate number of tan women in bikinis. This is clearly the early days of AI... I then took all of these images and combined them into one dataset in hopes that I could get SG2 to interpolate the skin tones onto all body types. It helped a little bit and also allowed me to animate between the asses and crotches of males/females, which looks so bizarre! In an effort to give the black skin tones some attention I did some AE compositing to key out the skin color and only see the skin highlights. Also I applied some dramatic lighting and harsh shadows to help make the skin tones more difficult to define. It's not a perfect solution but I'm working with what I got. In AE I did all sorts of slitscan compositing experiments to make those hips distort in the wildest ways. Legs for days!

PACK ⬕ Alien Guest
- This pack contains 52 VJ loops (40 GB)

We are aliens riding on this strange spaceship earth. But let's visualize some real aliens, eh?

For the "Wrinkled" scenes, I was able to create an SD text prompt that would reliably output images of a wrinkly alien in several varieties. But I wanted SD to only output bust portrait style shots, which it struggles with. From recent experiments I knew that inputting an init image at 10% strength would give just enough guidance in the starting noise state, yet enough creative freedom to still extrapolate lots of fresh images. So I output 100 human face images from the FFHQ-512 SG2 model, input them into SD, and rendered out 31,463 images. Then I trained SG2 on them and it converged without issues.

From there I wanted to output some green aliens against a black background, which is currently a tricky request for SD. So I grabbed a human face image from the FFHQ-512 SG2 model, crudely cut it out, tinted it green, and filled in the background with pure black. I used this as the init image and generated 8,607 images. I then trained SG2 on the images and it converged very quickly since the images are very self-similar. And I realized this dataset was a good candidate for training SG3. Since SG3 is sensitive to a dataset having too much variety while also taking twice as long to train, I often shy away from experimenting with it. But I knew it would converge quickly and I ended up being correct.

For the "Grays" scenes, I was trying to create big-eyed aliens wearing haute couture fashion. This was the initial inspiration for this pack, but I was having trouble creating a text prompt to do this reliably, even when using 20 init images of humans against a black background. But I was really excited about the few good images among the chaff. So I output somewhere around 40,000 images and then had to manually curate that down to 11,952 images. This was a tedious task that reminded me why the mental drudgery isn't really worth it, and yet I love the end result, alas. I then trained SG2 on it for multiple days, likely one of the more heavily trained models I've done yet. It had a bit of trouble converging due to the huge variety of aliens visualized in the dataset, but it finally converged, mostly on the typical 'grays' style of popular alien since it was visualized the most in the dataset.

For the "Big Brain" scenes, I really wanted to visualize some angry screaming aliens with big brains. So after much toiling I finally nailed down a text prompt and output 8,631 images from SD. Training SG2 converged easily overnight since the images were quite self-similar. The added slitscan effect really makes it feel as though the alien is falling into a black hole.

From there it was all compositing fun in After Effects. First I used Topaz Video AI to uprez the videos using the Proteus model with Revert Compression at 50. Then I took everything into AE and experimented with some fav effects such as Time Difference and Deep Glow. Also, I can't remember where I saw it, but someone had created a unique boxed slitscan effect and I wondered how they technically did it. I've long used the Zaebects Slitscan AE plugin for this effect, but it renders very slowly and can only do vertical or horizontal slitscan effects. So I started hunting for alternate techniques using FFMPEG and instead stumbled across a tutorial on how to use the Time Displacement effect native to AE to achieve the same slitscan effect! This ended up being a huge savings in terms of render time since it utilizes multi-frame rendering. Ironically I still only did vertical or horizontal slitscan effects, but in the future I have all sorts of wild ideas to try out, such as hooking in a displacement map to drive it. Also I explored some other native AE effects and found some amazing happy accidents with the Color Stabilizer effect, which allows you to select 3 zones within a video for it to watch for pixel changes and then control the black, mid, and white points of the Curve effect. It creates an electric glitchy feeling that I really love and is perfect for giving these aliens that extra wild warpy feeling.

The unleashed creative imagination of Stable Diffusion is mind boggling. Having generated a bunch of 10,000 to 50,000 image datasets using SD and then manually curated through them, it's clear to see how much variety it's capable of within a given theme. I can't help but think of all the permutations that humanity has never fathomed and that SD now hints at. SD has learned from so many different artists and can interpolate between them all. As the tool matures, I can easily see how it will mutate many different disciplines.

Even after tons of testing, I've long been perplexed by the Gamma attribute of StyleGAN2/3. So I reached out to PDillis again and he explained a helpful way to think about it. Consider the Gamma a sort of data distribution manipulator. Imagine your dataset as a mountainous terrain where the peaks represent the various modes in your data. The Gamma determines whether the neural network distinguishes those modes or averages them into one. For example, looking at the FFHQ face dataset, there are lots of various groups in the data: facial hair, eyeglasses, smiles, age, and such. Setting a high Gamma value (such as 80) will allow these details to average together into a basic face (2 eyes, nose, mouth, hair) and the Generator will become less expressive as it trains. Setting a low Gamma value (such as 10) will allow more of the unique aspects to be visualized and the Generator will become more expressive as it trains. Yet if you set a low gamma too early in the training process then the model might not ever converge (remain blobby) or perhaps even collapse. So when starting training you should set the gamma quite high (such as 80) so that the Generator learns to create the correct images. Then, once the images from that model are looking good, you can resume training with a lower gamma value (such as 10). For me this makes intuitive sense in the context of transfer learning: first you need the model to learn the basics of the new dataset and only then can you refine it. I think 10 is often a fine starting point when your dataset is homogenous. But if your dataset is full of unique details and it's difficult to discern a common pattern across the photos, then raising it to 80 or more is necessary. So that explains why my models have recently increased in quality. A huge breakthrough for me, so again many thanks to PDillis.
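
That two-stage approach can be summed up as a tiny schedule function (the switch point and values here are illustrative; in practice I stop training manually and resume with a new gamma once the images look correct):

```python
def gamma_for_stage(kimg_trained, high_gamma=80.0, low_gamma=10.0, switch_kimg=1000):
    """Two-stage R1 gamma for StyleGAN2/3 transfer learning: start high so
    the Generator first averages toward the broad modes of the new dataset,
    then drop low so it becomes more expressive once the basics are learned."""
    return high_gamma if kimg_trained < switch_kimg else low_gamma
```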

I also learned of the vital importance of having at least 5,000 images in your dataset, preferably 10,000. Now that I've been using Stable Diffusion to generate images for my datasets, I've been able to see what a huge difference it makes in the training process of StyleGAN2/3. In the past I was often using datasets of between 200 and 1,000 images, which frequently meant having to stop the training prematurely since the model was becoming overfit. So it's very interesting to finally have enough data and see how that affects the end result.

I had originally intended on doing some 3D animation of a UFO flying around and traveling at warp speed, so in the future I must return to this theme. I want to believe!

PACK ⬕ Nature Artificial
- This pack contains 113 VJ loops (114 GB)

I’ve had this daydream of watching an alien plant growing and mutating. So for the "Plants" scenes I wanted to train StyleGAN2 on a collection of images of different plants and see what happens. Yet I wanted a perfect black background. After some sleuthing, I realized that using photos of real plants wasn’t ideal and instead I should focus on botanical drawings. I ended up using the drawings of Pierre-Joseph Redouté (1759-1840). I spent a whole month just downloading, curating, and preparing the 1,309 drawings for this dataset. Since I really didn’t want StyleGAN2 to learn any unnecessary aspects, I had to photoshop each image manually. This involved white balancing, painting away any noise, painting out any paper edges/folds, painting out any text, and cropping to a square. It was a large amount of boring work but worth it in the end.

After training the SG2 model to a point where I was happy with the results, I started curating the best seeds and rendering out the videos. Then I sat on the videos for a little while due to a simple but annoying issue: all of the videos had a white background... I tried all sorts of experiments with color keying and nothing looked good. But then I remembered a useful trick of simply inverting the colors and then rotating the hue to match the colors of the original video. From there it was all fun compositing in After Effects. I keyed out the black background, brought multiple plants together into a single comp, and made it feel as though they were slowly growing up from the bottom. I experimented with adding glow to every color except green, which resulted in all of the flowers having a rich vitality. I also experimented with a few plugins such as Modulation, Slitscan, Time Difference, and Pixel Encoder.
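
The invert-then-rehue trick can be sketched per-pixel with Python's colorsys (values in 0..1; a simplified stand-in for the AE Invert plus hue-rotation combo): inverting RGB also flips the hue by 180 degrees, so rotating the hue back restores the original colors while the white background becomes black.

```python
import colorsys

def invert_and_rehue(rgb):
    """Invert a color (so a white background turns black), then rotate the
    hue 180 degrees to bring the subject's original hue back."""
    r, g, b = (1.0 - c for c in rgb)               # inversion flips the hue too
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)  # undo the hue flip
```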

While watching some workshops about StyleGAN2, I became a patron of Derrick Schultz and happened to see that he shared two pretrained models. The StyleGAN-XL 512x512 and SG2 1024x1024 models were trained using botanical drawings by Jane Wells Webb Loudon (1807-1858). These were really fun to jam with, many thanks Derrick! The "Blossom" and "Garden" scenes are the result of rendering out videos from these models and compositing experiments in AE.

For the "Mushrooms" scenes, I had been doing some explorations in Stable Diffusion and was able to nail down a text prompt that consistently output fields and forests full of mushrooms. So I rendered out 7,639 images and used them as a dataset for training SG2. The transfer learning converged rather easily and from there it was all fun in AE. The slitscan AE effect really made this one extra psychedelic. Chef's kiss to the BFX Map Ramp AE effect since it allowed me to tweak the color gradients and then apply Deep Glow.

I then did some further explorations in Stable Diffusion and found a text prompt that would output flowers on bizarre vines. So I rendered out 17,124 images and used them to train SG2. Yet I probably gave the text prompt a bit too much creative freedom and so the dataset was quite diverse. This caused some issues with the StyleGAN2 transfer learning being able to converge. This has happened to me on several prior occasions and I typically just move along to something else. But this time I did some research on the Gamma attribute since it's known to be a difficult but important factor in fixing this type of issue. After numerous Gamma tests I was able to improve it, but not to the level I was hoping for. But I think the "Flowers" scenes are beautiful regardless. It was satisfying to use the Color Range effect in AE to remove certain colors and then apply Deep Glow to the leftover colors.

It's surreal to see these various AI techniques mimicking mother nature and just making a beautiful mess of it. I tried to further push this digital feeling while compositing in AE and make it feel even more glitchy, as if the Matrix code was showing through. Green machined leaves.

PACK ⬕ Internal Rep
- This pack contains 88 VJ loops (93 GB)

The mysterious guts of AI. What happens to a neural network after it's been trained? GANs have multiple hidden layers but we only see the final result. So I wanted to crack one open and experiment.

I was exploring the StyleGAN3 interactive visualization tool and was curious to see the internal representations of the FFHQ1024 model. Yet the tool didn't include a way to render out video. I started thinking through some hacky options, but I really didn't want to do a screencap recording due to the lower quality, and I also wanted perfectly smooth morphs between seeds without needing to manually animate the attributes by hand. I happened to be exploring some of the additions to the StyleGAN3-fun repo when I saw that PDillis had a todo list posted, and adding support for exporting videos of the internal representations was on it. So I opened a ticket to show interest and then became a GitHub Sponsor. Many thanks to PDillis!

Each internal layer of the neural network contains a few hundred channels. So I soloed a single layer, selected 3 specific channels, arbitrarily mapped those channels to RGB, and then rendered out a latent seed walk video. But I wanted to enable a bunch of happy accidents to happen while compositing in After Effects, so I did 2 series of exports:

First I rendered out 48 videos: 8 clips of each layer, each clip with 3 unique channels visualized, and all of the clips locked to a single seed sequence. This gave me 6 layers visualized with 24 channels each, in total 144 unique channels to play with in AE. These proved to be very interesting to jam with since they each had the same core animation, yet the coloring and texture were unique for each clip, so I could layer them together in wild ways.

Then I rendered out another 48 videos: 8 clips of each layer, each clip with 3 unique channels visualized, and each clip given a unique seed sequence. This gave me 48 videos where each clip had a unique core animation along with unique coloring and textures.
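Under the hood, the channel-to-RGB trick is simple: pick three channels out of the layer's activations, normalize each one independently, and treat them as the R, G, B planes. A toy sketch (pure Python lists standing in for the real tensors; the function name is mine):

```python
def channels_to_rgb(acts, channels):
    """Map three arbitrary channels of a layer's activations to R, G, B.

    `acts` is a toy stand-in for one layer's output: a list of 2D channel
    grids (channel -> rows -> values). Each chosen channel is min/max
    normalized on its own, then the three become the R, G, B planes.
    The channel indices are arbitrary, which is the point: different
    triples give the same animation in totally different palettes.
    """
    planes = []
    for c in channels:
        flat = [v for row in acts[c] for v in row]
        lo, hi = min(flat), max(flat)
        scale = (hi - lo) or 1.0
        planes.append([[(v - lo) / scale for v in row] for row in acts[c]])
    h, w = len(planes[0]), len(planes[0][0])
    return [[(planes[0][y][x], planes[1][y][x], planes[2][y][x])
             for x in range(w)] for y in range(h)]

acts = [[[0, 1], [2, 3]],   # channel 0
        [[1, 1], [1, 1]],   # channel 1 (flat)
        [[5, 0], [0, 5]]]   # channel 2
print(channels_to_rgb(acts, (0, 1, 2))[0][0])  # -> (0.0, 0.0, 1.0)
```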

From there I did a bunch of compositing experiments in After Effects. First I used the ScaleUp plugin to increase the size of all the videos to 2048x2048, which was quite a large jump considering the original SG3 layer resolutions were: 148x148, 276x276, 532x532, 1044x1044. For the single-seed videos I combined the videos into different experiments using the soft light or hard light blend modes. Then I rendered out the videos so that I could do further compositing experiments without the render times becoming absurd. From there I did some color range cutouts with added glow, explored the difference and subtract blend modes paired with PixelSorter and PixelEncoder so that only the added FX was visible, and experimented with BFX Map Ramp to tweak the colors and get crazy with its cycle attribute.
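For anyone curious about the math behind those blend modes, here are the standard per-channel formulas (normalized 0..1 values; these are the common Photoshop/W3C definitions, which I believe AE's modes match closely):

```python
def hard_light(base, blend):
    """Hard Light: Multiply where the top layer is dark, Screen where
    it's bright. base = bottom layer, blend = top layer, both 0..1."""
    if blend <= 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

def soft_light(base, blend):
    """Soft Light, W3C-style: a gentler contrast push than Hard Light."""
    if blend <= 0.5:
        return base - (1.0 - 2.0 * blend) * base * (1.0 - base)
    d = ((16.0 * base - 12.0) * base + 4.0) * base if base <= 0.25 else base ** 0.5
    return base + (2.0 * blend - 1.0) * (d - base)

# A mid-gray top layer leaves the bottom layer untouched in both modes:
print(hard_light(0.25, 0.5), soft_light(0.25, 0.5))  # -> 0.25 0.25
```

This is also why the mid-gray-neutral modes stack so well: anywhere a clip sits near 50% gray, it simply disappears into the layers beneath it.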

I always love doing a displacement map experiment in Maya/Redshift. Normally I enjoy using a volume light, but this time I used a point light and it produced some interesting shadows when placed at the edge. I also ran with brute force global illumination since the noise didn't matter in this speedy context. AI secrets are surreal.

PACK ⬕ Error Overflow
- This pack contains 89 VJ loops (70 GB)

Surprise pack for New Year's! What strange warnings and errors will we see in the future? What tech will surround us? Always glitches.

With the rise of everything digital, I've been thinking of how to visualize an intense barrage of surreal computer errors and overcooked modern ramblings. But I put the idea on the backburner for about 6 months since I didn't have a solid foothold yet. When the Odometer AE plugin was released recently, I knew instantly that it was the missing inspiration I had been waiting for.

From there I jumped right in and spent an evening writing out a bunch of very short poetry that felt on the edge of tech nonsense, or maybe a few degrees from being real. It felt good to revisit my poetic roots since I hadn't written much in the last few years.

It was refreshing to do the main work in After Effects and quickly produce the onslaught of text. I realized that it would be fun to also do some stop motion experiments with text quickly flashing on the screen. And of course I had to create a 10 second countdown timer while doing this type of text focused work. I then did some remixes in NestDrop for some happy accident hunting. I'm glitch rich!

PACK ⬕ Nanotech Mutations
- This pack contains 83 VJ loops (90 GB)

Years ago I read a short story about morphing chrome on the horizon of the future. It was clearly visible as something approaching at great speed, yet something distant and vague, beckoning us with multifaceted promises, and sharp with vicious nightmares. I always thought it was an evocative metaphor for technology that is increasingly pervasive in our lives. The idea really made an impression on me and so I wanted to finally visualize it.

Some of my very first experiments with DallE2 were creating images of chrome metal in all sorts of mangled futuristic shapes. So when I was looking through some old files I stumbled across these early experiments and knew it was something I should revisit. DallE2 has a unique style that often looks different from Stable Diffusion and Midjourney, particularly with chrome metal, and so I decided to continue using DallE2 even though it's a pain to manually save each image. I generated 1,324 images in total using DallE2. If the DallE2 devs are reading this, please add a method to batch output. In the future I want to explore using Stable Diffusion to generate variations based on an image and see how that goes.

Then I took all of the DallE2 images and retrained the FFHQ-512x512 model using StyleGAN2. I also tested out retraining FFHQ-1024x1024 but it's slower and I had trouble getting it to converge due to the gamma attribute being delicate. So I stuck with 512x512 for the model.

The battle continues between the ScaleUp AE plugin and Topaz Labs Video Enhance AI. In this specific context, the ScaleUp AE plugin produced more desirable results when using the "sharp" setting, which significantly increases the render time to 1.2 seconds per frame. Ouch! So these were some heavy renders, but the "sharp" setting conjures up some unique details and I feel it's worth the added render time. I'm very curious to see how these uprez tools continue to mature in the future. Already Stable Diffusion 2 has added uprez capability for images and it'll be interesting to see if the AI models behind these video uprez tools can be trained to better understand all sorts of subjects.

Had lots of fun doing compositing experiments in After Effects. In the last few pack releases I've realized that removing specific colors and then purposefully blowing out the remaining colors often takes the video to a whole new level. So I've started exploring a bunch of effects that I normally wouldn't touch. For instance, when Time Difference is applied to a layer it typically doesn't look very interesting, but it looks great if I remove specific colors via Color Range and add some Deep Glow. I did some similar experiments using the Exclusion blend mode and created a glitchy iridescent shimmery look. In other experiments I tinted the metal a cool blue color and made the highlights glow various colors, which was tricky to pull off in the way I was imagining. I experimented with the Limiter effect to treat the perfect white highlights in a unique way. And I just cannot stay away from the wonderfully bizarre Slitscan effect.

With all of these amazing DallE2 images sitting unused after the StyleGAN2 training, I wanted to give them a new life but I just felt stumped. I finally landed on treating all of the images as a sequential frame sequence in AE, applying the Echo effect to have them automatically overlap, and then playing with Pixel Sorter 2, RSMB, and tons of Deep Glow. For the "EchoMinStep2_FakeMotionBlur" video I pushed the RSMB fake motion blur setting to the absolute max and I think it has an electric feeling that works great with intense music.

Recently I realized that I had given up on adding motion blur via ReelSmart Motion Blur since it cannot be forced to work on slow moving footage, which I admit is an unusual request. But seeing as how these videos are designed to be played at 200% or 400% of their original speed... why not just render out a version that is sped up on purpose? I just needed to speed up the footage in AE, pre-comp it, and then RSMB could work its magic. Such a simple solution that I hadn't thought of before.
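The logic of the speed-up trick is easy to see in a sketch: retiming multiplies the per-frame motion, which gives a motion-blur estimator like RSMB enough displacement to work with (toy numbers; the function names are mine):

```python
def retime(frames, factor):
    """Integer speed-up (400% = factor 4) by keeping every factor-th
    frame -- effectively what the speed-up precomp does."""
    return frames[::factor]

def max_step(values):
    """Largest frame-to-frame change: a stand-in for the motion a
    motion-blur estimator has to detect before it can add blur."""
    return max(abs(b - a) for a, b in zip(values, values[1:]))

slow = [i * 0.5 for i in range(41)]  # slow pan: 0.5 px per frame
fast = retime(slow, 4)               # 400% speed: 2.0 px per frame
print(max_step(slow), max_step(fast))  # -> 0.5 2.0
```

Four times the per-frame motion is the difference between RSMB seeing "basically static" and seeing something worth blurring.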

Doing some displacement map experiments in Maya/Redshift produced some tasty results. The displacement map brings out some interesting details and plus the brute force global illumination might be a bit noisy at lower settings but it still looks great. Someday I'll upgrade my GPU and go wild on those GI settings.

I'm growing frustrated with the color banding that happens in encoded videos with dark scenes and lots of subtle gradients from glowing FX. So I did some tests with rendering out 10-bit H264 and H265 MP4 videos from After Effects. They played back in VLC just fine and the gradients were so lush! But when I dragged the 10-bit H264 MP4 into Resolume 7.13.2, it froze up until I force quit it. And the 10-bit H265 MP4 wasn't even recognized by Resolume. So it doesn't look like 10-bit videos are supported in Resolume yet, which is a drag. I cannot find any Resolume documentation on the subject either. Yet it's without a doubt the most popular VJ software currently and so I gotta support it. I'm not entirely sure how much color banding matters when projected, due to light bouncing around and washing out the image, but I think it does matter for LED screens, which are popping up at concerts with more frequency. Something to revisit in the future. Mirrorshades!
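For anyone wanting to run the same test, a 10-bit encode mostly comes down to picking a 10-bit pixel format. A rough sketch of the kind of ffmpeg invocation involved (filenames and the CRF value are made up; built as a Python list so it can be handed straight to subprocess):

```python
# Hypothetical 10-bit encode test. The key flag is a 10-bit pixel
# format (yuv420p10le); everything else here is illustrative.
import shlex

cmd = [
    "ffmpeg", "-i", "master.mov",
    "-c:v", "libx265",            # H265; use libx264 for 10-bit H264
    "-pix_fmt", "yuv420p10le",    # 10 bits per channel vs the usual 8
    "-crf", "18",
    "master_10bit.mp4",
]
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually encode
```

The extra two bits per channel give 1024 gradient steps instead of 256, which is exactly what those dark glow falloffs need.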

PACK ⬕ Cloud Computing
- This pack contains 79 VJ loops (63 GB)

Get your head in the clouds. It all started with wanting to visualize an intense lightning storm. So I collected 178 real images of lightning and trained SG2 on them. But when I rendered out some latent seed walk videos, something didn't look right, yet I couldn't put my finger on what it was. So I put it on the backburner for a few months. Then I randomly had the epiphany that it was correctly visualizing the electric bolts, yet they were constantly visible, without the quick flashes in the dark. So I brought the videos into After Effects, used an expression to randomly key the transparency, and then added some glow on top.
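The random transparency keying boils down to a simple flicker curve. In AE I did it with an expression on opacity, but the idea is just this (a Python stand-in; the parameter names and values are mine, tuned by eye):

```python
import random

def flash_opacity(num_frames, flash_prob=0.06, decay=0.55, seed=7):
    """Per-frame opacity (0..100) for a lightning layer: random frames
    spike to full opacity and decay fast, so the bolts read as brief
    flashes in the dark instead of being constantly visible."""
    rng = random.Random(seed)
    out, level = [], 0.0
    for _ in range(num_frames):
        if rng.random() < flash_prob:
            level = 100.0      # a new strike lights the layer up
        out.append(round(level, 1))
        level *= decay          # rapid falloff back into darkness
    return out

curve = flash_opacity(120)
print(len(curve), max(curve), min(curve))
```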

After that I realized it would be interesting to create some beautiful flowing clouds. I collected 1,515 real images of clouds and then trained SG2 on them. The results were just what I was hoping for. I did some comparisons between the ScaleUp AE plugin and Topaz Labs Video Enhance AI and found that Topaz produced better details in this instance. So it seems that each uprez tool has different strengths according to the context, which I somewhat expected, and it helped confirm my theory that each uprez AI tool has been trained differently. From there I brought the videos into After Effects and did some experiments with Colorama gradients, Time Difference coloring, loads of accenting with Deep Glow, and some good ole Slitscan to play with time. I also did some layering of the clouds with the flashing lightning, which was delicate to figure out. The laser visuals were captured in NestDrop and then composited with the clouds.

From there the only missing piece was an ethereal rainbow. I had this daydream of a rainbow in the sky twisting and looping over itself in wild ways. So I used the NMKD Stable Diffusion GUI app to experiment with text prompts to create just that, but against a black background. It was tricky to figure out how to make SD consistently output what I had in mind without needing to use IMG2IMG, since I needed SD to generate thousands of images unhindered and with tons of diversity. I used SD to generate 9,447 images and then trained SG2 on them. I'm very happy with these results since it's rare that something matches exactly what I had initially imagined. From there I played with Slitscan and dd_GlitchAssist to further juice up the rainbows. Somewhere over the rainbow.

PACK ⬕ Mask Oblivion
- This pack contains 51 VJ loops (29 GB)

Just in time for Halloween. Let's get freaky!

With the recent release of Stable Diffusion, I can finally create large image datasets from scratch much more easily. I've done some experiments using DallE2 but it's just too much of a slog to manually save 4 images at a time, even if DallE2 follows text prompts more precisely.

So I downloaded NMKD Stable Diffusion GUI and started experimenting with text prompts that I found over on Krea.ai. From there I tweaked the text prompt until I was satisfied it would consistently produce images that had a similar composition. I then rendered out 4,129 images over a few overnight sessions. Then I manually went through all of the images and deleted any weird outliers. Having Stable Diffusion on my computer is a game changer since it means that I no longer have to waste tons of time curating and preparing images by hand.

After that I prepared the image dataset and retrained the FFHQ-512x512 model using StyleGAN2. Due to Google Colab recently adding compute credits, I was able to select the premium GPU class and get an A100 GPU node for the first time. This meant that I was able to finish retraining overnight instead of training over several days. Then, as per my usual workflow, I rendered out the videos as 512x512 MP4s.

An interesting development for me has been using the ScaleUp plugin in After Effects. I've always had to first pre-process the 512x512 MP4s in Topaz Labs Video Enhance AI so as to uprez to 2048x2048. But ScaleUp produces very similar results, plus I can just start doing compositing experiments instantly in After Effects without needing to first uprez. This saves me a substantial chunk of time, creative energy, and hard drive space. Skeletor would be proud.

PACK ⬕ Orangutan VR
- This pack contains 36 VJ loops (42 GB)

Sometimes when I see a person completely immersed in tech, I remember that our not so distant ancestor is the monkey.

For a while now I've been yearning for some type of AI tool that would enable me to create my own images and then use them as a dataset for training StyleGAN2 or 3. So I was quite excited when I heard about DallE2, applied, and got access to the beta program. After experimenting with various text prompts I finally nailed down a mainstay and then generated 542 images. It was slow going to manually generate and save 4 at a time, but this was before Stable Diffusion was even in beta. It's wild how fast these tools are maturing. And it's bending my mind that I can use DallE2 to extrapolate new images and then have StyleGAN2 interpolate on top of them. The future for artists is looking so weird.

I was doing some exploring of the various attributes of StyleGAN3 and researching what exactly FreezeD does, which led me to stumble across this paper explaining that you can freeze the lower levels of the discriminator to improve the fine-tune training process and help avoid mode collapse on small datasets. I hadn't seen anyone really talking about this online and so I tried out freezing the first 4 layers.

The results were surprising in that it definitely helped with overfitting (aka mode collapse), along with the huge bonus that it cut the training time in half! Which makes sense, as only 4 of the 8 layers of the neural network are being retrained. As I understand it, the first 4 layers of the network are quite small in resolution and so retraining them would not change the visuals significantly, so they are simply frozen. Overall better results, with less self repetition, and training in half the time... Hell yeah! This is a vital finding and I'll need to revisit a few prior trainings where I had to carefully curate and organize the seeds to hide the model being overfit, particularly the Machine Graffiti pack.
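To make the idea concrete, here is a toy sketch of what freezing 4 of 8 discriminator blocks amounts to (pure Python with my own names, not the repo's actual API; in the NVIDIA training scripts this is exposed as the --freezed option, if I recall correctly):

```python
def apply_freezed(disc_blocks, n_freeze=4):
    """Toy FreezeD: mark the first n_freeze discriminator blocks as
    frozen so only the remaining blocks receive gradient updates.
    (In a real run this would be requires_grad=False on those params.)"""
    for i, block in enumerate(disc_blocks):
        block["trainable"] = i >= n_freeze
    return [b for b in disc_blocks if b["trainable"]]

# 8 blocks total, freeze the first 4: only half the discriminator
# still trains, which lines up with the roughly halved training time.
blocks = [{"name": f"block{i}", "trainable": True} for i in range(8)]
print(len(apply_freezed(blocks, 4)))  # -> 4
```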

After exporting some videos from StyleGAN2, it was fun to first apply dd_GlitchAssist and then composite on top of that using PixelEncoder and Pixel Sorter plugins in After Effects. And of course some slitscan liquidness. Want a banana?

PACK ⬕ Datamosh Madness
- This pack contains 414 VJ loops (123 GB)

More tech means more glitches! Growing up, I experienced datamoshing visuals by accident while scrubbing an ASF video, when the motion vector info would get decoupled from the raster color info. And that fascination with odd glitches has stuck with me ever since.

In the past I’ve experimented with datamoshing, but it always felt like a one trick pony in my own experiments and I wanted to explore more of the multitude of glitch types that I intuitively knew were possible. So when I recently stumbled across some incredible FFglitch frontends, it was a dream come true. I’ve been using tools such as dd_GlitchAssist and Datamosher Pro.

What I love about dd_GlitchAssist is that you can easily set up a batch render with a huge amount of videos. And seeing as how I’m all about exploring happy accidents, I would start 700+ datamosh tests, wait a few days for the renders, and then slowly curate through the results. Such an amazing tool for my style of spelunking.

So I revisited some of the videos from the VJ packs: Machine Faces, Machine Eyes, Explosive Surprise, and Machine Hallucinations. Layering glitches on top of prior experiments is so ripe for exploring, hence why this pack is so large. All praise the juicy glitch.

PACK ⬕ Machine Stakeout
- This pack contains 32 VJ loops (19 GB)

Electronic eyes are everywhere all the time. Everyone is algorithmically famous for 15 seconds. So I wanted to explore this aspect of our lives that has become commonplace. And how better to visualize this but to have an AI generate videos of security cameras and paparazzi?

I started off by collecting 238 images of security cameras mounted on the sides of buildings and on metal poles. Interestingly, after training with StyleGAN2, it had an easier time producing good results on the seeds which had blue sky backgrounds, so I focused on these seeds for the generated videos. Then making it look like an old degraded TV made it feel like security footage from an SCP article.

While collecting those images, I kept running across images of photographers taking a photo of themselves in the mirror. It was perfect because it was effectively the portrait of a paparazzi with a DSLR and lens pointed at you. So I collected another dataset consisting of 224 images. It’s so weird how StyleGAN2 tried to replicate how human fingers hold a DSLR camera. It might have refined that understanding with more training time, but I doubt it with this tiny dataset, plus I was aiming for this uncanny feeling.

From there it was all gravy experimenting in After Effects with the PixelEncoder and Pixel Sorter plugins. Cheers to the omnipotent AI BBQ.

PACK ⬕ Emoji Wreck
- This pack contains 140 VJ loops (15 GB)

All of the feelings! I've been brewing on this idea of doing liquid simulations on various emoji. Emoji splatting against the wall, swirling down the drain, and smashing into each other. But after some experiments in Blender I just wasn't getting the look that I was hunting for. I wanted to focus on liquid along with glitches.

So I tabled the idea for a few months until I started experimenting with the dd_GlitchAssist frontend. After some initial tests I realized there was so much potential in creating animations specifically designed for the datamosh processing. So I collected loads of emoji characters, sequenced them, and did some experiments in After Effects to find which animation styles worked best.

At a later point I wondered what would happen if I trained StyleGAN2 on just the emoji faces, but the training quickly reached mode collapse due to the dataset being too small (123 images). Luckily I checked out some of the early PKLs of the FFHQ retraining process and was able to choose a model which showcases a scary mix of human faces and emoji. From there I had the idea of quickly editing back and forth between the uncanny AI faces video and the emoji stop motion video, which worked better than I expected. And then I fed them into dd_GlitchAssist to add some cross pollination.

Originally I was going to key out the green background and render out these videos with alpha, but it was removing too much interesting content and so I left it up to the VJ performer to add a real-time chromakey effect to their personal taste. Enjoy the sexting easter egg.

PACK ⬕ Machine Landscapes
- This pack contains 46 VJ loops (70 GB)

What would it look like to see the land change through the eons? After learning that any StyleGAN3 model can be used as a starting point for re-training, I started searching for models which people had freely shared for experimentation. I stumbled across an excellent model by Justin Pinkney which was trained on 90,000 images of natural landscapes. I knew that I would want very slow motion for the landscapes to evolve, so I set the w-frames to a high number to ensure that there were plenty of frames between each seed keyframe. I also had to render out many thousands of seed images so that I could curate only the best seeds and then arrange the ideal sequence of landscapes.
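The w-frames idea is just interpolation density: more in-between frames per seed pair means a slower, smoother morph. A 1-D toy sketch (the real tools interpolate full latent vectors, often spherically; the function name is mine):

```python
def latent_walk(keyframes, w_frames):
    """Walk between latent keyframes with w_frames steps per pair.

    1-D linear toy version of a latent seed walk: a higher w_frames
    spreads each morph over more frames, so at a fixed frame rate the
    landscape evolves more slowly.
    """
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(w_frames):
            t = i / w_frames
            path.append(a + (b - a) * t)  # lerp toward the next keyframe
    path.append(keyframes[-1])
    return path

# 3 seed keyframes with 60 in-between frames each -> 121 frames of video
print(len(latent_walk([0.0, 1.0, -0.5], 60)))  # -> 121
```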

Due to the model being trained at 256x256, I decided against using it for re-training purposes and instead just rendered out the videos without changes. Although I almost skipped over it due to the very low resolution, until I realized that I could uprez the renders using the Topaz Labs Video Enhance AI software. Then another challenge was that the uprezzed video looked a bit too soft, so I aimed to make it look like an old degraded TV through the use of the Zaebects Signal plugin. Having to jump through these hoops forced me to carefully consider my options and resulted in this feeling that we’re watching an epic timelapse via a shitty security camera feed.

I also found some models which were shared by Lucid Layers Studios. They had the interesting idea of training SG3 using a bunch of images created by another machine learning framework. Love it.

But the timelapse feeling wasn’t enough, so why not also apply a slitscan effect too? It’s as if a literal wave of time is passing in front of your eyes. Now that’s my kind of weirdness.

PACK ⬕ Machine Graffiti
- This pack contains 50 VJ loops (27 GB)

Street art has this unique unchained feeling and futuristic shapes that I have long been inspired by. So I wondered if I could get some interesting results in training StyleGAN2 on a graffiti dataset. I started off by collecting 716 photos of graffiti and then cropped them each to be a square. And instantly the results blew me away and were better than I had hoped for. I also did a few experiments with StyleGAN3, but the StyleGAN2 results had a more realistic feeling probably since it’s more forgiving of being trained on tiny datasets.

From there doing some experiments with slitscan and dd_GlitchAssist processing was a perfect match. I also did some experiments in After Effects where I keyed out specific colors and then added glow onto everything, which allows certain shapes to come alive.

As always, some of my favorite experiments are layering a new experiment on top of prior experiments. So I reengineered the displacement map technique I used for the ‘Mountains Flow’ pack. Except when I duplicated the layer and applied a multiply blend mode, I added even more fast blur, which rounded out the displaced 3D forms beautifully. Then the black hole shader allowed me to retain the cutout alpha channel. I rendered out one version with the true color and another using a glimmering gold shader. The joy of infinite experiments.

PACK ⬕ Surveillance Biz Glitch
- This pack contains 88 VJ loops (26 GB)

Anagram and I have been collaborating on this pack since early 2021. I sent him the 'Surveillance Biz' videos and he used them as fodder for experimenting with his analog video synthesizer. It was interesting to see 3D animation brought into a video synth and glitched up, and then he would send it back over for further manipulation.

Midway through our collab Anagram proposed an oscilloscope vector re-scan workflow, meaning he would output the video to an oscilloscope and then record the results using a video camera. It resulted in an aesthetic that is quite unique and impossible to recreate otherwise. The analog processing adds noise, sometimes recursion, and the CRT bloom is something else. So after some technical experimenting and polishing of the recapture method, he recorded a jam and then I edited the gem moments and applied a corner pin warp to make the oscilloscope monitor appear more face-on. I also did some experiments with applying a slitscan effect, which was icing on the cake. This is my favorite type of collaboration, where we bounce visuals back and forth, each person layering on something the other wouldn't have fathomed.

Anagram also produced some original content in the glitch spirit. The analog video synth has a very specific feeling about it; something about the way it acts feels alive to me. So I was keen to composite these visuals and polish them that extra little bit. After he recorded various experimental sessions, I edited the gem moments, tweaked the colors, removed moments where the visuals froze, and added radial blur or glow. I also rendered out versions with some added fake motion blur using the RSMB plugin, which worked nicely with the fast visuals.

I thought this was a good moment to revisit the 'Surveillance Biz' videos myself and create some of my own remixes. After some experiments I ended up jamming with the Zaebects Modulation and Pixel_Encoder plugins, which worked nicely since the effects respected the alpha channel areas.

Much more to explore with Anagram. To be continued in the future!

PACK ⬕ Machine Hallucinations
- This pack contains 86 VJ loops (28 GB)

When a robot falls asleep and dreams, what does it see? I think it would dream of growing into different modular shapes. So I set out to explore this concept.

I went down the rabbit hole and spent so much time collecting, curating, and formatting images from various Gundam Wing websites. The "Dream" videos were generated from a StyleGAN2 model that was trained using 1,465 images of black and white blueprint drawings. The "Solid" videos were generated from a StyleGAN2 model that was trained using 3,067 images of full color drawings. It was quite dull work and yet the results are exactly what I was hoping for. To keep these videos focused on the abstract robot geometry, I was careful to only show a human silhouette at rare moments.

The "CPU" videos were generated from a StyleGAN2 model that I found online, originally created by Mayur Mistry. It was actually trained using images of floor plans and yet I thought it looked more like a CPU chip that was evolving.

After generating videos out of StyleGAN2, I was concerned about how I was going to deal with the perfect white background since that much brightness tends to ruin the overall vibe. So after some failed experiments with color keying, I finally realized that a basic technique would do the trick: I simply inverted the colors and then rotated the hue to match the colors of the original.

Compositing the "Dream" videos was a unique challenge since they are just black and white. Yet after some experiments I realized there was actually a tiny bit of color in there, either from the machine learning or added compression noise, or perhaps a combo of both. So I cranked the color saturation to 100 and loved the results when glow was applied liberally. I'm such a sucker for how much glow can make things come alive.

The "CutoutDream" videos were some of my favorite experiments since I used one video as a luma matte to cutout a different video and then add heavy glow to make it sing. The "GlowGlitch" videos were the result of playing around and applying Pixel Sorter after Deep Glow and then tweaking the settings in various ways. When in doubt, add glow! I can't stop myself and I have no shame.
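The luma matte cutout is simple per-pixel math: the matte video's brightness becomes the other video's alpha. A single-pixel sketch (Rec. 709 luma weights; the function names are mine):

```python
def luma(rgb):
    """Rec. 709 luma of a normalized RGB pixel."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma_matte(src, matte):
    """Use one video's brightness as another video's alpha, per pixel:
    bright areas of `matte` reveal `src`, dark areas cut it out.
    Returns a premultiplied RGBA tuple."""
    a = luma(matte)
    r, g, b = src
    return (r * a, g * a, b * a, a)

# A black matte pixel cuts the source pixel out entirely:
print(luma_matte((0.2, 0.4, 0.8), (0.0, 0.0, 0.0)))  # -> (0.0, 0.0, 0.0, 0.0)
```

Since the "Dream" videos are nearly grayscale, they make ideal mattes: the luma is essentially the image itself.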

Compositing the "Solid" videos was tricky since I had trouble keying out specific colors to apply glow onto. So I experimented with instead using the threshold effect as a way to generate a luma matte for where glow would be applied, which is what the "GlowSolo" videos showcase. In the future I want to return and do some displacement maps in Maya with these videos.

PACK ⬕ Machine Faces
- This pack contains 39 VJ loops (52 GB)

I was in awe when I first saw the morphing faces generated by StyleGAN2. So I was excited when I realized that the extensively pre-trained models had been released for anyone to download and play with. The base videos were rendered using the FFHQ-1024x1024 model through StyleGAN2. I prefer the results from StyleGAN2 rather than StyleGAN3 in this case. I then used the Topaz Labs Video Enhance AI software to uprez the videos to 2048x2048. It's hard to believe that all of the human faces showcased here have been imagined by a computer and have never existed in reality.

To generate the creepy "mutant" scenes, I did some minimal re-training of this model using my own datasets. I say minimal because it only needed a few hours of retraining: it started off with the FFHQ model and then slowly evolved towards my datasets, and I stopped the re-training at the very early stages where you can still see the human faces. It was very weird to work on this since I had to pick out the best seeds for the latent walk videos; kinda like how you see Tetris in your dreams if you play it too much.

My initial inspiration for this whole pack was to have the faces glitching out while morphing and that worked beautifully. Then I started exploring some other glitch techniques and tried out the slit-scan effect which was the far out type of bizarre that I hunt for. It's the stuff of a fever dream while sitting in a sauna. The NestDrop experiments were icing on the cake.

I had the idea of taking the FFHQ video into After Effects, adding a multiplied layer with some fast blur, and rendering it to be used as an animated displacement map in Maya. Adding the blur helped to smooth out the features for use during displacement. I then applied this texture onto a sphere and was instantly happy with it. Then I experimented with getting the skin shader working correctly since the settings for subsurface scattering are so finicky.

The TextureDeformerDense and TextureDeformerSparse videos were actually tricky to fully realize. I wanted to convert the FFHQ video to polygons and then render a wireframe of it. But I was having trouble with the displacement map doing what I wanted, so I finally switched over to using a texture deformer. And yet then the polygons were a perfect grid due to the tessellation of the flat plane object, so the wireframe just rendered as a grid when seen directly overhead, even with the texture deformer applied. So then I applied a poly reduction node and that's when things got interesting.

PACK ⬕ Machine Eyes
- This pack contains 9 VJ loops (6 GB)

Welcome to the uncanny valley! Here we have a selection of human eyes so that you can watch your audience from the front stage. Finally the literal all seeing machine eye. These videos are the result of training StyleGAN3 using a dataset of 217 images.

Machine learning has long intrigued me since I've always been curious about different methods of interpolation. I find the results are often evocative and almost always different from what I initially anticipate. So naturally I've wanted to explore machine learning for art purposes, aiming for the uncanny rather than strict realism. Yet the GPU requirements have been too heavy and the results too low rez, so I've been waiting for the tech to mature... And that time has finally arrived!

My mind really started reeling when StyleGAN2 was released, so I did some experiments on the feasibility of training at home. But then I stumbled across Google Colab and at first I thought it was too good to be true... Cheap access to high-end GPUs? It felt like a sudden leap into the future. Utilizing a Tesla P100 GPU node on Google Colab, I would typically get interesting results after about 12 to 48 hours of retraining since I'm looking for surreal and glitchy visuals.

I haven't seen much shared about training with really tiny datasets. I've found that 1000 to 2000 image datasets end up with a decent amount of interpolative potential. Yet for datasets in the 200 to 500 image range I had to ride the line of avoiding mode collapse by hand-selecting the seeds prior to rendering out the latent walk video. In other words, the generated visuals would start to repeat themselves, so I'd overcome that by hand-selecting and arranging the gems. Yet even this method would fall apart when using datasets containing fewer than 200 images, so that was really the absolute minimum necessary, which I found surprising but perfect for my needs. Manually arranging the seeds into a specific order was vital.

In the beginning I was tinkering with a few Colab Notebooks to try and understand the basic pitfalls, but most people are using it for generating media from models that have already been trained. So a huge thanks goes out to Artificial Images for sharing their training focused Notebooks, workshops, and inspiration. This one workshop in particular was helpful in answering questions that I'd been wondering about but hadn't seen shared elsewhere. Getting the StyleGAN2 repo running on Colab proved to be frustrating and then I realized that the StyleGAN3 repo included support for both techniques and is a more mature codebase.

Initially I was frustrated about being limited to 512x512, even though it keeps the retraining times much more realistic. But then I did some uprez testing with the Topaz Labs Video Enhance AI and the results blew me away. I was able to uprez from 512x512 to 2048x2048 and it looked sharp with lots of enhanced details.

Collecting, curating, and preparing my own custom image datasets took a solid 2 months. Then 1 month dedicated to retraining. And finally 1 month of generating the latent walk videos and experimenting with compositing. So that explains why I haven't released any packs recently. Hence I have a bunch more machine learning packs coming up.

PACK ⬕ Warp Factor
- This pack contains 26 VJ loops (17 GB)

I was recently watching the movie "Flight of the Navigator" and was soaking up the nostalgic awe of the futuristic spaceship that it so proudly features. I originally wanted to have this chrome spaceship darting all around the screen, but then I started finding tons of other spaceship models. So then the idea transformed into having 73 different spaceships tumbling while traveling at warp speed. There are so many amazing free 3D models that artists have released. Respect!

An interesting challenge arose: How do you make something look like it's traveling insanely fast without needing tons of actual environment surrounding it? My idea was to create an animated reflection map and then apply it to the domeLight in Maya. It needed to be an equirectangular map so that it could surround the entire scene, which led me to a fun trick. I downloaded some real astronomy photos from Hubble, imported the photos into After Effects, vertically stretched them 1000%, animated the Y translation very quickly, rendered it out, and linked it into Maya. In this way I was able to create a sense of immense speed by ignoring the equirectangular pole mapping. Then rendering out a motion vector AOV pass for the spaceships was the icing on the cake.

After comping the spaceship renders in After Effects I realized that adding yet another layer of motion would be beneficial. So I started creating a star field so that I could fly the camera through it at a slower speed. My typical approach in the past would be to use an nParticle emitter to create a sprite star field, yet Redshift cannot render sprite-type particles. So I did some brainstorming and realized that I really just needed a way to randomly distribute 20,000 low poly spheres within a given volume. And of course a MASH simulation was perfect for this.
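The distribution step itself is simple enough to sketch outside of Maya. Here's a minimal, hypothetical Python version of the idea (the function name and bounds are my own, not part of any Maya API): uniformly scatter points inside a box volume, then instance a low poly sphere at each point.

```python
import random

def scatter_points(n, size=(100.0, 100.0, 20.0), seed=42):
    """Uniformly scatter n points inside a box volume centered on the
    origin -- the same idea as distributing low poly spheres in MASH."""
    rng = random.Random(seed)  # fixed seed so the star field is repeatable
    half = [s / 2.0 for s in size]
    return [tuple(rng.uniform(-h, h) for h in half) for _ in range(n)]

# 20,000 star positions, ready to be instanced as low poly spheres
stars = scatter_points(20000)
```

Using a fixed seed means the same field can be regenerated if the scene needs to be rebuilt.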

A project of this type always demands so much prep work. I had to prepare all of the models, group and center all of the polys, scale everything to the same size, flip UVs, clean up, import, place, and animate everything. But sometimes I enjoy this type of prep work since it allows me to brew on the creative possibilities. Live long and prosper.

PACK ⬕ Cursor Swarm
- This pack contains 69 VJ loops (114 GB)

Sometimes when I see a piece of news skyrocket, I imagine all the billions of people using a computer to digest the same thing. I had this idea of watching a bunch of mouse cursors flocking together and glitching out. So I found this great cursor model and started experimenting.

I have always wanted to explore the Flight node within a MASH simulation in Maya and so this was the perfect opportunity. I thought that I would need to animate a leader for the flock to follow, but I just tweaked the Separation, Alignment, and Cohesion Strength attributes to my liking and there was emergence. I was surprised to learn that I could link any type of dynamic field into the Flight node, and I found the Air Field added the natural kind of turbulence that I was looking for. Also the MASH Orient node (set to Velocity) was critical for having each entity pointing along its movement vector.
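That Separation/Alignment/Cohesion trio is the classic boids flocking model. As an illustration of why no leader is needed (this is the textbook rules in plain Python, not the MASH implementation), one update step might look like this:

```python
import math

def flock_step(boids, sep_w=1.5, ali_w=1.0, coh_w=1.0, radius=5.0, dt=0.1):
    """One step of a minimal boids update. Each boid is [x, y, vx, vy] and
    steers by three rules: separation (avoid crowding), alignment (match
    neighbor velocity), and cohesion (move toward the neighborhood center)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        nbrs = [b for j, b in enumerate(boids)
                if j != i and math.hypot(b[0] - x, b[1] - y) < radius]
        ax = ay = 0.0
        if nbrs:
            n = len(nbrs)
            cx, cy = sum(b[0] for b in nbrs) / n, sum(b[1] for b in nbrs) / n
            avx, avy = sum(b[2] for b in nbrs) / n, sum(b[3] for b in nbrs) / n
            # cohesion pulls toward the neighborhood center,
            # alignment nudges velocity toward the neighborhood average
            ax += coh_w * (cx - x) + ali_w * (avx - vx)
            ay += coh_w * (cy - y) + ali_w * (avy - vy)
            # separation pushes away from each too-close neighbor
            for bx, by, _, _ in nbrs:
                d = math.hypot(bx - x, by - y) or 1e-6
                ax += sep_w * (x - bx) / (d * d)
                ay += sep_w * (y - by) / (d * d)
        new.append([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])
    return new
```

Tuning the three weights against each other is exactly what tweaking the MASH strength attributes does, and coherent flocking emerges with no leader at all.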

It was interesting to see how the same simulation changed by simply increasing the total number of entities (points) from x1,000 to x10,000 to x50,000. For the x1,000 experiments I could utilize the cursor 3D poly and I didn't run out of VRAM. But at x10,000 I had to switch over to using a 1x1 polygon plane and map an image of a cursor onto it, which enabled me to increase the total count past x200,000. The visuals got too crowded past x50,000 and so I limited it to that, but I was astounded at how far I could push it.

Since I was using a MASH simulation, the Trails node was an interesting thing to play with especially given the initial inspiration. The Connect to Nearest attribute was exactly the style I had imagined for an extra render layer to jam with in post. The "LinkSolo" scenes kinda took on a life of their own.

After looking at the renders, I was struggling with how to add the glitchy effect I was yearning for. Finally I tried doing some slit-scan experiments and struck gold. For all of the "Flock" scenes I rendered out versions with and without motion blur. Since I used the RSMB plugin to generate the motion blur, it sometimes glitches out and I think it's perfect in this context. But I could see some people just wanting it straight and so I included both options even though it added so many GB's to the pack size.

The "Grid" and "Tumble" scenes were an experiment in trying to make a complex but interesting matrix of cursors. At first I tried working directly in After Effects until I realized that Maya MASH simulations were once again perfect for the task. Also working in Maya allowed me to tuned the max rays in Redshift so that there was no aliasing of the very fine details in the distance. The slit-scan technique again proved to be wild for these scenes.

PACK ⬕ Cellular Auto
- This pack contains 189 VJ loops (177 GB)

I have always been fascinated by cellular automata and the complex structures that emerge from very simple rules. I will often just play and tinker with the initial state to see how it affects the whole simulation. I have this wild idea that nanotech will someday utilize this stuff since things like the ‘OTCA metapixel’ have been engineered.

So I fired up Golly and explored the many categories of simulations. I had originally planned on downloading a bunch of presets from forums and then building my own, but there were so many amazing ones already included with Golly. Props to the Golly community, what an amazing resource. I stand on the shoulders of giants.

I had been dreading the process of recording the simulations since I really didn’t want to rely on an OBS screen capture and possibly introduce encoding artifacts. But luckily someone has shared a Lua script which renders out the raw pixels into a BMP frame sequence. It took some time to render out some of the more complex simulations since it only writes after the CPU has completed each frame, but it was ideal since I needed a perfectly smooth frame rate.

After that I needed to uprez each of the renders since each pixel was a single cell and yet I wanted to visualize them as larger squares. This turned out to be a strange challenge since I wanted to translate the footage, scale it up, prep the colors to be a displacement map, and yet not introduce any aliasing. So I ended up uprezzing the footage into a 1920x1080 canvas and then rendered out using the ‘draft quality’ settings of After Effects and apparently that uses the 'nearest neighbor' algorithm.
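Nearest-neighbor scaling is why the cells stay crisp: every cell simply becomes a solid block of identical pixels, with no interpolation to smear the edges. A tiny Python sketch of the idea, operating on a 2D grid of cell values rather than actual footage:

```python
def upscale_nearest(grid, factor):
    """Scale a 2D grid by an integer factor using nearest-neighbor
    sampling: every cell becomes a factor x factor block of identical
    values, so no interpolation (and no aliasing blur) is introduced."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in range(factor)]   # repeat columns
        out.extend([list(wide) for _ in range(factor)])  # repeat rows
    return out

# a 2x2 checker of cells becomes a 4x4 image of crisp 2x2 blocks
upscale_nearest([[0, 1], [1, 0]], 2)
# -> [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
```

This is effectively what After Effects' draft-quality resampling does, just applied to full frames.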

That allowed me to finally do some 3D experiments using Maya and Redshift. So I linked the animated displacement map onto a poly plane and also created a separate poly plane with a black hole shader so as to hide any of the boring aspects at rendertime and generate an alpha channel. I have grown frustrated with the lack of abstract IBL maps, so I used colorful flat photos instead of the typical equirectangular map and that resulted in some unique reflections when I animated the rotation. Also an interesting thing happened when I applied a wave deformer to the poly plane: I think it affected the normals, so the displacement map reacted in surprising ways. Lastly, included in this pack are all of the original displacement map renders, so you can jam with them however you want.

I prepared way too many render layers using different cameras and shaders. So I had a large backlog of renders running for a solid month. Ironically many of the renders didn’t turn out very interesting and I only kept the delicious ones. Even still, I then created even more versions while compositing in After Effects and so this pack ended up enormous. I rendered out “glow” versions for many of the videos since adding a glow effect in Resolume has a heavy processing overhead.

PACK ⬕ Iris Anomaly
- This pack contains 64 VJ loops (42 GB)

I was thinking about how an onstage musician is often performing directly in front of the visuals and so I wanted to create a 3D animation which embraces that fact. So I revisited the tech builder collection that I got a while back and animated this beautiful iris geometry featuring concentric rings.

After animating each of the sequential rings in opposite directions, I keyframed the Y translation and then randomized their locations. This allowed me to achieve some complex motion without much trouble.

To further juice up the rings I auto-mapped the UV's for each of the shapes and then applied an animated stripe pattern to the shader opacity. Due to the auto-mapping technique, the stripes were placed algorithmically and I was happily surprised with the results. Then I animated the 'Repeat UV' attribute to have the stripes continually evolve slowly and yet be offset from the oscillating motion of the rings.

I thought it could be interesting to someday have each of the "RingSolo" scenes be projected on physical surfaces at different distances. That idea led me in the direction of rendering each ring segment to its own layer, which opened up some interesting layering possibilities and also the option to change the speed of each ring layer individually while performing live.

I probably went a bit wild with the NestDrop remixes but I just kept stumbling across visuals that I knew would be fun to tinker with in After Effects.

PACK ⬕ Explosive Surprise
- This pack contains 168 VJ loops (28 GB)

Explosions are dazzling when they aren't terrifying. I've long wanted a collection of explosions for my own projects since they are the perfect crowd pleaser for a beat drop. I wasn't going for realism here but instead some exciting stylized explosions. In this pack I've created 7 different types of explosions and each explosion has 5 different variants since I was able to change the seed and render it out. Then I also created 2-3 different options of glow treatments. So that explains the 168 different videos in this pack.

I already had some good experience with fluids in Maya having created some nebulae scenes in the past, but I needed a refresher of how to approach it explicitly for explosions. Turns out I was pretty close but this tutorial helped hone my skills. Also I stopped spinning my wheels when I realized that I could just render using Arnold and not need to worry about the difficulties of rendering fluids in Redshift. Then my first renders out of Arnold just didn't feel right and that's when I realized that adding some glow in post was vital to create that super hot feeling.

I made sure that no edges are visible, meaning that none of the explosions leave the frame. Some of the videos are actually 3K or 4K resolution, but I simply extended the resolution so that the glow gradient wouldn't go out of frame. Since each video includes an alpha channel, you can place the video anywhere you want on your canvas. This also allows you to mash them together, which is how I created the "collage" videos in this pack.

I rendered out the videos at half real-time, so that gives you more freedom to change the speed of the explosion while you're performing live. All of the videos featured in the compilation edit above were sped up to 200%, 300%, or 400% just to make them super short and intense to match the music.

I decided to force fade out each video when the explosion is finished and it's just thick smoke left over. In the context of a live performance I find it distracting when a clip just pops off when it's done. Perhaps not the best solution for some people, but I found it to look great in my layering tests within Resolume.

The "Warning Signs" scenes were inspired by seeing a bunch of road signs stacked on the side of the road. So I collected some open-source imagery, cut each one out, retained the alpha, and then sequenced them up in After Effects. I knew that they would be so juicy when injected into NestDrop and I wasn't disappointed with the results.

PACK ⬕ Chess Sim
- This pack contains 60 VJ loops (52 GB)

I was catching up with Brenna Quirk and Simon Gurvets recently and they shared how they've been experimenting with some chess simulations after learning about the Trapped Knight problem from a Numberphile video. After seeing their various experiments, I thought it would be a fun project to collaborate on. They both agreed and we jumped right in.

So they rendered out the coordinates into a CSV, but how was I going to get the coordinates into Maya in a usable fashion? I lucked out on my first try by reformatting the coordinates slightly and then pasting them directly into an SVG. My first tests looked strange until I realized that the SVG coordinate system has its 0,0 origin in the top-left corner. So I opened it up in Illustrator and moved the path to be fully on the canvas.
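The reformatting itself is only a few lines. Here's a hypothetical Python sketch of the conversion (my own helper, not from Simon's repo), including the translation onto the canvas and the Y-flip needed because SVG puts 0,0 at the top-left with Y pointing down:

```python
import csv, io

def coords_to_svg(csv_text, scale=10.0):
    """Turn "x,y" CSV rows into a minimal SVG polyline, translated fully
    onto the canvas and with the Y axis flipped for SVG's top-left origin."""
    pts = [(float(x), float(y)) for x, y in csv.reader(io.StringIO(csv_text))]
    xs, ys = zip(*pts)
    min_x, max_y = min(xs), max(ys)
    # shift so everything is on-canvas, and flip Y (SVG y grows downward)
    mapped = [((x - min_x) * scale, (max_y - y) * scale) for x, y in pts]
    w, h = (max(xs) - min_x) * scale, (max_y - min(ys)) * scale
    points = " ".join(f"{x:g},{y:g}" for x, y in mapped)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w:g}" height="{h:g}">'
            f'<polyline fill="none" stroke="black" points="{points}"/></svg>')
```

Note that a stroke-based path like this is the variant Maya rejected; the actual import needed a polygon-shape version, but the coordinate handling is the same either way.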

Now I could import the SVG directly into Maya. For the first test I had created the SVG using a line stroke method. And of course that method isn't supported in Maya. So I edited the SVG a bit so that it instead used a polygon shape method. This method worked fine and Maya was able to import the SVG and make a CV curve out of it. From there I did 7 different experiments jamming with the CV curve in various ways.

Along with the Trapped Knight problem, Simon did some experiments with custom chess piece movements and also with two knights moving concurrently on the same infinite chess board. Each knight considers its starting point to be the origin and each has its own spiral which it uses to decide which square to jump to, although a knight cannot jump to a square that's already been occupied by the other knight. The simulation produced some really beautiful interactions. Check out Simon's Github to explore the Python code that generated the simulations. Included within the torrent are the SVGs that were used to create all of these scenes. Below are descriptions of the different simulations.

Knight 1-2:
Standard chess knight that moves 1 square in one direction and 2 squares in a different direction.
Knight 1-2 vs 1-2 Frame0:
Two standard chess knights with their own spiral numbering, but a single point on the lattice can't be visited twice. This graph has both knights starting from (0,0).
Knight 1-2 vs 1-2 Frame2:
Two standard chess knights with their own spiral numbering, but a single point on the lattice can't be visited twice. This graph has one knight starting at (0,0) and the other starting at (20,20).
Knight 11-12:
Knight that can move 11 squares in one direction and 12 squares in a different direction.
Knight 19-24:
Knight that can move 19 squares in one direction and 24 squares in a different direction.
Knight Dragon Curve 1-0:
A chess piece that can only move 1 square up, down, left, or right. Unlike a king, it cannot move diagonally. The dragon curve was implemented by creating 4 copies of the dragon curve, all starting at the origin. The spiral numbers were assigned round-robin style between the 4 curves, without overwriting a spiral number already assigned to a coordinate.
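For anyone curious how the Trapped Knight simulation works under the hood, here's a self-contained Python sketch (my own reconstruction of the published algorithm, not Simon's code): number the board along a square spiral, then greedily jump the knight to the lowest-numbered unvisited reachable square until it has nowhere left to go.

```python
def spiral_numbers(radius):
    """Number an infinite board along a square spiral: 1 at the origin,
    then right, up, left, down in ever-growing arms. Only squares out to
    `radius` are generated, which is plenty for the knight's walk."""
    num, x, y, n, step, d = {(0, 0): 1}, 0, 0, 1, 1, 0
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    while max(abs(x), abs(y)) <= radius:
        for _ in range(2):            # two arms share each step length
            dx, dy = dirs[d % 4]
            for _ in range(step):
                x, y, n = x + dx, y + dy, n + 1
                num[(x, y)] = n
            d += 1
        step += 1
    return num

def trapped_knight(a=1, b=2, radius=60):
    """Greedy knight walk: always jump to the reachable unvisited square
    with the smallest spiral number. Returns the visited squares and the
    spiral number of the square where the knight gets trapped."""
    num = spiral_numbers(radius)
    deltas = {(sa * p, sb * q) for p, q in ((a, b), (b, a))
              for sa in (1, -1) for sb in (1, -1)}
    pos, visited, path = (0, 0), {(0, 0)}, [(0, 0)]
    while True:
        cands = [(num[s], s) for dx, dy in deltas
                 for s in [(pos[0] + dx, pos[1] + dy)]
                 if s in num and s not in visited]
        if not cands:
            return path, num[pos]     # trapped: no unvisited moves left
        _, pos = min(cands)
        visited.add(pos)
        path.append(pos)
```

Run with the standard 1-2 knight, the walk famously ends trapped on spiral square 2084; the `a` and `b` parameters also cover the 11-12 and 19-24 variants described above (with a larger `radius`).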

The "Column Trails" was the result of running an MASH simulation on the CV curve and having 500 cubes trace along the curve. Also each cube leaves a short trail behind it, which was a pain since it needed to be cached out. The tall columns were achieved by having all of the cubes connected via a single point and then moving that point far behind the camera. Then I played with various camera paths.

The "Connections" scenes were the result of running a similar but much more intense MASH simulation on the CV curve. So you're looking at 10,000 cubes moving along the curve. Which was very slow to simulate and it took longer to move between each frame rather than the time it took to render the frame. The orange lines are connecting the 2 nearest cubes and the blue-green lines are connecting the 10 nearest cubes.

The "Crystals" scenes was a minimal MASH simulation of cubes tracing along the CV curve. I limited the amount of cubes and then elongated them to be very tall. Then I applied an emerald material shader and setup some various AOV render layers for some options in post. I also animated the rotation of the dome light quickly so as to enhance the sparkly effect for the gem refractions.

The "Boolean Mutate" scenes were the result of wondering if I could interpolate between these two CV curves. So I separated the CV curves by some distance than the other and used the 'loft' command to automatically create polygon walls between the curves. At this point I knew there was some complex beauty happening in there but it was hidden, so I used a Redshift shader to create an animated boolean. Kinda like when you cut an orange in half and can see the middle, except I slowly cutaway more and more of it as time goes on. So what you're seeing in essence is the interpolation from one of your knight coordinates path into a different knight coordinates path. To achieve the animated boolean, there is a texture-grid that defines where the boolean is applied onto the model. So I parented this texture-grid onto the camera so that the animated boolean perfectly matches the forward motion of the camera. I also animated an area light to revolve quickly around the camera and parented this to the camera too. These scenes were ripe for some NestDrop remixes and loved the intense glitchy results that happened.

The "Links" scenes was an exploration to see what kind of algorithmic interaction I could get happening between two curves. So I raised one of the curves to be higher than the other, added both curves into a single MASH simulation, and had the MASH network automatically connect the nearest cubes based on proximity. I had to carefully tune the proximity to be not too close (no links) or not too distant (too many links) and find the sweet spot. Then I created a bunch of render layers, each render layer having a slightly different value for the proximity, so that I had a range to play with in post. I also had to create two poly planes (with a black hole shader) to hide all of the links happening on a single curve, since I was only interested in seeing the links between the two curves. After the renders were completed, since each render layer was a duplicate except for the unique links, I combined different renders together in After Effects with a 'classic difference' blend mode to automatically remove the duplicate links and only be left with the unique links for each render.

The "Boolean Ripple" scenes was an experiment with a Knight Dragon Curve coordinates that Simon had sent me. This was a very similar approach as the "Boolean Mutate" scene, except instead of lofting it into polygons, instead I revolved along a center axis. Kinda like how a lathe machine acts to cut out metal or wood, except I used the coordinates as the input. Then I applied a circular wave deformer and applied the animated boolean.

The "Knots" scene was the result of importing the 'two knights' simulation into Inkscape and experimenting with some of its path effects. I found that the 'interpolate' command would make it look almost like natural rope and then the 'knots' command would give it some depth. Then I took them into After Effects and added a shimmer effect that was achieved by creating some fractal noise, changing the scale to be quite small, animating the evolution attribute, and then using it to luma cutout the SVG. Then a final touch of 'turbulence displace' added on top of everything to give it some subtle motion.

PACK ⬕ Metal Vista
- This pack contains 21 VJ loops (39 GB)

Josiah Lowe and I have been collaborating on some still vector art lately and so we decided to see what could happen if I animated it. We agreed on a theme of an abstract jungle and see where it would lead us. Josiah created some fresh artwork in Illustrator, including a range of beautiful shapes that were reminiscent of plants. I then converted it into an SVG and imported it into Maya.

The “hills grass” scenes used one of Josiah’s drawings and replicated it x50,000 onto a hilly terrain using MASH. The hilly terrain was created by applying some wave noise into a texture deformer on a poly plane. The “hills arms” scenes use another drawing by Josiah. Due to the way he laid out the shapes, it inspired me to rig each section and make it almost like a robotic arm. Surprisingly the end result reminds me of a stork bird looking for bugs to eat.

The “cave” scenes started by trying to use many of Josiah’s drawings as fruit hanging from a tree, but I couldn’t get a result that I was happy with. So then I started experimenting with the ‘duplicate special’ tool in Maya which allowed me to add a tiny amount of rotation to each new duplicated shape and then repeat x1000 to create a long column. Then I applied that same technique to the 17 other drawings. From there I just applied some slow animated rotation to each group and arranged them to make a tunnel, making sure that the camera didn’t crash into any of the shapes. I found some interesting reflections using glass and metal material combinations and placing area lights at even intervals within the tunnel.

The “tree fly” scenes were originally created to layer on top of the “hills grass” renders, but it just didn’t feel right to me and so I kept it separate. The “moving lights” scenes were an interesting test of Redshift since I wanted to have columns of light moving through the scene. So I created a directional light and a bunch of polygon strips, leaving empty space between each strip, animated the whole group of strips, and then enabled global illumination. The Japanese Maple trees were sourced from Mantissa.

This time around I took a different approach for the NestDrop remixes. Instead of injecting the videos into NestDrop, I recorded the NestDrop visuals by themselves and then brought everything into After Effects. This allowed me to test out various compositing ideas and use some fun layering tricks and added effects.

PACK ⬕ Surveillance Biz
- This pack contains 18 VJ loops (7 GB)

I was thinking that an appropriate symbol for social media is the security camera. It's interesting how the symbol has transformed from passive surveillance into mass data collection. Tech spirit of our times.

In these experiments I wanted to explore the idea of security cameras that have come alive. So I created three different scenes in Maya. The "popup" scene is a group of security cameras swiveling to gaze at the viewer. The "crowd" scene has a bunch of security cameras oscillating in several different layers, my attempt to have surveillance coverage from every angle in an absurd fashion. The "closeup" scene is a single security camera that floats in from the distance and continues right into the viewer's face. Here is the security camera model that I used.

After so much technical spelunking of late, it was refreshing to get back to my roots and do some homespun 3D animation. Did some interesting experiments with abstract textures for the dome light. The Redshift material shaders still seem to slow down my creative process, but maybe I should fully light the scene first and that would naturally change my approach. That's what I love about 3D animation, always new stuff to learn and explore.

PACK ⬕ Soaring Eagle
- This pack contains 43 VJ loops (74 GB)

Inspired by a dream I had a while back. I was flying in the pink sunset clouds and saw some birds soaring among the clouds, and connected to their wings were long golden silks that followed the movement of the wings flapping. When I awoke the image was still clear in my mind.

I started off tinkering with a diamond shader with a super high dispersion factor to create some wild colors. But the real breakthrough happened when I added a field of spheres below the bird, set them to illuminate, and made them visible only in the refractions. The eagle is rigged using bend deformers for the wing and tail motions. To give the bird a little more life when it was just soaring and not flapping its wings, I also used a wave deformer to add a feeling of undulating movement.

Interesting to note that the rays were done entirely in After Effects, thanks to carefully keying out specific colors and then applying many layers of the fast radial blur effect. I had planned on doing the cloth in Maya but wasn't in the mood to deal with a cloth simulation, so some quick experiments out of desperation proved to be fruitful. It's a different effect than what I originally had in mind but I'm pleased with the result.

For the clouds I was trying to get Mental Ray working again but didn't wanna deal with an older version of Maya. But then I realized that I could easily render the cloud using the Maya Software render engine. I normally stay far away from the Maya Software render engine but rendering Maya fluids in Redshift is a total pain. I'm surprisingly happy with the result and gotta explore more abstract Maya fluids sometime in the future.

It's been satisfying to render my own 3D animations and then inject them into NestDrop to hunt for some gems. It's playtime! The bird renders were perfect for this and I had to limit myself to only a few absolute favorites.

PACK ⬕ Mountains Flow
- This pack contains 35 VJ loops (41 GB)

These were some really fun experiments. Seeing as how much I've enjoyed the happy accidents of injecting Maya renders into NestDrop lately, I had the crazy idea of going the other direction... Injecting NestDrop recordings into Maya. I have been wanting to explore animated displacement maps and so this was a good reason to try it out.

It took some trial and error to nail down a good workflow. If I imported the raw NestDrop frame sequence directly into Maya then the displacement appeared too harsh in the Maya renders. So after recording the NestDrop visuals I treated them in After Effects: I simply made the footage black and white, duplicated the layer, and applied a multiply blend mode along with some fast blur. This blur was critical for rounding out the harsh forms once the displacement was rendered in Maya. Then I rendered out each of the videos to a PNG frame sequence for linking into Maya.
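That treatment boils down to two operations: a blur and a multiply blend (out = a × b / 255). A pure-Python sketch of the same idea on a toy grayscale frame, with a naive box blur standing in for After Effects' fast blur (function names are mine, purely illustrative):

```python
def box_blur(gray, r=1):
    """Naive box blur: each pixel becomes the mean of its (2r+1)^2 window."""
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            win = [gray[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            row.append(sum(win) // len(win))
        out.append(row)
    return out

def prep_displacement(gray, r=1):
    """Multiply the frame by a blurred copy of itself (multiply blend):
    darkens overall and rounds off harsh spikes before the frame is
    used as a displacement map."""
    blurred = box_blur(gray, r)
    return [[gray[y][x] * blurred[y][x] // 255 for x in range(len(gray[0]))]
            for y in range(len(gray))]
```

Flat areas survive unchanged while isolated hot pixels get pulled way down, which is exactly the softening that tames the displacement in Maya.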

Empty space is so useful, especially when it comes to alpha. But when I linked the animated displacement map it rendered as an utter solid. So I created another poly plane and applied a black hole shader so that I could hide any of the boring aspects at rendertime, along with generating an alpha channel. This worked marvelously and allowed even more happy accidents in a process already fully guided by happy accidents.

And of course, what's to stop me from then taking the resulting Maya renders and doing another round of NestDrop remixes? Since each of the Maya renders have an alpha channel, I couldn't help myself. What an ouroboros this pack has become.

PACK ⬕ Outbreak Nodes
- This pack contains 18 VJ loops (29 GB)

Nauté is a friend of mine who has been working on simulation and visualization of infectious disease outbreaks. This data simulates the spread of a virtual pathogen with COVID-19-like epidemiological parameters in a college campus. It's a raw topic since we're currently living through a pandemic, but it's best digested through artwork.

We have long wanted to collaborate on a project and so it was refreshing to jam with real data. Nauté sent me the pandemic visualization and then I did a screen capture of the animation. I had to manually remove any duplicate frames, since my computer couldn't keep up with something in the pipeline. Then I processed it in After Effects to carefully remove the background to create an alpha channel.

My original plan was to do some animated displacement map experiments in Maya and explore some MASH networks. But then I started trying a few different ideas in NestDrop and getting some good results that matched the intensity that I was looking for. A quick but satisfying collab.

PACK ⬕ Recursion Stack
- This pack contains 30 VJ loops (29 GB)

For a long time I've wanted to revisit the Vector Recursion Workbench software collaboration I did with Nathan Williams. So I generated an SVG from the software and imported it into Maya. My first experiments proved fruitful: extruding each of the shapes individually and then moving each shape to create a stepped pyramid. From there I did different iterations with animation: having the shapes rotate in unison, rotate out of phase, and animate in the vertical Z dimension. Every other shape has a black hole shader applied, so the alpha is cut out at rendertime.

Towards the end I wasn't enjoying how everything was constantly visible; it needed some mystery. So I created a 3D watery plane, applied the black hole shader, and then animated it to oscillate up and down. It occasionally hides the recursion shapes, and the water texture ensures that what is hidden is always slightly randomized.

I normally render a motion vector AOV pass so that I can use the RSMB Vectors plug-in to add realistic motion blur in post and avoid the heavy hit in Redshift render time. But the motion vector AOV pass doesn't account for the transparency in the shader, so it wasn't fully accurate. Instead I just let the RSMB plug-in analyze the beauty pass directly and calculate the motion blur on its own. The visuals move so fast in this scene that the RSMB plug-in occasionally glitches out, actually in a very pleasing way. But I also rendered alternate versions without motion blur, just to give some options depending on the look you're going after.

I had an utter bonanza when I injected the loops into NestDrop. I'm such a sucker for the glitched out feedback loops that mix a look of digital versus organic. Stupid Youtube... its compression really kills the detail for some of these, but the MOV versions are so juicy.

PACK ⬕ Series of Tubes
- This pack contains 10 VJ loops (10 GB)

Started off with some experiments using subsurface scattering to create a plastic material which light could shine through. The 'wires' and 'spinners' came about from wanting objects of different thicknesses to see how each reacted to the plastic material. The original models came from this super useful tech builder collection, worth every penny.

After jamming with the lighting, I ended up with a long row of evenly spaced non-visible area lights, grouped them together, and then animated the whole group to move along the same axis as the camera. I originally wanted a glowing orb at the middle of each area light, but overall it felt more like an organic reaction within each of the wires and spinners. Had to constrain the area lights so that their influence is limited to a specific distance.

I never enjoy the delicate preparations necessary to make a 3D scene loop seamlessly, and this one was difficult with all of those 'wires' moving at different rates, but I pulled it off with some careful thinking.
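One way to reason about that preparation: a parameter animated at a constant rate only loops seamlessly if it completes a whole number of cycles over the loop duration. A hypothetical sketch (my names, not any Maya API) that snaps a set of desired rates to the nearest loop-safe values:

```python
# Sketch: snap per-object animation rates so a fixed-length loop is seamless.
# A rate is loop-safe when rate * loop_seconds is a whole number of cycles,
# so every object ends the loop exactly where it started.

def loop_safe_rates(desired_rates_hz, loop_seconds):
    """Round each rate to the nearest rate completing integer cycles."""
    safe = []
    for rate in desired_rates_hz:
        cycles = max(1, round(rate * loop_seconds))  # at least one full cycle
        safe.append(cycles / loop_seconds)
    return safe

# Example: a 10-second loop with three 'wires' moving at different speeds.
print(loop_safe_rates([0.33, 0.51, 1.27], 10.0))  # [0.3, 0.5, 1.3]
```

The adjusted rates stay close to the original feel, while guaranteeing the last frame matches the first.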

Had tons of fun injecting these loops into NestDrop. Since these loops effectively wipe the screen quickly, this makes for some very interesting reactions since the Milkdrop engine often uses heavy visual feedback loops.

PACK ⬕ Crystal Bounty
- This pack contains 13 VJ loops (8 GB)

Who doesn't love some shiny crystals? I spent many years using Mental Ray, so finally jumping into a cutting-edge render engine like Redshift was pretty incredible. Same ideas but with years of advancement, so the knowledge transferred easily.

So I was curious to explore the caustics in Redshift and see how far I could push the true refractions. I had originally wanted to crank the dispersion up super high and get some wild rainbows, but I was just so entranced by this more realistic look. I tried some different lighting setups and also played with a few HDRI environment maps, but in the end the best look was a simple area light in the background acting kinda like a photo lightbox used for negatives.

This scene contains an absurd number of polygons, just to get the refractions going really crazy, and that made the animated poly-reduce very slow. But it all came together with some patience. The shards scene was a nice surprise while playing with the poly-reduce, and I was amazed by the beautiful colors of the dispersion.

The 'sparkle' scenes were a last minute addition when I realized that I was missing some necessary twinkling that crystals demand. But I didn't render the sparkles into any of the scenes directly so as to be more useful during a live performance. Sparkles at your command!

PACK ⬕ Hands Cascade
- This pack contains 12 VJ loops (12 GB)

I started off by playing with hundreds of arms moving out of phase with each other, kinda like the images of Hindu gods. But then I experimented with animating UV maps at different speeds to create mesmerizing striped patterns, and that took over the focus. It was particularly interesting to apply the UV map as an alpha channel to cut out a gold metal shader. The bubble shader was a happy accident and I ran with it. Here is the hands model that I used.

I enjoy playing with the nonlinear deformers in Maya since they can be stacked together to create this type of warped appearance. I've been exploring some different area light setups since the global illumination renders so quickly in Redshift and in the past I've never been able to afford the heavy render times on my own personal computer.

Also did some exploratory experiments by injecting the loops into NestDrop to get some generative action happening. Always full of surprises.

What are the usage rights for these videos?

You have permission to use these VJ loops within your live performances. Please do not sell or redistribute the raw VJ loops. For other use cases, please contact me.

Why does the transparency look weird in Resolume?

For each MOV (with alpha) that you import into Resolume, you must go into the clip settings and manually change the 'Alpha Type' to 'Premultiplied'. Using the 'Straight' option will result in a dark halo around the alpha cutouts.
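The dark halo comes from interpreting premultiplied footage as straight: the RGB values have already been multiplied by alpha once, and the 'straight' interpretation multiplies them again. A rough numeric sketch of the 'over' operation under both interpretations:

```python
# Sketch: why premultiplied footage composited as 'straight' looks dark.
# A semi-transparent white edge pixel, stored premultiplied (RGB * alpha).
alpha = 0.5
premult_rgb = 1.0 * alpha  # stored value: 0.5 (white scaled by alpha)
bg = 0.0                   # black background

# Correct: the premultiplied 'over' operator adds the RGB as-is.
correct = premult_rgb + bg * (1 - alpha)           # 0.5 -> mid grey, as expected

# Wrong: treating the footage as straight multiplies by alpha a second time.
too_dark = premult_rgb * alpha + bg * (1 - alpha)  # 0.25 -> the dark halo

print(correct, too_dark)  # 0.5 0.25
```

Every edge pixel ends up darker than intended, which reads as a dark fringe around the cutout.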

Can you release these packs using a different codec?

▸ I often use the H.264 codec since the file sizes are decent.
▸ When I need to distribute a video which includes an alpha channel then I use the HAP codec.
▸ Why not DXV? Unfortunately the 'DXV alpha' codec bakes a dark halo around the alpha cutout which is not remedied by setting the 'Alpha Type' to 'Premultiplied' in Resolume. Bummer!
▸ Both the HAP and DXV codecs have a very similar implementation. What makes HAP and DXV perfect for VJ-ing is that within the MOV container is an image sequence that utilizes GPU decompression, which allows you to easily scrub, speed up, and reverse the video in realtime. So I'd suggest installing the HAP codec since it plays back perfectly in Resolume and has identical functionality to the DXV codec.
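If you ever need to re-encode a clip to HAP yourself, FFmpeg includes a HAP encoder with a `-format hap_alpha` variant that preserves the alpha channel. A small sketch that just assembles the command line (the file names are placeholders, and this assumes an FFmpeg build with the HAP encoder):

```python
# Sketch: build an FFmpeg command line for encoding a MOV with HAP Alpha.
# Assumes FFmpeg is installed with its HAP encoder; file names are
# placeholders. Execute with subprocess.run(cmd, check=True) if desired.

def hap_alpha_cmd(src, dst):
    """Return the argv list for a HAP Alpha encode of src into dst."""
    return [
        "ffmpeg",
        "-i", src,               # input clip (with an alpha channel)
        "-c:v", "hap",           # FFmpeg's HAP encoder
        "-format", "hap_alpha",  # the HAP variant that keeps alpha
        dst,                     # output should be a .mov container
    ]

print(" ".join(hap_alpha_cmd("loop.mov", "loop_hap.mov")))
```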
Download Tips
Available only via BitTorrent

Since I'm distributing tons of videos, downloads are supplied solely through BitTorrent. So I've rented a seedbox to be online 24/7 with a 1000 Mbit connection.

How do I download these packs?

You need to use a BitTorrent client, such as qBittorrent.

Is there a limit to how much I can download?

Feel free to download as much as you want.

Why is my torrent download stuck at 99%?

Try doing a "Force Re-Check" of the download from within your BitTorrent client and then "Resume".

Why not host these packs in the cloud?

Google Drive, Dropbox, and web hosts don't offer enough bandwidth. AWS and B2 would result in large fees.
↩️ Go to the home page