VJ Loops by
ISOSCELES
Release Schedule
A new VJ pack will be released at the beginning of every month.
PACK ⬕ Machine Graffiti
- This pack contains 50 VJ loops (27 GB)

Street art has a unique unchained feeling and futuristic shapes that have long inspired me. So I wondered if I could get some interesting results by training StyleGAN2 on a graffiti dataset. I started off by collecting 716 photos of graffiti and cropped each one to a square. The results instantly blew me away and were better than I had hoped for. I also did a few experiments with StyleGAN3, but the StyleGAN2 results had a more realistic feeling, probably since it's more forgiving of being trained on tiny datasets.
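The exact crop tooling isn't mentioned above, but a minimal Python sketch of that dataset prep step could look like this (folder names and the 1024px target size are assumptions):

```python
from pathlib import Path
from PIL import Image

def center_crop_square(src_dir, dst_dir, size=1024):
    """Center-crop each photo to a square and resize to the training resolution."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for p in sorted(Path(src_dir).glob('*.jpg')):
        img = Image.open(p).convert('RGB')
        s = min(img.size)
        left, top = (img.width - s) // 2, (img.height - s) // 2
        img.crop((left, top, left + s, top + s)).resize((size, size), Image.LANCZOS).save(Path(dst_dir) / p.name)
```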

From there, experiments with slitscan and dd_GlitchAssist processing were a perfect match. I also did some experiments in After Effects where I keyed out specific colors and then added glow onto everything, which allowed certain shapes to come alive.
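For anyone unfamiliar with the slitscan effect: each output row is sampled from a different moment in time, so motion smears across the frame. A toy numpy sketch of the idea (the packs used dd_GlitchAssist and After Effects, so this is only an approximation of the principle):

```python
import numpy as np

def slitscan(frames):
    """frames: a list of HxWx3 arrays, at least H+1 frames long.
    Row y of output frame t is taken from input frame t+y, so the
    bottom of the image lags further behind in time than the top."""
    h = frames[0].shape[0]
    return [np.stack([frames[t + y][y] for y in range(h)])
            for t in range(len(frames) - h)]
```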

As always, some of my favorite experiments come from layering a new experiment on top of prior ones. So I reengineered the displacement map technique I used for the 'Mountains Flow' pack, except when I duplicated the layer and applied a multiply blend mode, I added even more fast blur, which rounded out the displaced 3D forms beautifully. Then the black hole shader allowed me to retain the cutout alpha channel. I rendered out one version with the true color and another using a glimmering gold shader. The joy of infinite experiments.

PACK ⬕ Surveillance Biz Glitch
- This pack contains 88 VJ loops (26 GB)

Anagram and I have been collaborating on this pack since early 2021. I sent him the 'Surveillance Biz' videos and he used them as fodder for experimenting with his analog video synthesizer. It was interesting to see 3D animation brought into a video synth and glitched up; he would then send it back over for further manipulation.

Midway through our collab, Anagram proposed an oscilloscope vector re-scan workflow, meaning he would output the video to an oscilloscope and then record the results using a video camera. It resulted in an aesthetic that is quite unique and impossible to recreate otherwise: the analog processing adds noise, sometimes recursion, and the CRT bloom is something else. After some technical experimenting and polishing of the recapture method, he recorded a jam; I then edited the gem moments and applied a corner pin warp to make the oscilloscope monitor appear more face-on. I also did some experiments with applying a slitscan effect, which was icing on the cake. This is my favorite type of collaboration, where we bounce visuals back and forth, each person layering on something the other wouldn't have fathomed.

Anagram also produced some original content in the glitch spirit. The analog video synth has a very specific feeling about it; something about the way it acts feels alive to me. So I was keen to composite these visuals and polish them that extra little bit. After he recorded various experimental sessions, I edited the gem moments, tweaked the colors, removed moments where the visuals froze, and added radial blur or glow. I also rendered out versions with some added fake motion blur using the RSMB plugin, which worked nicely with the fast visuals.

I thought this was a good moment to revisit the 'Surveillance Biz' videos myself and create some of my own remixes. After some experiments I ended up jamming with the Zaebects Modulation and Pixel_Encoder plugins, which worked nicely since the effects respected the alpha channel areas.

Much more to explore with Anagram. To be continued in the future!

PACK ⬕ Machine Hallucinations
- This pack contains 86 VJ loops (28 GB)

When a robot falls asleep and dreams, what does it see? I think it would dream of growing into different modular shapes. So I set out to explore this concept.

I went down the rabbit hole and spent so much time collecting, curating, and formatting images from various Gundam Wing websites. The "Dream" videos were generated from a StyleGAN2 model trained on 1465 images of black and white blueprint drawings. The "Solid" videos were generated from a StyleGAN2 model trained on 3067 images of full color drawings. It was quite dull work, and yet the results are exactly what I was hoping for. To keep these videos focused on the abstract robot geometry, I was careful to only show a human silhouette at rare moments.

The "CPU" videos were generated from a StyleGAN2 model that I found online, originally created by Mayur Mistry. It was actually trained using images of floor plans and yet I thought it looked more like a CPU chip that was evolving.

After generating videos out of StyleGAN2, I was concerned about how I was going to deal with the perfect white background, since that much brightness tends to ruin the overall vibe. After some failed experiments with color keying, I finally realized that a basic technique would do the trick: I simply inverted the colors and then rotated the hue to match the colors of the original.
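That trick compresses to two operations per frame. A minimal PIL sketch (note PIL stores hue in a 0-255 range, so a 180-degree rotation is +128):

```python
import numpy as np
from PIL import Image, ImageOps

def invert_keep_hue(path):
    """Invert so the white background turns black, then rotate the hue
    180 degrees so the subject's colors land back near the originals."""
    inv = ImageOps.invert(Image.open(path).convert('RGB'))
    hsv = np.array(inv.convert('HSV'))
    hsv[..., 0] = (hsv[..., 0].astype(int) + 128) % 256
    return Image.fromarray(hsv, 'HSV').convert('RGB')
```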

Compositing the "Dream" videos was a unique challenge since they are just black and white. Yet after some experiments I realized there was actually a tiny bit of color in there, either from the machine learning or added compression noise, or perhaps a combo of both. So I cranked the color saturation to 100 and loved the results when glow was applied liberally. I'm such a sucker for how much glow can make things come alive.

The "CutoutDream" videos were some of my favorite experiments since I used one video as a luma matte to cutout a different video and then add heavy glow to make it sing. The "GlowGlitch" videos were the result of playing around and applying Pixel Sorter after Deep Glow and then tweaking the settings in various ways. When in doubt, add glow! I can't stop myself and I have no shame.

Compositing the "Solid" videos was tricky since I had trouble keying out specific colors to apply glow onto. So I experimented with instead using the threshold effect as a way to generate a luma matte for where glow would be applied, which is what the "GlowSolo" videos showcase. In the future I want to return and do some displacement maps in Maya with these videos.

PACK ⬕ Machine Faces
- This pack contains 39 VJ loops (52 GB)

I was in awe when I first saw the morphing faces generated by StyleGAN2. So I was excited when I realized that the extensively pre-trained models had been released for anyone to download and play with. The base videos were rendered using the FFHQ-1024x1024 model through StyleGAN2; I prefer the results from StyleGAN2 over StyleGAN3 in this case. I then used the Topaz Labs Video Enhance AI software to uprez the videos to 2048x2048. It's hard to believe that all of the human faces showcased here have been imagined by a computer and have never existed in reality.

To generate the creepy "mutant" scenes, I did some minimal re-training of this model using my own datasets. I say minimal because it only needed a few hours: it started off from the FFHQ model and slowly evolved towards my datasets, and I stopped the re-training at the very early stages where you can still see the human faces. It was very weird to work on this since I had to pick out the best seeds for the latent walk videos, kinda like how you see Tetris in your dreams if you play it too much.

My initial inspiration for this whole pack was to have the faces glitching out while morphing, and that worked beautifully. Then I started exploring some other glitch techniques and tried out the slit-scan effect, which was the far-out type of bizarre that I hunt for. It's the stuff of a fever dream while sitting in a sauna. The NestDrop experiments were icing on the cake.

I had the idea of taking the FFHQ video into After Effects, adding a multiplied layer with some fast blur, and rendering it to be used as an animated displacement map in Maya. Adding the blur helped to smooth out the features for use during displacement. I then applied this texture onto a sphere and was instantly happy with it. Then I experimented with getting the skin shader working correctly since the settings for subsurface scattering are so finicky.

The "TextureDeformerDense" and "TextureDeformerSparse" videos were actually tricky to fully realize. I wanted to convert the FFHQ video to polygons and then render a wireframe of it. But I was having trouble getting the displacement map to do what I wanted, so I finally switched over to using a texture deformer. Yet then the polygons formed a perfect grid due to the tessellation of the flat plane object, and so the wireframe just rendered as a grid when seen directly overhead, even with the texture deformer applied. So then I applied a poly reduction node and that's when things got interesting.

PACK ⬕ Machine Eyes
- This pack contains 9 VJ loops (6 GB)

Welcome to the uncanny valley! Here we have a selection of human eyes so that you can watch your audience from the front of the stage. Finally, the literal all-seeing machine eye. These videos are the result of training StyleGAN3 using a dataset of 217 images.

Machine learning has long intrigued me since I've always been curious about different methods of interpolation. I find the results are often evocative and almost always different from what I initially anticipate. So naturally I've wanted to explore machine learning for art purposes and aim for reality versus the uncanny. Yet the GPU requirements have been too heavy and the results too low-rez, so I've been waiting for the tech to mature... And that time has finally arrived!

My mind really started reeling when StyleGAN2 was released, so I did some experiments on the feasibility of training at home. But then I stumbled across Google Colab, and at first I thought it was too good to be true... Cheap access to high end GPUs? It felt like a sudden leap into the future. Utilizing a Tesla P100 GPU node on Google Colab, I would typically get interesting results after about 12 to 48 hours of retraining, since I'm looking for surreal and glitchy visuals.

I haven't seen much shared about training with really tiny datasets. I've found that datasets of 1000 to 2000 images end up with a decent amount of interpolative potential. Yet for datasets in the 200 to 500 image range, I had to ride the line of avoiding mode collapse by hand selecting the seeds prior to rendering out the latent walk video. In other words, the generated visuals would start to repeat themselves, and I'd overcome that by hand selecting and arranging the gems. Yet even this method would fall apart with datasets of fewer than 200 images, so that was really the absolute minimum necessary, which I found surprising but perfect for my needs. Manually arranging the seeds into a specific order was vital.
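A minimal latent-walk sketch in that spirit, assuming the stylegan3 repo's documented pickle interface (G_ema mapping z vectors to images); the snapshot filename and seed list here are placeholders:

```python
import pickle
import numpy as np
import torch

with open('network-snapshot.pkl', 'rb') as f:   # hypothetical snapshot name
    G = pickle.load(f)['G_ema'].cuda()

seeds = [44, 7, 913, 102]                        # hand-selected and hand-ordered gems
zs = [torch.from_numpy(np.random.RandomState(s).randn(1, G.z_dim)).cuda()
      for s in seeds]

frames = []
for a, b in zip(zs, zs[1:] + zs[:1]):            # wrap around so the walk loops
    for t in np.linspace(0, 1, 60, endpoint=False):
        z = (1 - float(t)) * a + float(t) * b    # linear blend between seed latents
        frames.append(G(z, None))                # c=None: unconditional model
```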

In the beginning I was tinkering with a few Colab Notebooks to try and understand the basic pitfalls, but most people are using them for generating media from models that have already been trained. So a huge thanks goes out to Artificial Images for sharing their training-focused Notebooks, workshops, and inspiration. One workshop in particular was helpful in answering questions that I'd been wondering about but hadn't seen addressed elsewhere. Getting the StyleGAN2 repo running on Colab proved to be frustrating, and then I realized that the StyleGAN3 repo includes support for both techniques and is a more mature codebase.

Initially I was frustrated about being limited to 512x512, even though the retraining times are so much more realistic at that size. But then I did some uprez testing with the Topaz Labs Video Enhance AI and the results blew me away. I was able to uprez from 512x512 to 2048x2048 and it looked sharp, with lots of enhanced details.

Collecting, curating, and preparing my own custom image datasets took a solid 2 months. Then 1 month was dedicated to retraining. And finally 1 month of generating the latent walk videos and experimenting with compositing. So that explains why I haven't released any packs recently. Hence I have a bunch more machine learning packs coming up.

PACK ⬕ Warp Factor
- This pack contains 26 VJ loops (17 GB)

I was recently watching the movie "Flight of the Navigator" and was soaking up the nostalgic awe of the futuristic spaceship that it so proudly features. I originally wanted to have this chrome spaceship darting all around the screen, but then I started finding tons of other spaceship models. So then the idea transformed into having 73 different spaceships tumbling while traveling at warp speed. There are so many amazing free 3D models that artists have released. Respect!

An interesting challenge arose: how do you make something look like it's traveling insanely fast without needing tons of actual environment surrounding it? My idea was to create an animated reflection map and then apply it to the domeLight in Maya. It needed to be an equirectangular map so that it could surround the entire scene, which led me to a fun trick. I downloaded some real astronomy photos from Hubble, imported the photos into After Effects, vertically stretched them 1000%, animated the Y translation very quickly, rendered it out, and linked it into Maya. In this way I was able to create a sense of immense speed by ignoring the equirectangular pole mapping. Rendering out a motion vector AOV pass for the spaceships was the icing on the cake.
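Stripped of the After Effects specifics, the map is just a tall texture scrolling quickly along Y. A toy numpy sketch of generating such frames (the array shapes and speed are made up):

```python
import numpy as np

def warp_map_frames(star_tex, n_frames, speed=40):
    """star_tex: the vertically stretched astronomy photo as an HxWx3 array.
    Rolling it along Y each frame reads as streaking stars once Maya wraps
    it around the scene as a reflection map."""
    h = star_tex.shape[0]
    return [np.roll(star_tex, (i * speed) % h, axis=0) for i in range(n_frames)]
```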

After comping the spaceship renders in After Effects, I realized that adding yet another layer of motion would be beneficial. So I started creating a star field that I could fly the camera through at a slower speed. My typical approach in the past would be to use an nParticle emitter to create a sprite star field, yet Redshift cannot render sprite-type particles. So I did some brainstorming and realized that I really just needed a way to randomly distribute 20,000 low poly spheres within a given volume. And of course a MASH simulation was perfect for this.

A project of this type always demands so much prep work. I had to prepare all of the models, group and center all of the polys, scale everything to the same size, flip UVs, clean up, import, place, and animate everything. But sometimes I enjoy this type of prep work since it allows me to brew on the creative possibilities. Live long and prosper.

PACK ⬕ Cursor Swarm
- This pack contains 69 VJ loops (114 GB)

Sometimes when I see a piece of news skyrocket, I imagine all the billions of people using a computer to digest the same thing. I had this idea of watching a bunch of mouse cursors flocking together and glitching out. So I found this great cursor model and started experimenting.

I have always wanted to explore the Flight node within a MASH simulation in Maya, and this was the perfect opportunity. I thought that I would need to animate a leader for the flock to follow, but I just tweaked the Separation, Alignment, and Cohesion Strength attributes to my liking and there was emergence. I was surprised to learn that I could link any type of dynamic field into the Flight node, and I found the Air Field added the natural kind of turbulence that I was looking for. Also the MASH Orient node (set to Velocity) was critical for having each entity point along its movement vector.
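Under the hood, Separation, Alignment, and Cohesion are the three classic boids rules, and the flocking emerges from their weighted sum. A tiny numpy sketch of one simulation step (the weights and radius are arbitrary stand-ins for the MASH attributes):

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=2.0, w_sep=1.5, w_ali=1.0, w_coh=0.8):
    """One flocking step over (n, 3) position and velocity arrays."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                     # offsets to every other boid
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        acc[i] += w_coh * d[near].mean(axis=0)                       # cohesion: steer to local centroid
        acc[i] += w_ali * (vel[near].mean(axis=0) - vel[i])          # alignment: match neighbors
        acc[i] -= w_sep * (d[near] / dist[near, None] ** 2).sum(axis=0)  # separation: push apart
    vel = vel + dt * acc
    return pos + dt * vel, vel
```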

It was interesting to see how the same simulation changed by simply increasing the total number of entities (points) from x1,000 to x10,000 to x50,000. For the x1,000 experiments I could utilize the cursor 3D poly without running out of VRAM. But at x10,000 I had to switch over to using a 1x1 polygon plane with an image of a cursor mapped onto it, which enabled me to increase the total count past x200,000. The visuals got too crowded past x50,000, so I limited it to that, but I was astounded by how far I could push it.

Since I was using a MASH simulation, the Trails node was an interesting thing to play with especially given the initial inspiration. The Connect to Nearest attribute was exactly the style I had imagined for an extra render layer to jam with in post. The "LinkSolo" scenes kinda took on a life of their own.

After looking at the renders, I was struggling with how to add the glitchy effect I was yearning for. Finally I tried doing some slit-scan experiments and struck gold. For all of the "Flock" scenes I rendered out versions with and without motion blur. Since I used the RSMB plugin to generate the motion blur, it sometimes glitches out and I think it's perfect in this context. But I could see some people just wanting it straight and so I included both options even though it added so many GB's to the pack size.

The "Grid" and "Tumble" scenes were an experiment in trying to make a complex but interesting matrix of cursors. At first I tried working directly in After Effects until I realized that Maya MASH simulations were once again perfect for the task. Also working in Maya allowed me to tuned the max rays in Redshift so that there was no aliasing of the very fine details in the distance. The slit-scan technique again proved to be wild for these scenes.

PACK ⬕ Cellular Auto
- This pack contains 189 VJ loops (177 GB)

I have always been fascinated by cellular automata and the complex structures that emerge from very simple rules. I will often just play and tinker with the initial state to see how it affects the whole simulation. I have this wild idea that nanotech will someday utilize this stuff since things like the ‘OTCA metapixel’ have been engineered.
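Conway's Game of Life (the rule set the OTCA metapixel is built in) shows just how little is needed: every cell counts its eight neighbors, and two comparisons decide its fate. A minimal numpy/scipy sketch:

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    """One generation on a 0/1 grid: a dead cell with exactly 3 neighbors
    is born, a live cell with 2 or 3 neighbors survives, all else dies."""
    n = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)
```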

So I fired up Golly and explored the many categories of simulations. I had originally planned on downloading a bunch of presets from forums and then building my own, but there were so many amazing ones already included with Golly. Props to the Golly community, what an amazing resource. I stand on the shoulders of giants.

I had been dreading the process of recording the simulations since I really didn’t want to rely on an OBS screen capture and possibly introduce encoding artifacts. But luckily someone had shared a Lua script which renders the raw pixels out to a BMP frame sequence. It took some time to render out some of the more complex simulations since it only writes after the CPU has completed each frame, but it was ideal since I needed a perfectly smooth frame rate.

After that I needed to uprez each of the renders, since each pixel was a single cell and yet I wanted to visualize the cells as larger squares. This turned out to be a strange challenge since I wanted to translate the footage, scale it up, and prep the colors to be a displacement map, yet not introduce any aliasing. So I ended up uprezzing the footage onto a 1920x1080 canvas and rendering out using the ‘draft quality’ setting of After Effects, which apparently uses the 'nearest neighbor' algorithm.
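The same cell-to-square scaling can be reproduced anywhere nearest-neighbor sampling is available, for instance in PIL:

```python
from PIL import Image

def uprez_cells(path, scale=8):
    """Blow each simulation cell up into a crisp square; NEAREST does no
    interpolation, so no aliasing or gray edge pixels are introduced."""
    img = Image.open(path)
    return img.resize((img.width * scale, img.height * scale), Image.NEAREST)
```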

That allowed me to finally do some 3D experiments using Maya and Redshift. I linked the animated displacement map onto a poly plane and also created a separate poly plane with a black hole shader, so as to hide any of the boring aspects at rendertime and generate an alpha channel. I have grown frustrated with the lack of abstract IBL maps, so I used colorful flat photos instead of the typical equirectangular map, which resulted in some unique reflections when I animated the rotation. An interesting surprise happened when I applied a wave deformer to the poly plane: I think it altered the normals, so the displacement map reacted in unexpected ways. Lastly, included in this pack are all of the original displacement map renders, so you can jam with them however you want.

I prepared way too many render layers using different cameras and shaders. So I had a large backlog of renders running for a solid month. Ironically many of the renders didn’t turn out very interesting and I only kept the delicious ones. Even still, I then created even more versions while compositing in After Effects and so this pack ended up enormous. I rendered out “glow” versions for many of the videos since adding a glow effect in Resolume has a heavy processing overhead.

PACK ⬕ Iris Anomaly
- This pack contains 64 VJ loops (42 GB)

I was thinking about how an onstage musician is often performing directly in front of the visuals and so I wanted to create a 3D animation which embraces that fact. So I revisited the tech builder collection that I got a while back and animated this beautiful iris geometry featuring concentric rings.

After animating each of the sequential rings into opposite directions, I keyframed the Y translation and then randomized their location. This allowed me to achieve some complex motion without much trouble.

To further juice up the rings I auto-mapped the UV's for each of the shapes and then applied an animated stripe pattern to the shader opacity. Due to the auto-mapping technique, the stripes were placed algorithmically and I was happily surprised with the results. Then I animated the 'Repeat UV' attribute to have the stripes continually evolve slowly and yet be offset from the oscillating motion of the rings.

I thought it could be interesting to someday have each of the "RingSolo" scenes be projected on physical surfaces at different distances. That idea led me in the direction of rendering each ring segment to its own layer, which opened up some interesting layering possibilities and also the option to change the speed of each ring layer individually while performing live.

I probably went a bit wild with the NestDrop remixes but I just kept stumbling across visuals that I knew would be fun to tinker with in After Effects.

PACK ⬕ Explosive Surprise
- This pack contains 168 VJ loops (28 GB)

Explosions are dazzling when they aren't terrifying. I've long wanted a collection of explosions for my own projects since they are the perfect crowd pleaser for a beat drop. I wasn't going for realism here but instead some exciting stylized explosions. In this pack I've created 7 different types of explosions, and each type has 5 different variants since I was able to change the simulation seed and re-render. I then also created 2-3 different glow treatments for each. That explains the 168 different videos in this pack.

I already had some good experience with fluids in Maya, having created some nebulae scenes in the past, but I needed a refresher on how to approach it explicitly for explosions. Turns out I was pretty close, but this tutorial helped hone my skills. I also stopped spinning my wheels when I realized that I could just render using Arnold and not worry about the difficulties of rendering fluids in Redshift. Then my first renders out of Arnold just didn't feel right, and that's when I realized that adding some glow in post was vital to create that super hot feeling.

I made sure that no edges are visible, meaning that none of the explosions leave the frame. Some of the videos are actually 3k or 4k resolution, but only because I extended the canvas so that the glow gradient wouldn't go out of frame. Since each video includes an alpha channel, you can place the video anywhere you want on your canvas. This also allows you to mash them together, which is how I created the "collage" videos in this pack.

I rendered out the videos at half real-time, which gives you more freedom to change the speed of the explosion while you're performing live. All of the videos featured in the compilation edit above were sped up to 200%, 300%, or 400% just to make them super short and intense to match the music.

I decided to force a fade-out on each video once the explosion is finished and only thick smoke is left over. In the context of a live performance, I find it distracting when a clip just pops off when it's done. Perhaps not the best solution for some people, but I found it to look great in my layering tests within Resolume.

The "Warning Signs" scenes were inspired by seeing a bunch of road signs stacked on the side of the road. So I collected some open-source imagery, cut each one out, retained the alpha, and then sequenced them up in After Effects. I knew that they would be so juicy when injected into NestDrop and I wasn't disappointed with the results.

PACK ⬕ Chess Sim
- This pack contains 60 VJ loops (52 GB)

I was catching up with Brenna Quirk and Simon Gurvets recently and they shared how they've been experimenting with some chess simulations after learning about the Trapped Knight problem from a Numberphile video. After seeing their various experiments, I thought it would be a fun project to collaborate on. They both agreed and we jumped right in.

So they rendered out the coordinates into a CSV, but how was I going to get the coordinates into Maya in a usable fashion? I lucked out on my first try by reformatting the coordinates slightly and then pasting them directly into an SVG. My first tests looked strange, and I eventually realized that the SVG coordinate system has its 0,0 origin in the top-left corner. So I opened it up in Illustrator and moved the path to be fully on the canvas.

Now I could import the SVG directly into Maya. For the first test I had created the SVG using a line stroke method, which of course isn't supported in Maya. So I edited the SVG a bit so that it instead used a polygon shape method. This worked fine: Maya was able to import the SVG and make a CV curve out of it. From there I did 7 different experiments jamming with the CV curve in various ways.
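A minimal Python sketch of that CSV-to-SVG step, folding in both fixes (shifting the path fully onto the canvas and emitting a polygon shape rather than a stroked line); the filenames and the x,y-per-row format are assumptions:

```python
import csv

def csv_to_svg(csv_path, svg_path, scale=1.0):
    """Convert x,y rows into a single SVG <polygon>. SVG's origin sits at
    the top-left with Y pointing down, so offset all coordinates to be
    non-negative before writing."""
    with open(csv_path, newline='') as f:
        pts = [(float(x), float(y)) for x, y in csv.reader(f)]
    minx = min(x for x, _ in pts)
    miny = min(y for _, y in pts)
    points = ' '.join(f'{(x - minx) * scale},{(y - miny) * scale}' for x, y in pts)
    with open(svg_path, 'w') as f:
        f.write('<svg xmlns="http://www.w3.org/2000/svg">'
                f'<polygon points="{points}" fill="black"/></svg>')
```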

Along with the Trapped Knight problem, Simon did some experiments with custom chess piece movements and also with two knights moving concurrently on the same infinite chess board. Each knight considers its starting point to be the origin, and each has its own spiral which it uses to decide which square to jump to, although a knight cannot jump to a square that's already been occupied by the other knight. The simulated result was some really beautiful interactions. Check out Simon's Github to explore the Python code that generated the simulations; a minimal sketch of the core rule also follows the descriptions below. Included within the torrent are the SVGs that were used to create all of these scenes. Below are descriptions of the different simulations.

Knight 1-2:
Standard chess knight that moves 1 square in one direction and 2 squares in a different direction.
Knight 1-2 vs 1-2 Frame0:
Two standard chess knights with their own spiral numbering, but a single point on the lattice can't be visited twice. This graph has both knights starting from (0,0).
Knight 1-2 vs 1-2 Frame2:
Two standard chess knights with their own spiral numbering, but a single point on the lattice can't be visited twice. This graph has one knight starting at (0,0) and the other starting at (20,20).
Knight 11-12:
Knight that can move 11 squares in one direction and 12 squares in a different direction.
Knight 19-24:
Knight that can move 19 squares in one direction and 24 squares in a different direction.
Knight Dragon Curve 1-0:
A chess piece that can only move 1 square up, down, left, or right. Unlike a king, it cannot move diagonally. The dragon curve was implemented by creating 4 copies of the dragon curve, all starting at the origin. The spiral numbers were assigned round-robin style between the 4 curves, without overwriting the spiral number already assigned to a coordinate.
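As promised above, here is a minimal Python sketch of the core Trapped Knight rule (square-spiral numbering; always jump to the lowest-numbered unvisited square; trapped when nothing is left). Simon's actual code on his Github covers the two-knight and dragon curve variants:

```python
def spiral_numbers(n):
    """Number lattice points along a square spiral, 1 at the origin."""
    num, x, y, step, k = {(0, 0): 1}, 0, 0, 1, 1
    while k < n:
        for dx, dy, reps in ((1, 0, step), (0, 1, step),
                             (-1, 0, step + 1), (0, -1, step + 1)):
            for _ in range(reps):
                x += dx; y += dy; k += 1
                num[(x, y)] = k
        step += 2
    return num

def trapped_knight(moves=((1, 2), (2, 1)), board=10**6):
    num = spiral_numbers(board)
    jumps = {(sx * a, sy * b) for a, b in moves for sx in (1, -1) for sy in (1, -1)}
    pos, visited, path = (0, 0), {(0, 0)}, [(0, 0)]
    while True:
        options = [(pos[0] + dx, pos[1] + dy) for dx, dy in jumps]
        options = [p for p in options if p in num and p not in visited]
        if not options:
            return path                   # trapped: every reachable square is visited
        pos = min(options, key=num.get)   # greedy: lowest spiral number wins
        visited.add(pos)
        path.append(pos)

# The standard 1-2 knight visits 2016 squares before getting trapped.
```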

The "Column Trails" was the result of running an MASH simulation on the CV curve and having 500 cubes trace along the curve. Also each cube leaves a short trail behind it, which was a pain since it needed to be cached out. The tall columns were achieved by having all of the cubes connected via a single point and then moving that point far behind the camera. Then I played with various camera paths.

The "Connections" scenes were the result of running a similar but much more intense MASH simulation on the CV curve. So you're looking at 10,000 cubes moving along the curve. Which was very slow to simulate and it took longer to move between each frame rather than the time it took to render the frame. The orange lines are connecting the 2 nearest cubes and the blue-green lines are connecting the 10 nearest cubes.

The "Crystals" scenes was a minimal MASH simulation of cubes tracing along the CV curve. I limited the amount of cubes and then elongated them to be very tall. Then I applied an emerald material shader and setup some various AOV render layers for some options in post. I also animated the rotation of the dome light quickly so as to enhance the sparkly effect for the gem refractions.

The "Boolean Mutate" scenes were the result of wondering if I could interpolate between these two CV curves. So I separated the CV curves by some distance than the other and used the 'loft' command to automatically create polygon walls between the curves. At this point I knew there was some complex beauty happening in there but it was hidden, so I used a Redshift shader to create an animated boolean. Kinda like when you cut an orange in half and can see the middle, except I slowly cutaway more and more of it as time goes on. So what you're seeing in essence is the interpolation from one of your knight coordinates path into a different knight coordinates path. To achieve the animated boolean, there is a texture-grid that defines where the boolean is applied onto the model. So I parented this texture-grid onto the camera so that the animated boolean perfectly matches the forward motion of the camera. I also animated an area light to revolve quickly around the camera and parented this to the camera too. These scenes were ripe for some NestDrop remixes and loved the intense glitchy results that happened.

The "Links" scenes was an exploration to see what kind of algorithmic interaction I could get happening between two curves. So I raised one of the curves to be higher than the other, added both curves into a single MASH simulation, and had the MASH network automatically connect the nearest cubes based on proximity. I had to carefully tune the proximity to be not too close (no links) or not too distant (too many links) and find the sweet spot. Then I created a bunch of render layers, each render layer having a slightly different value for the proximity, so that I had a range to play with in post. I also had to create two poly planes (with a black hole shader) to hide all of the links happening on a single curve, since I was only interested in seeing the links between the two curves. After the renders were completed, since each render layer was a duplicate except for the unique links, I combined different renders together in After Effects with a 'classic difference' blend mode to automatically remove the duplicate links and only be left with the unique links for each render.

The "Boolean Ripple" scenes was an experiment with a Knight Dragon Curve coordinates that Simon had sent me. This was a very similar approach as the "Boolean Mutate" scene, except instead of lofting it into polygons, instead I revolved along a center axis. Kinda like how a lathe machine acts to cut out metal or wood, except I used the coordinates as the input. Then I applied a circular wave deformer and applied the animated boolean.

The "Knots" scene was the result of importing the 'two knights' simulation into Inkscape and experimenting with some of its path effects. I found that the 'interpolate' command would make it look almost like natural rope and then the 'knots' command would give it some depth. Then I took them into After Effects and added a shimmer effect that was achieved by creating some fractal noise, changing the scale to be quite small, animating the evolution attribute, and then using it to luma cutout the SVG. Then a final touch of 'turbulence displace' added on top of everything to give it some subtle motion.

PACK ⬕ Metal Vista
- This pack contains 21 VJ loops (39 GB)

Josiah Lowe and I have been collaborating on some still vector art lately, and so we decided to see what could happen if I animated it. We agreed on a theme of an abstract jungle and decided to see where it would lead us. Josiah created some fresh artwork in Illustrator, including a range of beautiful shapes that were reminiscent of plants. I then converted it into an SVG and imported it into Maya.

The “hills grass” scenes used one of Josiah’s drawings and replicated it x50,000 onto a hilly terrain using MASH. The hilly terrain was created by applying some wave noise to a texture deformer on a poly plane. The “hills arms” scenes use another drawing by Josiah. The way he laid out the shapes inspired me to rig each section and make it move almost like a robotic arm. Surprisingly, the end result reminds me of a stork looking for bugs to eat.

The “cave” scenes started with trying to use many of Josiah’s drawings as fruit hanging from a tree, but I couldn’t get a result that I was happy with. So then I started experimenting with the ‘duplicate special’ tool in Maya, which allowed me to add a tiny amount of rotation to each new duplicated shape and then repeat x1000 to create a long column (see the sketch below). Then I applied that same technique to the 17 other drawings. From there I just applied some slow animated rotation to each group and arranged them to make a tunnel, making sure that the camera didn’t crash into any of the shapes. I found some interesting reflections by using glass and metal material combinations and placing area lights at even intervals within the tunnel.
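A Maya script-editor sketch of that ‘duplicate special’ column trick in Python (the object name, twist step, and lift amount are made up; only the duplicate-with-incremental-transform idea comes from the text above):

```python
import maya.cmds as cmds

def build_column(src='plantShape', copies=1000, twist=0.7, lift=0.2):
    """Repeatedly duplicate the previous copy, each time adding a tiny
    relative rotation and a small upward move; the offsets accumulate
    into a long twisting column, like Duplicate Special with transforms."""
    prev = src
    for _ in range(copies):
        prev = cmds.duplicate(prev)[0]
        cmds.rotate(0, twist, 0, prev, relative=True)
        cmds.move(0, lift, 0, prev, relative=True)
```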

The “tree fly” scenes were originally created to layer on top of the “hills grass” renders, but it just didn’t feel right to me and so I kept it separate. The “moving lights” scenes were an interesting test of Redshift since I wanted to have columns of light moving through the scene. So I created a directional light and a bunch of polygon strips, leaving empty space between each strip, animated the whole group of strips, and then enabled global illumination. The Japanese Maple trees were sourced from Mantissa.

This time around I took a different approach for the NestDrop remixes. Instead of injecting the videos into NestDrop, I recorded the NestDrop visuals by themselves and then brought everything into After Effects. This allowed me to test out various compositing ideas and use some fun layering tricks and added effects.

PACK ⬕ Surveillance Biz
- This pack contains 18 VJ loops (7 GB)

I was thinking that an appropriate symbol for social media is the security camera. It's interesting how the symbol has transformed from passive surveillance into mass data collection. Tech spirit of our times.

In these experiments I wanted to explore the idea of security cameras that have come alive. So I created three different scenes in Maya. The "popup" scene is a group of security cameras swiveling to gaze at the viewer. The "crowd" scene has a bunch of security cameras oscillating in several different layers, my attempt to have surveillance coverage from every angle in an absurd fashion. The "closeup" scene is a single security camera floating in from the distance towards the viewer, continuing right into the viewer's face. Here is the security camera model that I used.

After so much technical spelunking of late, it was refreshing to get back to my roots and do some homespun 3D animation. Did some interesting experiments with abstract textures for the dome light. The Redshift material shaders still seem to slow down my creative process, but maybe I should fully light the scene first and that would naturally change my approach. That's what I love about 3D animation, always new stuff to learn and explore.

PACK ⬕ Soaring Eagle
- This pack contains 43 VJ loops (74 GB)

Inspired by a dream I had a while back: I was flying in the pink sunset clouds and saw some birds soaring among them, and connected to their wings was long golden silk that followed the movement of the wings flapping. When I awoke, the image was still clear in my mind.

I started off by tinkering with a diamond shader with a super high dispersion factor to create some wild colors. But the real breakthrough happened when I added a field of spheres below the bird and set them to illuminate, yet be visible only in the refractions. The eagle is rigged using bend deformers for the wing and tail motions. To give the bird a little more life when it was just soaring and not flapping its wings, I also used a wave deformer to add some feeling of undulating movement.

Interesting to note that the rays were done entirely in After Effects, thanks to carefully keying out specific colors and then applying many layers of the fast radial blur effect. I had planned on doing the cloth in Maya but wasn't in the mood to deal with a cloth simulation, so some quick experiments out of desperation proved to be fruitful. It's a different effect than what I originally had in mind but I'm pleased with the result.

For the clouds I was trying to get Mental Ray working again but didn't wanna deal with an older version of Maya. But then I realized that I could easily render the cloud using the Maya Software render engine. I normally stay far away from the Maya Software render engine but rendering Maya fluids in Redshift is a total pain. I'm surprisingly happy with the result and gotta explore more abstract Maya fluids sometime in the future.

It's been satisfying to render my own 3D animations and then inject them into NestDrop to hunt for some gems. It's playtime! The bird renders were perfect for this and I had to limit myself to only a few absolute favorites.

PACK ⬕ Mountains Flow
- This pack contains 35 VJ loops (41 GB)

These were some really fun experiments. Seeing how much I've enjoyed the happy accidents of injecting Maya renders into NestDrop lately, I had the crazy idea of going the other direction... injecting NestDrop recordings into Maya. I have been wanting to explore animated displacement maps, and this was a good reason to try it out.

It took some trial and error to nail down a good workflow. If I imported the raw NestDrop frame sequence directly into Maya, the displacement appeared too harsh in the Maya renders. So after recording the NestDrop visuals, I treated them in After Effects: I simply made the footage black and white, duplicated the layer, and applied a multiply blend mode along with some fast blur. This blur was critical for rounding out the harsh forms once the displacement was rendered in Maya. Then I rendered out each of the videos to a PNG frame sequence for linking into Maya.
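Per frame, that treatment boils down to 'grayscale, then multiply by a blurred copy of itself'. A PIL/numpy sketch of the idea (the blur radius is a guess):

```python
import numpy as np
from PIL import Image, ImageFilter

def prep_displacement(frame_path):
    """Grayscale the NestDrop frame, then multiply it against a blurred
    copy of itself; the blur rounds off harsh spikes before Maya displaces."""
    gray = Image.open(frame_path).convert('L')
    blurred = gray.filter(ImageFilter.GaussianBlur(radius=8))
    g = np.asarray(gray, dtype=np.float32) / 255.0
    b = np.asarray(blurred, dtype=np.float32) / 255.0
    return Image.fromarray((g * b * 255).astype(np.uint8))
```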

Empty space is so useful, especially when it comes to alpha. But when I linked the animated displacement map, it rendered as an utter solid. So I created another poly plane and applied a black hole shader so that I could hide any of the boring aspects at rendertime, along with generating an alpha channel. This worked marvelously and allowed even more happy accidents in a process already fully guided by happy accidents.

And of course, what's to stop me from then taking the resulting Maya renders and doing another round of NestDrop remixes? Since each of the Maya renders have an alpha channel, I couldn't help myself. What an ouroboros this pack has become.

PACK ⬕ Outbreak Nodes
- This pack contains 18 VJ loops (29 GB)

Nauté is a friend of mine who has been working on simulation and visualization of infectious disease outbreaks. This data simulates the spread of a virtual pathogen with COVID-19-like epidemiological parameters through a college campus. It's a raw topic since we're currently living through a pandemic, but it's best digested through artwork.

We have long wanted to collaborate on a project, so it was refreshing to jam with real data. Nauté sent me the pandemic visualization and then I did a screen capture of the animation. I had to manually remove any duplicate frames, since my computer couldn't keep up with something in the pipeline. Then I processed it in After Effects, carefully removing the background to create an alpha channel.

My original plan was to do some animated displacement map experiments in Maya and explore some MASH networks. But then I started trying a few different ideas in NestDrop and getting some good results that matched the intensity that I was looking for. A quick but satisfying collab.

PACK ⬕ Recursion Stack
- This pack contains 30 VJ loops (29 GB)

For a long time I've wanted to revisit the Vector Recursion Workbench software collaboration I did with Nathan Williams. So I generated an SVG from the software and imported it into Maya. My first experiments proved fruitful: I extruded each of the shapes individually and then moved each shape to create a stepped pyramid. From there I did different iterations with the animation: having the shapes rotate in unison, having them rotate out of phase, and animating the shapes along the vertical Z dimension. Every other shape has a black hole shader applied, so the alpha is cut out at rendertime.

Towards the end I wasn't enjoying how everything was constantly visible; it needed some mystery. So I created a 3D watery plane, applied the black hole shader, and then animated it to oscillate up and down. It occasionally hides the recursion shapes, and the water texture ensures that what is hidden is always slightly randomized.

I normally render a motion vector AOV pass so that I can use the RSMB Vectors plug-in to add realistic motion blur in post and avoid the heavy hit in Redshift render time. But the motion vector AOV pass doesn't consider the transparency part of the shader, so it wasn't fully accurate. Instead I just let the RSMB plug-in analyze the beauty pass directly and calculate the motion blur on its own. The visuals move so fast in this scene that the RSMB plug-in occasionally glitches out, actually in a very pleasing way. But I rendered out alternate versions without motion blur just for some options, depending on the look you're going after.

I had an utter bonanza when I injected the loops into NestDrop. I'm such a sucker for the glitched out feedback loops that mix a look of digital versus organic. Stupid Youtube... its compression really kills the detail for some of these, but the MOV versions are so juicy.

PACK ⬕ Series of Tubes
- This pack contains 10 VJ loops (10 GB)

Started off with some experiments using subsurface scattering to create a plastic material which light could shine through. The 'wires' and 'spinners' came about from wanting objects of different thicknesses to see how they reacted to the plastic material. The original models came from this super useful tech builder collection, worth every penny.

After jamming with the lighting, I ended up with a long row of evenly spaced non-visible area lights, grouped them together, and then animated the whole group to move along the same axis as the camera. I originally wanted a glowing orb at the middle of each area light, but overall it felt more like an organic reaction within each of the wires and spinners. I had to apply some limitations to the area lights so that their range was limited to a specific distance.

I never enjoy the delicate preparations necessary to make a 3D scene loop seamlessly and this one was difficult with all of those 'wires' moving at different rates, but I pulled it off with some careful thinking.

Had tons of fun injecting these loops into NestDrop. Since these loops effectively wipe the screen quickly, this makes for some very interesting reactions since the Milkdrop engine often uses heavy visual feedback loops.

PACK ⬕ Crystal Bounty
- This pack contains 13 VJ loops (8 GB)

Who doesn't love some shiny crystals? I spent many years using Mental Ray, so to finally jump into a cutting edge render engine like Redshift was pretty incredible. Same ideas but with years of advancement, so the knowledge transferred easily.

So I was curious to explore the caustics in Redshift and see how far I could push the true refractions. I had originally wanted to crank the dispersion up super high and get some wild rainbows, but I was just so entranced by this more realistic look. I tried some different lighting setups and also played with a few HDRI environment maps, but in the end the best look was a simple area light in the background acting kinda like a photo lightbox used for negatives.

An absurd amount of polygons are in this scene, just to get the refractions going really crazy, and that made doing an animated poly-reduce very slow. But it all came together with some patience. The shards scene was a nice surprise when playing with the poly-reduce, and I was amazed by the beautiful colors of the dispersion.

The 'sparkle' scenes were a last minute addition when I realized that I was missing some necessary twinkling that crystals demand. But I didn't render the sparkles into any of the scenes directly so as to be more useful during a live performance. Sparkles at your command!

PACK ⬕ Hands Cascade
- This pack contains 12 VJ loops (12 GB)

I started off by playing with hundreds of arms moving out of phase with each other, kinda like the images of Hindu gods. But I ended up experimenting with animated UV maps at different speeds to create mesmerizing striped patterns, and that took over the focus. It was particularly interesting to apply the UV map as an alpha channel to have it cut out a gold metal shader. The bubble shader was a happy accident and I ran with it. Here is the hands model that I used.

I enjoy playing with the nonlinear deformers in Maya since they can be stacked together to create this type of warped appearance. I've been exploring some different area light setups since the global illumination renders so quickly in Redshift and in the past I've never been able to afford the heavy render times on my own personal computer.

Also did some exploratory experiments by injecting the loops into NestDrop to get some generative action happening. Always full of surprises.

FAQ
What are the usage rights for these videos?

You have permission to use these VJ loops within your live performances. Please do not sell or redistribute the raw VJ loops.

Why does the transparency look weird in Resolume?

For each MOV (with alpha) that you import into Resolume, you must go into the clip settings and manually change the 'Alpha Type' to 'Premultiplied'. Using the 'Straight' option will result in a dark halo around the alpha cutouts.
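The halo comes from alpha being applied twice. A one-pixel numpy illustration of the two interpretations:

```python
import numpy as np

a = 0.5                                 # 50% transparent edge pixel
fg = np.array([0.8, 0.2, 0.1]) * a      # premultiplied foreground (RGB already scaled by alpha)
bg = np.array([1.0, 1.0, 1.0])          # white background

correct = fg + bg * (1 - a)             # read as 'Premultiplied': correct blend
halo = fg * a + bg * (1 - a)            # misread as 'Straight': alpha applied
                                        # twice, darkening the edge (the halo)
```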

Can you release these packs using a different codec?

I often use the HAP codec since it's supported by all VJ software, utilizes GPU decompression, is clean of compression artifacts, and allows for an alpha channel to be included. But the downside is the large file size of HAP videos, so for some packs I instead utilize the H264 codec when the file size would be prohibitively large. Feel free to convert the videos into any format you need.
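As a hedged example, ffmpeg builds with snappy support include a HAP encoder, so a conversion could be scripted like this ('hap_alpha' keeps the alpha channel; swap in 'hap' or 'hap_q' as needed):

```python
import subprocess

def convert_to_hap(src, dst):
    """Re-encode a video into HAP-with-alpha via ffmpeg."""
    subprocess.run(['ffmpeg', '-i', src, '-c:v', 'hap',
                    '-format', 'hap_alpha', dst], check=True)
```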
Download Tips
Available only via BitTorrent

Since I'm distributing tons of videos, downloads are supplied solely through BitTorrent. So I've rented a seedbox that stays online 24/7.

How do I download these packs?

You need to use a BitTorrent client, such as qBittorrent.

Is there a limit to how much I can download?

Feel free to download as much as you want.

Why is my torrent download stuck at 99%?

Try doing a "Force Re-Check" of the download from within your BitTorrent client and then "Resume".

Why not host these packs in the cloud?

Google Drive, Dropbox, and web hosts don't offer enough bandwidth. AWS and B2 would result in large fees.