Acid Pauli | Ahmed: Happy Holi (A Stop-Motion Process)


Acid Pauli | Ahmed: "Happy Holi" is a 30-second stop-motion animation that illustrates the fundamentals of the craft. It features a ball rolling through the frame, a bouncing ball, and an inchworm, themed around one of South Asia's most celebrated festivals, Holi. The project was also an experiment in using new AI image-generation tools in art.

What follows describes the creation of a stop-motion music video inspired by the Holi festival in Nepal, made with Dragonframe, After Effects (Ae), and images created with the AI image generator DALL-E 2. The video evolved from three animation prompts given during an introductory stop-motion animation course I'm currently enrolled in at RISD. The main character throughout the video is a watercolour palette we all know and love from childhood.

"Stop-Motion Animation: Combining Prompts to Create a Holi-Inspired Music Video"

This music video evolved from three different animation prompts given to me in an introductory stop-motion animation class I'm taking at RISD.

The three prompts:

1. Ball rolling through the frame (animate the ball entering on one side of the frame and exiting the other).

2. Bouncing ball (three bounces).

3. Inchworm moving through the frame (animate an inchworm moving from one side to the other).

I decided to combine all three prompts into a short music video of around thirty seconds, and since I was making this on the eve of Holi here in Nepal, it would be Holi-inspired.

“Holi is a spring festival celebrated by Hindus in Nepal, India, and worldwide. In Nepal, it’s called ‘Fagu Purnima’. The festival commemorates the victory of good over evil, in which Lord Vishnu, in his Narasimha avatar, defeated the demon king Hiranyakashipu. Celebrations include smearing coloured powder on one another while singing and dancing. Holi in Nepal can be seen as a celebration of unity, friendship, and love.”

The first assignment was simple:

Ball “Plastic Lid” Rolling Through The Frame

The second assignment was not! A bouncing ball that squishes and stretches upon impact? It felt like the teacher was throwing a tough test at us in the first week. After some research online, I put together this sketch in Photoshop of the different stages of a bouncing ball, based on drawings I found.

I then decided to sculpt each stage of the ball separately. I cut a block of polymer clay into ten equal sections, rolled them into balls, and sculpted the ten stages based on the drawing. I then photographed them in sequence, swapping one for the next to mimic the bouncing ball, and aligned each subsequent shot in Dragonframe using its "onion skin" feature.
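The spacing behind those stage drawings can be sketched in code: sampling a parabolic arc once per frame reproduces the classic bounce chart, where frames bunch near the top of the arc (the ball moves slowly) and spread out near the ground (it moves fast). The numbers below (ten frames per arc, a 0.6 energy loss per bounce) are illustrative assumptions, not measurements from my sculpts.

```python
def bounce_heights(frames_per_bounce=10, h0=1.0, restitution=0.6, bounces=3):
    """Vertical position of a bouncing ball, sampled once per animation frame.

    Each arc is a parabola, so consecutive frames are closely spaced near
    the apex and widely spaced near the ground -- the timing that bounce
    charts for stop-motion encode.
    """
    heights = []
    peak = h0
    for _ in range(bounces):
        for f in range(frames_per_bounce):
            t = f / (frames_per_bounce - 1)          # 0 -> 1 across one arc
            heights.append(peak * (1 - (2 * t - 1) ** 2))  # up and back down
        peak *= restitution                           # each bounce loses energy
    return heights

# Three bounces, ten frames each: 30 frames, with each peak lower than the last.
print(bounce_heights())
```

With ten sculpted stages per arc, this is roughly the sequence of heights each clay stage would occupy in Dragonframe.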

This was the final result:

Bouncing Ball (Three Bounces) Blue Polymer Clay

The third prompt was as easy as the first:

Inchworm Moving Through The Frame Blue Polymer Clay

Exploring the Possibilities of AI in Stop-Motion Animation: Using DALL-E 2 to Generate Creative Assets for a Music Video

The video I made is pretty straightforward, but this was the first time I used assets I had created with DALL-E 2. I picked the song because it went nicely with the syncopated rhythm of the inchworm.

I returned to our main character, the watercolour palette, and experimented with the main frame by putting it into DALL-E 2 to see what would happen.

“DALL-E 2 is a neural network-based image-generation system developed by OpenAI, an extension of the original DALL-E model that generates images from textual descriptions. Trained on a large corpus of paired text and image data using contrastive learning, it can create more detailed images than its predecessor. DALL-E 2’s remarkable feature is generating highly detailed and creative images from abstract concepts, such as ‘an armchair in the shape of an avocado’ or ‘a giraffe made of sandpaper’. The name ‘DALL-E’ is a blend of the surrealist artist Salvador Dalí, known for his unique and imaginative artwork, and WALL-E, the titular robot of the Pixar film, known for its curious and adventurous nature.”

Although the above is an abbreviated definition of DALL-E 2, it doesn't mention that the system also lets users upload an image as the input, with no textual prompt at all. That is precisely what I did with the main image of the watercolour palette.

When a user uploads an image, DALL-E 2 can modify it or generate a new image based on the content of the upload alone, without any textual prompt. The model can create images similar in style and content to the uploaded image, or combine elements of it with other concepts or objects.

“Contrastive learning is a machine-learning technique that trains a model to learn representations in which instances of the same class are similar and instances of different classes are dissimilar. The model is presented with pairs of data points and must decide whether the two points belong to the same class or to different classes. The aim is to learn representations that are useful for downstream tasks, such as classification, without requiring labels for the training data. Contrastive learning is often used in deep-learning systems that need large amounts of training data, such as image recognition, natural language processing, and speech recognition. In DALL-E 2’s case, contrastive learning trains the model to match images with textual descriptions by learning to differentiate between different image–text pairs, allowing it to generate images that are more accurate and detailed than traditional supervised methods would permit.”

When you upload an image, DALL-E 2 returns four variations of it. I chose to work with the fourth variation, which I found the most aesthetically pleasing.

Since my first AI endeavour succeeded, I decided to experiment with the Smithsonian Open Access images I often use in my videos. I uploaded an 18th-century Chinese illustration of insects from the Cooper Hewitt collection.

I then needed images of Holi for the piece and turned to DALL-E 2's other function: generating images from textual descriptions.

I simply asked it to create images based on the text "Holi festival in Nepal", and it gave me these four images.

I only liked image number two, so I asked DALL-E 2 for variations based on it and received a selection of illustrative images I was happy with, shown alongside the image they were based on.

I now had most of the creative assets I would use in the video and was off to create it.

Other images used in the video as-is came from my personal collection of scanned books and GIFs.

In conclusion, this music video was an opportunity to sharpen my skills through a class at RISD: I combined three animation prompts into a Holi-inspired video, used the iconic watercolour palette as a recurring element, and incorporated AI-generated images from DALL-E 2 to enhance the visuals. DALL-E 2's contrastively trained model produced accurate, detailed images, which appear in the video alongside my own collection. The result is a short but, I hope, visually captivating music video that showcases my love of stop-motion and my newfound interest in experimenting with AI.

If you want to follow more of my work, check out my personal blog at www.thepurpledurian.com.

More by Joey Foster Ellis
