Doodly Do
ZKM | Hertz Lab, Kamuna 2023, August 15, 2023
Doodly Do is an interactive installation that brings hand-drawn characters to life through real-time animation. Visitors submit their drawings, which are then processed and animated using machine learning models. The installation playfully bridges the physical and digital worlds, allowing static doodles to transform into moving, expressive characters.
The core animation pipeline is powered by the “Animated Drawings” open-source repository developed by Meta. The system captures user-submitted drawings and processes them through a combination of object detection, pose estimation, and image segmentation techniques to extract a usable skeleton and animate the character.
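As a rough sketch of how such a run can be driven from Python: the AnimatedDrawings repository exposes a render entry point that takes a scene ("MVC") config tying together the annotated character, a motion clip, and a retargeting setup. The config path below is an illustrative placeholder, not the installation's actual file.

```python
from animated_drawings import render

# An MVC config in the AnimatedDrawings repository points at:
#  - the annotated character (texture, segmentation mask, joint skeleton),
#  - a motion clip (e.g. a BVH file), and
#  - a retargeting config mapping the motion's joints onto the drawn skeleton.
# The path below is an illustrative placeholder for the installation's own config.
render.start('./config/mvc/doodly_do_example.yaml')
```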
Technical Write-Up
The animation model is deployed locally on a Mac M1, achieving an average response time of 8–10 seconds between drawing submission and on-screen display. Drawings appear on screen with a visual transition (fading in and out) and are animated within a custom environment built in TouchDesigner.
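A minimal sketch of the glue logic between submission and display could look like the following; the folder paths and the run_pipeline.py entry point are illustrative assumptions. The rendered clip would then be picked up inside TouchDesigner (for example via a Movie File In TOP) for the transition and playback.

```python
import subprocess
import time
from pathlib import Path

# Hypothetical folders: submitted drawings arrive in INBOX,
# finished clips are written where the TouchDesigner project reads them.
INBOX = Path("/Users/doodly/inbox")
OUTBOX = Path("/Users/doodly/rendered")

def process_drawing(img_path: Path) -> None:
    """Run the animation pipeline on one submission (illustrative CLI call)."""
    out_dir = OUTBOX / img_path.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    # Placeholder command: the actual installation invokes the pipeline
    # with its own configs; this stands in for that step.
    subprocess.run(
        ["python", "run_pipeline.py", str(img_path), str(out_dir)],
        check=True,
    )

def main() -> None:
    seen: set[Path] = set()
    while True:
        for img in INBOX.glob("*.png"):
            if img not in seen:
                seen.add(img)
                process_drawing(img)  # roughly 8-10 s per drawing on the Mac M1
        time.sleep(0.5)

if __name__ == "__main__":
    main()
```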
The animation process includes the following steps (a rough sketch of the extracted rig appears after the list):
Detection of limbs and torso from the drawing.
Pose estimation to create a simple rig.
Animation of predefined motion patterns (currently 4–5 variations).
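To illustrate what the pose-estimation step produces, the sketch below assembles a set of 2D joint keypoints into a simple parent/child skeleton of the kind that predefined motion is retargeted onto. All joint names and coordinates are illustrative, not the project's actual data.

```python
# Illustrative pose-estimation output: 2D joint keypoints in the submitted drawing
# (pixel coordinates). These values are placeholders, not real project data.
keypoints = {
    "hip": (300, 420), "torso": (300, 330), "neck": (300, 250),
    "left_shoulder": (250, 260), "left_elbow": (210, 320), "left_hand": (190, 380),
    "right_shoulder": (350, 260), "right_elbow": (390, 320), "right_hand": (410, 380),
    "left_knee": (270, 520), "left_foot": (265, 610),
    "right_knee": (330, 520), "right_foot": (335, 610),
}

# Each joint's parent defines a bone of the simple rig.
parents = {
    "torso": "hip", "neck": "torso",
    "left_shoulder": "neck", "left_elbow": "left_shoulder", "left_hand": "left_elbow",
    "right_shoulder": "neck", "right_elbow": "right_shoulder", "right_hand": "right_elbow",
    "left_knee": "hip", "left_foot": "left_knee",
    "right_knee": "hip", "right_foot": "right_knee",
}

# The resulting skeleton (joint name, location, parent) is what the character config
# describes and what the 4-5 predefined motion patterns are applied to.
skeleton = [
    {"name": name, "loc": list(loc), "parent": parents.get(name)}
    for name, loc in keypoints.items()
]
```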
Further Development
Future iterations of Doodly Do aim to:
Deploy the model on AWS, reducing latency and enabling faster processing (see the request sketch after this list).
Expand the animation repertoire, allowing for a wider variety of character movements.
Enable user-generated motion, so participants can not only draw their characters but also choreograph their animations.
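One possible shape of that cloud-hosted setup is sketched below: a client uploads a drawing to an HTTP endpoint and receives the rendered clip back. The endpoint URL, parameters, and the motion field are hypothetical placeholders for the planned AWS deployment, not an existing API.

```python
import requests

# Hypothetical endpoint for the planned AWS deployment (placeholder URL).
ENDPOINT = "https://example.execute-api.eu-central-1.amazonaws.com/animate"

def submit_drawing(path: str, motion: str = "dance") -> bytes:
    """Upload a drawing to a cloud-hosted pipeline and return the rendered clip."""
    with open(path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            files={"drawing": f},
            data={"motion": motion},  # future: user-chosen or user-authored motion
            timeout=60,
        )
    resp.raise_for_status()
    return resp.content  # e.g. an MP4/GIF ready for display in TouchDesigner

# Example usage: clip = submit_drawing("doodle.png")
```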
By making the system more responsive and interactive, the goal is to create a more immersive and customizable experience for participants of all ages.