Generate a unique short-film each time

This project reflects on the possibilities of generating automated, pseudo-random cuts of a 25-second short film. The director's cut becomes a software's cut, created on a viewer's demand as a unique random instance of the short's potential combinations.


A reflection on the virtualization of the self: Identity 2.0, addiction and dependence. A complex, living and contemporary issue, closer to hypertext culture than to the discourses rooted in the Enlightenment and inherited from modernity. Incorporating this paradigm shift in how we generate and disseminate knowledge, this project proposes a liquid, deconstructed narrative with multiple paths.

How is it done?

The artifact involves different disciplines and has three main parts: the shooting of an audiovisual repository according to a specific script; a web-based interface that visualizes the amount of footage and its possible instances/combinations; and server-side software that dynamically edits the selected shots in real time and encodes the result into a web-friendly format for online viewing.
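The server-side step can be sketched as: pick one take per situation at random, write a playlist, and let an encoder splice and compress the result. The sketch below is a minimal illustration in Python, not the project's actual code; the directory layout, file names and the use of ffmpeg's concat demuxer are all assumptions.

```python
import random
from pathlib import Path

def random_cut(takes_per_set, rng=random):
    """Pick one take per situation/set, in order, to form a cut."""
    return [rng.randrange(n) for n in takes_per_set]

def write_concat_list(cut, path="cut.txt"):
    """Write an ffmpeg concat-demuxer playlist for the selected takes.

    Paths like 'set1/take003.mp4' are hypothetical."""
    lines = [f"file 'set{i + 1}/take{t:03d}.mp4'" for i, t in enumerate(cut)]
    Path(path).write_text("\n".join(lines) + "\n")
    return lines

# The server could then encode the cut for the web with something like:
#   ffmpeg -f concat -safe 0 -i cut.txt -c:v libx264 -movflags +faststart out.mp4
cut = random_cut([20, 20, 20, 21, 21, 21])  # illustrative per-set counts
write_concat_list(cut)
```

Because each request draws a fresh cut, two viewers asking at the same moment would still almost certainly receive different films.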

The audiovisual content of the shots reflects on how technology is changing our daily lives and often shifting us from natural contexts to stressful landscapes of information overload. It questions who is really in control: humans or machines. The very design of the project reinforces this dilemma by generating an automated cut, never edited before and only conditioned by the pattern set up by the director, but out of his control.

The content has 6 different situations/sets with a total of 123 different takes, 5 soundtracks, 5 sub-themes with 32 written sentences, and more than 20 minutes of archive footage that can be randomly combined in a structured manner to generate an almost unique 25-second clip.
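To give a sense of scale: if each cut draws one take per situation plus one soundtrack and one sentence, the number of distinct cuts is the product of the independent choices. The per-set split of the 123 takes is not given in the text, so the counts below are an illustrative assumption.

```python
from math import prod

# Assumed split of the 123 takes across the 6 situations (sums to 123).
takes_per_set = [20, 20, 20, 21, 21, 21]
soundtracks = 5
sentences = 32

assert sum(takes_per_set) == 123
# One take per situation, one soundtrack, one sentence:
combinations = prod(takes_per_set) * soundtracks * sentences
print(combinations)  # → 11854080000, on the order of 10 billion cuts
```

Even under this rough model, the odds of two viewers ever receiving the same cut are vanishingly small.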

The interface of the project intends to visually convey the number of combinations. The timeline, rendered as a graphical pie, represents the fact that although there is only one starting point (the center), the possible endpoints on the perimeter and the paths to reach them are virtually infinite.

The server-side software is open-source code that simply takes the data generated by the interface and mixes the selected footage into the resulting short film. Beyond the interest of the narration, interface and data visualization of this experimental audiovisual, other theses arise. It is known that viewing the same video many times gives us different information, and our perception of it evolves. Would viewing many instances of a pattern be more efficient at communicating an abstract concept than repeating a specific instance several times? Who is the protagonist of this artifact: the viewer, the author or the code? Is the message as relevant as the interface we look through? Are there better ways to visualize the repository? Is it possible to create a self-explanatory interface?

Source code: iam

Code by Julià Minguillón, Sergi Lario and Quelic Berga, created with Processing and Processing.js