PLEIZEL LIVE AV 

2025
Real-time Generative AI + AV
Collaborators: Arian & Pleizel
Place: Babylon





For Pleizel, we designed a live audiovisual performance that merges real-time prompt-based generation with data-driven visual systems to evoke a dreamlike, ambiguous atmosphere.


At the core of the work is a dynamic feedback loop between human input, algorithmic behavior, and environmental stimuli — all orchestrated in real time.

Utilizing a custom pipeline built in TouchDesigner, we generate point cloud noise structures and evolving generative visuals, driven by a curated stream of text prompts. These prompts, selected and cycled live, shape the system’s behavior — both visually and conceptually — by guiding transformations across space, texture, and rhythm.
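
A minimal sketch of the prompt-cycling side, assuming the prompts are pushed to TouchDesigner over OSC; the port, OSC address, timing, and prompt texts below are illustrative stand-ins, not the show's actual configuration:

    # Hypothetical prompt cycler: sends one prompt at a time to a
    # TouchDesigner OSC In DAT. Port, address, and interval are assumed.
    import time
    from pythonosc.udp_client import SimpleUDPClient

    PROMPTS = [
        "dissolving coastline, soft fog, slow tide",
        "wind through a dark forest, drifting particles",
        "dense glitch cityscape, fractured neon",
    ]

    client = SimpleUDPClient("127.0.0.1", 7000)  # assumed TD listening port

    def cycle_prompts(interval_s=20.0):
        # Loop the curated list, pushing one prompt per interval.
        while True:
            for prompt in PROMPTS:
                client.send_message("/pleizel/prompt", prompt)
                time.sleep(interval_s)

    if __name__ == "__main__":
        cycle_prompts()

On the TouchDesigner side, the received string can then retarget the generative model and trigger the transformations described above.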

Using a Kinect camera, we capture Pleizel’s performance in 3D space, placing her figure inside a responsive point cloud system composed of evolving generative noise. This allows us to distort, fragment, and recompose her image dynamically, turning the performer into a living waveform.
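
The distortion step reduces to a small per-frame transform; here is an illustrative version, assuming the Kinect points arrive as an (N, 3) array, with a cheap trigonometric field standing in for the evolving generative noise we actually drive:

    # Illustrative point-cloud displacement: offset XYZ positions with a
    # time-animated pseudo-noise field so the figure fragments and re-forms.
    import numpy as np

    def displace(points, t, amplitude=0.05, frequency=3.0):
        # points: (N, 3) XYZ positions from the depth camera; t: time in seconds.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        n = (np.sin(frequency * x + t)
             * np.cos(frequency * y - 0.7 * t)
             * np.sin(frequency * z + 1.3 * t))
        # Decorrelate the three axes by rolling the noise samples.
        offset = amplitude * np.stack([n, np.roll(n, 1), np.roll(n, 2)], axis=1)
        return points + offset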

We also use virtual cameras to frame Pleizel from multiple perspectives as she plays her synthesizers. These views are composited into real-time simulations: sometimes placing her inside natural environments like crashing waves or windy forests, and at other times in chaotic cityscapes shaped by glitch and density.
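
The cuts between these views come down to a small per-frame callback, sketched here as a hypothetical TouchDesigner Execute DAT; the operator name 'cam_switch', the four-camera count, and the cut length are assumptions:

    # Hypothetical TouchDesigner frame callback: rotate through the
    # virtual-camera renders on a fixed cut length (values assumed).
    def onFrameStart(frame):
        frames_per_cut = 120                # assumed cut length in frames
        if frame % frames_per_cut == 0:
            switch = op('cam_switch')       # Switch TOP over camera renders
            switch.par.index = (switch.par.index.eval() + 1) % 4
        return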


Our setup integrates sound analysis, OSC/MIDI interfaces, and the NDI protocol, allowing us to modulate parameters such as density, displacement, color field, and time-based variation, all in sync with the sonic landscape. Audio-reactive elements connect closely with external signals, creating a layered experience where noise, gesture, and signal blend into a fluctuating digital ecology.
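
At its core, the audio-reactive side is a per-frame mapping from band energies to visual parameters; a minimal sketch, assuming the analysis stage delivers low/mid/high levels normalized to 0..1 (the parameter names and ranges here are illustrative):

    # Illustrative audio-to-parameter mapping, evaluated once per frame.
    def map_audio_to_params(low, mid, high):
        # low / mid / high: normalized band levels from the audio analysis.
        return {
            "density":      0.2 + 0.8 * low,       # bass thickens the cloud
            "displacement": 0.05 + 0.4 * mid,      # mids push the points apart
            "color_field":  min(1.0, 1.5 * high),  # highs shift the palette
        }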

The result is a live-coded, data-reactive environment where dreams dissolve into glitch, and systems behave like bodies.

© Tuğrul Şalcı 2025