Soap And Milk

interactive data visualization

Glittering atoms and fragile droplets of data spray across soap and milk.

 

Soap And Milk

soap and milk is an interactive experience of data, allowing the observer to perceive social media as an overwhelming and organic figment. Each microscopic droplet represents a tweet that refers to the installation. Once an entity is spawned, the viewer is invited to physically interact with it and explore its behaviour until it vanishes into the vivid setting.

 


Project

soap and milk is a visual experience that portrays information in social media as an organic landscape of fluids. We envisioned droplets spawning on surreal liquid surfaces, melting and interacting with each other until they fade into the depths of their microcosm.
Tweets referring to a predefined hashtag seem to come alive as bubbles: they bounce, playfully simplified and yet unpredictable in their behaviour. To let the observer witness the lifespan of this data, we wanted them to be able to touch and manipulate its visual representation.
The installation is designed as a visual oxymoron between a larger-than-life experience and a microscopic insight into interacting droplets. By using a giant LED screen to expose the interactions of tiny bubbles, we wanted the viewer to shrink into the fluids, opening up new perspectives on and interactions with complex, detailed processes.


Visual Exploration

The visuals of soap and milk consist of more than forty layers of different rendering effects and GPU-based calculations. Techniques such as fluid simulation, compute shaders, raymarching, ambient occlusion, texture effects and post-processing combine into a surreal look and feel of detailed and fragile impressions. No polygons were used to implement the graphics: all visual layers consist of fragment shaders, which let us calculate shapes, shadows and reflections accurately for every pixel and thus create the illusion of endless detail. The efficiency of vvvv's DX11 render pipeline enabled us to quickly investigate, experiment with and retry different visual scenarios and concepts.
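To illustrate the per-pixel approach, here is a minimal CPU sketch of sphere tracing against a signed distance function. The installation evaluates this kind of logic in DX11 fragment shaders; the single sphere below is an illustrative stand-in for the smooth-blended droplet shapes of the real scene, and all names and constants here are ours, not the project's.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to one droplet (a sphere 3 units in front of the camera);
// a full scene would smooth-min several of these so droplets merge like soap.
static float sceneSDF(Vec3 p) {
    return len({p.x, p.y, p.z - 3.0f}) - 1.0f;
}

int main() {
    const int W = 64, H = 32;                          // tiny "screen" for the demo
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One ray per pixel, stepped by the distance the SDF reports
            // (sphere tracing), so no polygons are ever needed.
            Vec3 dir = {(x - W / 2) / float(H), (y - H / 2) / float(H), 1.0f};
            dir = scale(dir, 1.0f / len(dir));
            float t = 0.0f;
            bool hit = false;
            for (int i = 0; i < 64 && t < 10.0f; ++i) {
                float d = sceneSDF(scale(dir, t));     // camera sits at the origin
                if (d < 1e-3f) { hit = true; break; }
                t += d;
            }
            std::putchar(hit ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}

Because each ray is stepped by the distance the SDF reports, the scene needs no triangles at all, and the same distance function can be probed again for soft shadows, ambient occlusion and reflections.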

Interaction

We used an industrial infrared camera by XIMEA to capture the movements of the users. The system shoots 2K video at 170 frames per second, so our algorithm never loses track of a moving body or its motion. To process this data, we wrote a custom GPU-based optical flow solution that analyses and compares 2 million pixels in under 5 milliseconds.
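As a rough illustration of the math such a flow solver evaluates per pixel, here is a minimal CPU sketch of a lightly regularized Lucas-Kanade estimate over two synthetic frames. It is a stand-in under our own assumptions (5x5 window, Tikhonov regularization, toy data), not the installation's shader code.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Classic Lucas-Kanade least-squares flow over a 5x5 window, lightly
// regularized so flat or edge-only regions stay stable (aperture problem).
static void flowAt(const std::vector<float>& f0, const std::vector<float>& f1,
                   int w, int h, int x, int y, float& u, float& v) {
    float sIxx = 0, sIyy = 0, sIxy = 0, sIxt = 0, sIyt = 0;
    for (int j = -2; j <= 2; ++j) {
        for (int i = -2; i <= 2; ++i) {
            int px = std::min(std::max(x + i, 1), w - 2);
            int py = std::min(std::max(y + j, 1), h - 2);
            float Ix = 0.5f * (f0[py * w + px + 1] - f0[py * w + px - 1]);     // spatial gradient x
            float Iy = 0.5f * (f0[(py + 1) * w + px] - f0[(py - 1) * w + px]); // spatial gradient y
            float It = f1[py * w + px] - f0[py * w + px];                      // temporal gradient
            sIxx += Ix * Ix; sIyy += Iy * Iy; sIxy += Ix * Iy;
            sIxt += Ix * It; sIyt += Iy * It;
        }
    }
    sIxx += 1e-3f; sIyy += 1e-3f;                // Tikhonov regularization keeps det > 0
    float det = sIxx * sIyy - sIxy * sIxy;
    u = (-sIyy * sIxt + sIxy * sIyt) / det;      // solve the 2x2 normal equations
    v = ( sIxy * sIxt - sIxx * sIyt) / det;
}

int main() {
    const int w = 32, h = 32;
    std::vector<float> f0(w * h, 0.0f), f1(w * h, 0.0f);
    // Synthetic test: a bright square that moves one pixel to the right.
    for (int y = 12; y < 20; ++y)
        for (int x = 12; x < 20; ++x) { f0[y * w + x] = 1.0f; f1[y * w + x + 1] = 1.0f; }
    float u = 0, v = 0;
    flowAt(f0, f1, w, h, 12, 15, u, v);          // sample on the square's left edge
    std::printf("flow: u=%.2f v=%.2f (expected u near +1)\n", u, v);
    return 0;
}

A GPU implementation evaluates the same small least-squares problem independently for every pixel, which is what makes two million estimates in a few milliseconds feasible.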
During our research we found that high frame rate and resolution are more than just a boost in quality. Basic computer vision for interactive systems is often about visual debugging: the coder perceives a hand or a body within a camera image and ports this skill of perception to the algorithm. Hence the implementation is limited by the perception of the engineer. At 170 fps, our solution reconstructs more than the human eye is able to see. The resulting data reveals even the tiniest and fastest motion, allowing users to feel their own dynamics immediately.

Insights

The whole system runs on a single computer equipped with a strong processor and an NVIDIA Titan X. The camera image is uploaded to the GPU as soon as a new frame is detected. From this point on, all tasks, from image analysis to real-time rendering, are performed exclusively on the GPU to avoid data-processing bottlenecks and latency. The motion detection is done in openFrameworks and the rendering passes in vvvv; the communication between the two applications is realized via Spout (a rough sketch of this pipeline follows the list below). The following software and libraries are in use:

OpenFrameworks 0.9 http://openframeworks.cc/
ofxXimea (Nathan Wade) https://github.com/nwadedx/ofxXimea
ofxBlur (Kyle McDonald) https://github.com/kylemcdonald/ofxBlur
ofxSpout (Elliot Woods) https://github.com/elliotwoods/ofxSpout
vvvv https://vvvv.org/
dx11 (Mr. VUX) https://vvvv.org/users/vux
Evvvvil Tweet Engine (evvvvil) https://vvvv.org/contribution/evvvvil-tweet-engine
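As a rough sketch of that pipeline, the openFrameworks side could look like the following. This is a hedged reconstruction, not the installation's code: the ofxSpout calls (Sender::init / Sender::send), the shader name and the capture settings are assumptions based on the libraries listed above, and ofVideoGrabber stands in for the ofxXimea capture.

#include "ofMain.h"
#include "ofxSpout.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;   // stand-in for the XIMEA camera (ofxXimea in the real setup)
    ofShader flowShader;      // GPU optical-flow pass (fragment shader)
    ofFbo flowFbo;            // flow field rendered into a texture
    ofxSpout::Sender sender;  // assumed ofxSpout API: shares a texture with vvvv

    void setup() override {
        grabber.setup(2048, 1088);                 // ~2K capture, as in the installation
        flowShader.load("flow");                   // hypothetical flow.vert / flow.frag
        flowFbo.allocate(2048, 1088, GL_RGBA16F);  // float texture holding (u, v) per pixel
        sender.init("soapandmilk_flow");           // channel name is our own choice
    }

    void update() override {
        grabber.update();
        if (!grabber.isFrameNew()) return;         // only process fresh frames
        flowFbo.begin();
        flowShader.begin();
        // A real flow pass would also bind the previous frame; omitted for brevity.
        flowShader.setUniformTexture("frame", grabber.getTexture(), 0);
        grabber.draw(0, 0, flowFbo.getWidth(), flowFbo.getHeight());
        flowShader.end();
        flowFbo.end();
        sender.send(flowFbo.getTexture());         // hand the result to vvvv, GPU to GPU
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}

In this sketch, vvvv would pick up the "soapandmilk_flow" texture through its own Spout receiver and drive the fluid and droplet rendering from it, so the frame never has to leave the GPU.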

 

Credits:

Artistic Direction: Christian Mio Loclair
Technical Research: Jeremias Volker
Music: kling klang klong
Camera: Julian Voltmann
Photo: Felix Albertin
Commissioned by: Identitätsstiftung

 
