Art project translates music from Teenage Engineering’s OP-Z synth into AI-generated imagery


AI-generated art is a new frontier rife with potential. But for every thorny question the technology raises, including the potential for widespread manipulation, generated art can also inspire wonder and awe. For example, look no further than a project that creates kaleidoscopic visual landscapes for composed music.

The project, a collaboration between quirky synth brand Teenage Engineering and design studios Modem and Bureau Cool, draws inspiration from the neurological condition synesthesia. This rare phenomenon leads the brain to perceive input from one sense through several senses at once. For example, a listener with synesthesia may see music rather than only hearing it, observing color, movement and shape in response to musical patterns. Conversely, a synesthetic person may taste shapes, feel words from a novel or hear an abstract painting.

The audiovisual experiment uses the Teenage Engineering OP-Z synth as the music source that is then translated into AI art. In real time, Modem and Bureau Cool’s “digital extension” translates musical properties into text prompts describing colors, shapes and movements. Those prompts then feed into Stable Diffusion (an open-source tool similar to Midjourney) to produce dreamy, synesthetic animations.
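The article doesn’t describe the mapping itself, but the idea of turning musical properties into a text prompt can be sketched in a few lines. Everything below is a hypothetical illustration — the function names, pitch/tempo/velocity thresholds and descriptor words are assumptions, not Modem and Bureau Cool’s actual implementation.

```python
# Hypothetical sketch: map musical properties to a text-to-image prompt.
# All mappings below are illustrative assumptions, not the real extension.

def describe_pitch(midi_note: int) -> str:
    """Map pitch register to a color word (low = warm, high = cool)."""
    if midi_note < 48:
        return "deep crimson"
    if midi_note < 72:
        return "golden amber"
    return "electric blue"

def describe_velocity(velocity: int) -> str:
    """Map note velocity (0-127) to a shape descriptor."""
    return "jagged crystalline shapes" if velocity > 96 else "soft rounded forms"

def describe_tempo(bpm: float) -> str:
    """Map tempo to a movement word."""
    if bpm < 90:
        return "slowly drifting"
    if bpm < 140:
        return "flowing"
    return "rapidly swirling"

def build_prompt(midi_note: int, velocity: int, bpm: float) -> str:
    """Combine the descriptors into one prompt for a diffusion model."""
    return (f"kaleidoscopic landscape, {describe_pitch(midi_note)} tones, "
            f"{describe_velocity(velocity)}, {describe_tempo(bpm)}")

print(build_prompt(midi_note=60, velocity=110, bpm=150))
# prints "kaleidoscopic landscape, golden amber tones, jagged crystalline shapes, rapidly swirling"
```

In the real system a prompt like this would be regenerated continuously as the performance changes, then handed to Stable Diffusion to render each animation frame.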

Modem co-founder Bas van de Poel sees the experiment as fuel for artists’ imaginations. “With the project, we see the potential for musicians to explore new forms of creativity, facilitating a joint performance between human and machine,” van de Poel told Engadget today.

If you’re a musician who owns Teenage Engineering’s OP-Z, you can’t yet use the extension yourself — but that may eventually change. Van de Poel tells Engadget that the companies are “exploring the potential of launching a public version.”

This AI-based project isn’t the first to bring synesthetic properties to the masses. Last year, Google Arts & Culture created an exhibition bringing machine-learning-produced sound to Vassily Kandinsky’s paintings.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.

