Olhos Music Fest branding

Abstract — This project is about the creation of a music festival’s dynamic brand, which reacts to music and customises itself to any person participating in the event.
Keywords: brand, dynamic, live, music visualisation, face detection, eye detection, festival.

 

I. Introduction

Olhos (eyes) is a small village in Portugal that hosts a music festival called Olhos Music Fest (Eyes Music Fest). This project is about the creation of that festival’s graphic identity, which works dynamically and even live, reacting to music (sound) and customising itself to any person participating in the event, whether they are a musician, a member of the staff or a visitor.

 

The main concept behind the festival’s branding was to convey the energy and feeling of “what it is like to participate in the festival” — an extroverted mood, with people dancing freely.

 

First of all, a logotype was created, because it was important to communicate the name of the festival (Figure below). However, a graphic identity is much more than a logo, so the logotype was not taken as the main feature of this brand.

 

We tried to understand how to translate our concept (“what it is like to participate in the festival”) into graphics — in other words, how to visualise that experience. So, we started by defining the very elementary units that compose the festival — music, space and people — and used those as the starting point to develop the brand. Just as Neue Design Studio used information visualisation techniques to create the Norwegian Meteorological Institute’s logo [1], which reacts to weather statistics, we set out to use information visualisation to create not only graphics but, primarily, animations.

 

 

Visualising music:

There are at least two types of input for visualising music — the amplitude and the frequency of the audio signal. Of the two, amplitude is probably the easier one for humans to perceive, so that was the one we chose. However, because a brand must be distinctive and was meant to translate our concept into graphics, the solution for this visualisation could never be a trivial sound visualiser. Instead, we decided to visualise music by developing a dancing artefact that reacts to the sound’s amplitude — the “dancing free” part of the concept. This artefact mimics a boneless arm (a chain) capable of spinning around itself indefinitely.

 

Visualising space:

 

Visualising space may seem redundant, unless we add a new layer and come to see things in a different way.

 

To visualise space, we opted to simplify the environment into four ranges of brightness and represent them with geometric symbols. Using cameras to render the surrounding space, we were able to create a whole new way of visualising it and to create an “extroverted mood” (part of the concept), almost mimicking a psychoactive experience. This layer was used as a background pattern.

 

Visualising people:

 

Because this is “Eyes Music Fest”, the brand could not be complete without making the eyes its main graphic motif. For the people visualisation, though, we use only their eyes, so that a friend may still be recognised while remaining anonymous to the crowd. Those eyes were then used as the main unit to build the dancing sounds, which work over the space visualisation.
Changing the inputs will consequently change the outputs. So, by applying these visualisation techniques to the brand’s graphics, we could create a highly dynamic and personalisable brand.

 

II. How it is made

 

The dancing eyes (music and people visualisation):

 

The technical process of the project started by using Processing’s OpenCV library to detect eyes in photographs. However, the algorithm is very likely to pick up other elements in the image that are not eyes. As such, our solution consisted of first detecting the faces in the images/frames and then looking for eyes only inside each detected region of interest.
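The two-pass detection might look like the following minimal Processing sketch. It assumes the OpenCV for Processing library (gab.opencv) and a hypothetical input photograph crowd.jpg; it is a sketch of the approach, not the project’s exact code:

    import gab.opencv.*;
    import java.awt.Rectangle;

    PImage photo;
    ArrayList<PVector> eyeCentres = new ArrayList<PVector>();

    void setup() {
      size(800, 600);
      photo = loadImage("crowd.jpg");  // hypothetical input image

      // First pass: detect faces in the whole image.
      OpenCV faceFinder = new OpenCV(this, photo);
      faceFinder.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      Rectangle[] faces = faceFinder.detect();

      // Second pass: look for eyes only inside each detected face,
      // which discards most false positives elsewhere in the image.
      for (Rectangle face : faces) {
        PImage region = photo.get(face.x, face.y, face.width, face.height);
        OpenCV eyeFinder = new OpenCV(this, region);
        eyeFinder.loadCascade(OpenCV.CASCADE_EYE);
        for (Rectangle eye : eyeFinder.detect()) {
          // Store each eye centre in full-image coordinates.
          eyeCentres.add(new PVector(face.x + eye.x + eye.width / 2.0f,
                                     face.y + eye.y + eye.height / 2.0f));
        }
      }
    }

    void draw() {
      image(photo, 0, 0);
      noFill();
      stroke(255, 0, 0);
      for (PVector c : eyeCentres) {
        ellipse(c.x, c.y, 20, 20);
      }
    }

Each detected centre then becomes the root of a chain, as described next.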

 

Once good eye detection was achieved, the next step was to create an array of eyes for each eye that had been found, placing each element relative to the one before it (its parent), at a fixed distance. In other words, we form a chain of eyes for each detected eye (Figure below).
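A minimal sketch of how such a chain could be represented, continuing the Processing example above (class and field names are illustrative; eyeSprite stands for a small image cropped around a detected eye):

    // One rotating segment (one eye) of a chain.
    class EyeLink {
      float angle;        // current angle relative to the parent
      int direction = 1;  // +1 forward, -1 backward
      EyeLink(float a) { angle = a; }
    }

    // A chain of eyes rooted at one detected eye.
    class EyeChain {
      PVector root;                  // centre of the detected "parent" eye
      ArrayList<EyeLink> links = new ArrayList<EyeLink>();
      float segmentLength = 30;      // fixed distance between consecutive eyes

      EyeChain(PVector root, int n) {
        this.root = root;
        for (int i = 0; i < n; i++) {
          links.add(new EyeLink(random(TWO_PI)));
        }
      }

      // Each link is drawn relative to the one before it (its parent).
      void display(PImage eyeSprite) {
        imageMode(CENTER);
        PVector pos = root.copy();
        for (EyeLink link : links) {
          pos.x += cos(link.angle) * segmentLength;
          pos.y += sin(link.angle) * segmentLength;
          image(eyeSprite, pos.x, pos.y);
        }
      }
    }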

 

 

After that, we set them up to move autonomously around their respective “parent eye”, with a constant angular speed (a linear angle increment), backward or forward depending on a probability. Also depending on a probability, each element may reverse its moving direction at any frame.
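A sketch of that per-frame update, using the classes above (the probability and step values are illustrative, not the project’s):

    // Per-frame autonomous motion: a constant angle increment, plus a small
    // chance of reversing direction on any given frame.
    void updateChain(EyeChain chain) {
      float baseStep = 0.02;  // linear angle increment per frame
      for (EyeLink link : chain.links) {
        if (random(1) < 0.01) {
          link.direction *= -1;  // occasionally reverse (backward/forward)
        }
        link.angle += link.direction * baseStep;
      }
    }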

 

Now that the eyes could move, they just needed to move in rhythm — to dance. Thus, besides their own natural movement, for each frame, the amplitude of the song playing was added to each eye’s moving angle. That means the tips of each chain move faster than its origin, and that every eye moves faster the greater the song’s amplitude. It also means they never stop dancing, even in silence (they just slow down, waiting to be energetic again) (Videos below).
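One way to couple the chains to the music, assuming Processing’s sound library, a hypothetical track whales_track.mp3 and an arbitrary gain value (again a sketch, not the project’s exact code):

    import processing.sound.*;

    SoundFile song;
    Amplitude meter;
    ArrayList<EyeChain> chains = new ArrayList<EyeChain>();  // one per detected eye

    void setup() {
      size(800, 600);
      song = new SoundFile(this, "whales_track.mp3");  // hypothetical file
      song.loop();
      meter = new Amplitude(this);
      meter.input(song);
    }

    void draw() {
      background(0);
      float level = meter.analyze();  // current amplitude, roughly in [0, 1]
      for (EyeChain chain : chains) {
        updateChain(chain);  // the autonomous motion from the sketch above
        for (EyeLink link : chain.links) {
          // Adding the amplitude to every moving angle makes the chains dance:
          // offsets accumulate along the chain, so the tips swing faster than
          // the root, and the base increment keeps them moving even in silence.
          link.angle += link.direction * level * 0.3;  // 0.3 is an arbitrary gain
        }
        // chain.display(eyeSprite);  // draw as in the EyeChain sketch above
      }
    }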

 

 

 

The environment patterns (space visualisation):

 

The goal of this space visualisation was to make it tell something about the objects in space (darker and lighter zones can be identified, and so even some silhouettes), while being abstract enough to let viewers interpret it in their own way, away from the real, objective world. So, this filter was made out of geometric symbols.

 

First, we picked up the brightness of each pixel on a linearly spaced grid. Then, we divided the brightness spectrum into four ranges and associated with each range a symbol (or the lack of one) whose density reflects the brightness level (Figure below).
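A minimal sketch of the filter, assuming a hypothetical input image venue.jpg and four illustrative symbols (nothing, a point, a circle, a square) for the four brightness ranges; the actual symbols and thresholds used in the project may differ:

    PImage src;

    void setup() {
      size(640, 480);
      src = loadImage("venue.jpg");  // hypothetical input; a video frame also works
      src.resize(width, height);
    }

    void draw() {
      background(0);
      drawSymbolFilter(src);
    }

    void drawSymbolFilter(PImage img) {
      stroke(255);
      noFill();
      int step = 16;  // spacing of the linearly spaced grid
      for (int y = step / 2; y < height; y += step) {
        for (int x = step / 2; x < width; x += step) {
          float b = brightness(img.get(x, y));  // 0..255
          if (b < 64) {
            // darkest range: no symbol
          } else if (b < 128) {
            point(x, y);                            // sparse symbol
          } else if (b < 192) {
            ellipse(x, y, step * 0.5, step * 0.5);  // denser symbol
          } else {
            rect(x - step * 0.4, y - step * 0.4, step * 0.8, step * 0.8);
          }
        }
      }
    }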

 

Because the system can work with real-time video, it has great potential to be used live in VJ performances, interactive posters, or any other kind of physical installation at the festival’s venue.

 

It can capture the same image as the “eyes algorithm” (the background then belongs to the eyes’ owner), or it can capture different imagery. In both alternatives, the eyes and the background can then be mashed up (eyes over background). Figure 4 shows the first alternative.
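In code, the mash-up reduces to drawing order, reusing the sketches above (eyeSprite again stands for a hypothetical cropped eye image):

    void draw() {
      drawSymbolFilter(src);           // background: the space visualisation
      for (EyeChain chain : chains) {  // foreground: the dancing eyes
        updateChain(chain);
        chain.display(eyeSprite);
      }
    }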

 

 

III. Applications

 

The generated artefacts can be exported and then used to compose static or animated design objects such as videos, posters or merchandising (post-production). However, they can also be used in real-time applications such as interactive posters, VJ performances or photo spots at the festival’s venue.

 

In terms of post-production, two principal graphic elements were developed — an animated one and a non-animated one.

 

The non-animated element was a printed poster announcing the performing artists, each beside the respective day and time. The poster used the festival’s logotype, complemented by the three generated artefacts together (First figure below). All the typography and composition follow the brand’s rules (the brand is not only about the logotype, or even only about these generated images; it is a whole collection of pieces that together identify the festival) (Second to fourth figures below).

 

 

 

 

The animated element was a Facebook banner where, as in the poster, we introduced the invited artists. Here, however, it was possible to use the full potential of the sound, space and people visualisations. So, we used a picture of each artist/band to pick out their eyes and make them dance to the rhythm of a song (from Whales — one of the invited bands) while introducing them. Because we were working with static images (the artists’ photos) in the banner, we used a different input for the background filter (the second alternative) — a video of someone dancing (Video below).

 

 

Inspired by works such as “Oto Nové Swiss Poster” [2], “Camera Postura” [3] and the “EV NT interactive poster wall concept” [4], the proposed real-time application was an interactive poster. Figure 5 depicts a digital version of the poster, where the static image was replaced by the output of the real-time algorithm working with the input of a webcam. This way, the poster can customise itself to any person or group of people looking at it (Video below).

 

 

As already mentioned, this algorithm can have many other applications, such as VJ performances, by making it work with the input of a camera on stage pointed at the musicians or at the audience (Video below).

 

 

[1] Armin, “Where the Cold Wind Blows,” 29 September 2010. [Online]. Available: www.underconsideration.com/brandnew/archives/where_the_cold_wind_blows.php
[2] Studio Feixen and J. Giger, “Oto Nové Swiss Poster,” 9 March 2017. [Online]. Available: www.otonoveswiss.co.uk [Accessed 15/03/2018].
[3] LUST, “Camera Postura,” 2014. [Online]. Available: https://lust.nl/#projects-5939 [Accessed 15/03/2018].
[4] P. Eskild, S. Barth and A. Wassum, “EV NT interactive poster wall concept,” 2012. [Online]. Available: https://vimeo.com/43598671

 

Daniel Lopes, supervised by Pedro Martins and Penousal Machado
Jan 2018