Evolving L-Systems with Musical Notes

 

 

Over the years there has been strong interest in devising computational approaches to the generation of music and images. More recently, some authors have investigated the interplay of music and images, that is, how one medium can be used to drive the other. In this work we present a new method for the algorithmic generation of images that result from the visual interpretation of a rewriting system (L-system). The main novelty of our approach is that the aesthetics of the L-system itself is the result of an evolutionary process guided by musical elements. In a multi-agent simulation environment, musical notes are decomposed into their elements – pitch, duration and volume – and, once captured, these are mapped onto visual parameters of the L-system – line length, width, color and turning angle. We also present the results of experiments that support our approach.

 

Keywords: Evolutionary Environment, Generative Music, Interactive Genetic Algorithms, L-systems, Sound Visualization

 

L-Systems

 

Lindenmayer Systems, or L-systems, are parallel rewriting systems operating on strings of symbols, originally proposed to study the development processes that occur in multicellular organisms like plants [1]. Formally, an L-system is a tuple G = (V,ω,P), where V is a non-empty set of symbols, ω is a special sequence of symbols of V called axiom, and P is a set of productions, also called rewrite rules, in the form LHS → RHS. LHS is a non-empty sequence of symbols of V and RHS a sequence of symbols of V.

 

In a generative system like ours, the L-system works by starting with the axiom and then iteratively rewriting, in parallel, all symbols of the current string using the production rules.
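This parallel rewriting can be sketched in a few lines of Python (a minimal illustration, not the authors' implementation); as is conventional, symbols without a matching production are copied unchanged:

```python
def rewrite(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the L-system production rules in parallel for a number of iterations."""
    s = axiom
    for _ in range(iterations):
        # Every symbol of the current string is rewritten in one pass,
        # independently of its neighbours (parallel rewriting).
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

# Lindenmayer's original algae system: G = ({A, B}, "A", {A -> AB, B -> A})
print(rewrite("A", {"A": "AB", "B": "A"}, 4))  # -> "ABAABABA"
```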

 

System Overview

 

Figure 1

Environment overview.

 

In our environment there are two types of entities: notes and agents. Notes have immutable attributes, their position and value, and they do not die or evolve over time. Agents are entities with two components: (1) an L-system that drives their visual expression and (2) a sequence of notes that defines the L-system’s parameters at each level of rewriting. Agents move through the world by random walk, looking for notes, which they copy internally and append to their sequence. These sequences change over time through an Interactive Genetic Algorithm (IGA) [2]: a new individual can be generated from the current one through two different processes, mutation and crossover.
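As a sketch, the agent/note dynamics described above might look like the following (the class and method names are hypothetical, chosen for illustration; agents walk on a grid and catch any note at their position):

```python
import random

class Note:
    """A static note in the environment; its attributes never change."""
    def __init__(self, position, pitch, duration, volume):
        self.position = position  # immutable: notes never move or die
        self.pitch, self.duration, self.volume = pitch, duration, volume

class Agent:
    """An individual that random-walks and collects notes into its sequence."""
    def __init__(self, position):
        self.position = position
        self.sequence = []  # notes caught so far (used as the genotype)

    def step(self, notes):
        # Random walk: move one cell in a random cardinal direction.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.position = (self.position[0] + dx, self.position[1] + dy)
        for note in notes:
            if note.position == self.position:
                # Copy the caught note internally, appending it to the sequence.
                self.sequence.append(note)
```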

 

In short, our audiovisual environment provides a visual representation of a sequence of notes, associating visual patterns with musical content that can be identified as pleasant or unpleasant. This also means that users do not have to listen to every individual present in the environment to assess its musical relevance.

 

Audiovisual Mappings

 

We divided the visual representation of music into two distinct parts: (i) the visual representation of the notes spread across the environment that individuals may catch, and (ii) the visual effect of the notes once they are caught by the L-systems.

The first part consists of the static notes in the environment, represented as circles whose grey level encodes volume and whose size encodes note duration (see Figure 2).

 

Figure 2

Visual representation of notes before they are captured.

 

In the second part, each caught note affects its L-system through the following visual features: (i) branch angle, (ii) branch length, (iii) branch weight, and (iv) color. Here, the duration of each caught note is mapped into branch length, its volume into branch stroke, and its consonance into branch color (see Figure 3).

 

Figure 3

Visual translation in the L-System of notes after they are captured.
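The duration-to-length and volume-to-stroke mappings could be realised as simple linear scalings; the ranges below are illustrative assumptions, not values from the paper:

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi]."""
    t = (value - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def branch_length(duration, max_duration=4.0):
    # Longer notes draw longer branches (durations in beats, assumed range).
    return scale(duration, 0.0, max_duration, 2.0, 40.0)

def branch_stroke(volume):
    # Louder notes draw thicker branches (MIDI velocity 0-127 assumed).
    return scale(volume, 0, 127, 0.5, 6.0)
```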

 

Every time a note is caught, its pitch is compared to that of the previous note, and from this comparison we determine its consonance or dissonance. The first note caught by an individual (level 1) is assigned a color corresponding to its pitch height (see Figure 4). If the sequence of notes is consonant, a tonality based on the color of the previously caught note is applied; if it is dissonant, a random color tonality is applied. As Figure 4 shows, consonance can be recognised by its subtle changes of color, whereas a dissonant melody produces changes of color and tonality with bigger steps.

 

Figure 4

Pitch color (left) and sonance visual expression (right).

 

Furthermore, since there is no note to compare against when the L-system catches its first note, the color assigned corresponds directly to its pitch (lower pitches map to warmer colors, higher pitches to colder ones). All subsequent notes are colored according to their classification as consonant or dissonant with respect to the previously caught note.
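A possible sketch of this colouring scheme, assuming a conventional interval-based consonance table and hue ranges (both are illustrative assumptions, not the paper's exact values):

```python
import random

# Intervals (mod 12 semitones) treated as consonant: unison, 3rds, 4th, 5th, 6ths.
CONSONANT = {0, 3, 4, 5, 7, 8, 9}

def pitch_hue(pitch, lo=36, hi=96):
    # Lower pitches map to warm hues (near 0, reds), higher pitches to
    # cold hues (near 240, blues).
    return 240.0 * (pitch - lo) / (hi - lo)

def next_hue(prev_hue, prev_pitch, pitch, rng=random):
    interval = abs(pitch - prev_pitch) % 12
    if interval in CONSONANT:
        # Consonant: a subtle tonality shift around the previous colour.
        return (prev_hue + rng.uniform(-15.0, 15.0)) % 360.0
    # Dissonant: a random hue, producing the larger colour jumps of Figure 4.
    return rng.uniform(0.0, 360.0)
```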

 

The Evolutionary Algorithm

 

An IGA [2] was used to assess the quality of a given candidate solution. The solutions favoured by the user have a better chance of prevailing in the gene pool, since they are able to reproduce more often.

 

The musical sequence caught by an individual constitutes its genotype, and its phenotype is composed of sound and image, i.e., the rendered L-system. The order of the genotype is defined by the order in which notes are caught. We apply both crossover and mutation so that evolution progresses with diversity: while crossover allows a global search of the solution space, mutation allows a local search. Each element has a given probability of being mutated. Offspring resulting from mutation or crossover are incrementally inserted into the current population, and the original chromosomes are kept.

 

Mutation [3] allows changes in pitch, duration and volume. Our mutation mechanism receives two parameters: the sequence of notes to be modified and the probability of mutation of each note in the genotype. Each element in the sequence of notes caught by the individual has an equal chance of being chosen (uniform probability), and each note chosen for mutation can have its pitch, duration and volume changed randomly. Crossover [3] selects two parents to produce two children: a random cut point is chosen in each parent, and the children are assembled from the resulting segments. The size of each child is therefore variable, since the cut points in the parents are random.
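These two operators can be sketched as follows, with a note represented as a (pitch, duration, volume) tuple and illustrative value ranges (not the paper's exact parameters):

```python
import random

def mutate(sequence, p, rng=random):
    """Each note is replaced, with probability p, by a randomly altered note."""
    out = []
    for pitch, duration, volume in sequence:
        if rng.random() < p:
            # Randomly change pitch, duration and volume of the chosen note.
            pitch = rng.randint(36, 96)
            duration = rng.choice([0.25, 0.5, 1.0, 2.0])
            volume = rng.randint(0, 127)
        out.append((pitch, duration, volume))
    return out

def crossover(parent_a, parent_b, rng=random):
    """One random cut point per parent; the children have variable length."""
    cut_a = rng.randrange(1, len(parent_a))
    cut_b = rng.randrange(1, len(parent_b))
    child1 = parent_a[:cut_a] + parent_b[cut_b:]
    child2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child1, child2
```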

 

Discussion

 

The key idea that distinguishes our approach from other studies is the concern with mapping sound into image and image into sound. More specifically, our L-systems develop and grow according to the musical notes they have collected, while at the same time the visual patterns aim to reflect the musical melodies built in this process.

 

It is, however, far from trivial to reconcile musical quality and pleasant aesthetic results with L-systems, due to the limited control over their structure. We have tried to address this problem by giving the user the chance to interactively choose the survival chances of individuals. Although the system has been mostly guided through user interaction, we must ask ourselves whether it is possible to reach the same quality of results without such guidance.

 

Other future explorations could include L-systems with a greater diversity of expression, or even the use of other biological organisms as models.

 

The following video presents a small demonstration of the system:

 

 

References

 

[1] Prusinkiewicz, P., Lindenmayer, A.: The algorithmic beauty of plants. Springer, New York (1990)

[2] Sims, K.: Interactive evolution of dynamical systems. In: Toward a practice of autonomous systems, Proceedings of the First European Conference on Artificial Life, pp. 171–178 (1992)

[3] Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–72 (1992)

 

Publication

  • A. Rodrigues, E. Costa, A. Cardoso, P. Machado, and T. Cruz, “Evolving L-Systems with Musical Notes,” in Evolutionary and Biologically Inspired Music, Sound, Art and Design – 5th International Conference, EvoMUSART 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, 2016, pp. 186-201.

Author

Ana Rodrigues

Ernesto Costa


Date

17/03/2016