EvoDesigner: Towards aiding creativity in graphic design

In Graphic Design (GD), finding disruptive solutions that attract people’s attention is of the utmost importance. However, to deliver faster and cheaper, designers often adopt trendy solutions rather than exploring innovative ones.
EvoDesigner (see Figure 2) is an automatic system that assists the creative process of graphic designers by alternately collaborating with them in the evolution and editing of pages and page items within the Adobe InDesign environment, e.g. for creating posters. Figure 1 showcases examples of posters generated using different versions of the system.

 

Figure 1

Examples of posters generated using various setups, taken from various generations and slightly different versions of the system.

 

Figure 2


Schematic representation of EvoDesigner.

 

To use the system, the user must 1) create a blank document, 2) insert elements into its pages (see Figure 4), and 3) set up the desired preferences (e.g. select the pages to evolve and set keywords) and click “Generate” to start. Then, 4) the Keywords-to-visuals translation module tries to find properties/tools that match the inserted keywords; 5) each property/tool is assigned a probability of being used by the system to mutate pages (individuals); 6) the evolutionary engine evolves the pages; and 7) the resulting pages are made available as regular, editable InDesign pages. Lastly, 8) the designer may edit the results and 9) export the final artefact. At any stage of the interaction, the parameters may be changed and the evolution restarted. See Figure 3 for a scheme of the genotype.
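As a rough illustration of steps 4) to 6), the sketch below shows one possible way to encode an individual’s genotype and apply per-property mutation probabilities. All names (PageItem, MUTATION_PROBS) and the exact property set are hypothetical, chosen for illustration only; the actual genotype follows the scheme of Figure 3.

```python
import random
from dataclasses import dataclass, field

# Hypothetical genotype: a page is a list of items, each holding a
# subset of the InDesign properties the engine is allowed to mutate.
@dataclass
class PageItem:
    x: float                      # position (pt)
    y: float
    width: float
    height: float
    rotation: float = 0.0
    fill_color: tuple = (0, 0, 0)

@dataclass
class Page:
    items: list = field(default_factory=list)

# Step 5: each property/tool is assigned a probability of being used to
# mutate individuals. Hard-coded here; in EvoDesigner these would be
# derived from the Keywords-to-visuals translation module.
MUTATION_PROBS = {"x": 0.3, "y": 0.3, "width": 0.2,
                  "height": 0.2, "rotation": 0.1}

def mutate(page: Page, strength: float = 20.0) -> Page:
    """Perturb each numeric property with its assigned probability."""
    for item in page.items:
        for prop, prob in MUTATION_PROBS.items():
            if random.random() < prob:
                value = getattr(item, prop)
                setattr(item, prop, value + random.uniform(-strength, strength))
    return page
```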

 

Figure 3

Schematic representation of an individual’s genotype (property names and value-types might not be fully accurate).

 

Testing the evolutionary engine by evolving towards target images

 

The first iteration of EvoDesigner consisted of the implementation of an automatic evolutionary engine based on a conventional genetic algorithm. Experiments were carried out to evolve page layouts towards given target images, using the Mean Squared Error (MSE) metric to assess fitness. Figure 4 presents the 3 manually created phenotypes that were evolved. Figure 5 showcases 10 initial posters, automatically generated from the posters in Figure 4.
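A minimal sketch of such an MSE-based fitness function, assuming the evolved page and the target image have both been rasterised to same-sized RGB arrays (the rendering step is omitted, and the function name is ours):

```python
import numpy as np

def mse_fitness(rendered: np.ndarray, target: np.ndarray) -> float:
    """Mean Squared Error between a rendered page and the target image.
    Lower values mean a closer match, so selection minimises this score."""
    assert rendered.shape == target.shape, "images must share dimensions"
    diff = rendered.astype(np.float64) - target.astype(np.float64)
    return float(np.mean(diff ** 2))
```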
 

Figure 4

The manually created pages selected to be evolved.

 

Figure 5

Example of an initial population of 10 individuals, generated from the 3 selected pages of Figure 4.

 

These preliminary experiments mainly focused on targeting sketches of posters (see Figure 6). Sketched targets might be helpful, for example, whenever a graphic designer aims to generate artefacts that follow a given page balance and colour palette. Nevertheless, utilising images of finished GD artefacts might also be helpful, for example, for resembling the page balance of the targets without culminating in results that are too similar to the originals.
For instance, the generated artefacts might include page items that are completely different from the ones in the target posters.
 

Figure 6

Best individuals from 4 different runs (100 generations), for 3 different target images: a) Figure 5.a.1; b) Figure 5.a.2; c) Figure 5.a.3

 

The experimental results suggested the viability of such an approach for evolving GD artefacts that resemble the page balance of the target images, yet are different enough not to be deemed the same. Although user testing is still needed, we believe the presented approach might be worth including in the GD workflow to assist the generation of new GD solutions, since the system is able to take given layouts and use them to arrange and edit page items in relatively unexpected manners.

 

Towards evolving innovative visual solutions

 

In the following experiments, we proposed a more robust fitness assignment method, aiming to assess the degree of dissimilarity of the generated pages compared to a given set of existing posters. The method is based on an auto-encoder trained on a data set of 4620 GD posters, posted on the website typographicposters.com by graphic designers worldwide. The goal is for the model to memorise the common features of the posters in the training data set, avoiding the need to compare each generated poster to the whole data set of existing posters in order to compute a minimum similarity value, which would be time- and resource-consuming.
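For illustration, the sketch below shows how such an auto-encoder might be built and trained with Keras. The layer sizes, input resolution (128×128 RGB) and training schedule are assumptions for the sake of the example, not the exact configuration used in EvoDesigner.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder(size: int = 128) -> keras.Model:
    """Small convolutional auto-encoder; reconstruction trained with MSE."""
    inputs = keras.Input(shape=(size, size, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# posters: float array of shape (4620, 128, 128, 3), values in [0, 1].
# Training for many epochs without regularisation lets the model slightly
# overfit, i.e. memorise the common features of the data set.
# model = build_autoencoder()
# model.fit(posters, posters, epochs=200, batch_size=32)
```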

 
In theory, the poorer the ability of the auto-encoder to reconstruct (recall) a given poster (or other types of images, or even noise), the more distinct that input might be from the posters in the data set, especially the most ordinary ones. Thus, contrary to conventional auto-encoders, our model should slightly overfit the training data, i.e. the more dissimilar an input is, the worse its reconstruction must be.
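Under that assumption, fitness assignment reduces to a single forward pass per generated poster: its reconstruction error becomes its novelty score. A minimal sketch, with illustrative names (the model is any trained auto-encoder such as the one above):

```python
import numpy as np

def novelty_fitness(model, poster: np.ndarray) -> float:
    """Reconstruction error of one generated poster (H x W x 3, in [0, 1]).
    The worse the auto-encoder recalls the poster, the more it is assumed
    to differ from the training data, so higher scores mean more novelty."""
    batch = poster[np.newaxis, ...]              # add batch dimension
    reconstruction = model.predict(batch, verbose=0)[0]
    return float(np.mean((reconstruction - poster) ** 2))
```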

 
The results suggested the viability of using an auto-encoder to assess whether a PNG poster resembles existing ones in a given training set or whether its aesthetics differ enough to be deemed more or less innovative compared to that data set. Also, the results revealed that the reconstruction loss returned by the auto-encoder can be successfully used for fitness assignment in evolutionary systems, aiding the exploration of GD solutions that differ from a given data set. Nevertheless, human designers might still be crucial in the process, to identify results with potential and post-edit them into final GD artefacts.

 
Figure 8 showcases an initial population of 10 individuals, automatically generated from the 3 pages of Figure 7. During the experiments, populations of 50 individuals were generated. Figure 9.a showcases some evolved posters selected from the last generations. Figure 9.b presents the respective versions of those posters after manual post-editing. Figure 10 showcases examples of final artworks created based on the post-edited posters.

 

Figure 7

Unknown (manually created) pages selected to be evolved.

 

Figure 8

Example of an initial population of 10 individuals, automatically generated from the 3 selected pages of Figure 7.

 

Figure 9

Generated individuals (selected from the last generations): a) non-edited, b) manually post-edited.

 
Future developments must comprise: (i) including posters created by non-designers in the training set of the auto-encoder (e.g. less visually pleasing examples); (ii) further optimising the architecture of the auto-encoder; (iii) finding a method for continually retraining the auto-encoder (e.g. learning from the generated posters); (iv) including a method for evaluating legibility; (v) implementing a grid system and a snap-to-grid method; (vi) including a method for evaluating visual balance; (vii) including an evaluation method for exploring solutions of a given graphical style, e.g. another auto-encoder might be considered, clustering the data set into different styles and approximating the aesthetics of a chosen one; (viii) creating a method for mapping keywords to the mutation operators that should be more likely to be applied, limiting the search space according to given concepts; (ix) creating methods for considering the hierarchy of page items; and (x) conducting further technical and user testing.

 

Figure 10

GD artefacts based on the graphic style of the posters of Figure 9.b: a) and f) posters; b) and d) book covers; c) and e) tote bags.

 

Publications

 

  • D. Lopes, J. Correia, and P. Machado, “EvoDesigner: Towards Aiding Creativity in Graphic Design,” in Artificial Intelligence in Music, Sound, Art and Design – 11th International Conference, EvoMUSART 2022, Held as Part of EvoStar 2022, Madrid, Spain, April 20-22, 2022, Proceedings, 2022, pp. 162-178.

  • D. Lopes, J. Correia, and P. Machado, “EvoDesigner: Aiding the exploration of innovative graphic design solutions,” in Artificial Intelligence in Music, Sound, Art and Design – 12th International Conference, EvoMUSART 2023, Held as Part of EvoStar 2023, Brno, Czech Republic, April 12-14, 2023, Proceedings, 2023 (to be published).

  • D. Lopes, J. Correia, and P. Machado, “EvoDesigner: Evolving Poster Layouts,” Entropy, vol. 24, iss. 12, 2022.

Author

Daniel Lopes