Emojinating: Evolving Emoji Blends

Computational systems that address the visual representation of concepts have great usage potential in tasks such as icon design, stimulating creativity and aiding brainstorming activities. One such system is Emojinating, a visual blending-based system capable of generating visual representations for concepts introduced by the user. However, the search strategy employed in the initial version was not effective at exploring the space of possible solutions. To address this issue, we propose an interactive evolutionary approach.

 

Emojinating Architecture

 

Emojinating combines data from three resources (ConceptNet [1], EmojiNet [2] and Twitter’s Twemoji [3]) and has three main components:

  1. Concept Extender (CE): searches ConceptNet for concepts related to the one introduced;
  2. Emoji Searcher (ES): searches for emoji matching given words, using semantic data provided by EmojiNet;
  3. Emoji Blender (EB): receives two emoji as input and returns a list of blends (a pipeline sketch follows this list).
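
To make the architecture concrete, the following Python sketch chains the three components into a single pipeline. This is a minimal illustration only: all class names, methods and toy data are assumptions for the sake of the example, not the system's actual code.

# Minimal sketch of the Emojinating pipeline; all names and data are hypothetical.

class ConceptExtender:
    def related_concepts(self, concept):
        # Toy stand-in for a ConceptNet query.
        return {"bread": ["food", "loaf"], "animal": ["dog", "pet"]}.get(concept, [])

class EmojiSearcher:
    def search(self, word):
        # Toy stand-in for an EmojiNet sense lookup.
        return {"bread": ["🍞"], "loaf": ["🥖"], "dog": ["🐶"], "pet": ["🐱"]}.get(word, [])

class EmojiBlender:
    def blend(self, a, b):
        # Toy stand-in for visual blend generation (here, just a pairing label).
        return [f"blend({a}, {b})"]

def emojinate(concept):
    ce, es, eb = ConceptExtender(), EmojiSearcher(), EmojiBlender()
    concepts = [concept] + ce.related_concepts(concept)          # CE
    emoji = [e for c in concepts for e in es.search(c)]          # ES
    return [blend for i, a in enumerate(emoji)                   # EB
            for b in emoji[i + 1:] for blend in eb.blend(a, b)]

print(emojinate("bread"))  # e.g. ['blend(🍞, 🥖)']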

 

Despite achieving higher conceptual coverage than the official emoji set [4], the initially implemented system does not employ an effective strategy for exploring the search space: it only considers the best semantically matched emoji for blend generation. This approach ignores most of the search space and does not guarantee that the solutions are the most satisfactory to the user, one of the shortcomings identified in [5].

 

Figure 1

Individual representation and weight-update system, showing the chromosomes (c1 and c2) of two individuals’ genotypes, and a detail of gene 1 (g1) of c2 from individual #2. Individual #1 is being “liked”, directly increasing the weights of the concepts/emoji marked in green and indirectly increasing those of the concepts/emoji marked in grey.

 

Evolutionary Approach

 

To improve the exploration of the search space, we propose an interactive evolutionary approach that combines a standard Evolutionary Algorithm (EA) with a method inspired by Estimation of Distribution Algorithms (EDAs). The approach evolves on two levels: on a macro level, it uses an EDA-inspired method to direct the search towards areas that match the user’s preferences; on a micro, more specific level, it uses a standard EA to focus the evolution on certain individuals.

 

Figure 2

Evolutionary framework diagram, showing tasks (T1–T9) and objects, e.g. the Concept Tree (CT)


 

The evolutionary framework, schematically represented in Fig. 2, uses the following procedure:
 

  • T1 System initialisation: the concept introduced by the user is used to gather concepts and emoji semantically related to it. The output is the Concept Tree (CT), a graph-like structure that stores the conceptual and emoji data produced by the analysis of the concept (Fig. 1). It also stores a weight value for each concept and emoji, which is used in the generation of new individuals;
  • T2 Generate the initial population using the CT: the concept and the two emoji for each individual are selected using the CT weights; the higher the weight, the greater the chance of being selected (a sampling sketch follows this list);
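
The weight-proportional selection in T2 is, in effect, roulette-wheel sampling. Below is a minimal Python sketch; the CT fragment, weights and function names are hypothetical, assuming the CT exposes its concepts and emoji as (item, weight) pairs.

import random

def weighted_pick(entries):
    # Roulette-wheel selection: probability proportional to CT weight.
    items, weights = zip(*entries)
    return random.choices(items, weights=weights, k=1)[0]

# Hypothetical CT fragment: concepts and emoji with their current weights.
ct_concepts = [("bread", 1.0), ("animal", 1.0), ("loaf", 0.5)]
ct_emoji = [("🍞", 1.0), ("🐶", 0.8), ("🥐", 0.4)]

def new_individual():
    # T2: one concept and two distinct emoji, each sampled by weight.
    concept = weighted_pick(ct_concepts)
    e1 = weighted_pick(ct_emoji)
    e2 = weighted_pick([p for p in ct_emoji if p[0] != e1])
    return (concept, e1, e2)

population = [new_individual() for _ in range(12)]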

 

do:

  • T3 The user evaluates individuals by marking them as “liked” and selects individuals for storage by marking them as “locked”;
  • T4 Store the “locked” individuals in the archive;
  • T5 Retrieve the individuals marked as “liked” from the population and from the archive;
  • T6 Update the CT weights based on the “liked” individuals;
  • T7 Mutate the “liked” individuals to produce offspring, using three types of mutation (emoji, layer and type of blend);
  • T8 Generate new individuals from scratch using the CT, so that they match user preferences;
  • T9 Form the new population by merging the mutated individuals (T7) with the new individuals produced using the CT (T8); a sketch of T6–T9 follows the loop;

while (user is not satisfied)
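
Tasks T6–T9 can be pictured as the following self-contained Python sketch. The weight-update bonus, the genotype fields and the set of blend types are illustrative assumptions, not the published implementation; only the loop structure and the three mutation targets (emoji, layer, type of blend) come from the description above.

import random

BLEND_TYPES = ["juxtaposition", "replacement", "fusion"]  # assumed set of blend types

def make_individual(concept, e1, e2):
    # Hypothetical genotype: one concept, two emoji, a layer order, a blend type.
    return {"concept": concept, "emoji": [e1, e2],
            "layer": 0, "blend_type": random.choice(BLEND_TYPES)}

def update_weights(weights, liked, bonus=0.2):
    # T6: every gene of a "liked" individual gets its CT weight increased.
    for ind in liked:
        for gene in (ind["concept"], *ind["emoji"]):
            weights[gene] = weights.get(gene, 1.0) + bonus

def mutate(ind, emoji_pool):
    # T7: apply one of the three mutation types (emoji, layer, type of blend).
    child = {**ind, "emoji": list(ind["emoji"])}
    kind = random.choice(["emoji", "layer", "blend_type"])
    if kind == "emoji":
        child["emoji"][random.randrange(2)] = random.choice(emoji_pool)
    elif kind == "layer":
        child["layer"] = 1 - child["layer"]  # swap which emoji sits on top
    else:
        child["blend_type"] = random.choice(BLEND_TYPES)
    return child

def sample(weights, pool):
    # Weight-proportional (roulette-wheel) pick of a single gene.
    return random.choices(pool, weights=[weights.get(g, 1.0) for g in pool])[0]

def next_generation(liked, weights, concept_pool, emoji_pool, size=12):
    update_weights(weights, liked)                         # T6
    new_pop = [mutate(ind, emoji_pool) for ind in liked]   # T7
    while len(new_pop) < size:                             # T8: fresh CT samples
        c = sample(weights, concept_pool)
        e1 = sample(weights, emoji_pool)
        e2 = sample(weights, [e for e in emoji_pool if e != e1])
        new_pop.append(make_individual(c, e1, e2))
    return new_pop                                         # T9: merged population

# Usage: one "liked" individual biases the next generation towards its genes.
weights = {}
pop = next_generation(liked=[make_individual("bread", "🍞", "🐶")],
                      weights=weights, concept_pool=["bread", "animal"],
                      emoji_pool=["🍞", "🐶", "🥐"])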
 

The system was implemented as a web-based application, which allows user interaction (Fig. 3). The interface has three areas: the search area, the population area and the archive area (1–3 in Fig. 3). The search area is where the user introduces the concept (e.g. bread animal in Fig. 3). The population area presents the current population. Each individual has two main buttons: “like” (d), which is used to evaluate the individual, and “lock” (c), which stores the individual in the archive; archived individuals instead show a “cross” button (e), which removes them from the archive.
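
The “like”, “lock” and “cross” interactions reduce to simple state transitions on each individual. A minimal sketch, with hypothetical field and method names, might look like this:

from dataclasses import dataclass, field

@dataclass
class Individual:
    blend: str            # rendered blend (placeholder)
    liked: bool = False   # "like" (d): marks the individual for T5-T7
    locked: bool = False  # "lock" (c): keeps the individual in the archive

@dataclass
class Session:
    population: list = field(default_factory=list)
    archive: list = field(default_factory=list)

    def like(self, ind):
        ind.liked = True              # T3: user evaluation

    def lock(self, ind):
        ind.locked = True
        if ind not in self.archive:
            self.archive.append(ind)  # T4: store in the archive

    def remove_from_archive(self, ind):
        ind.locked = False            # "cross" (e): drop from the archive
        self.archive.remove(ind)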

 

Figure 3

Web-based application interface, showing 3 areas – search area (1), population area (2), archive area (3) – and 6 buttons – (a) next generation, (b) download, (c) lock, (d) like, (e) remove from archive and (f) activated like button.


 
 

User-study #1: double-word concepts

 

The quality of the evolutionary approach was evaluated in two user-studies. User-study #1 assessed whether our approach could lead to better solutions than the non-evolutionary, deterministic version of the system presented in [5]. We compared visual blends produced by the two approaches for a set of randomly generated concepts, using a multiple-choice survey conducted with 31 participants. Overall, our approach was competitive in 4 out of the 10 concepts; in two of these, the evolutionary approach was clearly better (its blend was selected as better by the majority of the participants).

 

User-study #2: single-word concepts

 

User-study #2 assessed the efficiency of the system in producing visual representations for single-word concepts, drawn from a set of nouns from the New General Service List, and compared the results with those reported in [4]. We conducted a survey with 8 participants, each of whom used the system to generate visual representations for 9 randomly selected concepts. The system produced at least one solution that represented the concept in 56 of the 72 runs (roughly 78%). In 38 runs, the solution considered the best was not produced by the approach from [4]; in 30 of these 38 runs our solution was considered better than any of the solutions obtained with the system from [4], and in 5 it was considered equally good.
 

Figure 4

Examples of blends considered good solutions


 

The results show that our evolutionary system explores the search space more effectively, obtaining solutions of higher quality in terms of concept representativeness and showing clear advantages over the approaches presented in [4] and [5].

 
 


References

 

[1] Speer, R., and Havasi, C. 2012. Representing General Relational Knowledge in ConceptNet 5. In LREC, 3679–3686.

[2] Wijeratne, S., Balasuriya, L., Sheth, A., and Doran, D. 2017. EmojiNet: An Open Service and API for Emoji Sense Discovery. In 11th International AAAI Conference on Web and Social Media (ICWSM 2017), Montreal, Canada.

[3] http://github.com/twitter/twemoji

 

[4] Cunha, J. M., Martins, P., and Machado, P. 2018. How Shell and Horn Make a Unicorn: Experimenting with Visual Blending in Emoji. In Proceedings of the Ninth International Conference on Computational Creativity (ICCC 2018), Salamanca, Spain, 145–152.

[5] Cunha, J. M., Martins, P., and Machado, P. 2018. Emojinating: Representing Concepts Using Emoji. In Workshop Proceedings of the 26th International Conference on Case-Based Reasoning (ICCBR 2018), Stockholm, Sweden, 185.