Emojinating: How Shell and Horn make a Unicorn

Emoji are becoming increasingly popular, both among users and brands. Their impact is such that some authors even suggest a possible shift of language towards visuality. We decided to take advantage of the connection emoji establish between pictorial representation and associated semantic knowledge to produce visual representations of concepts.

 

We present a Visual Blending-based system capable of generating visual representations for concepts introduced by the user. Our approach combines data from ConceptNet [1], EmojiNet [2] and Twitter’s Twemoji [3] dataset to explore Visual Blending of emoji.

 

The system searches for existing emoji semantically related to a user-introduced concept and complements this search with a visual blending process that generates new emoji (more on this below).


Figure 1

Visual representations for “drug money” and “burger shame”, posted on the Instagram account Emojinatingthings

The Instagram account Emojinatingthings posts results generated by the system (see Fig. 1).


How it works

 

There are two main tasks – retrieval of existing emoji that match the introduced concept (T1) and generation of new ones through visual blending (T2) – which are conducted using three components (see Fig. 2):

 

  1. Concept Extender (CE): searches ConceptNet for concepts related to the one introduced;
  2. Emoji Searcher (ES): searches for emoji that match given words, using semantic data provided by EmojiNet;
  3. Emoji Blender (EB): receives two emoji as input and returns a list of possible blends.

 

Figure 2

Generation of visual representations (T2) for two concepts: car and game theory

 

The system outputs a set of visual representations for the introduced concept, composed of existing emoji and generated blends.
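To make the flow concrete, here is a minimal Python sketch of how the three components might fit together for a double-word concept. The component interfaces (search, related_concepts, blend) and the control flow are illustrative assumptions, not the system’s actual code.

```python
# Minimal sketch of the overall pipeline (illustrative only): the
# component interfaces are assumptions, not the system's actual code.

def emojinate(concept, ce, es, eb):
    """Produce visual representations for a user-introduced concept."""
    # T1: retrieve existing emoji matching the concept itself.
    representations = list(es.search(concept))

    # For single-word concepts, expand to related double-word concepts
    # via ConceptNet (e.g. "car" -> "go fast") before blending.
    candidates = [concept] if " " in concept else ce.related_concepts(concept)

    # T2: for each double-word candidate, find emoji for each word
    # (falling back on related concepts when a word has no match)
    # and blend every pair.
    for candidate in candidates:
        emoji_lists = []
        for word in candidate.split():
            matches = es.search(word)
            for related in ([] if matches else ce.related_concepts(word)):
                matches = es.search(related)  # e.g. "theory" -> "idea"
                if matches:
                    break
            emoji_lists.append(matches)
        if len(emoji_lists) == 2 and all(emoji_lists):
            for a in emoji_lists[0]:
                for b in emoji_lists[1]:
                    representations.extend(eb.blend(a, b))
    return representations
```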

 

T1: Retrieval of Existing Emoji 

In order to conduct T1, the system mainly makes use of the Emoji Searcher (ES) component, which uses the EmojiNet dataset to find emoji based on words (e.g. the red and orange cars for the word car in Fig. 2). The word search covers several fields: the emoji name and definition, the keywords associated with the emoji, and the senses related to it (e.g. in Fig. 2 the coffin emoji is retrieved for the word go due to its presence in the sense “go, pass away,…”).
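As a rough illustration, this lookup can be thought of as a keyword match over those fields. The record schema below is an assumption made for this sketch; the real EmojiNet data is richer.

```python
import re

def search_emoji(word, emoji_records):
    """Return emoji whose name, definition, keywords or senses mention `word`."""
    word = word.lower()
    hits = []
    for emoji in emoji_records:
        texts = ([emoji["name"], emoji["definition"]]
                 + emoji["keywords"] + emoji["senses"])
        tokens = set()
        for text in texts:
            tokens.update(re.findall(r"[a-z']+", text.lower()))
        if word in tokens:
            hits.append(emoji)
    return hits

# Assumed record for the coffin emoji: it matches "go" via one of its senses.
coffin = {
    "name": "coffin",
    "definition": "a box in which a corpse is buried",
    "keywords": ["death", "funeral"],
    "senses": ["go, pass away, expire"],
}
print(search_emoji("go", [coffin]))  # -> [coffin record]
```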

 

T2: Generation of Visual Representations 

Firstly, the Concept Extender and the Emoji Searcher components are used to get the emoji to blend. The Concept Extender (CE) component queries ConceptNet for a given word, obtaining related concepts. It is used in two situations: (i) when the user introduces a single-word concept, to retrieve double-word concepts to use in the blending (e.g. in Fig. 2, for car the system obtains go fast); (ii) when the ES component does not find any emoji for a given word (e.g. in Fig. 2, theory has no matching emoji, so the system uses the related concept idea, obtained with CE).
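The public ConceptNet 5 API can be used for such queries. A minimal sketch follows; the endpoint is real, but the filtering of edges below is our own simplification:

```python
import requests

def related_concepts(word, limit=20):
    """Query the public ConceptNet 5 API for concepts related to `word`."""
    url = "http://api.conceptnet.io/c/en/" + word.lower().replace(" ", "_")
    data = requests.get(url, params={"limit": limit}).json()
    related = []
    for edge in data.get("edges", []):
        # Each edge links two nodes; keep the English labels other than `word`.
        for node in (edge["start"], edge["end"]):
            label = node.get("label", "").lower()
            if node.get("language") == "en" and label and label != word:
                related.append(label)
    return related

print(related_concepts("car"))  # may include multi-word concepts such as "go fast"
```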

 

After an emoji has been obtained for each word, the Emoji Blender (EB) component performs the blending. Two blend methods are currently used (a sketch follows the list):

 

  1. Juxtaposition: two emoji are put side by side or one over the other;
  2. Replacement: part of emoji A is replaced by emoji B.
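
Below is a minimal Pillow-based sketch of the two methods, operating on transparent PNGs such as those provided by Twemoji. The file paths and the replacement box are placeholders, and the actual system’s compositing is likely more elaborate.

```python
from PIL import Image

def juxtapose(path_a, path_b, side_by_side=True):
    """Juxtaposition: place emoji A and emoji B side by side or stacked."""
    a = Image.open(path_a).convert("RGBA")
    b = Image.open(path_b).convert("RGBA")
    if side_by_side:
        canvas = Image.new("RGBA", (a.width + b.width, max(a.height, b.height)))
        canvas.paste(a, (0, 0), a)           # use each image's alpha as mask
        canvas.paste(b, (a.width, 0), b)
    else:  # one over the other
        canvas = Image.new("RGBA", (max(a.width, b.width), a.height + b.height))
        canvas.paste(a, (0, 0), a)
        canvas.paste(b, (0, a.height), b)
    return canvas

def replace_part(path_a, path_b, box):
    """Replacement: overwrite region `box` = (left, top, right, bottom) of A with B."""
    a = Image.open(path_a).convert("RGBA")
    b = Image.open(path_b).convert("RGBA")
    left, top, right, bottom = box
    region = b.resize((right - left, bottom - top))
    a.paste(region, (left, top))  # no mask: B fully overwrites the region
    return a
```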


Evaluating results: double-word concepts

 

In order to assess the quality of the system in terms of blend production, a study with 22 participants was conducted. The main goal was to present the participants with blends and ask them a series of questions related to blend quality. Firstly, a list of ten concepts was produced (see Fig. 3).


Figure 3

Best blends for each concept (12 in total). From left to right, top to bottom: Frozen Flower, Secrets in the Future, Serpent of the Year, Silent Snake, Storm of the Teacher, The Darkest Rose, The Flame of the Swords, The Laughing Blade (2), The Sexy Moon (2), and The Sharp Silk. Below each blend is the number of participants who selected it and the total number of participants who selected a blend for that concept.


Then the blends produced by the system for these concepts were shown to the participants. For each concept, the participants were asked to execute the following tasks:

 

  1. Introduce the concept and generate the blends (presented all at once, side by side);
  2. Answer whether any blend represents the concept (yes or no);
  3. Evaluate the quality of representation from 1 (very bad) to 5 (very good);
  4. Rate the degree of surprise from 1 (very low) to 5 (very high);
  5. Select the best blend (only if the answer in task 2 was yes).

 

The system was able to generate blends that represented the concepts in 71.36% of the cases (157 out of 220), and in 46.81% of the answers (103 out of 220) the quality was rated high (4) or above.


General analysis

 

Overall, we consider the results obtained to be visually and conceptually interesting (even though no Conceptual Blending is performed). The system is able to generate varied results, both with the same emoji and with different ones. The blending process, through the use of Juxtaposition and Replacement, produces blends that represent the concept behind them and vary in their degree of conceptual complexity.

 

One major advantage of the system is its very wide conceptual reach, which depends only on the available emoji knowledge. On the other hand, the present work does not involve Conceptual Blending, which was implemented in previous research work (see A Pig, an Angel and a Cactus Walk Into a Blender).


Read more about it!


References

 

[1] Speer, R., and Havasi, C. 2012. Representing general relational knowledge in ConceptNet 5. In LREC, 3679–3686.

[2] Wijeratne, S.; Balasuriya, L.; Sheth, A.; and Doran, D. 2017. EmojiNet: An Open Service and API for Emoji Sense Discovery. In 11th International AAAI Conference on Web and Social Media (ICWSM 2017), Montreal, Canada.

[3] http://github.com/twitter/twemoji