
Emojinating: Visually Representing Concepts Using Emoji
Emoji are becoming increasingly popular, both among users and brands. Their impact is such that some authors even mention a possible language shift towards visuality. We take advantage of the connection that emoji establish between pictorial representation and associated semantic knowledge, and present a visual blending-based system that is capable of generating visual representations for concepts introduced by the user.
emojinating.dei.uc.pt
Instagram: @Emojinatingthings
Twitter: @Emojinating
Emojinating Architecture
Emojinating combines data from three resources (ConceptNet [1], EmojiNet [2] and Twitter’s Twemoji [3]) and has three main components (a sketch of how they fit together follows the list):
- Concept Extender (CE): searches ConceptNet for concepts related to the one introduced;
- Emoji Searcher (ES): searches for emoji based on given words, using semantic data provided by EmojiNet;
- Emoji Blender (EB): receives two emoji as input and returns a list of blends.
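A minimal sketch, in Python, of how the three components could chain together. All function names are illustrative assumptions, standing in for the component sketches in the following sections; this is not the actual implementation:

def find_emoji(word):
    # ES lookup, falling back to CE-related concepts when nothing matches
    matches = emoji_searcher(word)
    for related in ([] if matches else concept_extender(word)):
        matches = emoji_searcher(related)
        if matches:
            break
    return matches

def emojinate(concept):
    # Produce candidate visual blends for a user-introduced concept
    if " " not in concept:
        # single-word input: expand to a related two-word concept via CE
        # (e.g. car -> go fast)
        concept = next(c for c in concept_extender(concept) if " " in c)
    word_a, word_b = concept.split()[:2]
    # EB receives one emoji per word and returns a list of blends
    return emoji_blender(find_emoji(word_a)[0], find_emoji(word_b)[0])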

Fig. 1: Generation of visual representations (T2) for two concepts: car and game theory.
Retrieval of Existing Emoji
In order to retrieve existing emoji, the system mainly makes use of the Emoji Searcher (ES) component, which uses the EmojiNet dataset to find emoji based on words (e.g. the red and orange cars for the word car in Fig. 1). The word search covers several fields: the emoji name and definition, the keywords associated with the emoji, and the senses related to it (e.g. in Fig. 1 the coffin emoji is retrieved for the word go due to its presence in the sense “go, pass away,…”).
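A sketch of this field-by-field lookup, assuming a simplified record layout (the real EmojiNet data is richer and is accessed differently):

def emoji_searcher(word, emojinet_records):
    # Return emoji whose name, definition, keywords or senses mention the word
    word = word.lower()
    hits = []
    for record in emojinet_records:
        fields = (
            [record["name"], record["definition"]]
            + record["keywords"]
            + [sense["definition"] for sense in record["senses"]]
        )
        if any(word in field.lower() for field in fields):
            hits.append(record["unicode"])
    return hits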
Generation of Visual Representations
Firstly, the Concept Extender and the Emoji Searcher components are used to get the emoji to blend. The Concept Extender (CE) component is used to query ConceptNet for a given word, obtaining related concepts. It is used in two situations: (i) when the user introduces a single-word concept, to retrieve two-word concepts to use in the blending (e.g. in Fig. 1 for car the system obtains go fast); (ii) when the ES component does not find any emoji for a given word (e.g. in Fig. 1 theory does not have any matching emoji, so the system uses the related concept idea, obtained with CE).
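A CE-style lookup could be sketched against the public ConceptNet 5 API as follows (the endpoint is real; the filtering below is a deliberate simplification of what the system does):

import requests

def concept_extender(word, limit=20):
    # Return concepts related to the word, e.g. "go fast" for "car"
    url = "http://api.conceptnet.io/c/en/" + word.replace(" ", "_")
    edges = requests.get(url, params={"limit": limit}).json()["edges"]
    related = []
    for edge in edges:
        for node in (edge["start"], edge["end"]):
            label = node.get("label", "")
            if label and label.lower() != word.lower():
                related.append(label)
    return related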
After getting an emoji for each word, a blending process occurs in the Emoji Blender component. Three methods of blending are currently considered (sketched in code after the list):
- Juxtaposition: two emoji are put side by side or one over the other;
- Replacement: part of emoji A is replaced by emoji B;
- Fusion: the two emoji are merged together by exchange of individual parts.
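A sketch of the three methods, assuming helper functions (place, parts_of, replace_part) over the layered Twemoji vector graphics; these helpers are assumptions, not an actual API:

import random

def blend(emoji_a, emoji_b, method):
    if method == "juxtaposition":
        # side by side or one over the other
        return place(emoji_a, emoji_b, axis=random.choice(["x", "y"]))
    if method == "replacement":
        # part of emoji A is replaced by the whole of emoji B
        part = random.choice(parts_of(emoji_a))
        return replace_part(emoji_a, part, emoji_b)
    if method == "fusion":
        # individual parts are exchanged between the two emoji
        part_a = random.choice(parts_of(emoji_a))
        part_b = random.choice(parts_of(emoji_b))
        return replace_part(emoji_a, part_a, part_b)
    raise ValueError("unknown blend method: " + method)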
Deterministic Approach
The first version of the system [4] consisted of a deterministic approach, in which only the emoji most semantically related to the input concept were used in the blending process. We assessed the system’s performance using a set of 1509 nouns, and the results show higher conceptual coverage than that of the official emoji set [5]. Despite these results, the initially implemented system did not employ an effective strategy for exploring the search space, thus not guaranteeing that the obtained solutions were the most satisfactory for the user.
Evolutionary Approach
As an attempt to improve the exploration of the search space, in [6] we propose an interactive evolutionary approach that combines a standard Evolutionary Algorithm (EA) with a method inspired by Estimation of Distribution Algorithms (EDAs). The approach has a two-level evolution: on a macro level, it uses a method that takes inspiration from EDAs to direct the search towards areas that match the user’s preferences; on a micro level, it uses a standard EA to focus the evolution on certain individuals.
Fig. 2: Individual representation and weight-update system, showing the chromosomes (c1 and c2) of two individuals’ genotypes, and detail of gene 1 (g1) of c2 from individual #2. Individual #1 is being “liked”, directly increasing the weights of the concepts/emoji marked in green and indirectly increasing those of the concepts/emoji marked in grey.
Each individual is encoded using a genotype of two chromosomes, which codify the combination between two emoji parents (Fig. 2). A visual blend is seen as the phenotype of an individual. The emoji used in the blend are stored in the first chromosome (c1 in Fig. 2). The second chromosome (c2) is composed of an undefined number of genes, each codifying an exchange between the two emoji parents. Each gene corresponds to a pair of numbers, one referring to emoji 1 and the other to emoji 2, which define how the exchange is conducted.
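This representation could be sketched as a small data structure (field names are illustrative; the exact encoding in the system differs):

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Individual:
    # c1: the two emoji parents used in the blend
    parents: Tuple[str, str]
    # c2: variable-length list of genes, each a pair of part indices
    # (one into emoji 1, one into emoji 2) defining one exchange
    exchanges: List[Tuple[int, int]] = field(default_factory=list)

# e.g. an individual blending shell and horn with a single exchange
unicorn = Individual(parents=("shell", "horn"), exchanges=[(0, 2)])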
Fig. 3: Evolutionary framework diagram, showing tasks (T1–T9) and objects, e.g. the Concept Tree (CT).
In this version, the user is able to mark solutions as good (“liked”) and also store them in an archive, avoiding their loss in the evolutionary process. The solutions marked as “liked” are used to generate offspring (mutation and crossover) and also to update the weights of an object that we refer to as Concept Tree (CT). The CT object stores conceptual and emoji data produced from the analysis of the user-introduced concept, and is used to produce individuals from scratch. It also stores weight values for each concept and emoji, which are updated in each generation based on the “liked” solutions.
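The weight update on a “liked” solution could look like the following sketch, with a full increment for the concepts/emoji used directly and a smaller one propagated to related nodes (the structure, method names and constants are assumptions):

def update_weights(ct, liked_individual, direct=1.0, indirect=0.5):
    # Concepts/emoji used in the blend get the full increment...
    for node_id in liked_individual.used_nodes():
        node = ct.nodes[node_id]
        node.weight += direct
        # ...and their ancestors in the tree a smaller, indirect one
        parent = node.parent
        while parent is not None:
            parent.weight += indirect
            parent = parent.parent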
The evolutionary framework, schematically represented in Fig. 3, uses the following procedure (condensed in code after the listing):
- T1 System initialisation: the concept introduced by the user is used to gather concepts and emoji semantically related to it. The output is the Concept Tree (CT), a graph-like structured object that stores the conceptual and emoji data produced from the analysis of the concept (Fig. 2). It also stores a weight value for each concept and emoji, which are used in the generation of new individuals;
- T2 Generate initial population using CT: the concept and the two emoji for each individual are selected using CT weights – the higher the weight, the greater chances it has of being selected;
do:
- T3 User evaluation by marking individuals as “liked” and storing individuals by marking them as “locked”;
- T4 Store the “locked” individuals in the archive;
- T5 Retrieve the individuals marked as “liked” from the population and from the archive;
- T6 The weights from the CT are updated based on the “liked” individuals;
- T7 The “liked” individuals are mutated to produce offspring, using three types of mutation (emoji, layer and type of blend);
- T8 New individuals are generated from scratch with the CT, in order to match user preferences;
- T9 The new population is formed by merging the mutated individuals (T7) with the new individuals produced using the CT (T8);
while (user is not satisfied)
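A condensed sketch of the T1–T9 loop (all function names are illustrative; update_weights is the sketch given above):

def evolve(concept, pop_size=12):
    ct = build_concept_tree(concept)                            # T1
    population = [new_individual(ct) for _ in range(pop_size)]  # T2
    archive = []
    while not user_is_satisfied():
        liked, locked = user_feedback(population)               # T3
        archive.extend(locked)                                  # T4
        liked += [ind for ind in archive if ind.liked]          # T5
        for ind in liked:
            update_weights(ct, ind)                             # T6
        offspring = [mutate(ind) for ind in liked]              # T7
        fresh = [new_individual(ct)                             # T8
                 for _ in range(pop_size - len(offspring))]
        population = offspring + fresh                          # T9
    return archive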
The evolutionary approach is able to better explore the search space, evolving solutions of higher quality (concept representativeness) that match the user’s taste, and has clear advantages over the deterministic approach.
Interface
The system was implemented as a web-based application, which allows user interaction (Fig. 4). The interface has three areas: the search area, the population area and the archive area (1–3 in Fig. 4). The search area is where the user introduces the concept (e.g. bread animal in Fig. 4). The population area presents the current population. Each individual has two main buttons: “like” (d), which is used to evaluate the individual, and “lock” (c), which stores the individual in the archive; individuals in the archive have a “cross” button (e), which removes them from it.

Fig. 4: Web-based application interface, showing 3 areas – search area (1), population area (2), archive area (3) – and 6 buttons – (a) next generation, (b) download, (c) lock, (d) like, (e) remove from archive and (f) activated like button.
Co-creativity: self-evaluation and context-adaptation
In [9], we attempt to instil more creative behaviour in the system – allowing it to self-evaluate and adapt to context – and introduce the fusion blend type. With these changes, the blend-type probabilities depend on the population, and blending choices depend on perceptual features and user preference (e.g. if the user prefers small replaced parts, the system will tend to produce blends that match this preference).
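The population-dependent part could be sketched as follows, deriving smoothed blend-type probabilities from how often each type survives in the current population (the smoothing constant is an assumption):

from collections import Counter

BLEND_TYPES = ("juxtaposition", "replacement", "fusion")

def blend_type_probabilities(population, smoothing=1.0):
    # Types the user keeps alive in the population become more likely
    counts = Counter(ind.blend_type for ind in population)
    raw = {t: counts[t] + smoothing for t in BLEND_TYPES}
    total = sum(raw.values())
    return {t: raw[t] / total for t in BLEND_TYPES}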
Assessing Usefulness
Our motivation behind Emojinating is the idea that computational systems addressing the visual representation of concepts have great usage potential in tasks such as icon design, by stimulating creativity and aiding brainstorming activities.
Despite that, the question arises whether there really is any need for such a computational approach, in particular one using visual blending. In [7], we address the topic from different points of view and conduct two user studies to assess the usefulness of a visual blending system.
Exhibitions
Awards
References
[1] Speer, R., and Havasi, C. 2012. Representing General Relational Knowledge in ConceptNet 5. In LREC, 3679–3686.
[2] Wijeratne, S., Balasuriya, L., Sheth, A., and Doran, D. 2017. EmojiNet: An Open Service and API for Emoji Sense Discovery. In 11th International AAAI Conference on Web and Social Media (ICWSM 2017), Montreal, Canada.
[3] Twitter. Twemoji. http://github.com/twitter/twemoji
Publications
- J. M. Cunha, P. Martins, and P. Machado, “How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji,” in Proceedings of the Ninth International Conference on Computational Creativity, Salamanca, Spain, June 25–29, 2018, pp. 145–152.
@inproceedings{cunha2018iccc,
Author = {Cunha, Jo{\~{a}}o Miguel and Martins, Pedro and Machado, Penousal},
Booktitle = {Proceedings of the Ninth International Conference on Computational Creativity, Salamanca, Spain, June 25-29, 2018.},
Crossref = {DBLP:conf/icccrea/2018},
Pages = {145–152},
Title = {How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji},
Year = {2018}}

- J. M. Cunha, P. Martins, and P. Machado, “Emojinating: Representing Concepts Using Emoji,” in Workshop Proceedings from the 26th International Conference on Case-Based Reasoning (ICCBR 2018), Stockholm, Sweden, 2018, p. 185.
@inproceedings{cunha2018emojinating,
Author = {Cunha, Joao Miguel and Martins, Pedro and Machado, Penousal},
Booktitle = {Workshop Proceedings from The 26th International Conference on Case-Based Reasoning {(ICCBR} 2018), Stockholm, Sweden},
Pages = {185},
Title = {Emojinating: Representing Concepts Using Emoji},
Year = {2018}}

- J. M. Cunha, N. Lourenço, J. Correia, P. Martins, and P. Machado, “Emojinating: Evolving Emoji Blends,” in Computational Intelligence in Music, Sound, Art and Design – 8th International Conference, EvoMUSART 2019, Held as Part of EvoStar 2019, Leipzig, Germany, April 24–26, 2019, Proceedings, 2019, pp. 110–126.
@inproceedings{cunha2019evo,
author = {Jo{\~{a}}o Miguel Cunha and
Nuno Louren{\c{c}}o and
Jo{\~{a}}o Correia and
Pedro Martins and
Penousal Machado},
title = {Emojinating: Evolving Emoji Blends},
booktitle = {Computational Intelligence in Music, Sound, Art and Design – 8th International
Conference, EvoMUSART 2019, Held as Part of EvoStar 2019, Leipzig,
Germany, April 24-26, 2019, Proceedings},
pages = {110–126},
year = {2019},
crossref = {DBLP:conf/evoW/2019musart},
url = {https://doi.org/10.1007/978-3-030-16667-0\_8},
doi = {10.1007/978-3-030-16667-0\_8},
timestamp = {Fri, 12 Apr 2019 09:24:15 +0200},
biburl = {https://dblp.org/rec/bib/conf/evoW/CunhaLCMM19},
bibsource = {dblp computer science bibliography, https://dblp.org}
}

- J. M. Cunha, S. Rebelo, P. Martins, and P. Machado, “Assessing Usefulness of a Visual Blending System: “Pictionary Has Used Image-making New Meaning Logic for Decades. We Don’t Need a Computational Platform to Explore the Blending Phenomena”, Do We?,” in Proceedings of the Tenth International Conference on Computational Creativity, UNC Charlotte, North Carolina, June 17–21, 2019.
@inproceedings{cunha2019iccc,
Author = {Cunha, Jo{\~a}o Miguel and Rebelo, S{\'e}rgio and Martins, Pedro and Machado, Penousal},
Booktitle = {Proceedings of the Tenth International Conference on Computational Creativity, UNC Charlotte, North Carolina, June 17-21, 2019.},
Title = {Assessing Usefulness of a Visual Blending System: ``Pictionary Has Used Image-making New Meaning Logic for Decades. We Don't Need a Computational Platform to Explore the Blending Phenomena'', Do We?},
Year = {2019}}

- J. M. Cunha, P. Martins, N. Lourenço, and P. Machado, “Emojinating Co-Creativity: Integrating Self-Evaluation and Context-Adaptation,” in Proceedings of the Eleventh International Conference on Computational Creativity, ICCC 2020, Coimbra, Portugal, September 7–11, 2020, pp. 85–88.
@inproceedings{cunha2020cocreative,
author = {Jo{\~{a}}o Miguel Cunha and
Pedro Martins and
Nuno Louren{\c{c}}o and
Penousal Machado},
editor = {F. Am{\'{\i}}lcar Cardoso and
Penousal Machado and
Tony Veale and
Jo{\~{a}}o Miguel Cunha},
title = {Emojinating Co-Creativity: Integrating Self-Evaluation and Context-Adaptation},
booktitle = {Proceedings of the Eleventh International Conference on Computational
Creativity, {ICCC} 2020, Coimbra, Portugal, September 7-11, 2020},
pages = {85–88},
publisher = {Association for Computational Creativity {(ACC)}},
year = {2020},
url = {http://computationalcreativity.net/iccc20/papers/173-iccc20.pdf},
biburl = {https://dblp.org/rec/conf/icccrea/Cunha00M20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}}

- J. M. Cunha, N. Lourenço, P. Martins, and P. Machado, “Visual Blending for Concept Representation: A Case Study on Emoji Generation,” New Generation Computing, 2020.
@article{cunha2020visual,
doi = {10.1007/s00354-020-00107-x},
url = {https://rdcu.be/b7zz9},
year = {2020},
month = sep,
publisher = {Springer Science and Business Media {LLC}},
author = {Jo{\~{a}}o M. Cunha and Nuno Louren{\c{c}}o and Pedro Martins and Penousal Machado},
title = {Visual Blending for Concept Representation: A Case Study on Emoji Generation},
journal = {New Generation Computing}
}

- J. M. Cunha, P. Martins, and P. Machado, “Emojinating: Hooked Beings,” in Proceedings of the 9th International Conference on Digital and Interactive Arts (ARTECH 2019), 2019.
@inproceedings{cunha2019hooked,
author = {Cunha, Jo\~{a}o M. and Martins, Pedro and Machado, Penousal},
title = {Emojinating: Hooked Beings},
year = {2019},
isbn = {9781450372503},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3359852.3359964},
doi = {10.1145/3359852.3359964},
booktitle = {Proceedings of the 9th International Conference on Digital and Interactive Arts},
articleno = {Article 88},
numpages = {3},
keywords = {Computational Creativity, Visual Blending, Emoji},
location = {Braga, Portugal},
series = {ARTECH 2019}
}