
Roadmap for Visual Conceptual Blending
The technique of Visual Blending (VB) consists of merging two or more visual representations (e.g. images) to produce new ones. Conceptual Blending (CB), in turn, consists of integrating two or more mental spaces – knowledge structures – to produce a new one, the blend(ed) space. When CB and VB are used together, the process can be referred to as Visual Conceptual Blending [1], which we consider to play an important role in the production of visual metaphors.
Visual Conceptual Blending has been referred to by several authors but, as far as we know, no concrete computational model has been proposed. We aim to take a step towards outlining a model for visual conceptual blending that can be instantiated in a fully operational computational system.
From Visual Blending to Visual Conceptual Blending
A visual blend is normally considered to be based on the following principles:
- Two concepts are given as input and each is mapped to an object that visually represents or symbolises it;
- The visual blend is an object that integrates the two initial objects in such a way that both remain recognisable and the user can infer an association between the concepts (see the sketch below).
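As a rough illustration of these two principles, the sketch below models a visual blend as two input concepts, each mapped to a visual object, plus a composite that integrates both. This is a minimal Python sketch with illustrative names of our own; it is not taken from any of the cited papers, and the "asset" strings are mere placeholders for actual visual representations.

from dataclasses import dataclass

@dataclass
class VisualObject:
    concept: str   # the input concept, e.g. "shell" or "horn"
    asset: str     # placeholder for the visual representation (e.g. an SVG string)

@dataclass
class VisualBlend:
    left: VisualObject    # first input object, still recognisable in the blend
    right: VisualObject   # second input object, still recognisable in the blend
    composite: str        # the merged visual representation

    def concepts(self):
        # the pair of concepts whose association the viewer should be able to infer
        return (self.left.concept, self.right.concept)

# Example inspired by the shell + horn = unicorn emoji experiments:
shell = VisualObject(concept="shell", asset="<svg>...</svg>")
horn = VisualObject(concept="horn", asset="<svg>...</svg>")
unicorn = VisualBlend(left=shell, right=horn, composite="<svg>...</svg>")
print(unicorn.concepts())  # ('shell', 'horn')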
Cunha and Cardoso [3] highlight the importance of having a conceptual ground for producing visual blends, creating a connection between conceptual blending and visual blending.
In contrast to Visual Blending, the process of Visual Conceptual Blending does not merely consist of merging two initial visual representations. Instead, the core of the process lies in conceptual reasoning, which serves as the basis for the actual visual blending and helps avoid the production of nonsensical blends.
Moreover, a visual conceptual blend has context: it is grounded in a justification that should indicate the relevance of the blend. It can also be given a name or a description, which may not even be aligned with the original concepts.
Roadmap to Visual Conceptual Blending
We outline a model for the production of visual conceptual blends. Our roadmap is composed of four main stages:
- Conceptualisation: “what is behind the blend?”
- Visual Blending: “which objects to combine and how to combine them?”
- Quality Assessment: “how good is the blend?” The quality of a blend can be assessed based on several aspects, such as user perception and even optimality principles.
- Elaboration: “what is there beyond the blend?” It involves a conceptual process that occurs after visual blending and concerns the development of a context for the blend.

The four stages of the Visual Conceptual Blending process
Although we present these as a series of stages that may seem to occur in a linear sequence, their order is not fixed: it may vary, and some stages may be repeated. A minimal sketch of such a pipeline is given below.
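To make the four stages and their flexible ordering more concrete, the following Python sketch strings them together as a simple loop that repeats the cycle until a quality threshold is met. Every function name, the random score, and the stopping criterion are hypothetical placeholders of our own; they do not correspond to any published implementation of the roadmap.

import random

def conceptualise(concept_a, concept_b):
    # Stage 1 - Conceptualisation: build a (very simplified) conceptual ground.
    return {"inputs": (concept_a, concept_b), "mapping": f"{concept_a} <-> {concept_b}"}

def visually_blend(spaces):
    # Stage 2 - Visual Blending: decide which objects to combine and how (stubbed).
    a, b = spaces["inputs"]
    return {"composite": f"object({a}) + object({b})", "ground": spaces}

def assess_quality(blend):
    # Stage 3 - Quality Assessment: e.g. user perception or optimality principles;
    # a random score stands in here for a real evaluation.
    return random.random()

def elaborate(blend):
    # Stage 4 - Elaboration: develop a context / justification, possibly a name.
    blend["context"] = f"why '{blend['composite']}' is relevant"
    return blend

def produce_blend(a, b, threshold=0.7, max_rounds=5):
    # The stages need not run exactly once or in a fixed order; here the whole
    # cycle simply repeats until the threshold is met or the rounds run out.
    for _ in range(max_rounds):
        candidate = visually_blend(conceptualise(a, b))
        if assess_quality(candidate) >= threshold:
            return elaborate(candidate)
    return None

print(produce_blend("shell", "horn"))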
For more detail on the proposed roadmap, we refer the reader to [2].
Publications
- J. M. Cunha, P. Martins, and P. Machado, “How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji,” in Proceedings of the Ninth International Conference on Computational Creativity, Salamanca, Spain, June 25-29, 2018, pp. 145-152.
- Bibtex
@inproceedings{cunha2018iccc,
  author = {Cunha, Jo{\~{a}}o Miguel and Martins, Pedro and Machado, Penousal},
  booktitle = {Proceedings of the Ninth International Conference on Computational Creativity, Salamanca, Spain, June 25-29, 2018},
  crossref = {DBLP:conf/icccrea/2018},
  pages = {145–152},
  title = {How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji},
  year = {2018}}
- J. M. Cunha, P. Martins, and P. Machado, “Let’s Figure This Out: A Roadmap for Visual Conceptual Blending,” in Proceedings of the Eleventh International Conference on Computational Creativity, ICCC 2020, Coimbra, Portugal, September 7-11, 2020, pp. 445-452.
- Bibtex
@inproceedings{cunha2020roadmap,
author = {Jo{\~{a}}o Miguel Cunha and
Pedro Martins and
Penousal Machado},
editor = {F. Am{\'{\i}}lcar Cardoso and
Penousal Machado and
Tony Veale and
Jo{\~{a}}o Miguel Cunha},
title = {Let's Figure This Out: {A} Roadmap for Visual Conceptual Blending},
booktitle = {Proceedings of the Eleventh International Conference on Computational
Creativity, {ICCC} 2020, Coimbra, Portugal, September 7-11, 2020},
pages = {445–452},
publisher = {Association for Computational Creativity {(ACC)}},
year = {2020},
url = {http://computationalcreativity.net/iccc20/papers/044-iccc20.pdf},
biburl = {https://dblp.org/rec/conf/icccrea/Cunha0M20a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}}
- J. M. Cunha and A. Cardoso, “From Conceptual Blending to Visual Blending And Back,” in Computational Creativity Meets Digital Literary Studies (Dagstuhl Seminar 19172) — Dagstuhl Reports, T. R. Besold, P. Gervás, E. Gius, and S. Schulz, Eds., 2019, vol. 9, p. 92.
- Bibtex
@incollection{cunha2019dagstuhl,
author = {Cunha, Jo{\~a}o Miguel and Cardoso, Am{\'\i}lcar},
booktitle = {Computational Creativity Meets Digital Literary Studies (Dagstuhl Seminar 19172) — Dagstuhl Reports},
editor = {Tarek Richard Besold and Pablo Gerv{\'a}s and Evlyn Gius and Sara Schulz},
number = {4},
pages = {92},
title = {From Conceptual Blending to Visual Blending And Back},
volume = {9},
year = {2019}}