Most face detectors employ classifiers created using learning techniques that rely heavily on the quality and quantity of the dataset examples. As such, the dataset plays a key role not only for attaining competitive performance but also for assessing the strengths and shortcomings of the classifier. This means that the creation of adequate datasets for training, testing, and validation of the classifier becomes a crucial and troublesome process.

In this work, we propose an evolutionary approach to autonomously generate positive examples of frontal faces from existing ones. The idea is to recombine the elementary parts of faces, i.e. mouths, noses, eyes, and eyebrows, using computer vision techniques. A genetic algorithm automatically recombines these parts, creating new faces that differ from the original ones. To guide the evolutionary process we use an automatic fitness assignment scheme that employs a classifier. The evolutionary engine is designed so the activation response of the classifier is minimised. As a result, the evolutionary process tends to evolve new valid faces that are no longer considered as faces by the classifier — eXploit faces.

Figure 1


This work is part of the Evolutionary FramEwork for Classifier assessmenT and ImproVemenT (EFECTIVE) and integrates an annotation tool, an evolutionary engine, and a classifier. Thus, the approach comprises the following steps:

  1. Annotation of training examples;
  2. Training of a classifier with these examples;
  3. Automatic evolution of new examples using the classifier to assign fitness.
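The three steps can be sketched as a conventional genetic algorithm loop. In this sketch, `classifier_score` is a stand-in for the cascade classifier's activation response on the composite face (the real system minimises that response); the function name, parameters, and defaults are illustrative assumptions, not the system's actual interface.

```python
import random

def evolve(examples, classifier_score, pop_size=20, generations=50, mut_rate=0.1):
    """Minimal GA sketch: genotypes are tuples of indices into the annotated
    examples (one index per facial part); fitness is the classifier's
    activation on the resulting composite face, which is minimised."""
    n_parts = 7  # face, left/right eye, nose, mouth, left/right eyebrow
    pop = [tuple(random.randrange(len(examples)) for _ in range(n_parts))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=classifier_score)   # lower score = better
        parents = ranked[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_parts)       # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n_parts):                 # per-gene mutation
                if random.random() < mut_rate:
                    child[i] = random.randrange(len(examples))
            children.append(tuple(child))
        pop = children
    return min(pop, key=classifier_score)
```

With a real cascade, `classifier_score` would build the composite face from the genotype and return the detector's confidence on it.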


Annotation Tool


We have developed a general-purpose image annotation tool that allows the user to annotate objects present in images. An object is annotated by positioning a sequence of points along its contour and choosing the corresponding category; new categories can be added at any moment. The annotations created by the user are automatically saved to output files: one XML file for each image and one text file for each object category. The tool also exports the mask of each annotated object. When a folder of images is opened, the tool loads any corresponding annotations previously saved to file.
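Exporting a mask from a contour annotation amounts to rasterising the polygon of annotated points. The tool's actual export format is not specified here; `polygon_mask` below is a hypothetical helper showing one plain-Python way to do it with an even-odd ray-casting test.

```python
def polygon_mask(points, width, height):
    """Rasterise an annotated contour (a list of (x, y) points) into a
    binary mask. A pixel is inside when a ray from its centre crosses
    the polygon boundary an odd number of times (even-odd rule)."""
    def inside(px, py):
        hit = False
        n = len(points)
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]
            if (y1 > py) != (y2 > py):          # edge straddles the ray
                xint = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < xint:
                    hit = not hit
        return hit
    # test each pixel at its centre (x + 0.5, y + 0.5)
    return [[1 if inside(x + 0.5, y + 0.5) else 0 for x in range(width)]
            for y in range(height)]
```

Per-pixel ray casting is quadratic in image size; a production tool would use a scanline fill or a library routine instead.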

Figure 2

The annotation tool

We used this tool to annotate the parts of faces in a set of images. Each face is annotated by indicating the bounds of its eyes, eyebrows, nose, and mouth, as well as the bounds of the face itself.




Classifier


The classifier is a cascade classifier based on the work of Viola and Jones, trained to detect frontal faces. It uses local binary pattern (LBP) features in combination with a variant of AdaBoost to attain an efficient classifier, which takes the form of a cascade of small and simple sub-classifiers.
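For intuition on what the cascade's features look at, the snippet below computes a single 8-bit LBP code for one pixel: each of the eight neighbours contributes one bit, set when its intensity is at least the centre's. This is an illustration of the feature type only, not the trained classifier itself, and the function name and bit ordering are our own choices.

```python
def lbp_code(img, x, y):
    """8-bit local binary pattern code for pixel (x, y) of a grayscale
    image given as a list of rows. Each neighbour sets one bit when its
    intensity is >= the centre pixel's intensity."""
    centre = img[y][x]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code
```

Because the code depends only on intensity ordering, LBP features are largely invariant to monotonic lighting changes, which is part of why LBP cascades detect faces efficiently.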


Evolutionary Engine


The evolutionary engine is a conventional genetic algorithm where the individuals are faces constructed from parts of different faces. Each genotype is mapped into a phenotype by creating a composite face, i.e. the parts of faces encoded in the genotype are placed over a base face that is also encoded in the genotype.

Figure 3

Genotype and phenotype of an individual. The genotype consists of a tuple of integers (face, left eye, right eye, nose, mouth, left eyebrow, right eyebrow). Each integer encodes an index of an annotated example. The phenotype consists of a composite of the face parts encoded in the genotype.
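The genotype-to-phenotype mapping described above can be sketched as a simple lookup. Here `annotations` is a hypothetical structure (part name mapped to the list of annotated examples of that part); the real system stores the annotation tool's output, but the decoding idea is the same.

```python
# one gene per facial part, in a fixed order
PART_NAMES = ("face", "left_eye", "right_eye", "nose",
              "mouth", "left_eyebrow", "right_eyebrow")

def decode(genotype, annotations):
    """Map a genotype (a tuple of integers, one index per facial part)
    to the annotated regions that make up the phenotype. `annotations`
    maps each part name to its list of annotated examples."""
    return {name: annotations[name][idx]
            for name, idx in zip(PART_NAMES, genotype)}
```

Because every gene is just an index, crossover and mutation stay trivial: swapping or resampling an integer swaps or resamples a facial part.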

The phenotype is generated using a seamless cloning algorithm that allows the smooth placement of one image upon another. The following figure shows parts of different faces placed over the same face.
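The placement step can be sketched as a masked paste. Note this shows only the placement, not the blending: the actual system uses seamless cloning (Poisson-style blending, as in OpenCV's `cv2.seamlessClone`) so the pasted part matches the base face's gradients instead of leaving a hard seam.

```python
def paste(base, part, mask, top, left):
    """Naive masked paste of `part` onto a copy of `base` at (top, left).
    `base` and `part` are images as lists of rows; `mask` selects which
    pixels of `part` are copied. A seamless-cloning algorithm would
    additionally blend the pasted region into its surroundings."""
    out = [row[:] for row in base]          # copy, leave `base` untouched
    for y, row in enumerate(part):
        for x, value in enumerate(row):
            if mask[y][x]:
                out[top + y][left + x] = value
    return out
```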

Figure 4

Different face parts placed over the same face

By combining different facial features we aim to achieve new valid faces. The following figure depicts four faces, two of which were generated by our system. As you can see, in some cases it is not easy to identify the generated examples.

Figure 5

Four valid faces, two of which were generated by our system.




The gallery at the top of this page contains a selection of results attained with this approach. They show the ability of the approach to explore the search space and exploit the vulnerabilities of the classifier in an automatic and tractable way. One might expect simple recombinations of faces that the classifier has not “seen” before, or exploits of lighting and contrast conditions. Nevertheless, the system produces atypical faces with unexpected features. For instance, one can see convincing images of babies with piercings, cases of gender ambiguity, and mixtures of attributes from different ethnicities that are, at the least, visually uncommon and peculiar.

Figure 6

Different combinations

Some of the generated faces are so realistic, yet disturbing, that one could relate them to the uncanny valley, i.e. the phenomenon where computer-generated figures or virtual humanoids that approach photorealistic perfection make real humans uncomfortable.

A final comment concerns the potential use of this approach for data augmentation. The generated examples could be used to further improve the quality of the training dataset, and thus the quality of the classifier, a path that is already being pursued. For more information, check the paper and the references below.


In Proceedings

  • J. Correia, T. Martins, P. Martins, and P. Machado, “X-Faces: The eXploit Is Out There,” in Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016), 2016, pp. 164-182.

  • J. Correia, P. Machado, J. Romero, and A. Carballal, “Evolving Figurative Images Using Expression-Based Evolutionary Art,” in Proceedings of the Fourth International Conference on Computational Creativity (ICCC 2013), 2013, pp. 24-31.

  • P. Machado, J. Correia, and J. Romero, “Expression-Based Evolution of Faces,” in Evolutionary and Biologically Inspired Music, Sound, Art and Design – First International Conference, EvoMUSART 2012, Málaga, Spain, April 11-13, 2012. Proceedings, 2012, pp. 187-198.