TensorGP: A Data Parallel Approach to Genetic Programming

Genetic Programming (GP) is a nature-inspired, automatic problem-solving technique that optimizes solutions through the evolution of computer programs. Though a powerful tool, GP is also known to be computationally expensive by design. This is especially true during the domain evaluation phase, where every program must be run for every fitness case. With the rise of parallel computing in recent years, data vectorization has become arguably the most attractive strategy to accelerate the evolutionary process of GP applications.

 

The main goal of this project is to investigate the advantages of applying data vectorization and fitness caching to the fitness evaluation phase using throughput-oriented architectures such as the GPU. To accomplish this, we developed, implemented, and explored a general-purpose GP engine capable of catering to our vectorization needs – TensorGP.

 
 

The Framework

 

As the name implies, TensorGP works with tensors, which in the context of Machine Learning are often defined as multidimensional arrays of elements. TensorGP takes the classical approach of most other GP systems and expands on it by using TensorFlow to vectorize operations, speeding up the domain evaluation process by seamlessly distributing computational effort across different hardware architectures. Additionally, TensorFlow allows intermediate fitness results to be cached, which accelerates the evolutionary process by avoiding the re-execution of highly fit code.
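
To make the idea concrete, here is a minimal sketch of what vectorized evaluation looks like with TensorFlow. The resolution, domain bounds, and variable names are illustrative assumptions and do not reflect TensorGP's actual API.

    import tensorflow as tf

    # Minimal sketch (not TensorGP's API): evaluate the expression (x * x) + y
    # over an entire 2D domain in one vectorized call, instead of looping
    # over each fitness case individually.
    resolution = (256, 256)

    # Coordinate tensors spanning an assumed problem domain of [-1, 1].
    xs = tf.linspace(-1.0, 1.0, resolution[0])
    ys = tf.linspace(-1.0, 1.0, resolution[1])
    x, y = tf.meshgrid(xs, ys)

    # A single call covers every fitness case at once; on a GPU the
    # element-wise work is distributed across thousands of threads.
    phenotype = tf.add(tf.multiply(x, x), y)

    print(phenotype.shape)  # (256, 256): one value per point of the domain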

 

In its simplest form, each individual can be represented as a mathematical expression, as exemplified in Figure 1. TensorGP follows a tree-based approach, internally representing individuals as a tree graph. This implies an initial translation phase from string to tree representation, performed only at the beginning of the evolutionary process (a small sketch of this step follows).
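
The sketch below builds a small expression tree from a prefix-notation genotype. The Node class and parse function are hypothetical and only convey the idea; they are not TensorGP's internal representation.

    class Node:
        """Hypothetical tree node: an operator or a terminal such as 'x' or 'y'."""
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    def parse(tokens):
        # Recursively consume a prefix token list, e.g.
        # ['add', 'mul', 'x', 'x', 'y']  ->  add(mul(x, x), y)
        token = tokens.pop(0)
        arity = {'add': 2, 'sub': 2, 'mul': 2, 'div': 2}.get(token, 0)
        return Node(token, [parse(tokens) for _ in range(arity)])

    # The string-to-tree translation only happens once, at the start.
    tree = parse(['add', 'mul', 'x', 'x', 'y'])
    print(tree.value, [child.value for child in tree.children])  # add ['mul', 'y']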

 

Figure 1: Possible genotype to phenotype translation phases in TensorGP.

 

TensorFlow can execute in either eager or graph mode. With graph execution, TensorFlow internally converts the tree structure into a graph before computing any values. This makes it possible to cache potential intermediate results from subtrees, effectively generalizing our tree structure to a Directed Acyclic Graph (DAG). The eager execution model, on the other hand, executes vectorized operations immediately, eliminating the overhead of explicitly building the intermediate DAG of operations. Because the individuals being evolved change constantly from generation to generation, it is reasonable to assume that eager execution is the better fit for GP.
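
The difference between the two modes can be sketched with tf.function; the evaluation function below is a stand-in for an evolved individual, not TensorGP code.

    import tensorflow as tf

    def evaluate(x, y):
        # Stand-in for evaluating a single evolved individual over the domain.
        return tf.add(tf.multiply(x, x), tf.sin(y))

    x = tf.random.uniform((1024, 1024))
    y = tf.random.uniform((1024, 1024))

    # Eager mode: operations run immediately, with no graph-building step.
    result_eager = evaluate(x, y)

    # Graph mode: tf.function traces the Python function into a DAG on the
    # first call; that tracing cost only pays off if the same function is
    # reused many times, which rarely happens with ever-changing individuals.
    evaluate_graph = tf.function(evaluate)
    result_graph = evaluate_graph(x, y)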

 
 

Experimental Results

 

Our early experiments compared the speed of different approaches for both raw domain evaluation and the evolution of a fixed initial population on the symbolic regression problem of approximating the Pagie polynomial. In this set of initial benchmarks, each test case was averaged over 5 runs. Figure 2 shows the average and standard deviation of domain evaluation times (in seconds) across domain sizes ranging from 4,096 points to over 4 million points.
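
For reference, the Pagie polynomial target values can themselves be generated over such a domain in vectorized form. The sketch below assumes a [-5, 5] range for both variables and the smallest 64 x 64 resolution; both are illustrative assumptions.

    import tensorflow as tf

    def pagie_polynomial(x, y):
        # Target of the symbolic regression benchmark:
        # f(x, y) = 1 / (1 + x^-4) + 1 / (1 + y^-4),
        # rewritten as x^4 / (1 + x^4) + y^4 / (1 + y^4) to stay defined at 0.
        x4, y4 = tf.pow(x, 4.0), tf.pow(y, 4.0)
        return x4 / (1.0 + x4) + y4 / (1.0 + y4)

    # Domain sizes in the benchmarks ranged from 4,096 (64 x 64) points
    # up to over 4 million (2048 x 2048) points; 64 x 64 is shown here.
    side = 64
    axis = tf.linspace(-5.0, 5.0, side)   # assumed [-5, 5] range per variable
    x, y = tf.meshgrid(axis, axis)
    targets = pagie_polynomial(x, y)       # one target value per fitness case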

 

Figure 2: Speed comparison amongst several iterative and vectorized GP approaches for the evaluation of domains with an exponential increase in size.

 

These preliminary results demonstrated a preference towards eager execution for our vectorized approach using TensorFlow, supporting the hypothesis that GP does not greatly benefit from graph execution when evaluating large domains. Moreover, we show that performance gains in excess of 100 times are attainable in certain scenarios when comparing TensorGP to iterative frameworks such as DEAP.

 

To lend statistical significance to our results, a second comparative study was conducted on the same problem, this time including a wider range of frameworks commonly used in Evolutionary Computation research and performing 30 evolutionary runs for each test case.

 

Figure 3: Speed comparison between TensorGP and other widely used GP frameworks for the execution of a controlled evolutionary experiment across domain sizes.

 

The experimental results shown in Figure 3 demonstrate that TensorGP consistently achieves faster execution times than other systems within the same controlled setup, managing speedups of up to two orders of magnitude. The relative speedup increases with the evaluation of larger domains, with some iterative frameworks unable to complete the same evolutionary runs in a feasible amount of time. This performance increase further makes the case for vectorized evaluation in GP, especially on parallel-capable hardware.

 
 

Evolutionary Art

 

TensorGP is not restricted to Symbolic Regression problems. In fact, thanks to the increase in tensor evaluation speed, our framework facilitates the exploration of parameters such as tree depth and domain size, which is particularly desirable when applying GP to evolutionary art.

 

In this context, we integrated our engine with the Neural Image Assessment (NIMA) classifier, which ranks images according to how visually appealing they appear to a human subject. NIMA was trained on the Aesthetic Visual Analysis (AVA) dataset, which contains images related to human activities with a fixed size of 224 by 224 pixels. The artifacts generated (i.e. the images presented at the top of this post) are clear indicators of TensorGP's potential for artistic production.
Regarding execution time, the evolutionary process involved in generating these images was substantially faster with TensorGP (40 seconds) than with DEAP (5 hours and 46 minutes).
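
As a rough illustration of that integration, an evolved phenotype would need to be resized to NIMA's expected input resolution before scoring. The nima_model below is a placeholder, since loading and scoring details depend on the specific NIMA implementation used.

    import tensorflow as tf

    # Hedged sketch: preparing an evolved image tensor for aesthetic scoring.
    image = tf.random.uniform((512, 512, 3))    # stand-in for an evolved phenotype
    image = tf.image.resize(image, (224, 224))  # NIMA expects 224 x 224 inputs
    batch = tf.expand_dims(image, axis=0)       # add a batch dimension
    # scores = nima_model(batch)                # hypothetical: distribution over aesthetic ratings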

 
 


 
 

References

 

TensorGP repository on GitHub

 
 

Publications

 

  • F. Baeta, J. Correia, T. Martins, and P. Machado, “TensorGP – Genetic Programming Engine in TensorFlow,” in Applications of Evolutionary Computation – 24th International Conference, EvoApplications 2021, Held as Part of EvoStar 2021, Virtual Event, April 7-9, 2021, Proceedings, 2021, pp. 763-778.