Neuroevolution is a form of machine learning that uses evolution as an additional mechanism of adaptation alongside learning. ANNs are evolved via evolutionary algorithms (EAs), which perform various tasks such as rule extraction, connection weight training, and architecture design. Together, these processes allow the evolved ANN to adapt to its surrounding environment and to changes within it. Several evolutionary algorithms have been developed over the years, all based on a common framework, as shown in figure 9.
In this way, the efficacy of GA is not compromised.
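The common EA framework referred to above can be sketched as a generic evaluate–select–crossover–mutate loop. This is a minimal illustration over bit-string genomes; the parameter values and encoding are assumptions for the example, not details taken from the survey:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=50,
           mutation_rate=0.05):
    """Generic EA loop: evaluate -> select -> crossover -> mutate.
    Parameter values and the bit-string encoding are illustrative
    assumptions, not any specific algorithm from the text."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]             # truncation selection
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)    # one-point crossover
            offspring.append([1 - g if random.random() < mutation_rate else g
                              for g in a[:cut] + b[cut:]])  # bit-flip mutation
        pop = parents + offspring                    # elitist replacement
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits (the classic OneMax toy problem)
best = evolve(sum)
```

With elitist replacement the best genome is never lost, so the loop typically converges to the all-ones string on this toy problem.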
A. NeuroEvolution of Augmenting Topologies (NEAT)
NeuroEvolution of Augmenting Topologies (NEAT) was developed by K. Stanley and R. Miikkulainen in 2002. The basic idea is to start from minimal topologies and gradually evolve them until an optimal structure, one that effectively solves the problem, is reached. The idea is borrowed from the way organisms in nature have increased in complexity since the first cell. In this way, NEAT can discover highly sophisticated and complex neural networks. Prior to NEAT, a topology was chosen for the evolving networks before the experiment was started.
The weight space was then explored through crossover and mutation; in this fixed-topology approach, weight optimization alone determined the functionality of a network. An alternative was to evolve both the weights and the topology at the same time. It was debated whether evolving both weights and topology offered any advantage over a fixed topology: since a fully connected network can in principle approximate any function, permuting over different topologies was regarded as a waste of valuable effort. Moreover, several technical challenges had to be tackled, such as finding a genetic representation that allows disparate topologies to cross over in a meaningful way, and protecting topological innovations, which need a few generations to be optimized, from disappearing from the population prematurely. NEAT was developed to address these challenges. In fact, the advent of NEAT brought neuroevolution one step closer to real-life evolution.
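NEAT's answer to the representation problem is historical marking: each structural innovation receives a global innovation number, so genomes with different topologies can be aligned gene-by-gene during crossover. The sketch below illustrates this alignment; the class and field names are my own simplification, not NEAT's exact data structures:

```python
import random

class Gene:
    """A connection gene in a NEAT-style genome (simplified illustration).
    `innovation` is the historical marking that identifies when this
    structural feature first appeared anywhere in the population."""
    def __init__(self, innovation, src, dst, weight, enabled=True):
        self.innovation = innovation
        self.src, self.dst = src, dst
        self.weight = weight
        self.enabled = enabled

def crossover(parent_a, parent_b, a_is_fitter=True):
    """Align genes by innovation number: matching genes are inherited
    randomly from either parent; disjoint and excess genes are taken
    from the fitter parent."""
    genes_a = {g.innovation: g for g in parent_a}
    genes_b = {g.innovation: g for g in parent_b}
    fitter, other = (genes_a, genes_b) if a_is_fitter else (genes_b, genes_a)
    child = []
    for innov, gene in sorted(fitter.items()):
        if innov in other:
            child.append(random.choice([gene, other[innov]]))  # matching gene
        else:
            child.append(gene)  # disjoint/excess gene from the fitter parent
    return child
```

Because alignment uses innovation numbers rather than positions, two parents with very different topologies still produce a structurally valid child.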
B. Radial Basis Function (RBF)
Radial Basis Function (RBF) networks are a class of networks consisting of a hidden layer of radial kernels and an output layer of linear neurons. More specifically, an RBF network can be regarded as a special two-layer network that is linear in its parameters once all RBF centers and nonlinearities in the hidden layer are fixed. The hidden layer maps the input space onto a new space. Geometrically, a radial basis function represents a bump in a multidimensional space whose dimension is given by the number of inputs. The weights represent the contribution of each hidden unit to the respective output units. In addition to performing classification tasks, RBF networks have been successfully applied to a diversity of applications, such as channel equalization and speech recognition.
Typical example of an RBF network
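The forward pass described above can be sketched in a few lines. This is a minimal illustration with Gaussian kernels and fixed centers (so the network is linear in its output weights, as noted in the text); the function and variable names are assumptions for the example:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a simple RBF network with Gaussian kernels.
    With centers and widths fixed, the output is linear in `weights`."""
    # Hidden layer: one radial "bump" per center, peaking when x is at the center
    dists = np.linalg.norm(x[None, :] - centers, axis=1)   # (n_centers,)
    hidden = np.exp(-(dists ** 2) / (2 * widths ** 2))     # radial activations
    # Output layer: linear combination of the hidden activations
    return hidden @ weights                                # (n_outputs,)

# Example: 3 centers in a 2-D input space, one linear output unit
centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
widths  = np.array([0.5, 0.5, 0.5])
weights = np.array([[1.0], [2.0], [-1.0]])
y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
```

At the first center the corresponding kernel is at its peak (activation 1), while the other, more distant kernels contribute only weakly, illustrating the localized "bump" geometry described above.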
C. Fractured domain
By definition, fractured problems have a highly discontinuous mapping between states and optimal actions: as an agent moves from state to state, the best action it can take changes frequently and abruptly. While almost every real-life problem can be framed as a reinforcement learning problem, most such problems are fractured. As a result, most artificial intelligence agents developed so far are unable to solve them efficiently.
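A toy illustration of fracture, on a hypothetical discretized 1-D state space of my own construction: the optimal action flips between neighbouring regions, so the state-to-action mapping is highly discontinuous and hard for a smooth function approximator to represent:

```python
def optimal_action(state):
    """Hypothetical fractured policy: the optimal action alternates
    every 10 states, so neighbouring states frequently demand
    different actions."""
    return (state // 10) % 2   # action flips between adjacent regions

# Count how often the optimal action changes across 100 adjacent states
actions = [optimal_action(s) for s in range(100)]
switches = sum(a != b for a, b in zip(actions, actions[1:]))
```

A smooth policy would give zero or very few switches; the many abrupt flips here are exactly the discontinuity that makes fractured domains difficult.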