Graph Coloring with Adaptive Evolutionary Algorithms
A.E. Eiben
Leiden University
J.K. van der Hauw
Leiden University
J.I. van Hemert
Leiden University
To appear in the Journal of Heuristics, volume 4:1
Abstract
This paper presents the results of an experimental investigation on solving graph coloring
problems with Evolutionary Algorithms (EAs). After testing different algorithm variants we
conclude that the best option is an asexual EA using order-based representation and an
adaptation mechanism that periodically changes the fitness function during the evolution.
This adaptive EA is general, using no domain-specific knowledge except, of course, for
the decoder (fitness function). We compare this adaptive EA to a powerful traditional graph
coloring technique, DSatur, and to the Grouping GA on a wide range of problem instances with
different size, topology and edge density. The results show that the adaptive EA is superior to
the Grouping GA and outperforms DSatur on the hardest problem instances. Furthermore,
it scales up better with the problem size than the other two algorithms and exhibits linear
computational complexity.
Keywords: evolutionary algorithms, genetic algorithms, constraint satisfaction, graph coloring,
grouping problem, penalty functions, adaptive parameters
1 Introduction
The main goal of this paper is to present the results of an experimental study on solving graph
coloring problems by Evolutionary Algorithms (EAs). In particular, we show the working of a
new, problem-independent adaptive mechanism for constraint handling in EAs. In Section 2 a
brief overview of graph coloring problems and graph coloring techniques is given. We decide to
investigate graph instances that are 3-colorable and identify the instances around the so-called
phase transition as the most challenging ones. Two graph coloring algorithms are selected to
serve as competitors to our EAs. Among the traditional graph coloring techniques we choose
DSatur from Brelaz for its reported high performance (Brelaz, 1979). In addition, we test the
Grouping GA (GGA) of Falkenauer, because graph coloring can be seen as a grouping problem and
the GGA shows excellent performance on grouping problems, such as bin packing (Falkenauer,
1994). In Section 3 performance measures for comparing different algorithms are discussed.
Thereafter, in Section 4 the Grouping GA is described; in Section 5 genetic algorithms with integer
representation and with order-based representation are compared. In Section 6 we present our
adaptive mechanism, which modifies the penalty function during the evolution, and we show that
this adaptive mechanism greatly increases performance. In Section 7 an extensive comparison between our
adaptive EA, the Grouping GA and DSatur is made. We observe that the Grouping GA exhibits
the lowest performance and that the adaptive EA outperforms DSatur on the hardest problem
instances; moreover, it scales up better with the problem size. Finally, we draw conclusions in
Section 8.
2 Graph Coloring
The main motivation of the present research is our interest in the applicability of genetic, or more
generally evolutionary, algorithms to constraint satisfaction problems. In general, a constraint
satisfaction problem (CSP) is a pair ⟨S, φ⟩, where S is a free search space, i.e. a Cartesian product
of sets S = D1 × ... × Dn, and φ is a formula, a Boolean function on S (Eiben and Ruttkay, 1997).
A solution of a constraint satisfaction problem is an s ∈ S with φ(s) = true. Usually a CSP is
stated as a problem of finding an instantiation of variables v1, ..., vn within the finite domains
D1, ..., Dn such that the constraints (relations) c1, ..., cm prescribed for (some of) the variables hold.
The formula φ is then the conjunction of the given constraints. It has been proved that every CSP
can be equivalently transformed to a binary one, i.e. to a CSP where each constraint concerns
exactly two variables (Nudel, 1983). For graph coloring problems this property is natural, if we
envision nodes as variables and edges as constraints. This suggests that the findings on using a
genetic algorithm with no specific knowledge of the problem structure can be applicable to other
CSPs as well.
In the family of graph coloring problems an undirected graph G = (V, E) is given and the
problem is to color each node in such a way that no two nodes connected by an edge are colored
with the same color. There exist several variations of this problem, like finding the least number
of colors that is needed to color the graph, or finding the largest subgraph in G that can be
colored with the given number of colors. All of these problems are known to be NP-complete
(Garey and Johnson, 1979), so it is unlikely that a polynomial-time algorithm exists that solves
any of these problems (Arora et al., 1992). A pure constraint satisfaction problem is the graph
3-coloring variant, where each node in the graph is to be colored with one of three given colors. In
this problem variant, there is no optimization involved, such as minimizing the number of colors
used for a conflict-free coloring. We have chosen to perform an in-depth investigation of this
variant, restricting ourselves to one problem type, but studying three different graph topologies,
several problem sizes and a whole range of edge connectivity values, yet keeping the number of
experiments manageable.
2.1 Problem Instances
In the literature there are not many benchmark 3-colorable graphs, and therefore we create graphs
to be tested with the graph generator [1] written by Joe Culberson. This generator creates various
classes of k-colorable quasi-random graphs. The classes we want to investigate are the following.
Arbitrary 3-colorable graphs, where vertices are randomly assigned to one of the 3 partition elements (colors) uniformly and independently. This is a class that has been widely investigated. We
use the term `arbitrary' to distinguish this kind of graph from the following ones. Equi-partite
3-colorable graphs, where the three color sets are as nearly equal in size as possible (the smallest
sets having one element less than the largest). Culberson reports that these graphs present the
most difficulty in obtaining the specified coloring for a suite of various algorithms (Culberson
and Luo, 1996). Because the sets are almost equal in size, an algorithm has less information to
make use of. In case of flat 3-colorable graphs the variation in degree for each node is also kept to
a minimum, so this class is even tougher, since even less information is available to the algorithm.
Creating test graphs happens by first pre-partitioning the vertices into three sets (3 colors) and
then drawing edges randomly. For the first two types of graphs, once the pre-partitioning has
been done, a vertex pair v, w is assigned an edge with fixed probability p, provided that v and
w are not in the same color set. So there is some variation in the degree of each vertex. This
measure p is called the edge connectivity of a graph instance. When creating the test graphs a
random number generator is needed. This generator is fed with a random seed, which obviously
influences the created graphs and, as it turns out from the experiments, also affects the
performance of the graph coloring algorithms.
In our preliminary experiments comparing different genetic algorithms, equi-partite graphs
will be used (Section 5 and Section 6), while all three classes are used in a final comparison with
two other methods (Section 7). For specifying the investigated graph instances we use the notation Geq,n=200,p=0.08,s=5, standing for an equi-partite 3-colorable graph with 200 vertices, edge
probability 8% and seed 5 for the random generator.
It is known that for many NP-complete problems, typical instances can be very easy to solve.
Turner found that many k-colorable graphs are easy to color, so comparisons between algorithms
based on these graphs are not very meaningful (Turner, 1988). For an interesting comparison
one should look for hard problem instances that pose a challenge for candidate algorithms.
Cheeseman et al. (1991) found that NP-complete problems have an `order
parameter' and that the hard problems occur at a critical value, or phase transition, of such a
parameter. For graph coloring, this order parameter is the edge probability or edge connectivity
p. Using the cost function estimation of Clearwater and Hogg (1996),
one can determine the approximate location of the phase transition depending on the number of
nodes n in the graph. The phase transition for our type of problems occurs when the edge
connectivity values are around 7/n - 8/n. These values differ somewhat from the estimate in
(Cheeseman et al., 1991), but our experiments confirm this range and indicate that the hardest
graphs are those with an edge connectivity around 7/n - 8/n, independently of the applied
graph coloring algorithm. In this investigation we concentrate on graphs in this region.

[1] Source code in C is available via ftp://ftp.cs.ualberta.ca/pub/GraphGenerator/generate.tar.gz
2.2 Graph Coloring Algorithms
There are many traditional graph k-coloring techniques based on heuristics. Some existing algorithms are: an O(n^0.4)-approximation algorithm by Blum (Blum, 1989), the simple Greedy
algorithm (Kucera, 1991), DSatur from Brelaz (Brelaz, 1979), Iterated Greedy (IG) from Culberson and Luo (Culberson and Luo, 1996), and XRLF from Johnson et al. (Johnson et al., 1991).
Probably the simplest and best known algorithm is the Greedy algorithm, which takes some
ordering of the nodes of a graph and colors the nodes in this order with the smallest color [2] that
is possible without violating constraints. Grimmet and McDiarmid (Grimmet and McDiarmid,
1975) have shown that for almost all random graphs the Greedy algorithm uses no more than
about twice the optimal number of colors. Nevertheless, several studies showed that in practice
and in theory the Greedy algorithm performs poorly (Kucera, 1991; Turner, 1988). We only
mention it here because our GA with order-based representation uses it as a decoder.
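The Greedy algorithm just described can be sketched as follows. This is a minimal Python sketch under our own assumptions: the graph is given as an adjacency map and colors are non-negative integers (the paper only requires that the colors are ordered).

```python
def greedy_coloring(order, adj):
    """Color nodes in the given order with the smallest feasible color.

    order : a list of nodes (any ordering)
    adj   : dict mapping each node to the set of its neighbors
    Colors are non-negative integers, so 'smallest color' is well defined.
    """
    color = {}
    for v in order:
        # Colors already taken by colored neighbors of v.
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:     # smallest color not violating a constraint
            c += 1
        color[v] = c
    return color

# A triangle forces three distinct colors:
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(greedy_coloring([0, 1, 2], triangle))  # {0: 0, 1: 1, 2: 2}
```

Note that this unbounded variant always colors every node; the decoder used later for the order-based representation instead works with a fixed color budget.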
DSatur from Brelaz (Brelaz, 1979) uses a heuristic to dynamically change the ordering of the
nodes and then applies the Greedy method to color the nodes. It works as follows.
- A node with the highest saturation degree (= number of differently colored neighbors) is chosen
  and given the smallest color that is still possible.
- In case of a tie, the node with the highest degree (= number of neighbors that are still in the
  uncolored subgraph) is chosen.
- In case of a further tie, a random node is chosen.
Because of the random tie breaking, DSatur becomes a stochastic algorithm and, just like for the
GA, results of several runs need to be averaged to obtain useful statistics. For our investigation the
backtracking version of Turner (Turner, 1988) has been implemented, which backtracks to the most
recently evaluated node that still has available colors to try. One search step is defined as expanding a
node with a new color, including the expansions done after a backtrack. The DSatur heuristic performs
very well, therefore we use it as the heuristic technique to compare our EAs with.
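The node-selection rules above can be sketched as follows. This is a simplified Python sketch of the plain (non-backtracking) DSatur heuristic only; the data layout and names are our own assumptions, and the backtracking machinery of the Turner variant is omitted.

```python
import random

def dsatur(adj):
    """Color a graph with the DSatur selection heuristic (no backtracking).

    adj : dict mapping each node to the set of its neighbors.
    Repeatedly choose an uncolored node with the highest saturation degree
    (number of differently colored neighbors), break ties by the degree in
    the uncolored subgraph, then randomly, and give the chosen node the
    smallest color that is still possible.
    """
    color = {}
    uncolored = set(adj)
    while uncolored:
        def sat(v):   # saturation degree of v
            return len({color[u] for u in adj[v] if u in color})
        def deg(v):   # degree of v within the uncolored subgraph
            return sum(1 for u in adj[v] if u not in color)
        best_sat = max(sat(v) for v in uncolored)
        cand = [v for v in uncolored if sat(v) == best_sat]
        best_deg = max(deg(v) for v in cand)
        cand = [v for v in cand if deg(v) == best_deg]
        v = random.choice(cand)                 # random tie breaking
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                        # smallest feasible color
            c += 1
        color[v] = c
        uncolored.remove(v)
    return color
```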
A fair amount of research has been done on graph coloring with genetic algorithms. For instance,
Fleurent and Ferland have successfully considered various hybrid algorithms (Fleurent and Ferland,
1996a) and have extended their study into a general implementation of heuristic search
methods (Fleurent and Ferland, 1996b). Others include von Laszewski, who has looked at
structured operators and used an adaptation step to improve the convergence rate of a genetic
algorithm (Laszewski, 1991). Davis' algorithm in (Davis, 1991) was designed to maximize the
total weight of nodes in a graph colored with a fixed number of colors. This resembles the
problem definition used in the SAW-ing algorithm, where we also add weights to the nodes of the
graph that is being colored. The difference is that the SAW-ing algorithm uses variable weights
to guide its search, while Davis' algorithm sees the fixed weights as part of the problem instance and
then tackles this problem as an optimization problem. Coll, Duran and Moscato discuss graph
coloring and crossover operators in a more general context in (Coll et al., 1995).

[2] The colors are ordered just to make selection possible.
3 Performance Measures of Algorithms
The comparisons between di erent GAs and comparisons of GAs and DSatur will be based on
two di erent measures: success rate and computational e ort. Probabilistic algorithms may nd
a solution in one run and may not nd one in another run. To cope with this problem we
execute a number of runs and monitor how many times a solution is found. The measure success
rate (SR) is the percentage of runs in which a solution is found, it gives an estimation of the
probability of nding a solution in one run. The most obvious measure of computational e ort
for evaluating algorithms is the time complexity, e.g., CPU time in seconds, (Kronsjo, 1987). A
better option, however, is an implementation and hardware independent measure, computational
complexity, i.e., the number of basic operations required to nd a solution. For search algorithms
this is the number of basic search steps a particular algorithm uses. Unfortunately, di erent
search algorithms can use di erent search steps. For a GA a search step is the creation (and
evaluation) of a new individual, in our case a new coloring. Thus, computational complexity will
be measured by the average number of tness evaluations in successful runs, denoted as AES
(Average # of Evaluations to Solution). Fair as this measure may seem, even di erent GAs
may not fully be comparable this way. For instance, a GA using a heuristic within the crossover
operator performs `hidden labor' with respect to a GA applying a standard `blind' crossover. In
other words, the extra computational e orts in performing a heuristic crossover are not visible
if we compare the AES values. A search step of DSatur is a backtracking step, i.e. giving a
node a new color. Thus, the computational complexity of DSatur is measured di erently than
that of a GA { a problem that is rooted in the di erent nature of the algorithms and cannot
be circumvented. Despite of these drawbacks we compare algorithms by AES in the rst part
of Section 7, because this measure is independent from implementational and executional issues,
such as the processor, programming language, network load, etc. In the second part of Section 7,
however, we compare the scaling-up behavior of the investigated algorithms, that is we show how
the AES values change with growing problem sizes. Regardless of the di erent meaning of AES
for di erent algorithms this comparison is by all means fair.
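The two measures can be computed from a batch of runs as follows. This is a trivial sketch under our own assumption that each run is recorded as a pair of a success flag and an evaluation count.

```python
def success_rate(runs):
    """SR: percentage of runs in which a solution was found.

    runs : list of (solved, evaluations) pairs, one per run."""
    return 100.0 * sum(1 for solved, _ in runs if solved) / len(runs)

def aes(runs):
    """AES: average number of fitness evaluations over successful runs only."""
    successful = [evals for solved, evals in runs if solved]
    return sum(successful) / len(successful) if successful else float('nan')

runs = [(True, 1000), (False, 300000), (True, 3000)]
# Two of three runs succeed (SR about 66.7%); AES averages 1000 and 3000.
```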
4 Grouping Genetic Algorithm
The Grouping Genetic Algorithm (GGA) was introduced by Falkenauer in (Falkenauer and
Delchambre, 1992) and further studied in (Falkenauer, 1994; Falkenauer, 1996). The GGA tries
to capture and exploit the structure of grouping problems by using an appropriate chromosomal
representation and genetic operators. The outlines of the GGA are given in Figure 1, while the
di erent steps are explained in the reminder of this section.
Grouping GA
Initialize population
Evaluate population
while not Stop-condition do
Sort the population using 2-tournament selection
Apply crossover to the best Nc individuals
Replace the worst Nc with the o spring
Mutate Nm randomly selected individuals in the population
Apply inversion to Ni randomly selected individuals in the population
Evaluate population
end while
Figure 1: Pseudo code of the Grouping Genetic Algorithm
In general, the grouping representation consists of two parts: an object part and a group part.
The object part consists of n genes, where n is the number of objects to be grouped; the group
part consists of a permutation of the k group labels. An object gene can take any of the k group
labels as allele, indicating that the object in question belongs to the group of the given label. In a
graph coloring context objects are nodes and groups are colors; an example of a chromosome for
n = 6 and k = 3 is shown in Figure 2. The group part shows that three colors A, B and C are
used for coloring the graph. The object part discloses that nodes 2 and 6 are colored with color
A, nodes 1, 3 and 4 are colored with color B and node 5 is colored with color C.

    BABBCA  :  BAC
   (objects)  (groups)

Figure 2: Example of a chromosome in grouping representation.
The GGA uses genetic operators on the group part of the chromosomes, and adjusts the object
part to the corresponding changes in the group part. We will use the three operators described
in (Falkenauer, 1994), which are crossover, mutation and inversion. The crossover is by far the
most difficult operator of these three; it uses a number of steps to compute two new chromosomes
from two parents. These steps are the following.

1:  Copy parent 1 to child 1 and copy parent 2 to child 2.      Parent 1 : BABBCA:B|A|C
    Select at random two crossing sites, delimiting the         Parent 2 : acabba:|cb|a
    crossing section, in each of the two parents' group part.

2:  Inject the contents of the crossing section of the first    Child 1 : BABBCA:BAC
    parent before the first crossing site of the second child.  Child 2 : acabba:Acba
    Because we are working with the group parts, this means
    we are injecting groups of the first parent into the
    second child.

3:  Overwrite the object part of child 2 such that membership   Child 1 : BABBCA:BAC
    of the newly inserted groups is enforced (inherited from    Child 2 : aAabbA:Acba
    parent 1).

4a: Adapt the resulting groups to satisfy the constraints of    Child 1 : BABBCA:BAC
    the problem. Here we throw away groups that have become     Child 2 : xAxbbA:Ab
    empty or lost an object in step 3. This can result in       Queue : {1, 3}
    objects which are not assigned to a group. These objects,
    which are marked by an x, are put in a queue.

4b: We now have to reinsert the objects which reside in the queue. This can be done by any
    heuristic function, as long as the representation remains valid. We do this by looking at the
    node we want to insert and, if possible, assigning a color, by using a first-fit heuristic on
    a random order of the available colors, such that no constraints are violated. If this is not
    possible, we create a new group and assign the object to this group. Note that this step
    can lead to an increase in the number of colors.

5:  Execute steps 2, 3 and 4 again, but with the roles of the two parents exchanged.
The mutation operator uses a similar procedure. When a chromosome is selected for mutation,
a number of elements in the group part are deleted. When a group is deleted, all nodes in the
object part having that color become temporarily uncolored. After the deletion of groups we
reinsert the uncolored nodes using the same heuristic that is used in the crossover operator.
The third operator is inversion, which only operates on the group part of the chromosome,
without affecting the object part. We randomly mark two points in the group part of the chromosome and then reverse the order of the groups between these two points. An example can be
found in Figure 3.

    BABDCAEE : D|BEA|C  ->  BABDCAEE : DAEBC

Figure 3: Example of inversion in grouping representation.
The crossover and mutation operators work in such a way that the length of the group part
of a chromosome can vary between the minimal number of colors needed to color a graph and the
number of nodes of the graph. We apply the algorithm to minimize the number of colors needed
to color the graph. For the 3-colorable graph instances we investigate, the minimum is known to
be three. Therefore, we let the fitness of a chromosome depend on the number of `unnecessary'
colors and the number of nodes which are colored with an `unnecessary' color, where the three
colors that color the most nodes are considered `necessary' and the remaining colors are defined
as `unnecessary'. For a formal definition of the fitness function let us assume that k is the minimal
number of colors needed for coloring the graph. Furthermore, let a chromosome x use l colors
c1, ..., cl with k <= l. Now let us define g(x, y) as the function that returns the number of nodes
colored with color y in chromosome x. Without loss of generality we can assume that the colors
ci with i in {1, ..., l} are ordered in such a way that g(x, c1) >= g(x, c2) >= ... >= g(x, cl). Now the
fitness function can be defined as:

    f_k(x) = (l - k) + Σ_{i=k+1}^{l} g(x, c_i)    (1)

To minimize function (1) the GGA must minimize the number of colors used for coloring the
graph; it receives penalty points for each extra color and for every node colored with such a color.
It is easy to see that f_k reaches its global minimum of zero if and only if the number of colors
used is equal to the minimum number of colors needed.
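Function (1) can be sketched as follows, a Python sketch under our own assumption that a chromosome's object part is given as a list assigning a color label to each node (here with k = 3).

```python
from collections import Counter

def gga_fitness(objects, k=3):
    """Penalty (1): one point per color beyond the k 'necessary' colors,
    plus one point per node colored with such an 'unnecessary' color.

    objects : list assigning a color label to each node (the object part).
    """
    counts = Counter(objects)                      # g(x, c) for every used color
    sizes = sorted(counts.values(), reverse=True)  # g(x,c1) >= g(x,c2) >= ...
    l = len(sizes)                                 # number of colors used
    if l <= k:
        return 0
    return (l - k) + sum(sizes[k:])                # extra colors + their nodes

print(gga_fitness(list("BABBCA")))  # 3 colors used -> 0
print(gga_fitness(list("BABDCA")))  # 4 colors: 1 extra color + 1 node -> 2
```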
Initialization of the population is done by coloring every individual in the population using
the first-fit heuristic, starting the coloring at a random node. The algorithm contains a
pseudo-sorting procedure using 2-tournament selection. A pseudo-sorted list of individuals is
created by repeatedly selecting two different individuals uniformly at random from the population
to compete with each other. The loser, i.e. the one with the worse fitness value, is removed from
the population and inserted at the top of the list. The procedure stops when the population
becomes a singleton and the last remaining individual is added to the list. The individuals that
were inserted first have `sunk' to the bottom of the list and are seen as the worst individuals,
while those inserted last are the best ones. Note that the algorithm replaces the worst Nc
individuals with the offspring of the best Nc individuals; therefore the population size must be at
least 2·Nc. As for the parameter setting, we compared the values recommended by Falkenauer
in (Falkenauer, 1994) (a population of fifty individuals, Nc = 12, Nm = 4 and Ni = 4, and an
allele mutation probability, giving the probability that an allele from the group part is
deleted when the individual undergoes mutation, of 10%) with other values. The best option
turned out to be using more mutation and less crossover, in particular Nc = 8, Nm = 12. During
the present investigation these values will be used.
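The pseudo-sorting procedure can be sketched as follows. This is a Python sketch under our own assumptions: lower fitness is better (the GGA minimizes its penalty), and the last survivor is placed on top of the list as the best individual.

```python
import random

def pseudo_sort(population, fitness):
    """Pseudo-sort a population by repeated 2-tournaments.

    Two different individuals are drawn at random; the loser (the one
    with the worse, i.e. higher, fitness) leaves the population and is
    inserted at the top of the list, so early losers sink to the bottom
    as later losers are pushed in above them.
    """
    pool = list(population)
    ranked = []
    while len(pool) > 1:
        a, b = random.sample(pool, 2)          # two different individuals
        loser = a if fitness(a) > fitness(b) else b
        pool.remove(loser)
        ranked.insert(0, loser)                # loser goes to the top, pushing
    ranked.insert(0, pool[0])                  # others down; last survivor tops
    return ranked                              # ranked[0]: best, ranked[-1]: worst
```

Note that the individual with the best fitness never loses a tournament, so it is always the last survivor and ends up first in the list regardless of the random draws.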
5 Standard Genetic Algorithms
We experimented with several non-grouping genetic algorithms, varying different components.
The common features of the GAs used are summarized in Table 1.
In the integer representation each gene of an individual represents a node in the graph and
its value can be one of the three colors. Thus, the chromosome length L equals the number of
nodes n in the given graph. The fitness function is based on penalizing constraint violations. We
consider each edge as a constraint, and in case of m edges the penalty function f is
    f(x) = Σ_{i=1}^{m} w_i · χ(x, e_i)    (2)
GA type             steady state
selection           2-tournament
deletion strategy   worst-deletion
crossover rate      1.0
mutation rate       1/chromosome length
stop-criterion      Tmax fitness evaluations

Table 1: GA setup used in the experiments, except for the GGA
where w_i is the penalty, or weight, assigned to the i-th constraint (edge e_i) and

    χ(x, e_i) = 1 if x violates e_i, i.e. e_i = (k, l) and x_k = x_l,
    χ(x, e_i) = 0 otherwise.
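Penalty (2) can be sketched as follows. This is a minimal Python sketch under our own data layout: a coloring is a list of colors indexed by node, and the edges are a list of node pairs.

```python
def edge_penalty(coloring, edges, weights=None):
    """Penalty function (2): the weighted sum of violated edges.

    An edge (k, l) is violated when both endpoints share a color.
    With all weights equal to 1 (the default) this simply counts
    the violated constraints.
    """
    if weights is None:
        weights = [1] * len(edges)           # w_i = 1 for every constraint
    return sum(w for w, (k, l) in zip(weights, edges)
               if coloring[k] == coloring[l])

edges = [(0, 1), (1, 2), (0, 2)]             # a triangle
print(edge_penalty([0, 1, 0], edges))  # 1: edge (0, 2) is violated
print(edge_penalty([0, 1, 2], edges))  # 0: a proper coloring
```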
It is obvious that when the minimum penalty of zero is reached, no constraints are violated and
a solution is found. Here we use the same penalty w_i ≡ 1 for each constraint (edge), thus f simply
counts the violated constraints. It has been conjectured by Falkenauer that this representation
and the corresponding genetic operators are not well suited for grouping problems, such as graph
coloring, mainly because of the `blind disruption' of chromosomes (Falkenauer, 1994). Our
experiments confirmed this conjecture, but one unexpected result deserves special attention.
Namely, increasing the disruptiveness of crossover operators increased GA performance, seemingly
contradicting the claim that crossover is harmful. A detailed presentation of all results would consume too
much space; a full overview can be found in (Eiben and Van der Hauw, 1996). Here we only
give an illustration in Figure 4, which shows that increasing the number of crossover points
in multi-point crossover (De Jong and Spears, 1992) and increasing the number of parents in
multi-parent crossovers (Eiben et al., 1994) leads to better results.
Nevertheless, the best GA variant within this representation turned out to be an asexual
GA using only mutation and no crossover in a (1+1) scheme, thus with population size 1 and
preservative selection. Figure 5 shows a comparison of the best asexual (mutation only) and
sexual (using crossover and mutation) variants.
In order-based GAs the individuals are permutations, and special operators are used to
recombine and mutate permutations (Starkweather et al., 1991; Fox and McMahon, 1991).
For graph coloring we define chromosomes as permutations of nodes and apply a decoder that
constructs a coloring from a permutation. So, just as for integer representation, the chromosome
length is L = n. As a decoder we have used the Greedy algorithm, which colors a node with the
lowest color that does not violate constraints and leaves nodes uncolored when no colors are left.
The simplest way of evaluating a permutation is then to use the number of uncolored nodes in
the coloring belonging to it. Formally, we use a penalty function that concentrates on the nodes
instead of the edges. The function f is now defined as:
    f(x) = Σ_{i=1}^{n} w_i · χ(x, i)    (3)
[Figure 4 contains two plots of AES (ranging from about 100000 to 240000) against the number
of crossover points (1-point, m-point) and against the number of parents (diagonal, scanning,
uniform, HUX), for up to 40 crossover points or parents.]

Figure 4: Integer representation: effect of more crossover points and more parents on AES for
Geq,n=1000,p=0.025,s=5. Tmax = 250000 in each run; the results are averaged over 25 independent
runs for each setting (nr. of crossover points, nr. of parents).
[Figure 5 contains two plots, SR and AES against edge connectivity (0 to 0.03), for the sexual
and the asexual variant.]

Figure 5: Integer representation: asexual (only mutation) vs. sexual reproduction (uniform
crossover + mutation + incest prevention) for n = 1000 near the phase transition. Tmax = 300000
in each run; the results are averaged over 25 independent runs on each instance.
where w_i is now the penalty (or weight) assigned to node i and

    χ(x, i) = 1 if node x_i is left uncolored because of a constraint violation,
    χ(x, i) = 0 otherwise.

Just like before, we use w_i ≡ 1, but while for integer representation the fitness function counted
the `wrong' edges, here we count the uncolored nodes. Note that the search space for order-based
representation is much bigger than for the integer representation: n! instead of 3^n. Because
at most 3^n different possible colorings exist, this coding is highly redundant. Additionally, the
penalty function given in Equation 3 supplies less information than the one given in Equation 2.
These considerations might suggest that this GA will be less successful, but the experiments show
that the opposite is true.
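The decoder and penalty (3) can be sketched together: Greedy with a fixed budget of k colors leaves some nodes uncolored, and with w_i ≡ 1 the penalty is simply the number of uncolored nodes. The Python sketch below uses our own data layout (an adjacency map and a node permutation).

```python
def decode(perm, adj, k=3):
    """Greedy-decode a permutation of nodes using at most k colors.

    Returns a partial coloring (dict node -> color); nodes for which
    none of the k colors is feasible are left uncolored.
    """
    color = {}
    for v in perm:
        used = {color[u] for u in adj[v] if u in color}
        feasible = [c for c in range(k) if c not in used]
        if feasible:
            color[v] = feasible[0]       # the lowest feasible color
        # else: v stays uncolored
    return color

def node_penalty(perm, adj, k=3):
    """Penalty (3) with w_i = 1: the number of uncolored nodes."""
    return len(perm) - len(decode(perm, adj, k))

# K4 is not 3-colorable, so exactly one node must remain uncolored:
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(node_penalty(list(range(4)), k4))  # 1
```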
Using this representation we compared different operators, various population sizes and the
effect of using only mutation. The outcomes are similar to those for integer representation, in
the sense that an asexual GA using only SWAP mutation and no crossover in a (1+1) scheme
outperforms the best sexual GA, which uses OX2 crossover together with SWAP, on large graphs; see
Figure 6 for an illustration. On small graphs this is only partly true, but the global conclusion is that
the asexual GA has the best overall performance and that order-based representation is superior
to integer representation.
[Figure 6 contains two plots, SR and AES against edge connectivity (0 to 0.03), for the sexual
and the asexual variant.]

Figure 6: Order-based representation: asexual (only SWAP mutation) vs. sexual reproduction
(OX2 + SWAP mutation) for n = 1000 near the phase transition. Tmax = 300000 in each run;
the results are averaged over 25 independent runs on each instance.
Summarizing our findings on using GAs with traditional representations, we can note the
following. The best performance is achieved with an algorithm without crossover, exclusively
using mutation and having population size 1. The question arises whether such an algorithm
can still be seen as genetic. A `no' is supported by noting that the presence of crossover and a
population with more than one element is crucial for GAs. However, one could also argue that
the representation and the mutation are standard; only the parameter setting is extreme. To avoid
conflicts with conventions on terminology we perceive and name the winning algorithm variant
as an evolutionary algorithm. To this end, note that in contemporary evolutionary computation
the term evolutionary algorithm comprises, among others, genetic algorithms, evolution strategies
(ES) and evolutionary programming (EP) (Back et al., 1997), that (1+1) style algorithms are
common in ES (Schwefel, 1995), and that EP always operates without crossover (Fogel, 1995).
6 Stepwise Adaptation of Weights
Evolutionary algorithms are intrinsically dynamic and adaptive processes. It is thus rather
unnatural to use static control parameters that remain constant during the evolution. Using
constant parameters is, however, the traditional practice. Meanwhile, there is an increasing
number of papers that consider varying parameter settings. This accumulates knowledge on a
research area that is thus becoming an emerging sub-technology within evolutionary computation.
Looking at related work on varying parameters in EAs, one can distinguish common features
that several approaches share. The classification in (Hinterding et al., 1997) distinguishes
three different streams. Dynamic parameters obtain different values along the evolution, prescribed
by a user-defined schedule. This schedule typically assigns new values to parameters depending
on time, most commonly expressed by the number of generations or fitness evaluations. Adaptive
parameters obtain new values by a feedback mechanism that monitors the evolution. The new
values typically depend on the achieved progress. This progress measure is the input of a
mechanism that resets the parameter. Self-adaptive parameters are encoded in the chromosomes
and undergo evolution themselves. As opposed to the previous two techniques, here there is not
even an indirect user control of the parameter. It is clear that varying parameters suit the general
`evolutionary spirit' better than static ones; furthermore, they have technical advantages. First
of all, they often lead to increased performance. Second, adaptive and self-adaptive parameters
free the user from determining these parameters by having the EA do it. This reduces the
chance of incorrect parameter settings.
Our approach falls in the adaptive category of the above classification scheme. Technically it
amounts to modifying the weights of components of the penalty function, hence modifying the
fitness landscape during the search, based on feedback from the actual population. This technique
was introduced for constraint solving with GAs in (Eiben et al., 1995b), where the constraints
were weighted. The rationale behind it is clear: satisfying a constraint with a high penalty
gives a relatively high reward to the algorithm, hence it will be `more interested' in satisfying
such constraints. Thus, using appropriate weights focuses the `attention' of the algorithm and
can thereby improve its performance. Appropriate in this case means that the constraint weights
should reflect how important, or rather, how difficult a specific constraint is. This causes two
problems. On the one hand, determining the relative hardness of constraints can require substantial
knowledge of the problem. On the other hand, `being hard' cannot be seen independently from
the applied problem solver. Therefore, it is a natural idea to leave the decision on measures of
hardness to the problem solver itself. Although there are well-founded arguments (Culberson,
1996; Wolpert and Macready, 1997) stating that no search technique can be superior in general,
having an evolutionary algorithm adjust its parameters itself has proved to be powerful
under many circumstances (Angeline, 1995; Hinterding et al., 1997). In the particular case of
constraint satisfaction problems we expect that a mechanism enabling the GA to learn the weights
itself can circumvent the difficulties of setting these weights by a human user.
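The weighted-penalty idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the exact fitness function of the paper: each constraint is modelled as a predicate over a candidate assignment, and the fitness to minimize is the weighted sum of violated constraints.

```python
# Hypothetical sketch of a weighted penalty function for a CSP: each
# constraint is a predicate over an assignment; its weight scales the
# cost of violating it, so heavily weighted constraints attract the search.
def weighted_penalty(assignment, constraints, weights):
    """Fitness to minimize: sum of the weights of the violated constraints."""
    return sum(w for c, w in zip(constraints, weights) if not c(assignment))
```

Raising the weight of a constraint makes its satisfaction more rewarding, which is exactly the lever that an adaptive mechanism can operate on.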
It is interesting to note that using adaptive, or learning, features to improve search performance
is done for other types of algorithms too. The breakout method of Morris (Morris, 1993) is
the earliest example we know of. Adaptive memory features as advocated by Glover (Glover,
1996), and applied by Løkketangen and Glover (Løkketangen and Glover, 1996), are close to
the `evolutionary spirit' and work very well on constrained problems. Also, variants of the
GSAT algorithm for satisfiability problems by Selman and Kautz (Selman and Kautz, 1993)
and Frank (Frank, 1996b; Frank, 1996a) apply mechanisms that re-evaluate weights of clauses
during the search process.
The best EA for our graph 3-coloring problem we found so far uses order-based representation,
penalizing uncolored nodes. We extend this variant with an adaptive mechanism; the basic
idea is implemented by monitoring which variables in the best individual violate constraints
and raising the penalty wi belonging to these variables. Depending on when the weights are
updated we can distinguish an off-line (after the run) and an on-line (during the run) version of
this technique, see Figure 7 and Figure 8.

Off-line SAW
  set initial weights (thus fitness function f)
  for x test runs do
    run the GA with this f
    redefine f after termination
  end for

Figure 7: Pseudo code of the off-line weight adaptation mechanism
On-line SAW
  set initial weights (thus fitness function f)
  while not termination do
    for the next Tp fitness evaluations do
      let the GA go with this f
    end for
    redefine f and recalculate fitness of individuals
  end while

Figure 8: Pseudo code of the on-line weight adaptation mechanism
In (Eiben et al., 1995a) and (Eiben and Ruttkay, 1996) the off-line version was applied; here
we will use the on-line version, modifying the weights, hence modifying the fitness function, during
the evolution. After a certain period, in particular Tp fitness evaluations, the best individual in
the population is colored and the weights of its uncolored nodes (that are considered hard for the
EA) are increased by Δw, i.e. wi is set to wi + Δw. This implies that the EA has to search on a
dynamically changing fitness landscape; consequently the population has to be re-evaluated after
each period.³ We call this mechanism Stepwise Adaptation of Weights (SAW).
³ Hereby the total number of fitness evaluations will not equal the total number of generated individuals. It could
be argued that the number of re-evaluations should be included in the total number of evaluations. However, we
want to count the search steps by the total number of generated colorings. Besides, re-evaluation is computationally
cheap: the individual needn't be decoded again, only the sum in Equation 3 has to be re-computed for its uncolored
nodes. Therefore, the number of re-evaluations is not included in the total cost.
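As an illustration, the on-line SAW loop combined with a greedy decoder and a (1+1) EA with SWAP mutation can be sketched in Python as follows. This is a minimal reconstruction under stated assumptions (adjacency lists, unit initial weights, the penalty being the weighted sum of uncolored nodes), not the authors' implementation; the parameter names t_p and d_w mirror Tp and Δw.

```python
import random

def greedy_decode(order, adj, k=3):
    """Greedy decoder: color the nodes in the given order with the first
    of k colors not used by an already-colored neighbor; a node stays
    uncolored (None) if all k colors are blocked."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        free = [c for c in range(k) if c not in used]
        color[v] = free[0] if free else None
    return color

def penalty(order, adj, w):
    """Fitness to minimize: weighted sum over the uncolored nodes."""
    color = greedy_decode(order, adj)
    return sum(w[v] for v in order if color[v] is None), color

def saw_ea(adj, t_max=300000, t_p=250, d_w=1, seed=0):
    """(1+1) order-based EA with the on-line SAW mechanism (a sketch)."""
    rng = random.Random(seed)
    w = {v: 1 for v in adj}                       # initial weights (thus f)
    best = list(adj)
    rng.shuffle(best)
    f_best, col = penalty(best, adj, w)
    if f_best == 0:
        return best, 0
    for t in range(1, t_max + 1):
        child = best[:]
        i, j = rng.sample(range(len(child)), 2)   # SWAP mutation
        child[i], child[j] = child[j], child[i]
        f_child, c_child = penalty(child, adj, w)
        if f_child <= f_best:                     # (1+1) selection
            best, f_best, col = child, f_child, c_child
        if f_best == 0:
            return best, t                        # proper 3-coloring found
        if t % t_p == 0:                          # SAW step every Tp evals
            for v in adj:                         # raise weights of nodes
                if col[v] is None:                # uncolored in the best
                    w[v] += d_w
            f_best, col = penalty(best, adj, w)   # re-evaluate under new f
    return None, t_max
```

The SAW step changes the fitness landscape, so the current best individual must be re-evaluated after every period, exactly as the pseudocode of Figure 8 prescribes.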
It is very interesting to see the fitness curve of a run of the EA with the SAW mechanism.
Figure 16 shows a run in which a solution is found. The left curve has a higher resolution, displaying
the fitness of the best individual between 0 and 10000 evaluations; the right curve shows the range
0 to 80000. The higher resolution curve (left) shows that within each period the penalty drops
as the EA makes progress and then sharply rises when the weights are updated, giving the
image of a SAW!

Note that the SAW mechanism introduces two new parameters, Tp and Δw. It is thus
important to check whether the performance of a SAW-ing EA is sensitive to the parameter
values. To this end we performed an extensive test series that showed that the performance is
largely independent of these parameters. As an illustration we present the outcomes for
n = 1000, for the asexual as well as for the sexual order-based EA, in Figures 9 and 10. The
exact values of Δw and Tp do not have a significant effect on the performance, as long as Tp is
significantly smaller than Tmax. From now on, we will use Δw = 1 and Tp = 250 for no specific
reason. Figures 9 and 10 also show that, also in the presence of the SAW mechanism, the sexual
EA using OX2 + SWAP is clearly inferior to the asexual variant.
[Plots omitted: SR (left) and AES (right) against Δw, for SWAP and OX2+SWAP.]
Figure 9: Effect of varying Δw on G_{eq,n=1000,p=0.010,s=5}. Tp = 250, Tmax = 300000. SWAP uses
population size 1, OX2+SWAP uses population size 700.
The effect of the SAW mechanism on the EA performance on graphs with n = 1000 and
p = 0.010 can be seen in Table 2. The increase in performance after adding the SAW-ing
mechanism is dramatic: the success rate averaged over all seeds rises from 9% to 92%, while
the number of search steps drops from 205643 to 89277. The figures show not only that a
SAW-ing EA highly outperforms the other techniques, but also that the performance is rather
independent of the random seeds. Thus, the SAW mechanism is not only highly effective,
obtaining much better success rates at lower costs, but also very robust. Somewhat surprising is
the low performance of the GGA. On average it terminates with 5 colors and approximately 50
nodes in the `unnecessary' colors.
[Plots omitted: SR (left) and AES (right) against Tp, for SWAP and OX2+SWAP.]
Figure 10: Effect of varying Tp on G_{eq,n=1000,p=0.010,s=5}. Δw = 1, Tmax = 300000. SWAP uses
population size 1, OX2+SWAP uses population size 700.
7 Comparing the GGA, the SAW-ing EA and DSatur

The results summarized in Table 2 serve as an indication of the viability of our SAW-ing
mechanism. For a solid conclusion on the performance of the SAW-ing EA we perform an extensive
comparison between the Grouping GA as described in Section 4, DSatur with backtracking, and a
SAW-ing EA with order-based representation, simple greedy decoder, OneSWAP mutation⁴ and
no crossover in a (1+1) selection scheme, using Tp = 250 and Δw = 1 for the SAW mechanism.
The comparison is performed on arbitrary 3-colorable, equi-partite 3-colorable and flat 3-colorable
graphs for n = 200, n = 500 and n = 1000, with edge connectivities around the phase transition.
The results are based on 100, 50 and 25 runs for n = 200, n = 500 and n = 1000, respectively.
For all instances, the graph is generated with seed s = 5. DSatur and both GAs are allowed
Tmax = 300000 search steps.
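For reference, an equi-partite 3-colorable instance in the spirit of the G_{eq,n,p,s} notation can be generated along the following lines. This is an assumed construction (the exact generator used in the paper may differ in details): the n nodes are split into three equal classes, and only edges between different classes are inserted, each with probability p, so the planted partition is a proper 3-coloring by construction.

```python
import random

def equipartite_3colorable(n, p, seed):
    """Assumed sketch of an equi-partite 3-colorable graph generator:
    split n nodes into three equal parts and add each cross-part edge
    independently with probability p."""
    rng = random.Random(seed)
    part = {v: v % 3 for v in range(n)}   # three (near-)equal classes
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if part[u] != part[v] and rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj, part
```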
The test results concerning SR and AES values are given in Figures 11-13. The Grouping
GA is inferior to the other two algorithms on all graph instances. On small graphs (n = 200)
DSatur is better than the SAW-ing EA for all p's, except for flat graphs near the phase transition,
see Figure 13 for n = 200. On medium size graphs (n = 500) the SAW-ing EA is slightly better on
arbitrary and equi-partite topologies. On flat graphs we see that the performance of the SAW-ing
EA deteriorates much less at the phase transition than that of DSatur. On large graphs (n = 1000)
the SAW-ing EA is clearly better w.r.t. success rates, because it is often able to find solutions
where DSatur does not find any. The AES curves sometimes cross, but in general the
SAW-ing EA needs fewer steps. The reason for these differences could be that the instances with
n = 200 are small enough for DSatur to get out of local optima and find solutions, while this is not
possible anymore for n = 500 and n = 1000, where the search space becomes too big.

Evaluating the overall usefulness of the SAW-ing EA for graph coloring one has to consider two cases. On
the easy problems, i.e. small graphs and graphs far from the phase transition, DSatur is better. On
the hard instances near the phase transition and on large graphs the SAW-ing EA outperforms
DSatur. These results are very good in the light of the fact that DSatur is a highly problem-tailored
graph coloring algorithm, whereas the SAW-ing EA is a general purpose algorithm for
constraint satisfaction, using no specific domain knowledge on graph coloring. Another strong
property of the SAW-ing EA is that it can take more advantage of extra time given for search.
We increased the total number of evaluations to 1000000 and observed that DSatur still had
SR = 0.00 on G_{eq,n=1000,p=0.008,s=5}. The performance of the SAW-ing EA, however, increased from
SR = 0.00 to SR = 0.44 (AES = 407283), showing that it is able to benefit from the extra time,
whereas the extra time for backtracking is not enough to get out of the local optima.

⁴ Additional tests (not reported here) showed that OneSWAP, which always swaps exactly one pair of genes, is
slightly better for the (1+1) GA than the usual SWAP.
It is an important question how the performance of an algorithm changes as the problem size
grows. This is the issue of scalability, which we will consider w.r.t. the success rates and the
computational complexity. Thus, while Figures 11-13 were drawn for fixed n's and varying
p's, in the sequel we vary n (and keep p = c/n for some constant c related to it). We tested the
three algorithms using the same parameters as before for p = 8.0/n at the phase transition. The
results are given in Figure 14, showing that the Grouping GA could only solve the smallest problem
instances and DSatur was not able to find solutions on larger problem instances. Therefore, we
also made a comparison on easier instances for p = 10.0/n. Figure 15 exhibits these results,
showing that the SAW-ing EA scales up much better than the other two methods.

As discussed in Section 3, even the implementation independent number of search steps is not
ideal for comparing different algorithms. Nevertheless, comparing the growth rate of the number
of search steps when the problem size (the number of nodes in the graphs to be colored) grows
gives a sound basis for judgment, even though the absolute meaning of the given numbers may
differ. The scale-up curves for AES in the right-hand side plots of Figure 14 (p = 8.0/n) and
Figure 15 (p = 10.0/n) show interesting results. In the easier case, for p = 10.0/n, DSatur is
faster up to n = 750, but the SAW-ing EA scales up much better. On the really hard graphs,
for p = 8.0/n, the SAW-ing EA outperforms DSatur already from n = 250. The curves for both
edge connectivity values indicate that the SAW-ing EA scales up linearly with the problem size.
[Plots omitted: SR (left) and AES (right) against edge connectivity for SAW, DSatur and GGA; one row each for n = 200, n = 500 and n = 1000.]
Figure 11: Comparison for arbitrary 3-colorable graphs, for n = 200, n = 500 and n = 1000.
[Plots omitted: SR (left) and AES (right) against edge connectivity for SAW, DSatur and GGA; one row each for n = 200, n = 500 and n = 1000.]
Figure 12: Comparison for equi-partite 3-colorable graphs, for n = 200, n = 500 and n = 1000.
[Plots omitted: SR (left) and AES (right) against edge connectivity for SAW, DSatur and GGA; one row each for n = 200, n = 500 and n = 1000.]
Figure 13: Comparison for flat 3-colorable graphs, for n = 200, n = 500 and n = 1000.
[Plots omitted: SR (left) and AES (right) against the number of nodes (50-1500) for SAW, DSatur and GGA.]
Figure 14: Scale-up curves for SR and AES on equi-partite graphs with p = 8.0/n, based on
Tmax = 1000000 and 25 runs.
8 Conclusions

NP-complete problems, such as graph coloring, form a big challenge for designers of algorithms
in general. For evolutionary algorithms in particular, constraint satisfaction problems form a
specific challenge. In this paper we investigated how EAs can be applied to graph 3-coloring.
After trying several traditional genetic algorithm variants based on integer and order-based
representation we concluded that using only mutation is better than using mutation and crossover.
This fact is in accordance with the common opinion in evolutionary computation that traditional
genetic crossovers are not appropriate for so-called grouping problems, because they `blindly'
disrupt chromosomes. However, it is interesting to note that in integer representation using
more crossover points and more parents (which both imply heavier mixing than usual operators)
increases the performance, cf. Section 5.

[Plots omitted: SR (left) and AES (right) against the number of nodes (50-1500) for SAW, DSatur and GGA.]
Figure 15: Scale-up curves for SR and AES on equi-partite graphs with p = 10.0/n, based on
Tmax = 500000 and 50 runs.
As discussed at the end of Section 5, it can be argued that switching off crossover and
setting the population size to 1 in a genetic algorithm results in an algorithm that cannot be
called genetic anymore. Iterated stochastic hill-climbing could be an appropriate name, but we
rather use the name evolutionary algorithm. This name reflects the origin of the method, and there
are other algorithms in evolutionary computation that use exclusively mutation, or population
size 1 (Fogel, 1995; Schwefel, 1995).

Following the recommendations that on grouping problems special representations and
crossovers are advisable (Falkenauer, 1994), we have implemented and tested the Grouping GA.
Surprisingly, the GGA is inferior to a simple order-based asexual EA, see Table 2. This suggests
that using specific grouping operators is not the only solution for handling the graph coloring
problem. Using only mutation and population size 1 works as well.
The main research subject of this investigation is the technique called Stepwise Adaptation
of Weights. This mechanism is problem independent and amounts to repeatedly redefining the
weights that are used in the definition of the fitness (penalty) function. Similar mechanisms have
been applied in other search algorithms; here we use it in an EA. Adding the SAW mechanism
to the simple order-based asexual EA increased its performance by a factor of 10 (success rates
increased from 9% to 92%), and made it the best algorithm on the problem instances we used for
a preliminary comparison, cf. Table 2.
            s=0            s=1            s=2            s=3            all 4 seeds
            SR    AES      SR    AES      SR    AES      SR    AES      SR    AES
DSatur      0.08  125081   0.00  300000   0.00  300000   0.80  155052   0.22  220033
GGA         0.00  300000   0.00  300000   0.00  300000   0.00  300000   0.00  300000
GA          0.24  239242   0.00  300000   0.00  300000   0.12  205643   0.09  261221
GA+SAW      0.96  76479    0.88  118580   0.92  168060   0.92  89277    0.92  113099

Table 2: Comparing DSatur with backtracking, the Grouping GA, the (1+1) order-based GA using
SWAP, and the (1+1) order-based GA using SWAP and the SAW mechanism (Tp = 250, Δw = 1)
for n = 1000, p = 0.010, different seeds.
Conducting an extensive series of experiments on three different types of graphs, three different
sizes and various edge connectivity values confirmed that the SAW-ing EA is a powerful graph
coloring algorithm. The SAW-ing EA was better than the GGA and also outperformed DSatur
by three criteria: 1) on the hardest graph instances it performs better than DSatur, 2) it is able
to increase its performance when given more time, whereas DSatur is not, 3) it scales up much
better, indicating a linear scale-up w.r.t. computational complexity. Especially nice about the
SAW-ing EA is that it is not tailored to the problem of graph coloring. Our findings are thus
relevant in the broader context of evolutionary constraint handling. The SAW mechanism helps
to circumvent a major problem of penalty based evolutionary constraint handling techniques by
letting the GA find the right constraint weights.
Further research concerns the application of SAW-ing to other (NP-complete) constrained
problems and investigating variations of the SAW mechanism. Straightforward modifications
are varying the value of Δw over the constraints, and allowing wi to decrease for constraints that
are already satisfied. Especially interesting to study is the learning procedure based on combining
frequency and recency based memory of Løkketangen and Glover, where the changes in weights
are related to solution quality (Løkketangen and Glover, 1996).
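One of the variations mentioned above, decreasing the weights of already satisfied constraints, could be sketched as follows. This is a hypothetical variant for illustration only, not an algorithm evaluated in this paper:

```python
def saw_update(w, violated, d_w=1, decay=1):
    """Hypothetical SAW variant: raise the weights of currently violated
    constraints by d_w and let the weights of satisfied constraints decay,
    never dropping below the initial weight of 1."""
    for c in w:
        if c in violated:
            w[c] += d_w
        else:
            w[c] = max(1, w[c] - decay)
    return w
```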
Finally, let us make an additional note on the constraint weights the EA finds. The plots in
Figure 16 suggest that problem solving with the SAW mechanism happens in two phases. In the
first phase the EA is learning a good setting for the weights. In this phase the penalty increases a
lot because of the increased weights. In the second phase the EA is solving the problem, exploiting
the knowledge (appropriate weights) learned in the first phase. In this phase the penalty drops
sharply, indicating that with the right weights (appropriate fitness function) the problem becomes
`easy' in the second phase. This interpretation of the fitness curves is plausible. We, however,
do not want to suggest that the EA could learn universally good weights for a given graph
instance. In the first place, another problem solver might need other weights to solve the same
problem. Besides, in a control experiment we applied the SAW-ing EA to a graph, thus learning
good weights, and then applied the EA to the same graph again using the learned weights
non-adaptively, i.e. keeping them constant along the evolution. The results showed worse performance
than in the first run, when adaptive weights were used. Suggesting that the SAW mechanism works
by enabling the problem solver to discover some hidden, universally good weights is, therefore,
wrong. This seems to contradict the interpretation that distinguishes two phases of search. At
the moment we tend to see the main advantage of SAW in allowing the EA to shift the focus
of the search (quasi) continuously, thus allowing an implicit problem decomposition that guides the
population through the search space.
[Plots omitted: fitness of the best individual against the number of evaluations, 0-10000 (left) and 0-80000 (right).]
Figure 16: Fitness curve for the SAW mechanism for SWAP, n = 1000 and p = 0.010.
References

Angeline, P. (1995). Adaptive and self-adaptive evolutionary computation. In Palaniswami, M.,
Attikiouzel, Y., Marks, R.J., Fogel, D., and Fukuda, T., editors, Computational Intelligence:
A Dynamic System Perspective, pages 152-161. IEEE Press.

Arora, S., Lund, C., Motwani, R., Sudan, M., and Szegedy, M. (1992). Proof verification
and hardness of approximation problems. In Proceedings 33rd IEEE Symposium on the
Foundations of Computer Science, pages 14-23, Los Angeles, CA. IEEE Computer Society.

Bäck, T., Fogel, D., and Michalewicz, Z., editors (1997). Handbook of Evolutionary Computation.
Institute of Physics Publishing Ltd, Bristol and Oxford University Press, New York.

Belew, R. and Booker, L., editors (1991). Proceedings of the 4th International Conference on
Genetic Algorithms. Morgan Kaufmann.

Blum, A. (1989). An O(n^0.4)-approximation algorithm for 3-coloring (and improved
approximation algorithms for k-coloring). In Proceedings of the 21st ACM Symposium on
Theory of Computing, pages 535-542, New York. ACM.

Brélaz, D. (1979). New methods to color the vertices of a graph. Communications of the ACM,
22:251-256.

Cheeseman, P., Kanefsky, B., and Taylor, W. M. (1991). Where the really hard problems are. In
Mylopoulos, J. and Reiter, R., editors, Proceedings of the 12th IJCAI-91, volume 1, pages
331-337. Morgan Kaufmann.

Clearwater, S. and Hogg, T. (1996). Problem structure heuristics and scaling behavior for genetic
algorithms. Artificial Intelligence, 81:327-347.

Coll, P., Duran, G., and Moscato, P. (1995). A discussion on some design principles for efficient
crossover operators for graph coloring problems. Anales del XXVII Simposio Brasileiro de
Pesquisa Operacional.

Culberson, J. (1996). On the futility of blind search. Technical Report TR 96-18, The University
of Alberta.

Culberson, J. and Luo, F. (1996). Exploring the k-colorable landscape with iterated greedy. In
Johnson, D. and Trick, M., editors, Cliques, Coloring, and Satisfiability: Second DIMACS
Implementation Challenge, pages 245-284. American Mathematical Society. Available by
http://web.cs.ualberta.ca/~joe/.

Davis, L. (1991). Order-based genetic algorithms and the graph coloring problem. In Handbook
of Genetic Algorithms, pages 72-90, New York. Van Nostrand Reinhold.

De Jong, K. and Spears, W. (1992). A formal analysis of the role of multi-point crossover in
genetic algorithms. Annals of Mathematics and Artificial Intelligence, 5:1-26.

Eiben, A., Raue, P.-E., and Ruttkay, Z. (1994). Genetic algorithms with multi-parent
recombination. In Davidor, Y., Schwefel, H.-P., and Männer, R., editors, Proceedings of
the 3rd Conference on Parallel Problem Solving from Nature, number 866 in Lecture Notes
in Computer Science, pages 78-87. Springer-Verlag.

Eiben, A., Raue, P.-E., and Ruttkay, Z. (1995a). Constrained problems. In Chambers, L., editor,
Practical Handbook of Genetic Algorithms, pages 307-365. CRC Press.

Eiben, A., Raue, P.-E., and Ruttkay, Z. (1995b). GA-easy and GA-hard constraint satisfaction
problems. In Meyer, M., editor, Proceedings of the ECAI-94 Workshop on Constraint
Processing, number 923 in Lecture Notes in Computer Science, pages 267-284. Springer-Verlag.

Eiben, A. and Ruttkay, Z. (1996). Self-adaptivity for constraint satisfaction: Learning penalty
functions. In Proceedings of the 3rd IEEE Conference on Evolutionary Computation, pages
258-261. IEEE Press.

Eiben, A. and Ruttkay, Z. (1997). Constraint satisfaction problems. In (Bäck et al., 1997).

Eiben, A. and Van der Hauw, J. (1996). Graph coloring with adaptive evolutionary
algorithms. Technical Report TR-96-11, Leiden University. Also available as
http://www.wi.leidenuniv.nl/~gusz/graphcol.ps.gz.

Falkenauer, E. (1994). A new representation and operators for genetic algorithms applied to
grouping problems. Evolutionary Computation, 2(2):123-144.

Falkenauer, E. (1996). A hybrid grouping genetic algorithm for bin packing. Journal of Heuristics,
2:5-30.

Falkenauer, E. and Delchambre, A. (1992). A genetic algorithm for bin packing and line balancing.
In Proceedings of the IEEE 1992 Int. Conference on Robotics and Automation, pages 1186-1192.
IEEE Computer Society Press.

Fleurent, C. and Ferland, J. (1996a). Genetic and hybrid algorithms for graph coloring. In
G. Laporte, I. H. O. and Hammer, P. L., editors, Annals of Operations Research, number 63
in Metaheuristics in Combinatorial Optimization, pages 437-461. J.C. Baltzer AG, Science
Publishers.

Fleurent, C. and Ferland, J. (1996b). Object-oriented implementation of heuristic search methods
for graph coloring, maximum clique, and satisfiability. In Trick, M. A. and Johnson, D. S.,
editors, Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge,
volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science,
pages 619-652. American Mathematical Society.

Fogel, D. (1995). Evolutionary Computation. IEEE Press.

Fox, B. and McMahon, M. (1991). Genetic operators for sequencing problems. In Rawlins, G.,
editor, Foundations of Genetic Algorithms, pages 284-300. Morgan Kaufmann.

Frank, J. (1996a). Learning short-term weights for GSAT. Technical report, University of
California at Davis. Available by http://rainier.cs.ucdavis.edu/~frank/decay.ml96.ps.

Frank, J. (1996b). Weighting for Godot: Learning heuristics for GSAT. In Proceedings of the
13th AAAI-96, pages 338-343. AAAI / The MIT Press.

Garey, M. and Johnson, D. (1979). Computers and Intractability: A Guide to the Theory of
NP-Completeness. W.H. Freeman and Co.

Glover, F. (1996). Tabu search and adaptive memory programming: advances, applications,
and challenges. In Interfaces in Computer Science and Operations Research, pages 1-75.
Kluwer Academic Publishers, Norwell, MA.

Grimmett, G. and McDiarmid, C. (1975). On colouring random graphs. Mathematical Proceedings
of the Cambridge Philosophical Society, 77:313-324.

Hinterding, R., Michalewicz, Z., and Eiben, A. (1997). Adaptation in evolutionary computation:
a survey. In Proceedings of the 4th IEEE Conference on Evolutionary Computation, pages
65-69. IEEE Press.

Johnson, D., Aragon, C., McGeoch, L., and Schevon, C. (1991). Optimization by simulated
annealing: An experimental evaluation; part II, graph coloring and number partitioning.
Operations Research, 39(3):378-406.

Kronsjö, L. (1987). Algorithms: Their Complexity and Efficiency. Wiley and Sons, second edition.

Kučera, L. (1991). The greedy coloring is a bad probabilistic algorithm. Journal of Algorithms,
12:674-684.

Laszewski, G. V. (1991). Intelligent structural operators for the k-way graph partitioning problem.
In (Belew and Booker, 1991), pages 45-52.

Løkketangen, A. and Glover, F. (1996). Surrogate constraint methods with simple learning
for satisfiability problems. In Du, D., Gu, J., and Pardalos, P., editors, Proceedings
of the DIMACS workshop on Satisfiability Problems: Theory and Applications. American
Mathematical Society.

Morris, P. (1993). The breakout method for escaping from local minima. In Proceedings of the
11th National Conference on Artificial Intelligence, AAAI-93, pages 40-45. AAAI Press/The
MIT Press.

Nudel, B. (1983). Consistent-labeling problems and their algorithms: Expected complexities and
theory-based heuristics. Artificial Intelligence, 21:135-178.

Schwefel, H.-P. (1995). Evolution and Optimum Seeking. Sixth-Generation Computer Technology
Series. Wiley, New York.

Selman, B. and Kautz, H. (1993). Domain-independent extensions to GSAT: Solving large
structured satisfiability problems. In Bajcsy, R., editor, Proceedings of IJCAI'93, pages
290-295. Morgan Kaufmann.

Starkweather, T., McDaniel, S., Mathias, K., Whitley, D., and Whitley, C. (1991). A comparison
of genetic sequencing operators. In (Belew and Booker, 1991), pages 69-76.

Turner, J. (1988). Almost all k-colorable graphs are easy to color. Journal of Algorithms, 9:63-82.

Wolpert, D. and Macready, W. (1997). No free lunch theorems for optimization. IEEE
Transactions on Evolutionary Computation, 1(1):67-82.