AI Methods
Tabu Search
Introduction
Tabu Search, like simulated annealing, is a neighbourhood search. You will recall that this means we generate the set of neighbourhood states of the current state and then decide whether to move to one of them.
Tabu search, like hill climbing, considers the set of all possible neighbourhood states and takes the best one. Unlike hill climbing, it will take the best move in the neighbourhood even if that move is worse than the current state. And, unlike simulated annealing, it is quite likely to move away from a good solution in search of better ones.
Therefore, with tabu search it is normal to remember the best state seen during the complete search and return that as the final solution. In fact, most searches do this anyway, as the benefits far outweigh the disadvantages.
The main principle behind tabu search is that it keeps some memory of the states it has already investigated and does not revisit those states. It is like you searching for a bar in a town you have never been to before. If you recognised that you had already been down a particular street, you would not search that area again.
Not revisiting previously visited states helps in two ways. It avoids the search getting into a loop by continually searching the same area without actually making any progress. In turn, this helps the search explore regions that it might not otherwise explore.
The moves that are not allowed to be revisited are held in a data structure (usually a list), and these moves are called “tabu” (hence the name).
For those of you who are interested, (Glover, 1989) and (Glover, 1990) are two of the earliest tabu search references.
What is a Tabu Move?
It is one thing to say that moves are tabu, but we need to decide exactly what we mean by that.
In its simplest form we can simply say that we are not allowed to revisit a state when it is exactly the same as one we have been to before.
In something like the TSP (Travelling Salesman Problem), we could say that we cannot move to a state that lists the towns in exactly the same order as a state we have seen before.
This method may be suitable for your particular problem, but in some cases it might mean that we spend a lot of our time just checking whether we have seen a certain state before, particularly if a state is large or complex.
We can avoid some of these problems by using “better” data structures and algorithms. For example, we could use a hashing scheme to ascertain whether the next possible state has already been visited. Or, instead of a list, we could implement some kind of binary search tree.
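As a minimal sketch of the hashing idea, assuming a TSP tour is represented as a sequence of town indices: tuples are hashable in Python, so a set gives constant-time (on average) membership tests instead of scanning a list of full states.

```python
# Exact-state tabu checking via hashing. A tour is stored as a tuple
# (hashable), so set membership replaces a linear scan of the history.
visited = set()

def record(tour):
    """Remember that this exact tour has been visited."""
    visited.add(tuple(tour))

def is_tabu(tour):
    """Return True if this exact tour has been seen before."""
    return tuple(tour) in visited

record([0, 1, 2, 3])
print(is_tabu([0, 1, 2, 3]))  # True: exact tour seen before
print(is_tabu([0, 2, 1, 3]))  # False: different ordering of the same towns
```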
Another way to set the tabu criterion is not to allow the search to return to the state it has just come from. This is really a special case of the criterion described above.
This method is fairly simple to implement and can result in a smaller data structure for the tabu list.
This method is sometimes known as REM (Reverse Elimination Method).
A further way to define a tabu move is to define a small part of the state and make that a tabu move. For example, you could explore the TSP search space by having a neighbourhood function that considers only two towns. Once you have considered these towns you could make them tabu so that the search is forced to consider other towns in subsequent moves.
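The attribute-based idea above can be sketched in Python for a TSP swap neighbourhood: after swapping the towns at two positions, the (unordered) pair of positions is made tabu for a number of further iterations. The names and the tenure value here are illustrative assumptions, not part of the original text.

```python
# Attribute tabu sketch: after a two-town swap, the unordered pair of
# positions is banned for `tenure` further iterations, forcing the
# search to consider other towns in subsequent moves.
tenure = 3
tabu_until = {}  # frozenset({i, j}) -> iteration at which the ban expires

def make_tabu(i, j, iteration):
    """Ban the pair (i, j) until iteration + tenure."""
    tabu_until[frozenset((i, j))] = iteration + tenure

def pair_is_tabu(i, j, iteration):
    """True while the ban on (i, j) is still in force."""
    return tabu_until.get(frozenset((i, j)), -1) > iteration

make_tabu(2, 5, iteration=0)
print(pair_is_tabu(2, 5, iteration=1))  # True: still inside the tenure
print(pair_is_tabu(2, 5, iteration=4))  # False: the ban has expired
```

Using a `frozenset` means the pair is tabu regardless of the order in which the two positions are named.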
As with many of the search problems we have been considering, we need to decide exactly what a neighbourhood state is and, in the case of tabu search, what constitutes a tabu move.
Recency
The list of visited moves (sometimes called a history) does not usually grow forever. If it did, we might restrict the search too much.
Therefore, the purpose of a Recency function is to restrict the size of the list in some way (it is called a Recency function as it keeps the most recently visited states in the list – discarding the others).
The easiest (and most usual) implementation of this function is to simply keep the list at a fixed size and use a FIFO (First In, First Out) algorithm to maintain it. But other methods are sometimes implemented, depending on the problem. The list-size parameter might be dynamic, changing as the algorithm runs. One way of doing this is to keep the list small while states are not being repeated very often but, when repeated states keep being generated, to make the list larger so that the search is forced to explore new areas.
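The fixed-size FIFO list can be sketched with Python's `collections.deque`, which evicts the oldest entry automatically when full. The size 7 is an arbitrary illustrative choice.

```python
from collections import deque

# Fixed-size FIFO tabu list: appending to a full deque silently
# discards the oldest entry, which is exactly the recency behaviour
# described above.
history = deque(maxlen=7)

for state in range(10):   # visit ten dummy states, numbered 0..9
    history.append(state)

print(list(history))      # [3, 4, 5, 6, 7, 8, 9]: the oldest three evicted
```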
Aspiration
It is all very well saying that we will reject any move that is tabu. But what if a move is so good that we would be foolish to reject it?
This is where the aspiration level comes into play.
If a tabu move produces a solution better than the aspiration level, we accept the move anyway. An example may help.
Suppose we are trying to solve an instance of the travelling salesman problem (TSP). We might make tabu any move affecting a town that has just taken part in a move. But if the next move affects one of these towns and it leads to the best solution we have seen so far we might decide to accept the move anyway.
In this instance the aspiration function is simply a matter of checking if the next state is better than any move we have seen so far. If it is, and it’s tabu, we still accept it.
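That simple aspiration function can be sketched as a one-line test (assuming, for illustration, a minimisation problem where lower cost is better):

```python
# Aspiration sketch for a minimisation problem: a tabu move is still
# accepted when it beats the best cost seen so far.
best_cost = 100.0

def acceptable(next_cost, is_tabu):
    """Accept non-tabu moves always; accept tabu moves only if they
    improve on the best solution seen so far (the aspiration test)."""
    return (not is_tabu) or next_cost < best_cost

print(acceptable(90.0, is_tabu=True))    # True: tabu, but better than the best
print(acceptable(110.0, is_tabu=True))   # False: tabu and no better
print(acceptable(110.0, is_tabu=False))  # True: not tabu at all
```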
The Algorithm
Note: This algorithm does not appear in (Russell, 1995), but I have written it using their style in order to be consistent in our approach.
Function TABU-SEARCH(Problem) returns a solution state
  Inputs: Problem, a problem
  Local Variables: Current, a state
                   Next, a state
                   BestSolutionSeen, a state
                   H, a history of visited states
  Current = MAKE-NODE(INITIAL-STATE[Problem])
  While not terminated
    Next = a highest-valued successor of Current
    If (not Move-Tabu(H, Next) or Aspiration(Next)) then
      Current = Next
      Update BestSolutionSeen
      H = Recency(H + Current)
    EndIf
  EndWhile
  Return BestSolutionSeen
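The pseudocode can be turned into a runnable Python sketch. This version minimises a cost function, uses a fixed-size FIFO tabu list of visited states, and applies the aspiration test described earlier; the function names, parameters, and the toy problem are illustrative assumptions.

```python
from collections import deque

def tabu_search(initial, neighbours, cost, iterations=100, tenure=5):
    """Sketch of the TABU-SEARCH pseudocode: at each step take the best
    neighbour that is either not tabu or passes the aspiration test,
    while remembering the best state seen over the whole run."""
    current = initial
    best = initial
    history = deque([initial], maxlen=tenure)   # FIFO tabu list (recency)
    for _ in range(iterations):
        candidates = []
        for n in neighbours(current):
            tabu = n in history
            aspires = cost(n) < cost(best)      # aspiration: beats best so far
            if not tabu or aspires:
                candidates.append(n)
        if not candidates:
            break                               # every neighbour is tabu
        current = min(candidates, key=cost)     # best move, even if worse
        history.append(current)
        if cost(current) < cost(best):
            best = current                      # update BestSolutionSeen
    return best

# Toy usage: minimise (x - 3)^2 over the integers, stepping by +/- 1.
result = tabu_search(
    initial=10,
    neighbours=lambda x: [x - 1, x + 1],
    cost=lambda x: (x - 3) ** 2,
)
print(result)  # 3
```

Note how the tabu list stops the search from immediately stepping back to the state it just left, so after reaching the minimum it wanders onwards, but `best` preserves the answer.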
References

Glover, F. 1989. Tabu Search – Part I. ORSA Journal on Computing, Vol. 1, No. 3, pp. 190–206.

Glover, F. 1990. Tabu Search – Part II. ORSA Journal on Computing, Vol. 2, No. 1, pp. 4–32.

Glover, F., Laguna, M. 1998. Tabu Search. Kluwer Academic Publishers.

Rayward-Smith, V.J., Osman, I.H., Reeves, C.R., Smith, G.D. 1996. Modern Heuristic Search Methods. John Wiley & Sons.

Russell, S., Norvig, P. 1995. Artificial Intelligence: A Modern Approach. Prentice Hall.