
Solving Problems by Searching (AIMA, Chapter 3)


Problem Solving Rational agents need to perform sequences of actions in order to achieve goals. Intelligent behavior can be generated by having a lookup table or reactive policy that tells the agent what to do in every circumstance, but:  Such a table or policy is difficult to build
 All contingencies must be anticipated
A more general approach is for the agent to have knowledge of the world and how its actions affect it and be able to simulate execution of actions in an internal model of the world in order to determine a sequence of actions that will accomplish its goals. This is the general task of problem solving and is typically performed by searching through an internally modeled space of world states.
Problem Solving Task Given:  An initial state of the world
 A set of possible actions or operators that can be performed.
 A goal test that can be applied to a single state of the world to determine if it is a goal state.
Find:  A solution stated as a path of states and operators that shows how to transform the initial state into one that satisfies the goal test.
Well-defined problems A problem can be defined formally by five components:  The initial state that the agent starts in
 A description of the possible actions available to the agent > Actions(s)
 A description of what each action does; the transition model > Result(s,a)
 Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. > may be infinite
 The goal test, which determines whether a given state is a goal state.
 A path cost function that assigns a numeric cost to each path.
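The five components above can be sketched as a small Python interface. This is a minimal sketch under assumed names (`Problem`, `actions`, `result`, `goal_test`, `step_cost`, `path_cost` are illustrative, not from the slides); real formulations subclass it per problem.

```python
# Sketch of the five-component problem definition: initial state, actions,
# transition model, goal test, and path cost (names are illustrative).
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial           # the initial state
        self.goal = goal

    def actions(self, state):
        """Actions(s): the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Result(s, a): the transition model."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1                         # default: unit cost per action

def path_cost(problem, states, actions_taken):
    """Sum of step costs along a path of states joined by actions."""
    return sum(problem.step_cost(s, a, s2)
               for s, a, s2 in zip(states, actions_taken, states[1:]))
```

A concrete problem overrides `actions` and `result`; the state space is then implicitly everything reachable from `initial`.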
Example: Romania (Route Finding Problem)
Selecting a state space The real world is absurdly complex, so the state space must be abstracted for problem solving.
 The process of removing detail from a representation is called abstraction.
(Abstract) state = set of real states. (Abstract) action = complex combination of real actions, e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind". (Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem
Measuring Performance Path cost: a function that assigns a cost to a path, typically by summing the cost of the individual actions in the path.  May want to find minimum cost solution.
Search cost: The computational time and space (memory) required to find the solution. Generally there is a tradeoff between path cost and search cost and one must satisfice and find the best solution in the time that is available.
Problem-solving agents
Example: The 8-puzzle states? actions? goal test? path cost?
Example: The 8-puzzle states? locations of tiles actions? move blank left, right, up, down goal test? = goal state (given) path cost? 1 per move
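The 8-puzzle formulation above can be sketched directly. The encoding (a 9-tuple in row-major order with 0 for the blank) is an implementation choice, not from the slides:

```python
# 8-puzzle: state = 9-tuple, row-major, 0 = blank; path cost = 1 per move.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}

def actions(state):
    """Legal blank moves: stay on the 3x3 board, no row wrap-around."""
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append('Left')
    if i % 3 < 2: acts.append('Right')
    if i // 3 > 0: acts.append('Up')
    if i // 3 < 2: acts.append('Down')
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

The goal test is simply `state == GOAL`, matching the "= goal state (given)" line above.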
Route Finding Problem States: A location (e.g., an airport) and the current time. Initial state: user's query Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. Transition model: The state resulting from taking a flight will have the flight's destination as the current location and the flight's arrival time as the current time. Goal test: Are we at the final destination specified by the user? Path cost: monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
More example problems Touring problems: visit every city at least once, starting and ending at Bucharest. Travelling salesperson problem (TSP): each city must be visited exactly once; find the shortest tour. VLSI layout design: positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. Robot navigation. Internet searching. Automatic assembly sequencing. Protein design
Example: robotic assembly states?: real-valued coordinates of robot joint angles and parts of the object to be assembled actions?: continuous motions of robot joints goal test?: complete assembly path cost?: time to execute
Tree search algorithms The possible action sequences starting at the initial state form a search tree Basic idea:  offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)
Example: Romania (Route Finding Problem)
Tree search example
Implementation: states vs. nodes A state is a (representation of) a physical configuration A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Implementation: general tree search Fringe (Frontier): the collection of nodes that have been generated but not yet expanded Each element of the fringe is a leaf node, a node with no successors Search strategy: a function that selects the next node to be expanded from the fringe We assume that the collection of nodes is implemented as a queue
Implementation: general tree search
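The general loop can be sketched as follows; the fringe's queue discipline is exactly the search strategy. The node layout (a `(state, parent, action, g)` tuple) and the `successors` callback are assumed names for illustration:

```python
from collections import deque

# General tree search: expand nodes from the fringe until a goal is selected.
def tree_search(initial, goal_test, successors):
    """successors(state) yields (action, next_state, step_cost) triples."""
    fringe = deque([(initial, None, None, 0)])  # node = (state, parent, action, g)
    while fringe:
        node = fringe.popleft()        # FIFO here; a LIFO pop() would give DFS
        state, _, _, g = node
        if goal_test(state):           # goal test when the node is selected
            return node
        for action, nxt, cost in successors(state):
            fringe.append((nxt, node, action, g + cost))
    return None
```

Following `parent` links from the returned node reconstructs the solution path, as the states-vs-nodes slide describes.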
Search strategies A search strategy is defined by picking the order of node expansion Strategies are evaluated along the following dimensions:  completeness: does it always find a solution if one exists?
 time complexity: how long does it take to find the solution?
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of  b: maximum branching factor of the search tree
 d: depth of the leastcost solution
 m: maximum depth of the state space (may be ∞)
Uninformed search strategies Uninformed (blind, exhaustive, brute-force) search strategies use only the information available in the problem definition and do not guide the search with any additional information about the problem.  Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
Breadth-first search (BFS) Expand shallowest unexpanded node Expands search nodes level by level, all nodes at level d are expanded before expanding nodes at level d+1 Implemented by adding new nodes to the end of the queue (FIFO queue):  GENERAL-SEARCH(problem, ENQUEUE-AT-END)
Since it eventually visits every node to a given depth, it is guaranteed to be complete. Also optimal provided the path cost is a non-decreasing function of the depth of the node (e.g., all operators of equal cost), since nodes are explored in depth order.
Properties of breadth-first search Assume there are an average of b successors to each node, called the branching factor. Complete? Yes (if b is finite) Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)) Space? O(b^(d+1)) (keeps every node in memory) Optimal? Yes (if cost = 1 per step) Space is the bigger problem (more than time)
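A compact BFS sketch over an explicit graph; storing whole paths in the FIFO queue makes the shallowest-first behaviour visible (the toy `graph` is illustrative, not from the slides):

```python
from collections import deque

# Breadth-first search: FIFO fringe, new successors go at the END.
def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])            # queue of paths, shallowest first
    while fringe:
        path = fringe.popleft()
        if path[-1] == goal:
            return path                  # shallowest solution found
        for nxt in graph[path[-1]]:
            fringe.append(path + [nxt])  # enqueue at end (FIFO)
    return None
```

Because all depth-d paths are dequeued before any depth-(d+1) path, the first goal found is the shallowest, matching the completeness and optimality claims above.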
Uniform-cost search Expand the least-cost unexpanded node Like breadth-first search except always expand the node of least cost instead of least depth (i.e., sort the queue by path cost). Equivalent to breadth-first search if step costs are all equal. The goal is not recognized until it is the least-cost node on the queue and is removed for goal testing. Guarantees optimality as long as path cost never decreases as a path grows (non-negative operator costs).
Uniform-cost search Implementation: fringe = queue ordered by path cost Complete? Yes, if step cost ≥ ε Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) where C* is the cost of the optimal solution Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
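Uniform-cost search is the same loop with the fringe replaced by a priority queue ordered by g; note the goal test happens when a node is removed, as stated above. The weighted `graph` is an illustrative toy:

```python
import heapq

# Uniform-cost search: priority queue ordered by path cost g.
def uniform_cost_search(graph, start, goal):
    fringe = [(0, [start])]              # (g, path), heap ordered by g
    while fringe:
        g, path = heapq.heappop(fringe)
        if path[-1] == goal:             # goal test on REMOVAL, not generation
            return g, path
        for nxt, cost in graph[path[-1]]:
            heapq.heappush(fringe, (g + cost, path + [nxt]))
    return None
```

In the test below the direct edge A→C costs 5, but the detour through B costs 2, and UCS correctly returns the cheaper path.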
Depth-first search (DFS) Always expand a node at the deepest level of the tree, i.e., one of the most recently generated nodes. When a dead end is hit, backtrack to the last choice. Implementation: LIFO queue, i.e., put new nodes at the front of the queue
Properties of depth-first search Complete? No: fails in infinite-depth spaces, spaces with loops  Modify to avoid repeated states along path
 complete in finite spaces
Time? O(b^m): terrible if m is much larger than d  but if solutions are dense, may be much faster than breadth-first
Space? O(bm), i.e., linear space! Optimal? No  Not guaranteed optimal, since it can find a deeper solution before shallower ones are explored.
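A DFS sketch including the modification mentioned above: avoiding repeated states along the current path, which makes it complete in finite spaces (toy `graph` is illustrative):

```python
# Depth-first search: LIFO fringe; skip states already on the current path.
def depth_first_search(graph, start, goal):
    fringe = [[start]]                   # Python list used as a LIFO stack
    while fringe:
        path = fringe.pop()              # most recently generated path first
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in path:          # avoid cycles along this path
                fringe.append(path + [nxt])
    return None
```

Only the current stack of paths is kept, which is where the linear O(bm) space bound comes from.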
Depth-limited search (DLS) = depth-first search with depth limit l, i.e., nodes at depth l have no successors Recursive implementation:
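A recursive sketch of depth-limited search; the `'cutoff'` return value distinguishes "no solution within the limit" from "no solution at all" (a standard convention, assumed here):

```python
# Depth-limited search: nodes at depth `limit` are treated as leaves.
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                  # depth bound reached, not a failure
    cutoff = False
    for nxt in graph[node]:
        res = depth_limited_search(graph, nxt, goal, limit - 1)
        if res == 'cutoff':
            cutoff = True
        elif res is not None:
            return [node] + res
    return 'cutoff' if cutoff else None
```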
Iterative deepening search
Iterative deepening search, l = 0
Iterative deepening search, l = 1
Iterative deepening search, l = 2
Iterative deepening search, l = 3
Iterative deepening search Number of nodes generated in a depth-limited search to depth d with branching factor b: N_DLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d Number of nodes generated in an iterative deepening search to depth d with branching factor b: N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + ... + 3b^(d-2) + 2b^(d-1) + 1·b^d For b = 10, d = 5:  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
 N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 - 111,111)/111,111 = 11%
Properties of iterative deepening search Complete? Yes Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + ... + b^d = O(b^d) Space? O(bd) Optimal? Yes, if step cost = 1
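Iterative deepening simply reruns a depth-limited search with l = 0, 1, 2, ...; the re-expansion of shallow nodes is the small overhead computed above. A self-contained sketch (toy graph, simplified helper without the cutoff convention):

```python
# Simplified depth-limited helper: returns a path or None.
def dls(graph, node, goal, limit, path=()):
    path = path + (node,)
    if node == goal:
        return list(path)
    if limit == 0:
        return None
    for nxt in graph[node]:
        res = dls(graph, nxt, goal, limit - 1, path)
        if res:
            return res
    return None

# Iterative deepening: grow the depth limit until a solution appears.
def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        res = dls(graph, start, goal, limit)
        if res:
            return res
    return None
```

Because each pass is depth-first, space stays linear, yet the first solution found is at the shallowest depth, combining DFS's space with BFS's optimality (for unit step costs).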
Summary of algorithms
Repeated states Failure to detect repeated states can turn a linear problem into an exponential one!
Repeated states Three methods for reducing repeated work, in order of effectiveness and computational overhead:  Do not follow self-loops (remove successors back to the same state).
 Do not create paths with cycles (remove successors already on the path back to the root). O(d) overhead.
 Do not generate any state that was already generated. Requires storing all generated states (O(b^d) space) and searching them (usually using a hash table for efficiency).
Informed (Heuristic) Search
Heuristic Search Heuristic or informed search exploits additional knowledge about the problem to direct search toward more promising paths. A heuristic function, h(n), provides an estimate of the cost of the path from a given node to the closest goal state, and must be zero if the node represents a goal state.  Example: straight-line distance from the current location to the goal location in a road navigation problem.
Many search problems are NP-complete, so in the worst case search still has exponential time complexity; however, a good heuristic can:  Find a solution for an average problem efficiently.
 Find a reasonably good but not optimal solution efficiently.
Best-first search Idea: use an evaluation function for each node, an estimate of "desirability"
 Expand the most desirable unexpanded node
Implementation: order the nodes in decreasing order of desirability Special cases:  greedy best-first search
 A* search
Romania with step costs in km
Greedy best-first search Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal e.g., h_SLD(n) = straight-line distance from n to Bucharest Greedy best-first search expands the node that appears to be closest to the goal
Greedy bestfirst search example
Does not find the shortest path to the goal (through Rimnicu), since it is focused only on the cost remaining rather than the total cost.
Properties of greedy best-first search Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt Time? O(b^m), but a good heuristic can give dramatic improvement Space? O(b^m)  keeps all nodes in memory (since it must maintain a queue of all unexpanded states) Optimal? No However, a good heuristic will avoid this worst-case behavior for most problems.
A* search Idea: avoid expanding paths that are already expensive Evaluation function f(n) = g(n) + h(n) g(n) = cost so far to reach n h(n) = estimated cost from n to goal f(n) = estimated total cost of path through n to goal
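A* is uniform-cost search with the priority changed from g(n) to f(n) = g(n) + h(n). A sketch on a toy weighted graph with a toy heuristic table (both illustrative; h must be admissible for the optimality result below):

```python
import heapq

# A* search: priority queue ordered by f(n) = g(n) + h(n).
def astar_search(graph, h, start, goal):
    fringe = [(h[start], 0, [start])]    # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        if path[-1] == goal:             # goal test on removal, as in UCS
            return g, path
        for nxt, cost in graph[path[-1]]:
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[nxt], g2, path + [nxt]))
    return None
```

In the test, h steers the search toward C immediately, so the expensive branch through B is never fully explored, yet the returned cost is still optimal.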
A* search example
Admissible heuristics h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic Example: h_SLD(n) (never overestimates the actual road distance) Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Optimality of A* (proof) Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
 f(G2) = g(G2) since h(G2) = 0
 g(G2) > g(G) since G2 is suboptimal
 f(G) = g(G) since h(G) = 0
 f(G2) > f(G) from above
 h(n) ≤ h*(n) since h is admissible
 g(n) + h(n) ≤ g(n) + h*(n), so f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion
Consistent heuristics A heuristic is consistent if, for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n') If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n) i.e., f(n) is non-decreasing along any path. Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
Properties of A* Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))  A* is complete as long as
 the branching factor is always finite
 every operator adds cost at least ε > 0
Time? Exponential Space? Keeps all nodes in memory Optimal? Yes Time and space complexity are still O(b^m) in the worst case, since A* must maintain and sort a complete queue of unexplored options. However, with a good heuristic it can find optimal solutions for many problems in reasonable time.
Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile) h1(S) = ? h2(S) = ?
Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile) h1(S) = ? 8 h2(S) = ? 3+1+2+2+2+3+3+2 = 18
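Both heuristics are a few lines each. The sample state S is assumed here to be the standard AIMA start state (7 2 4 / 5 0 6 / 8 3 1, with goal 0..8 in order), which reproduces h1 = 8 and h2 = 18 as above; the slide's own figure is not shown in this text.

```python
# 8-puzzle heuristics; states are 9-tuples, row-major, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # goal square of tile t is index t

def h1(state):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL.index(t) and t != state[t] or (t != 0 and i != t))

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(state) if t != 0)
```

With this goal layout, tile t belongs at index t, so h1 reduces to counting non-blank tiles with `i != t`, and h2 compares row/column of the current index i with those of index t.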
Dominance If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1
h2 is better for search: since A* expands all nodes whose f value is less than that of an optimal solution, it is always better to use a heuristic with a higher value, as long as it does not overestimate. Typical search costs (average number of nodes expanded):
 d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
 d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
Experimental Results on 8-puzzle problems A heuristic should also be easy to compute; otherwise the overhead of computing the heuristic could outweigh the time saved by reducing search (e.g., using full breadth-first search to estimate distance wouldn't help).
Relaxed problems A problem with fewer restrictions on the actions is called a relaxed problem The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution
Inventing Heuristics Many good heuristics can be invented by considering relaxed versions of the problem (abstractions). For the 8-puzzle:  A tile can move from square A to B if A is adjacent to B and B is blank
 (a) A tile can move from square A to B if A is adjacent to B.
 (b) A tile can move from square A to B if B is blank.
 (c) A tile can move from square A to B.
If there are a number of features that indicate a promising or unpromising state, a weighted sum of these features can be useful. Learning methods can be used to set weights.
Local search algorithms In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution State space = set of "complete" configurations Find a configuration satisfying constraints, e.g., n-queens In such cases, we can use local search algorithms: keep a single "current" state and try to improve it
Example: n-queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal
Extra slides
Example: vacuum world Single-state, start in #5. Solution?
Example: vacuum world Single-state, start in #5. Solution? [Right, Suck] Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution?
Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] Contingency  Nondeterministic: Suck may dirty a clean carpet
 Partially observable: location, dirt at current location.
 Percept: [L, Clean], i.e., start in #5 or #7. Solution?
 Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
Vacuum world state space graph states? actions? goal test? path cost?
Vacuum world state space graph states? integer dirt and robot location actions? Left, Right, Suck goal test? no dirt at all locations path cost? 1 per action
Breadth-first search Expand shallowest unexpanded node Implementation:  fringe is a FIFO queue, i.e., new successors go at end
Depth-first search Expand deepest unexpanded node Implementation:  fringe = LIFO queue, i.e., put successors at front
Graph search
Hill-climbing search "Like climbing Everest in thick fog with amnesia"
Hill-climbing search Problem: depending on initial state, can get stuck in local maxima
Hill-climbing search: 8-queens problem h = number of pairs of queens that are attacking each other, either directly or indirectly h = 17 for the state shown above
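A steepest-descent hill-climbing sketch for n-queens, using the h above (attacking pairs). The complete-state encoding, one queen per column with the tuple giving each queen's row, is a standard choice assumed here, not stated on the slide:

```python
# h(state) = number of attacking queen pairs (same row or same diagonal).
def attacking_pairs(state):
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

# Steepest-descent hill climbing: move one queen within its column to the
# neighbour with lowest h; stop at a local minimum (possibly not h = 0).
def hill_climb(state):
    while True:
        h = attacking_pairs(state)
        neighbors = [state[:c] + (r,) + state[c + 1:]
                     for c in range(len(state)) for r in range(len(state))
                     if r != state[c]]
        best = min(neighbors, key=attacking_pairs)
        if attacking_pairs(best) >= h:   # no downhill move left
            return state
        state = best
```

As the previous slide warns, the result depends on the initial state: the loop stops at the first local minimum it reaches, which may still have h > 0.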
Hill-climbing search: 8-queens problem
Simulated annealing search Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency
Properties of simulated annealing search One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 Widely used in VLSI layout, airline scheduling, etc.
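A minimal simulated-annealing sketch on a toy 1-D minimisation problem. The exponential cooling schedule, step size, and the `bumpy` objective are all illustrative assumptions; the key idea from the slide is accepting "bad" moves with probability e^(-Δ/T) and letting T decay:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, decay=0.999):
    """Minimise f from x0; worse moves accepted with probability e^(-delta/T)."""
    random.seed(42)                      # fixed seed so the sketch is repeatable
    x, t = x0, t0
    for _ in range(steps):
        x2 = x + random.uniform(-1, 1)   # random neighbour
        delta = f(x2) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = x2                       # always downhill; sometimes uphill
        t *= decay                       # cool the temperature
    return x

def bumpy(x):
    """Toy objective: global minimum at x = 0, local minima from the cosine."""
    return x * x + 3 * (1 - math.cos(3 * x))
```

Early on (high T) the walk crosses the cosine barriers; late on (low T) it behaves like hill climbing and settles into a minimum.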
Local beam search Keep track of k states rather than just one Start with k randomly generated states At each iteration, all the successors of all k states are generated If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Genetic algorithms A successor state is generated by combining two parent states Start with k randomly generated states (population) A state is represented as a string over a finite alphabet (often a string of 0s and 1s) Evaluation function (fitness function). Higher values for better states. Produce the next generation of states by selection, crossover, and mutation
Genetic algorithms Fitness function: number of nonattacking pairs of queens (min = 0, max = 8 × 7/2 = 28) 24/(24+23+20+11) = 31% 23/(24+23+20+11) = 29% etc
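The percentages above are fitness-proportionate (roulette-wheel) selection probabilities: each state's chance of being picked as a parent is its fitness divided by the population total. A sketch with the slide's fitness values (24, 23, 20, 11):

```python
import random

def selection_probabilities(fitness):
    """Probability of selecting each individual, proportional to fitness."""
    total = sum(fitness)
    return [f / total for f in fitness]

def roulette_pick(population, fitness, rng):
    """Pick one parent; fitter states are proportionally more likely."""
    r = rng.random() * sum(fitness)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return individual
    return population[-1]                # guard against float rounding
```

So 24/(24+23+20+11) = 24/78 ≈ 31% and 23/78 ≈ 29%, matching the slide.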
Genetic algorithms
Problem types Deterministic, fully observable: single-state problem  Agent knows exactly which state it will be in; solution is a sequence
Non-observable: sensorless problem (conformant problem)  Agent may have no idea where it is; solution is a sequence
Nondeterministic and/or partially observable: contingency problem  percepts provide new information about the current state
 often interleaves search and execution
Unknown state space: exploration problem
Optimality of A* A* expands nodes in order of increasing f value Gradually adds "f-contours" of nodes Contour i contains all nodes with f = f_i, where f_i < f_(i+1)