
Optimization problems, Approximation Algorithms


Approximation Algorithms, and Probabilistically Checkable Proofs
many hard problems (especially NP-hard) are optimization problems
 e.g. find shortest TSP tour
 e.g. find smallest vertex cover
 e.g. find largest clique
 may be minimization or maximization problem
 “opt” = value of optimal solution
often happy with approximately optimal solution
 warning: lots of heuristics
 we want approximation algorithm with guaranteed approximation ratio of r
 meaning: on every input x, output is guaranteed to have value
 at most r*opt for minimization
 at least opt/r for maximization
Example approximation algorithm:
 Recall:
 Vertex Cover (VC): given a graph G, what is the smallest subset of vertices that touches every edge?
 NP-complete
Approximation algorithm for VC:
 pick an edge (x, y), add vertices x and y to VC
 discard edges incident to x or y; repeat.
Claim: approximation ratio is 2. Proof:
 an optimal VC must include at least one endpoint of each edge considered
 the algorithm adds 2 vertices per edge considered; therefore actual ≤ 2·opt
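The algorithm and the claim above can be sketched in a few lines of Python; the function name and the edge representation are illustrative choices, not from the slides.

```python
# Minimal sketch of the 2-approximation: repeatedly pick an uncovered edge,
# add both endpoints, discard all edges incident to either endpoint.

def vc_2approx(edges):
    cover = set()
    remaining = list(edges)
    while remaining:
        x, y = remaining[0]
        cover.add(x)
        cover.add(y)
        # discard every edge incident to x or y
        remaining = [e for e in remaining if x not in e and y not in e]
    return cover

# usage: on a 4-cycle the optimum is 2; the algorithm returns at most 4 vertices
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vc_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```

The picked edges form a matching, so any optimal cover needs at least one vertex per picked edge, which gives the factor of 2.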
diverse array of ratios achievable; some examples:
 (min) Vertex Cover: 2
 MAX3SAT (find assignment satisfying largest # clauses): 8/7
 (min) Set Cover: ln n
 (max) Clique: n/log² n
 (max) Knapsack: (1 + ε) for any ε > 0
called Polynomial Time Approximation Scheme (PTAS)
 algorithm runs in poly time for every fixed ε > 0
 poor dependence on ε allowed
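The Knapsack entry refers to the classic profit-scaling scheme; here is a minimal sketch of that standard construction (in fact an FPTAS) with an illustrative instance. The function name and instance are assumptions for the demo.

```python
# FPTAS sketch for 0/1 Knapsack: scale profits down, then DP over scaled profit.

def knapsack_fptas(values, weights, capacity, eps):
    n = len(values)
    K = eps * max(values) / n          # scaling factor: loses <= eps * opt in total
    scaled = [int(v / K) for v in values]
    INF = float("inf")
    # dp[p] = minimum weight needed to reach scaled profit exactly p
    dp = [0] + [INF] * sum(scaled)
    for v, w in zip(scaled, weights):
        for p in range(len(dp) - 1, v - 1, -1):
            if dp[p - v] + w <= capacity:
                dp[p] = min(dp[p], dp[p - v] + w)
    best = max(p for p in range(len(dp)) if dp[p] < INF)
    return best * K                    # value achievable by some feasible set, >= (1 - eps) * opt

# usage: optimum here is 220 (take the 100 and 120 items)
result = knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.5)
assert result >= (1 - 0.5) * 220
```

The DP table has poly(n, 1/ε) entries, which is why the running time is polynomial for every fixed ε.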
If all NP optimization problems had a PTAS, almost like P = NP (!)
A job for complexity: how to explain failure to do better than the ratios on the previous slide?
 just like: how to explain failure to find a poly-time algorithm for SAT...
 first guess: probably NP-hard
 what is needed to show this?
“gap-producing” reduction from NP-complete problem L1 to L2
r-gap-producing reduction:
 f computable in poly time
 x ∈ L1 ⇒ opt(f(x)) ≤ k
 x ∉ L1 ⇒ opt(f(x)) > rk
 for max. problems use “≥ k” and “< k/r”
Note: target problem is not a language
 promise problem (yes ∪ no ≠ all strings)
 “promise”: instances always come from (yes ∪ no)
Main purpose:
 r-approximation algorithm for L2 distinguishes between f(yes) and f(no); can use it to decide L1
 “NPhard to approximate to within r”
gap-producing reductions are difficult (more later), but gap-preserving reductions are easier
Example gap-preserving reduction:
 reduce MAXkSAT with gap ε
 to MAX3SAT with gap ε’
 “MAXkSAT is NP-hard to approx. within ε ⇒ MAX3SAT is NP-hard to approx. within ε’ ”
MAXSNP (PY) – a class of problems reducible to each other in this way
Missing link: first gap-producing reduction
 history’s guide
 it should have something to do with SAT
Definition: MAXkSAT with gap ε
 instance: kCNF φ
 YES: some assignment satisfies all clauses
 NO: no assignment satisfies more than (1 – ε) fraction of clauses
kSAT NP-hard ⇒ for any language L ∈ NP, a proof system of this form:
 given x, compute reduction to kSAT: φx
 expected proof is satisfying assignment for φx
 verifier picks random clause (“local test”) and checks that it is satisfied by the assignment
 x ∈ L ⇒ Pr[verifier accepts] = 1
 x ∉ L ⇒ Pr[verifier accepts] < 1
MAXkSAT with gap ε NP-hard ⇒ for any language L ∈ NP, a proof system of this form:
 given x, compute reduction to MAXkSAT: φx
 expected proof is satisfying assignment for φx
 verifier picks random clause (“local test”) and checks that it is satisfied by the assignment
 x ∈ L ⇒ Pr[verifier accepts] = 1
 x ∉ L ⇒ Pr[verifier accepts] ≤ (1 – ε)
 can repeat O(1/ε) times for error < ½
can think of the reduction showing MAXkSAT with gap ε NP-hard as designing a proof system for NP in which:
 verifier only performs local tests
 invalidity of proof* is evident all over: a “holographic proof”, and an ε fraction of tests notice such invalidity
Probabilistically Checkable Proof (PCP) permits a novel way of verifying a proof:
 pick random local test
 query proof in specified k locations
 accept iff it passes the test
fancy name for an NP-hardness reduction
PCP[r(n), q(n)]: set of languages L with p.p.t. verifier V that has (r, q)-restricted access to a string “proof”
 V tosses O(r(n)) coins
 V accesses proof in O(q(n)) locations
 (completeness) x ∈ L ⇒ ∃ proof such that
 Pr[V(x, proof) accepts] = 1
 (soundness) x ∉ L ⇒ ∀ proof*
 Pr[V(x, proof*) accepts] ≤ ½
Two observations:
 PCP[1, poly n] = NP
 PCP[log n, 1] ⊆ NP
The PCP Theorem (AS, ALMSS): PCP[log n, 1] = NP.
Corollary: MAXkSAT is NP-hard to approximate to within some constant ε.
 using the PCP[log n, 1] protocol for, say, VC
 enumerate all 2^O(log n) = poly(n) sets of queries
 construct a kCNF φi for the verifier’s test on each
 note: kCNF since it is a function of only k bits
 “YES” VC instance ⇒ all clauses satisfiable
 “NO” VC instance ⇒ every assignment fails to satisfy at least ½ of the φi ⇒ fails to satisfy an ε = (½)·2^–k fraction of clauses
Elements of proof:
 arithmetization of 3SAT
 low-degree test
 we will state but not prove this
 self-correction of low-degree polynomials
 we will state but not prove this
 proof composition
 we will describe the idea
Two major components:
 NP ⊆ PCP[log n, polylog n] (“outer verifier”)
 we will prove this from scratch, assuming the low-degree test and self-correction of low-degree polynomials
 NP ⊆ PCP[n³, 1] (“inner verifier”)
composition of verifiers:
 reformulate “outer” so that it uses O(log n) random bits to make 1 query to each of 3 provers
 replies r1, r2, r3 have length polylog n
 Key: accept/reject decision computable from r1, r2, r3 by small circuit C
composition of verifiers (continued):
 final proof contains a proof that C(r1, r2, r3) = 1 for the inner verifier’s use
 use inner verifier to verify that C(r1,r2,r3) = 1
 O(log n)+polylog n randomness
 O(1) queries
 tricky issue: consistency
NP ⊆ PCP[log n, 1] comes from
 repeated composition
 PCP[log n, polylog n] composed with PCP[log n, polylog n] yields PCP[log n, polyloglog n]
 PCP[log n, polyloglog n] composed with PCP[n³, 1] yields PCP[log n, 1]
many details omitted…
Theorem: NP ⊆ PCP[n², 1]
Proof (first steps):
1. Quadratic Equations is NP-hard
2. PCP for QE:
 proof = all quadratic functions of a solution x
 verification = check that a random linear combination of the equations is satisfied by x
quadratic equation over F2: Σi,j ai,j XiXj + Σi bi Xi + c = 0
language QUADRATIC EQUATIONS (QE) = { systems of quadratic equations over F2 that have a solution (assignment to the X variables) }
Lemma: QE is NP-complete.
Proof: clearly in NP; reduce from CIRCUIT SAT
 circuit C is an instance of CIRCUIT SAT
 QE variables = input variables + gate variables
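The reduction encodes each gate as one quadratic equation over F2. A minimal sketch follows; the representation of an equation as a (quadratic terms, linear terms, constant) triple is an encoding assumed for illustration.

```python
# Per-gate quadratic equations over F_2:
#   AND:  z = x*y    ->  z + x*y         = 0
#   OR:   z = x|y    ->  z + x + y + x*y = 0
#   NOT:  z = 1 + x  ->  z + x + 1       = 0

def gate_equation(kind, z, x, y=None):
    if kind == "AND":
        return ([(x, y)], [z], 0)
    if kind == "OR":
        return ([(x, y)], [z, x, y], 0)
    if kind == "NOT":
        return ([], [z, x], 1)
    raise ValueError(kind)

def satisfied(eq, assign):
    quad, lin, c = eq
    total = c + sum(assign[i] * assign[j] for i, j in quad) + sum(assign[i] for i in lin)
    return total % 2 == 0

# sanity check: the AND equation holds exactly when z = x AND y
for xv in (0, 1):
    for yv in (0, 1):
        for zv in (0, 1):
            ok = satisfied(gate_equation("AND", "z", "x", "y"), {"x": xv, "y": yv, "z": zv})
            assert ok == (zv == (xv & yv))
```

One more equation forces the output gate variable to 1, so the system is solvable iff the circuit is satisfiable.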
intended proof:
 F the field with 2 elements
 given x ∈ F^n, a solution to the instance of QE
 fx: F^n → F, all linear functions of x
 fx(a) = Σi ai xi
 gx: F^(n×n) → F includes all quadratic functions of x
 gx(A) = Σi,j A[i,j] xi xj
 KEY: can evaluate any quadratic function of x with a single evaluation of fx and gx
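The KEY step can be shown directly in code; the coefficient encoding (a matrix A for the quadratic terms, a vector b for the linear terms, a constant c) is an assumption for the demo.

```python
# Evaluate any quadratic function  sum A[i,j] X_i X_j + sum b_i X_i + c
# at x with ONE query to g_x and ONE query to f_x.
n = 3
x = [1, 0, 1]                                # a candidate solution over F_2

def f_x(a):                                  # table of all linear functions of x
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

def g_x(A):                                  # table of all quadratic functions of x
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 2

def eval_quadratic(A, b, c):
    return (g_x(A) + f_x(b) + c) % 2

A = [[1, 0, 0], [0, 0, 0], [0, 0, 1]]        # X1*X1 + X3*X3
b = [0, 1, 0]                                # + X2
assert eval_quadratic(A, b, 1) == 1          # 1 + 1 + 0 + 1 = 1 over F_2
```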
If the prover keeps the promise to supply all quadratic functions of x, a solution of the QE instance…
Verifier’s action:
 query a random linear combination R of the equations of the QE instance
 Completeness: obvious
 Soundness: x fails to satisfy some equation; imagine picking coeff. for this one last
 Pr[x satisfies R] = 1/2
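The 1/2 soundness bound can be checked by brute force; representing each equation only by its residual at x (0 = satisfied, 1 = violated) is a simplification for this demo.

```python
# If x violates at least one equation, a uniformly random F_2-linear
# combination of the equations is violated with probability exactly 1/2.
from itertools import product

residuals = [0, 1, 0, 1]      # x violates the 2nd and 4th equations
violated = sum(
    sum(r * c for r, c in zip(residuals, coeffs)) % 2
    for coeffs in product([0, 1], repeat=len(residuals))
)
total = 2 ** len(residuals)
# fixing all other coefficients, the coefficient of a violated equation
# flips the outcome, so exactly half of the combinations are violated
assert violated / total == 0.5
```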
To “enforce the promise”, the verifier needs to perform:
 linearity test: verify f, g are (close to) linear
 self-correction: access the linear f’, g’ that are close to f, g
 [so f’ = Had(u) and g’ = Had(V)]
 consistency check: verify V = u ⊗ u
Linearity test: given access to h: F^m → F
 pick random a, b; check if h(a) + h(b) = h(a+b); repeat O(1) times
 do this for the functions f and g supplied by the prover
Theorem [BLR]: h linear ⇒ prob. success = 1; prob. success ≥ 1 – δ ⇒ ∃ linear h’ s.t. Pra[h’(a) = h(a)] ≥ 1 – O(δ)
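A toy version of the test; the trial count and function names are illustrative choices.

```python
# Toy BLR linearity test over F_2^m: sample a, b and check h(a) + h(b) = h(a+b).
import random

random.seed(0)

def blr_test(h, m, trials=50):
    for _ in range(trials):
        a = [random.randint(0, 1) for _ in range(m)]
        b = [random.randint(0, 1) for _ in range(m)]
        ab = [(ai + bi) % 2 for ai, bi in zip(a, b)]
        if (h(a) + h(b)) % 2 != h(ab):
            return False        # caught a violation of linearity
    return True

h_lin = lambda v: (v[0] + v[2]) % 2       # a genuinely linear function
h_bad = lambda v: (v[0] * v[1]) % 2       # AND is far from linear

assert blr_test(h_lin, m=3)
# each trial catches h_bad with probability 3/8, so 50 trials reject it w.h.p.
```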
Self-correction:
 given access to h: F^m → F close to linear h’; i.e.,
 Pra[h’(a) = h(a)] ≥ 1 – O(δ)
 to access h’(a), pick random b; compute h(b) + h(a+b)
 with prob. at least 1 – 2·O(δ), h(b) = h’(b) and h(a+b) = h’(a+b); hence we compute h’(a)
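A toy simulation of self-correction; the single corrupted point and the majority vote are illustrative choices for the demo.

```python
# Self-correction over F_2^4: h agrees with the linear h' everywhere
# except one corrupted point, yet h'(a) is recoverable even at that point.
import random
from collections import Counter

random.seed(0)
m = 4
u = [1, 0, 1, 1]
h_prime = lambda v: sum(ui * vi for ui, vi in zip(u, v)) % 2

corrupted = {(1, 1, 1, 1)}                # h is wrong only here
def h(v):
    return (h_prime(v) + 1) % 2 if tuple(v) in corrupted else h_prime(v)

def self_correct(a):
    # random b: h(b) + h(a+b) equals h'(a) unless a query hits the corruption
    b = [random.randint(0, 1) for _ in range(m)]
    ab = [(ai + bi) % 2 for ai, bi in zip(a, b)]
    return (h(b) + h(ab)) % 2

a = [1, 1, 1, 1]
assert h(a) != h_prime(a)                 # querying a directly gives the wrong bit
votes = Counter(self_correct(a) for _ in range(101))
majority = votes.most_common(1)[0][0]     # majority vote recovers h'(a)
```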
Consistency check: given access to linear functions f’ = Had(u) and g’ = Had(V)
 pick random a, b ∈ F^n; check that f’(a)f’(b) = g’(ab^T)
 completeness: if V = u ⊗ u
 f’(a)f’(b) = (Σi ai ui)(Σj bj uj) = Σi,j ai bj V[i,j] = g’(ab^T)
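The completeness identity can be verified exhaustively for a small n; the dimension and the vector u below are arbitrary choices.

```python
# Exhaustive check for n = 2: when V = u u^T, f'(a) f'(b) = g'(a b^T) for all a, b.
from itertools import product

u = [1, 0]
V = [[ui * uj for uj in u] for ui in u]                  # V = u (x) u

f = lambda a: sum(ai * ui for ai, ui in zip(a, u)) % 2   # f' = Had(u)
g = lambda A: sum(A[i][j] * V[i][j] for i in range(2) for j in range(2)) % 2  # g' = Had(V)

for a, b in product(product([0, 1], repeat=2), repeat=2):
    abT = [[ai * bj for bj in b] for ai in a]            # rank-1 matrix a b^T
    assert (f(a) * f(b)) % 2 == g(abT)
```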
Consistency check (continued):
 soundness: claim that if V ≠ u ⊗ u
 Pr[(Σi ai ui)(Σj bj uj) = Σi,j ai bj V[i,j]] ≤ 3/4
 Pr[(uu^T)b ≠ Vb] ≥ 1/2
 Pr[a^T(uu^T)b ≠ a^T V b] ≥ ½ · ½ = ¼