Optimization problems, Approximation Algorithms






  • Optimization problems,
  • Approximation Algorithms,
  • and Probabilistically Checkable Proofs



many hard problems (especially NP-hard) are optimization problems


    • e.g. find shortest TSP tour
    • e.g. find smallest vertex cover
    • e.g. find largest clique
    • may be minimization or maximization problem
    • “opt” = value of optimal solution


often happy with approximately optimal solution


    • warning: lots of heuristics
    • we want approximation algorithm with guaranteed approximation ratio of r
    • meaning: on every input x, output is guaranteed to have value
      • at most r*opt for minimization
      • at least opt/r for maximization


Example approximation algorithm:


    • Recall:
    • Vertex Cover (VC): given a graph G, what is the smallest subset of vertices that touch every edge?
    • NP-complete


Approximation algorithm for VC:


    • pick an edge (x, y), add vertices x and y to VC
    • discard edges incident to x or y; repeat.
  • Claim: approximation ratio is 2.

  • Proof:

    • the edges considered form a matching, and an optimal VC must include at least one endpoint of each, so opt ≥ # edges picked
    • the algorithm adds 2 vertices per picked edge, therefore actual ≤ 2·opt
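The greedy matching algorithm above can be sketched in a few lines (graph given as a plain edge list; names are illustrative):

```python
def vertex_cover_2approx(edges):
    """2-approximation for Vertex Cover: repeatedly pick an uncovered
    edge (x, y), add both endpoints, and skip edges already touched."""
    cover = set()
    for x, y in edges:
        if x not in cover and y not in cover:   # edge not yet covered
            cover.add(x)
            cover.add(y)
    return cover

# Path 1-2-3-4: an optimal cover {2, 3} has size 2; the algorithm
# returns a cover of size at most 2 * opt = 4.
cover = vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])
```

Since the picked edges form a matching, any cover needs one endpoint per picked edge, which is exactly the factor-2 argument from the slide.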


diverse array of ratios achievable


  • some examples:

    • (min) Vertex Cover: 2
    • MAX-3-SAT (find assignment satisfying largest # clauses): 8/7
    • (min) Set Cover: ln n
    • (max) Clique: n/log²n
    • (max) Knapsack: (1 + ε) for any ε > 0


(max) Knapsack: (1 + ε) for any ε > 0

  • called Polynomial Time Approximation Scheme (PTAS)

    • algorithm runs in poly time for every fixed ε>0
    • poor dependence on ε allowed
  • If all NP optimization problems had a PTAS, almost like P = NP (!)
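The knapsack scheme referenced above can be sketched with the classic value-scaling dynamic program (a standard construction, not spelled out on the slide; `knapsack_fptas` and its parameters are illustrative names):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """(1 - eps)-factor sketch for 0/1 knapsack via value scaling.

    Scale values by K = eps * max(values) / n so the exact DP over
    total (scaled) value runs in time polynomial in n and 1/eps.
    """
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    max_s = sum(scaled)
    INF = float("inf")
    dp = [0.0] + [INF] * max_s     # dp[s] = min weight achieving scaled value s
    for v, w in zip(scaled, weights):
        for s in range(max_s, v - 1, -1):
            if dp[s - v] + w <= capacity:
                dp[s] = min(dp[s], dp[s - v] + w)
    best = max(s for s in range(max_s + 1) if dp[s] <= capacity)
    return best * K                # lower bound on achieved value, >= (1 - eps) * opt
```

The slide's (1 + ε) ratio corresponds to outputting at least opt/(1 + ε); the (1 − ε) guarantee here is the same statement up to renaming ε. Note the poor (but polynomial) dependence on 1/ε in the DP table size, as the slide allows.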



A job for complexity: How to explain failure to do better than ratios on previous slide?


    • just like: how to explain failure to find poly-time algorithm for SAT...
    • first guess: probably NP-hard
    • what is needed to show this?
  • “gap-producing” reduction from NP-complete problem L1 to L2





r-gap-producing reduction:


    • f computable in poly time
    • x  L1  opt(f(x))  k
    • x  L1  opt(f(x)) > rk
    • for max. problems use “ k” and “< k/r”
  • Note: target problem is not a language

    • promise problem (yes  no not all strings)
    • “promise”: instances always from (yes  no)


Main purpose:


    • r-approximation algorithm for L2 distinguishes between f(yes) and f(no); can use it to decide L1
    • ⇒ “L2 is NP-hard to approximate to within r”


gap-producing reduction difficult (more later)


  • but gap-preserving reductions easier



Example gap-preserving reduction:


    • reduce MAX-k-SAT with gap ε
    • to MAX-3-SAT with gap ε’
    • “MAX-k-SAT is NP-hard to approx. within ε ⇒ MAX-3-SAT is NP-hard to approx. within ε’ ”
  • MAXSNP (Papadimitriou–Yannakakis) – a class of problems reducible to each other in this way



Missing link: first gap-producing reduction


    • let history be our guide
    • it should have something to do with SAT
  • Definition: MAX-k-SAT with gap ε

    • instance: k-CNF φ
    • YES: some assignment satisfies all clauses
    • NO: no assignment satisfies more than (1 – ε) fraction of clauses


k-SAT NP-hard ⇒ for any language L ∈ NP, a proof system of the form:

    • given x, compute reduction to k-SAT: φx
    • expected proof is a satisfying assignment for φx
    • verifier picks random clause (“local test”) and checks that it is satisfied by the assignment
    • x ∈ L ⇒ Pr[verifier accepts] = 1
    • x ∉ L ⇒ Pr[verifier accepts] < 1


MAX-k-SAT with gap ε NP-hard ⇒ for any language L ∈ NP, a proof system of the form:

    • given x, compute reduction to MAX-k-SAT: φx
    • expected proof is a satisfying assignment for φx
    • verifier picks random clause (“local test”) and checks that it is satisfied by the assignment
    • x ∈ L ⇒ Pr[verifier accepts] = 1
    • x ∉ L ⇒ Pr[verifier accepts] ≤ (1 – ε)
    • can repeat O(1/ε) times for error < ½


can think of reduction showing k-SAT NP-hard as designing a proof system for NP in which:


  • can think of reduction showing MAX-k-SAT with gap ε NP-hard as designing a proof system for NP in which:

    • verifier only performs local tests
    • invalidity of proof* evident all over (“holographic proof”): an ε fraction of local tests notice such invalidity


Probabilistically Checkable Proof (PCP) permits novel way of verifying proof:


    • pick random local test
    • query proof in specified k locations
    • accept iff passes test
  • fancy name for an NP-hardness reduction



PCP[r(n),q(n)]: set of languages L with p.p.t. verifier V that has (r, q)-restricted access to a string “proof”


    • V tosses O(r(n)) coins
    • V accesses proof in O(q(n)) locations
    • (completeness) x ∈ L ⇒ ∃ proof such that
    • Pr[V(x, proof) accepts] = 1
    • (soundness) x ∉ L ⇒ ∀ proof*:
    • Pr[V(x, proof*) accepts] ≤ ½


Two observations:


    • PCP[1, poly n] = NP
      • proof?
    • PCP[log n, 1] ⊆ NP
      • proof?
  • The PCP Theorem (Arora–Safra; Arora–Lund–Motwani–Sudan–Szegedy):

  • PCP[log n, 1] = NP.



Corollary: MAX-k-SAT is NP-hard to approximate to within some constant ε.


    • using PCP[log n, 1] protocol for, say, VC
    • enumerate all 2^O(log n) = poly(n) sets of queries
    • construct a k-CNF φi for verifier’s test on each
      • note: k-CNF since function on only k bits
    • “YES” VC instance ⇒ all clauses satisfiable
    • “NO” VC instance ⇒ every assignment fails to satisfy at least ½ of the φi ⇒ fails to satisfy an ε = (½)·2^(–k) fraction of clauses.


Elements of proof:


    • arithmetization of 3-SAT
      • we will do this
    • low-degree test
      • we will state but not prove this
    • self-correction of low-degree polynomials
      • we will state but not prove this
    • proof composition
      • we will describe the idea


Two major components:


    • NP ⊆ PCP[log n, polylog n] (“outer verifier”)
      • we will prove this from scratch, assuming the low-degree test and self-correction of low-degree polynomials
    • NP ⊆ PCP[n³, 1] (“inner verifier”)
      • we will not prove this


NP ⊆ PCP[log n, polylog n] (“outer verifier”)

    • NP ⊆ PCP[n³, 1] (“inner verifier”)
  • composition of verifiers:

    • reformulate “outer” so that it uses O(log n) random bits to make 1 query to each of 3 provers
    • replies r1, r2, r3 have length polylog n
    • Key: accept/reject decision computable from r1, r2, r3 by small circuit C


NP ⊆ PCP[log n, polylog n] (“outer verifier”)

    • NP ⊆ PCP[n³, 1] (“inner verifier”)
  • composition of verifiers (continued):

    • final proof contains proof that C(r1, r2, r3) = 1 for inner verifier’s use
    • use inner verifier to verify that C(r1,r2,r3) = 1
    • O(log n)+polylog n randomness
    • O(1) queries
    • tricky issue: consistency


NP  PCP[log n, 1] comes from

  • NP  PCP[log n, 1] comes from

    • repeated composition
    • PCP[log n, polylog n] with PCP[log n, polylog n] yields
    • PCP[log n, polyloglog n]
    • PCP[log n, polyloglog n] with PCP[n³, 1] yields
    • PCP[log n, 1]
  • many details omitted…



Theorem: NP ⊆ PCP[n², 1]


  • Proof (first steps):

  • 1. Quadratic Equations is NP-hard

  • 2. PCP for QE:

    • proof = all quadratic functions of a soln. x
    • verification = check that a random linear combination of equations is satisfied by x


quadratic equation over F₂:

  • Σi,j ai,j XiXj + Σi bi Xi + c = 0

  • language QUADRATIC EQUATIONS (QE)

    • = { systems of quadratic equations over F₂ that have a solution (assignment to the X variables) }


Lemma: QE is NP-complete.


  • Proof: clearly in NP; reduce from CIRCUIT SAT

    • circuit C an instance of CIRCUIT SAT
    • QE variables = circuit input variables + gate variables
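For illustration, each gate type yields one quadratic equation over F₂ that holds iff the gate variable carries the gate's output (a standard encoding, assumed here rather than taken from the slide):

```python
# F_2 arithmetic: + is XOR, * is AND; each gate yields "expression = 0".
def and_gate_eq(z, x, y):
    # z = x AND y  <=>  z + x*y = 0 (mod 2)
    return (z + x * y) % 2

def or_gate_eq(z, x, y):
    # z = x OR y   <=>  z + x + y + x*y = 0 (mod 2)
    return (z + x + y + x * y) % 2

def not_gate_eq(z, x):
    # z = NOT x    <=>  z + x + 1 = 0 (mod 2)
    return (z + x + 1) % 2

# sanity check: the equation holds exactly when z equals the gate's output
for x in (0, 1):
    for y in (0, 1):
        assert and_gate_eq(x & y, x, y) == 0
        assert or_gate_eq(x | y, x, y) == 0
    assert not_gate_eq(1 - x, x) == 0
```

Collecting one such equation per gate, plus one asserting the output gate equals 1, gives the QE instance; a solution exists iff the circuit is satisfiable.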


intended proof:

    • F = F₂, the field with 2 elements
    • given x ∈ F^n, a solution to the instance of QE
    • fx: F^n → F₂ gives all linear functions of x:
    • fx(a) = Σi ai xi
    • gx: F^(n×n) → F₂ includes all quadratic fns of x:
    • gx(A) = Σi,j A[i,j] xi xj
    • KEY: can evaluate any quadratic function of x with a single evaluation of fx and gx
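The key point can be made concrete: any quadratic function Σi,j A[i,j]xixj + Σi bixi of x costs one query to gx plus one query to fx (small sketch with illustrative coefficients):

```python
def f_x(x, a):
    # f_x(a) = sum_i a[i] * x[i]  (mod 2): the linear functions of x
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

def g_x(x, A):
    # g_x(A) = sum_{i,j} A[i][j] * x[i] * x[j]  (mod 2): the quadratic parts
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 2

x = [1, 0, 1]                              # a purported QE solution
A = [[1, 0, 1], [0, 0, 0], [0, 1, 0]]      # quadratic coefficients (example)
b = [0, 1, 1]                              # linear coefficients (example)
# one evaluation of g_x plus one of f_x gives the quadratic function's value
value = (g_x(x, A) + f_x(x, b)) % 2
```

In the actual PCP the prover writes down the full truth tables of fx and gx (the Hadamard encodings of x and x⊗x); the verifier only probes them at a few points.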


If prover keeps promise to supply all quadratic fns of x, a solution of QE instance…

  • Verifier’s action:

    • query a random linear combination R of the equations of the QE instance
    • Completeness: obvious
    • Soundness: x fails to satisfy some equation; imagine picking coeff. for this one last
    • Pr[x satisfies R] = 1/2
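The soundness calculation can be verified exhaustively on a toy system (the three equations below are made up for illustration): if an assignment violates at least one equation, exactly half of all F₂-combinations of the equations are violated.

```python
from itertools import product

def residuals(x1, x2):
    # left-hand sides of a toy system over F_2 (each is 0 iff satisfied):
    #   x1*x2 + x1 = 0,   x1 + x2 + 1 = 0,   x2 = 0
    return [(x1 * x2 + x1) % 2, (x1 + x2 + 1) % 2, x2 % 2]

res = residuals(1, 1)   # the assignment (1, 1) violates the last two equations
violated = sum(
    sum(r * e for r, e in zip(coeffs, res)) % 2
    for coeffs in product((0, 1), repeat=len(res))
)
# a random combination catches the bad assignment with probability exactly 1/2
```

This matches the "pick the coefficient of the failed equation last" argument: conditioned on everything else, that last coefficient flips the combination between satisfied and violated.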


To “enforce promise”, verifier needs to perform:


    • linearity test: verify f, g are (close to) linear
    • self-correction: access the linear f’, g’ that are close to f, g
    • [so f’ = Had(u) and g’ = Had(V)]
    • consistency check: verify V = u ⊗ u


Linearity test: given access to h: F^m → F

    • pick random a, b; check if h(a) + h(b) = h(a+b); repeat O(1) times
    • do this for functions f and g supplied by prover
  • Theorem [BLR]: h linear ⇒ prob. success = 1; prob. success ≥ 1 – δ ⇒ ∃ linear h’ s.t.

  • Pr_a [h’(a) = h(a)] ≥ 1 – O(δ)
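A sketch of the test in code, over F₂^m where + is coordinatewise XOR (the trial count is an illustrative choice):

```python
import random

def blr_test(h, m, trials=30):
    """BLR linearity test: accept iff h(a) + h(b) = h(a+b) in every trial."""
    for _ in range(trials):
        a = tuple(random.randint(0, 1) for _ in range(m))
        b = tuple(random.randint(0, 1) for _ in range(m))
        a_plus_b = tuple(x ^ y for x, y in zip(a, b))
        if (h(a) + h(b)) % 2 != h(a_plus_b):
            return False
    return True

linear = lambda v: (v[0] + v[2]) % 2       # always passes
nonlinear = lambda v: (v[0] * v[1]) % 2    # each trial fails with prob. 3/8
```

A truly linear h passes every trial; for the AND example a short computation shows each trial fails with probability 3/8, so enough trials reject it with overwhelming probability.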



Self-correction:

    • given access to h: F^m → F close to a linear h’; i.e.,
    • Pr_a [h’(a) = h(a)] ≥ 1 – O(δ)
    • to access h’(a), pick random b; compute
    • h(b) + h(a+b)
    • with prob. at least 1 – 2·O(δ), h(b) = h’(b) and h(a+b) = h’(a+b); hence we compute h’(a)
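Self-correction in code; the `h` below is a toy table that agrees with a linear h’ everywhere except one corrupted point:

```python
import random

def self_correct(h, a):
    """Recover h'(a) w.h.p. using two queries to h at random-shifted points."""
    b = tuple(random.randint(0, 1) for _ in range(len(a)))
    a_plus_b = tuple(x ^ y for x, y in zip(a, b))
    return (h(b) + h(a_plus_b)) % 2

def h(v):
    # close to the linear h'(v) = v[0] + v[1] (mod 2), corrupted at one point
    if v == (0, 1, 1):
        return 0        # h'((0, 1, 1)) = 1, flipped here
    return (v[0] + v[1]) % 2

# a single call is wrong only if b or a+b hits the corrupted point (prob. 2/8);
# a majority vote over repeated calls recovers h'(a) = 1 at a = (0, 1, 1)
votes = sum(self_correct(h, (0, 1, 1)) for _ in range(201))
```

Note that querying h(a) directly would return the corrupted value with certainty; randomizing the query points is what makes the error probability small.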


Consistency check: given access to linear functions f’ = Had(u) and g’ = Had(V)

    • pick random a, b ∈ F^n; check that
    • f’(a)·f’(b) = g’(ab^T)
    • completeness: if V = u ⊗ u,
    • f’(a)·f’(b) = (Σi ai ui)(Σj bj uj) = Σi,j ai bj V[i,j] = g’(ab^T)


Consistency check: given access to linear functions f’ = Had(u) and g’ = Had(V)

    • soundness: claim that if V ≠ u ⊗ u,
    • Pr[(Σi ai ui)(Σj bj uj) = Σi,j ai bj V[i,j]] ≤ 3/4
    • Pr[(uu^T)b ≠ Vb] ≥ 1/2
    • Pr[a^T(uu^T)b ≠ a^T V b] ≥ ½·½ = ¼
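The check can be sketched end to end, with the honest and cheating tables built directly from the vectors (trial count is an illustrative choice):

```python
import random

def consistency_check(u, V, trials=200):
    """Accept iff f'(a) * f'(b) = g'(a b^T) (mod 2) for random a, b,
    where f' = Had(u) and g' = Had(V)."""
    n = len(u)
    for _ in range(trials):
        a = [random.randint(0, 1) for _ in range(n)]
        b = [random.randint(0, 1) for _ in range(n)]
        lhs = (sum(ai * ui for ai, ui in zip(a, u)) *
               sum(bi * ui for bi, ui in zip(b, u))) % 2      # f'(a) * f'(b)
        rhs = sum(a[i] * V[i][j] * b[j]
                  for i in range(n) for j in range(n)) % 2    # g'(a b^T)
        if lhs != rhs:
            return False
    return True

u = [1, 0, 1]
V_good = [[u[i] * u[j] for j in range(3)] for i in range(3)]   # V = u (x) u
V_bad = [row[:] for row in V_good]
V_bad[0][1] ^= 1                                               # V != u (x) u
```

The honest table always passes; the corrupted one fails each trial with probability at least ¼, matching the slide's bound, so repeated trials reject it with overwhelming probability.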

