Monte Carlo Estimators and Stochastic Differentiation






Introduction

  • Introduction

  • Monte Carlo estimators

  • Stochastic Differentiation

  • ε-feasible gradient approach for two-stage SLP

  • Interior-point method for two-stage SLP

  • Testing optimality

  • Convergence analysis

  • Counterexample















We examine several estimators of the stochastic gradient:

    • Analytical approach (AA);
    • Finite difference approach (FD);
    • Simulated perturbation stochastic approach (SPSA);
    • Likelihood ratio approach (LR).
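As a minimal illustration, three of these estimators (AA, FD, SPSA) can be compared on a toy quadratic objective F(x) = E‖x − ξ‖² with ξ ~ N(0, I). This example is a hypothetical stand-in, not the problem from the text; its true gradient is 2x, so each estimator can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, xi):
    # toy stochastic objective: F(x) = E||x - xi||^2, xi ~ N(0, I),
    # so the true gradient is 2x (illustrative example, not from the text)
    return np.sum((x - xi) ** 2)

def grad_aa(x, xis):
    # analytical approach (AA): average the pathwise gradient 2*(x - xi)
    return np.mean(2 * (x - xis), axis=0)

def grad_fd(x, xis, h=1e-4):
    # finite difference approach (FD): central differences of the sample
    # mean, using common random numbers to reduce variance
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        fp = np.mean([f(x + e, xi) for xi in xis])
        fm = np.mean([f(x - e, xi) for xi in xis])
        g[i] = (fp - fm) / (2 * h)
    return g

def grad_spsa(x, xis, h=1e-2):
    # SPSA: one random simultaneous (Rademacher) perturbation per trial,
    # perturbing all coordinates at once
    g = np.zeros_like(x)
    for xi in xis:
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        g += (f(x + h * delta, xi) - f(x - h * delta, xi)) / (2 * h) * delta
    return g / len(xis)

x = np.array([1.0, -0.5])
xis = rng.normal(size=(20000, 2))
```

All three estimators are unbiased here; SPSA trades extra variance for needing only two function evaluations per trial regardless of dimension. The LR estimator is omitted since it requires a density of the noise with respect to the decision variable.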


The gradient is expressed as an expectation over the second-stage problem, where the term under the expectation is given by a set of solutions of the dual problem.
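In the common two-stage SLP notation (the symbols below are standard-form assumptions, not necessarily the authors' exact notation), the objective and its gradient read:

```latex
% Two-stage SLP in a standard form (notation assumed for illustration):
%   F(x) = c^{\top} x + \mathbf{E}_{\xi}\, Q(x,\xi),
%   Q(x,\xi) = \min_{y} \{\, q^{\top} y : W y = h(\xi) - T(\xi)\, x,\ y \ge 0 \,\}.
\nabla F(x) = c - \mathbf{E}_{\xi}\, T(\xi)^{\top} u^{*}(x,\xi),
\qquad
u^{*}(x,\xi) \in \arg\max_{u} \{\, u^{\top} (h(\xi) - T(\xi)\, x) : W^{\top} u \le q \,\}.
```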





Let us define the set of feasible directions as follows:



Denote the projection of the vector g onto the set U.

  • Since the objective function is differentiable, the solution is optimal if the projection of the gradient onto the set of feasible directions is zero.
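Writing Π_U for the projection onto the set U of feasible directions (the projection symbol is an assumption of ours), the optimality test takes the form:

```latex
% x^* is optimal iff the gradient admits no feasible descent component,
% i.e. the projection of the negative gradient onto U(x^*) vanishes:
x^{*}\ \text{optimal} \iff \Pi_{U(x^{*})}\!\big({-\nabla F(x^{*})}\big) = 0 .
```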







The starting point can be obtained as the solution of the deterministic linear problem:

  • The iterative stochastic procedure of gradient search can then be used: at each iteration the current point is shifted along the projection of the gradient estimator onto the ε-feasible set, scaled by the step-length multiplier.
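A minimal sketch of such a projected stochastic gradient search, using a box as a stand-in for the ε-feasible set and the same toy quadratic objective as before (both are illustrative assumptions, not the setting of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([2.0, 2.0])          # toy data: F(x) = E||x - xi||^2, xi ~ N(mu, I)
lb, ub = np.zeros(2), np.ones(2)   # box constraints standing in for the feasible set

def project(x):
    # projection onto the box [lb, ub]; a simple stand-in for the
    # epsilon-feasible-set projection used in the method
    return np.clip(x, lb, ub)

def grad_estimate(x, n):
    # Monte-Carlo gradient estimate 2*(x - xi) averaged over n trials
    xis = rng.normal(loc=mu, size=(n, 2))
    return np.mean(2 * (x - xis), axis=0)

x = project(np.array([0.2, 0.3]))  # feasible starting point
n = 100                            # initial sample size N0
for t in range(1, 201):
    rho = 1.0 / (t + 10)           # diminishing step-length multiplier
    g = grad_estimate(x, n)
    x = project(x - rho * g)       # projected stochastic gradient step
    n = min(int(n * 1.05), 10000)  # grow the sample size as the search settles
```

Since the unconstrained minimiser lies outside the box, the iterates converge to the boundary point (1, 1), where the projected gradient vanishes.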



There is no great need to compute the estimators with high accuracy when the optimisation starts, because at that stage it suffices to evaluate only approximately the direction leading to the optimum.

  • Therefore, one can use rather small samples at the beginning of the search and increase the sample size later on, so as to obtain the estimate of the objective function with the desired accuracy just at the time of deciding whether the solution of the optimisation problem has been found.
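This sample-growth strategy can be sketched as follows; the doubling schedule, the 95% confidence level, and the toy objective are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def objective_sample(x, n):
    # toy objective F(x) = E||x - xi||^2 with xi ~ N(0, I); returns
    # n independent realisations of the random objective value
    xis = rng.normal(size=(n, x.size))
    return np.sum((x - xis) ** 2, axis=1)

def estimate_with_accuracy(x, eps, n0=100, n_max=10_000):
    # start with a small sample and grow it until the ~95% confidence
    # half-width of the objective estimate drops below the accuracy eps
    n = n0
    while True:
        vals = objective_sample(x, n)
        half_width = 1.96 * vals.std(ddof=1) / np.sqrt(n)
        if half_width <= eps or n >= n_max:
            return vals.mean(), half_width, n
        n = min(2 * n, n_max)

mean, hw, n = estimate_with_accuracy(np.array([1.0, -0.5]), eps=0.1)
```

For this objective the exact value is ‖x‖² + 2 = 3.25, so the returned estimate can be checked against both the accuracy target and the true value.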







Two-stage stochastic linear optimisation problem.

  • Dimensions of the task are as follows:

    • the first stage has 10 rows and 20 variables;
    • the second stage has 20 rows and 30 variables.
    • http://www.math.bme.hu/~deak/twostage/ l1/20x20.1/
    • (2006-01-20).


  • The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066.

  • N0 = Nmin = 100, Nmax = 10000.

  • The maximal number of iterations was fixed; generation of trials was stopped when the estimated confidence interval of the objective function exceeded the admissible value.

  • Initial data were as follows:















The stochastic adaptive method has been developed to solve stochastic linear problems by a finite sequence of Monte-Carlo sampling estimators.

  • The method is based on adaptive regulation of the size of the Monte-Carlo samples and on a statistical termination procedure that takes the accuracy of the statistical modelling into account.

  • The proposed adjustment of the sample size, taken inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient, guarantees almost-sure (a.s.) convergence at a linear rate.
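The sample-size adjustment just described can be written as follows (the constant C and the truncation to [N_min, N_max] are assumptions about the exact form of the rule):

```latex
% sample size inversely proportional to the squared norm of the
% Monte-Carlo gradient estimate, truncated to [N_min, N_max]:
N^{t+1} = \min\!\Big( \max\!\Big( \Big\lceil \tfrac{C}{\|\widetilde{\nabla} F(x^{t})\|^{2}} \Big\rceil,\; N_{\min} \Big),\; N_{\max} \Big).
```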


