Wilfredo Leiva Maldonado and Benar Fux Svaiter
 
"On the accuracy of the estimated policy function using the Bellman contraction method"
(2001, Vol. 3, No. 15)
 
 
In this paper we show that the approximation error of the optimal policy function in the stochastic dynamic programming problem, when the policies are those defined by the Bellman contraction method, is bounded by a constant (which depends on the modulus of strong concavity of the one-period return function) times the square root of the value function approximation error. Since Bellman's method is a contraction, the value function error, and hence the policy function error, can be controlled. This method for estimating the approximation error is robust to small numerical errors in the computation of the value and policy functions.
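As a schematic statement of the bound (the notation here is illustrative, not taken from the paper): let v be the true value function, v_n the n-th Bellman iterate, g and g_n the associated policies, beta in (0,1) the discount factor, and alpha > 0 the modulus of strong concavity of the one-period return. The two estimates the abstract combines then read

    \| g_n - g \|_\infty \le K(\alpha)\, \| v_n - v \|_\infty^{1/2},
    \qquad
    \| v_n - v \|_\infty \le \frac{\beta}{1-\beta}\, \| v_n - v_{n-1} \|_\infty,

so the policy error is bounded by a quantity observable along the iteration. The sketch below illustrates the resulting stopping rule in a toy deterministic growth model; the model, the grid, and the constant K(alpha) = sqrt(2/alpha) are assumptions made for illustration, not the paper's construction.

```python
# Minimal sketch (toy model and all constants are illustrative, not from the paper):
# value iteration with an a-posteriori certificate on the policy-function error
# of the form K * sqrt(value error), as the abstract describes.
import numpy as np

beta = 0.95    # discount factor = contraction modulus of the Bellman operator
alpha = 0.5    # assumed modulus of strong concavity of the one-period return

grid = np.linspace(0.1, 2.0, 200)   # capital grid (toy choice)
def f(k): return k ** 0.3           # production function (toy choice)
def u(c): return np.log(c)          # one-period return

v = np.zeros_like(grid)
for n in range(2000):
    # Bellman operator: (T v)(k) = max_{k'} u(f(k) - k') + beta * v(k')
    c = f(grid)[:, None] - grid[None, :]                 # consumption at each (k, k')
    payoff = np.where(c > 0.0,
                      u(np.maximum(c, 1e-12)) + beta * v[None, :],
                      -np.inf)                           # infeasible choices excluded
    v_new = payoff.max(axis=1)
    step = np.max(np.abs(v_new - v))                     # ||v_n - v_{n-1}||_inf
    v = v_new
    # contraction gives  ||v_n - v*||_inf <= beta/(1-beta) * step
    value_err = beta / (1.0 - beta) * step
    # square-root bound certifies the policy error (constant form assumed here)
    policy_err = np.sqrt(2.0 * value_err / alpha)
    if policy_err < 1e-2:
        break

policy = grid[payoff.argmax(axis=1)]   # greedy policy from the last iterate
```

The point of the rule is that both factors on the right-hand side are computable from successive iterates, so the iteration can be stopped as soon as the certified policy error, not merely the value error, falls below a tolerance.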
 
 
JEL: C6 - Mathematical Methods and Programming: General
 
Manuscript Received: Jul 19, 2001. Manuscript Accepted: Oct 30, 2001.
