Department of Industrial Engineering, Sharif University of Technology
In this paper, we use both stochastic dynamic programming and Bayesian inference concepts to design an optimal acceptance-sampling-plan policy in quality control environments. To determine the optimal policy, we employ a combination of cost and risk functions in the objective function. Unlike previous studies, accepting or rejecting a batch is directly included in the action space of the proposed dynamic programming model. Using the posterior probability of the batch being in state p (the probability of non-conforming products), we first formulate the problem as a stochastic dynamic programming model. Then, we derive some properties of the optimal value of the objective function, which enable us to search for the optimal policy that minimizes the ratio of the total discounted system cost to the discounted probability of the system making a correct choice.
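To illustrate the kind of model the abstract describes, the following is a minimal sketch, not the paper's actual formulation: it assumes a Beta prior on p, updates it by Bayesian inference after each inspected item, and solves a dynamic program whose action space directly contains accept, reject, and continue-sampling. All costs, the discount factor, the prior parameters, and the inspection budget (`C_ACCEPT_DEFECT`, `C_REJECT`, `C_INSPECT`, `DISCOUNT`, `MAX_SAMPLES`, `A0`, `B0`) are illustrative assumptions, and the sketch minimizes expected discounted cost rather than the paper's cost-to-correct-choice-probability ratio.

```python
from functools import lru_cache

# Illustrative parameters (NOT taken from the paper):
C_ACCEPT_DEFECT = 10.0   # cost weight for accepting, scaled by posterior mean of p
C_REJECT = 5.0           # fixed cost of rejecting the batch
C_INSPECT = 0.4          # cost of inspecting one more item
DISCOUNT = 0.95          # discount factor for future costs
MAX_SAMPLES = 20         # inspection budget
A0, B0 = 1.0, 1.0        # Beta(A0, B0) prior on p

@lru_cache(maxsize=None)
def value(n, k):
    """Minimal expected discounted cost after inspecting n items, k of them
    non-conforming. The Bayesian update Beta(A0, B0) -> Beta(A0 + k, B0 + n - k)
    summarizes the sampling history, so (n, k) is the DP state."""
    a, b = A0 + k, B0 + (n - k)
    p_mean = a / (a + b)                      # posterior mean of p
    cost_accept = C_ACCEPT_DEFECT * p_mean    # accept is directly in the action space
    cost_reject = C_REJECT                    # so is reject
    if n >= MAX_SAMPLES:
        return min(cost_accept, cost_reject)
    # Continue sampling: next item is non-conforming with posterior probability p_mean.
    cost_sample = C_INSPECT + DISCOUNT * (
        p_mean * value(n + 1, k + 1) + (1 - p_mean) * value(n + 1, k)
    )
    return min(cost_accept, cost_reject, cost_sample)

def best_action(n, k):
    """Return the cost-minimizing action at state (n, k)."""
    a, b = A0 + k, B0 + (n - k)
    p_mean = a / (a + b)
    options = {"accept": C_ACCEPT_DEFECT * p_mean, "reject": C_REJECT}
    if n < MAX_SAMPLES:
        options["sample"] = C_INSPECT + DISCOUNT * (
            p_mean * value(n + 1, k + 1) + (1 - p_mean) * value(n + 1, k)
        )
    return min(options, key=options.get)
```

The point of the sketch is the structure, not the numbers: the posterior over p is the only state the decision-maker needs, and accept/reject appear as explicit actions alongside continued sampling, which is the departure from earlier models that the abstract highlights.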