Simulation — Sheldon Ross, 5th Edition (PDF)

Professor Ross is the founding and continuing editor of the journal Probability in the Engineering and Informational Sciences. The biggest strength I see is the rare combination of mathematical rigor and illustration of how the mathematical methodologies are applied in practice. Books with a practical perspective are rarely this rigorous and mathematically detailed. I also like the variety of exercises, which are quite challenging and demand excellence from students. — Krzysztof Ostaszewski, Illinois State University.


Author: Sheldon Ross. Imprint: Academic Press.

In the motivating example, a fair coin is flipped twice and we ask for the probability that both flips land on heads given that the first flip does; hence the desired probability is (1/4)/(1/2) = 1/2. If we let A and B denote, respectively, the event that both flips land on heads and the event that the first flip lands on heads, then the probability obtained above is called the conditional probability of A given that B has occurred, and is denoted by P(A|B). A general formula for P(A|B) that is valid for all experiments and events A and B can be obtained in the same manner as given previously.

Namely, if the event B occurs, then in order for A to occur it is necessary that the actual occurrence be a point in both A and B; that is, it must be in AB. Now since we know that B has occurred, it follows that B becomes our new sample space and hence the probability that the event AB occurs will equal the probability of AB relative to the probability of B.
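In symbols, the formula reads as follows (a reconstruction of the display that did not survive extraction, using the definitions above):

$$P(A \mid B) = \frac{P(AB)}{P(B)}$$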

The determination of the probability that some event A occurs is often simplified by considering a second event B and then determining both the conditional probability of A given that B occurs and the conditional probability of A given that B does not occur. Example 2a: An insurance company classifies its policyholders as being either accident prone or not accident prone.

Their data indicate that an accident-prone person will file a claim within a one-year period with a certain given probability, larger than the corresponding probability for a person who is not accident prone. If a new policyholder is accident prone with a given probability, what is the probability that he or she will file a claim within a year? Solution: Let C be the event that a claim will be filed, and let B be the event that the policyholder is accident prone.
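The solution conditions on whether or not the policyholder is accident prone; the key identity, reconstructed in the notation just introduced, is

$$P(C) = P(C \mid B)\,P(B) + P(C \mid B^{c})\,\bigl(1 - P(B)\bigr)$$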

That is, suppose that B1, B2, ..., Bn are mutually exclusive events whose union is the entire sample space. Then we can also compute the probability of an event A by conditioning on which of the Bi occurs. Solution: Let N be the event that the coupon obtained is of a new type.
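The identity being used here is the law of total probability (reconstructed):

$$P(A) = \sum_{i=1}^{n} P(A \mid B_i)\,P(B_i)$$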

To compute P(N), condition on which type of coupon it is. In other words, knowing that B has occurred generally changes the probability that A occurs (what if they were mutually exclusive?). Quantities of interest that are determined by the results of the experiment are known as random variables. A random variable that can take either a finite or at most a countable number of possible values is said to be discrete.

A somewhat more intuitive interpretation of the density function may be obtained from Equation 2, which shows that P(a − ε/2 ≤ X ≤ a + ε/2) ≈ ε f(a) when ε is small. From this, we see that f(a) is a measure of how likely it is that the random variable will be near a.

In many experiments we are interested not only in probability distribution functions of individual random variables, but also in the relationships between two or more of them.

Loosely speaking, X and Y are independent if knowing the value of one of them does not affect the probability distribution of the other. Random variables that are not independent are said to be dependent.

If X is a discrete random variable that takes on one of the possible values x1, x2, ..., then its expected value is E[X] = Σi xi P(X = xi). Since g(X) takes on the value g(x) when X takes on the value x, it seems intuitive that E[g(X)] should be a weighted average of the possible values g(x) with, for a given x, the weight given to g(x) being equal to the probability (or probability density in the continuous case) that X will equal x.
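The anticipated weighted-average formula, covering both the discrete and the continuous case (a reconstruction of the dropped display):

$$E[g(X)] = \sum_{i} g(x_i)\,P(X = x_i) \qquad \text{or} \qquad E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,dx$$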

Indeed, the preceding can be shown to be true, and we thus have the result displayed above. A random variable also varies about its mean, and one way of measuring this variation is to consider the average value of the square of the difference between X and E[X]. We are thus led to the following definition of the variance. The variance of a sum is not, in general, the sum of the variances; it is, however, true in the important special case where the random variables are independent. Before proving this, let us define the concept of the covariance between two random variables.
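The definitions just referred to, in reconstructed form, are

$$\operatorname{Var}(X) = E\bigl[(X - E[X])^2\bigr] = E[X^2] - (E[X])^2, \qquad \operatorname{Cov}(X, Y) = E\bigl[(X - E[X])(Y - E[Y])\bigr]$$

and, for any two random variables, $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)$, so the variances add whenever the covariance vanishes, as it does under independence.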

In this section we survey some of the discrete distributions. Suppose that n independent trials, each of which is a success with probability p, are to be performed. If X represents the number of successes that occur in the n trials, then X is said to be a binomial random variable with parameters (n, p). The validity of its mass function, displayed below, follows by noting that there are (n choose i) outcomes with exactly i successes, each having probability p^i (1 − p)^(n−i). A binomial (1, p) random variable is called a Bernoulli random variable.
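The binomial mass function referred to above is (reconstructed):

$$P(X = i) = \binom{n}{i} p^{i}(1-p)^{n-i}, \qquad i = 0, 1, \ldots, n$$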

Hence the representation of a binomial random variable as a sum of independent Bernoulli random variables is often useful. Poisson random variables have a wide range of applications. The section then treats the hypergeometric model: n balls are chosen sequentially from an urn, and the indicator variables Xi of the successive draws are not independent (why not?). Next comes the normal density function; a normal random variable Z with mean 0 and variance 1 is said to have a standard, or unit, normal distribution. The simplest form of the central limit theorem, that remarkable result, states that the sum of a large number of independent random variables has an approximately normal distribution. Finally, the exponential distribution is memoryless; that is, it is not necessary to remember the age of the unit to know its distribution of remaining life.
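Stated formally, the memoryless property reads (reconstructed):

$$P(X > s + t \mid X > t) = P(X > s) \qquad \text{for all } s, t \ge 0$$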

Another useful property of exponential random variables is that they remain exponential when multiplied by a positive constant. A further useful result is that M = min(X1, ..., Xn) is itself exponential, with rate equal to the sum of the individual rates, and that the probability that Xi is the smallest is proportional to its rate; moreover, M is independent of which of the Xi is the smallest. Turning to the Poisson process: Condition (a) states that the process begins at time 0. Condition (b), the independent increment assumption, states that the number of events by time t [i.e., N(t)] is independent of the number of events that occur between times t and t + s. [Figure: The Interval [0, t], partitioned into subintervals.] Consider first the number of these subintervals that contain an event.

For a Poisson process, let X1 denote the time of the first event. We now determine the distribution of the Xn. The Nonhomogeneous Poisson Process. From a modeling point of view, the major weakness of the Poisson process is its assumption that events are just as likely to occur in all intervals of equal size.

A generalization, which relaxes this assumption, leads to the nonhomogeneous, or nonstationary, Poisson process. The following result can be established. Proof: This proposition is proved by noting that the previously given conditions are all satisfied.

Conditions (a), (b), and (d) follow since the corresponding result is true for all (not just the counted) events. The following proposition is quite useful.

How many outcomes are contained in this event? A couple has two children. What is the probability that both are girls given that the elder is a girl?

Assume that all four possibilities are equally likely. The king comes from a family of two children. What is the probability that the other child is his brother? Find the expected value of the random variable specified in Exercise 5.

Find E [X ] for the random variable of Exercise 6. There are 10 different types of coupons and each time one obtains a coupon it is equally likely to be any of the 10 types. Let X denote the number of distinct types contained in a collection of N coupons, and find E [X ].

A die having six sides is rolled. If each of the six possible outcomes is equally likely, determine the variance of the number that appears. Suppose that X, the amount of liquid contained in a container of commercial apple juice, is a random variable having mean 4 grams.

An airplane needs at least half of its engines functioning to safely complete its mission. If each engine independently functions with probability p, for what values of p is a three-engine plane safer than a five-engine plane? Explain why the following random variables all have approximately a Poisson distribution: (a) the number of misprints in a given chapter of this book. Then give an analytic proof of this. Two players play a certain game until one has won a total of five games. If player A wins each individual game with a given probability, find the probability that A is the overall winner.

Consider the hypergeometric model of Section 2, and verify that your answer checks with the result given there. The bus will arrive at a time that is uniformly distributed between 8 a.m. and a given later time. If we arrive at 8 a.m., how long can we expect to wait? Let X be a binomial random variable with parameters n, p.

Persons A, B, and C are waiting at a bank having two tellers when it opens in the morning. Persons A and B each go to a teller and C waits in line.

Is max(X, Y) an exponential random variable? Consider a Poisson process in which events occur at a given hourly rate. What is the probability that no events occur between 10 a.m. and 2 p.m.? Show that the given function is a probability density; that is, show that it is nonnegative and integrates to 1. An urn contains four white and six black balls. A random sample of size 4 is chosen. Let X denote the number of white balls in the sample. An additional ball is now selected from the remaining six balls in the urn. Let Y equal 1 if this ball is white and 0 if it is black.

Let U be uniform on (0, 1).

Random Numbers (Chapter 3). Introduction. The building block of a simulation study is the ability to generate random numbers, where a random number represents the value of a random variable uniformly distributed on (0, 1).

In this chapter we explain how such numbers are generated by computer and also begin to illustrate their uses. These pseudorandom numbers constitute a sequence of values that, although deterministically generated, have all the appearances of being independent uniform (0, 1) random variables. A standard approach, the multiplicative congruential method, starts with an initial value x0, called the seed, and recursively sets xn = a xn−1 modulo m for fixed positive integers a and m. Thus, each xn is one of 0, 1, ..., m − 1, and the quantity xn/m is taken as an approximation to a uniform (0, 1) random variable.

Since each of the numbers xn assumes one of the values 0, 1, ..., m − 1, some value must eventually repeat, and once it does the whole sequence repeats. Thus, we want to choose the constants a and m so that, for any initial seed x0, the number of variables that can be generated before this repetition occurs is large.

In general, the constants a and m should be chosen to satisfy three criteria:

1. For any initial seed, the resulting sequence should have the appearance of a sequence of independent uniform (0, 1) random variables.
2. For any initial seed, the number of variables that can be generated before repetition begins should be large.
3. The values should be computable efficiently on a digital computer.

A guideline that appears to be of help in satisfying these three conditions is that m should be chosen to be a large prime number that can be fitted to the computer word size. As our starting point in the computer simulation of systems, we suppose that we can generate a sequence of pseudorandom numbers that can be taken as an approximation to the values of a sequence of independent uniform (0, 1) random variables.
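As a concrete illustration, here is a minimal Python sketch of the multiplicative congruential method. The constants a = 7^5 and m = 2^31 − 1 are the well-known Lewis-Goodman-Miller choice, an assumption on our part since the text's own constants did not survive extraction:

```python
def congruential_generator(seed, n, a=7 ** 5, m=2 ** 31 - 1):
    """Multiplicative congruential method: x_k = a * x_{k-1} mod m.

    Returns n pseudorandom values x_k / m in (0, 1); seed must be
    an integer in 1, ..., m - 1.
    """
    x = seed
    values = []
    for _ in range(n):
        x = (a * x) % m
        values.append(x / m)
    return values

print(congruential_generator(seed=12345, n=5))
```

With a prime modulus such as 2^31 − 1 and a multiplier that is a primitive root of it, as 7^5 is, the sequence repeats only after m − 1 steps, addressing criterion 2 above.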

This approach to approximating integrals is called the Monte Carlo approach: we estimate an integral by averaging function values at random points (a minimal sketch follows). Hence, if we generate k independent sets, each consisting of n independent uniform (0, 1) random variables U11, ..., U1n, ..., Uk1, ..., Ukn, we obtain k independent estimates of the integral.
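A minimal sketch of the basic Monte Carlo estimator of the integral of g over (0, 1); the integrand below is ours, purely for illustration:

```python
import math
import random

def monte_carlo_integral(g, n=100_000):
    """Estimate the integral of g over (0, 1) by (1/n) * sum of g(U_i)."""
    return sum(g(random.random()) for _ in range(n)) / n

# Illustration: the exact value of the integral of e^x over (0, 1) is e - 1.
print(monte_carlo_integral(math.exp), math.e - 1)
```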

Now let U1 and U2 be random numbers, and set X = 2U1 − 1 and Y = 2U2 − 1, so that (X, Y) is a random point in the square of side 2 centered at the origin shown in Figure 3. Consider the probability that this random point falls within the inscribed circle of radius 1 (Figure 3: circle within square). Since the density function of (X, Y) is constant in the square, it follows by definition that (X, Y) is uniformly distributed in the square, and so this probability is the ratio of the two areas, namely π/4.
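That ratio yields the classical estimator of π sketched below (a minimal illustration, not the book's own code): generate many points, and four times the fraction landing inside the circle approximates π.

```python
import random

def estimate_pi(n=1_000_000):
    """4 * (fraction of random points in the square falling in the circle)."""
    inside = 0
    for _ in range(n):
        x = 2 * random.random() - 1   # uniform on (-1, 1)
        y = 2 * random.random() - 1
        if x * x + y * y <= 1:        # point lies in the inscribed circle
            inside += 1
    return 4 * inside / n

print(estimate_pi())
```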

Starting with these random numbers, we show in Chapters 4 and 5 how we can generate the values of random variables from arbitrary distributions. With this ability to generate arbitrary random variables we will be able to simulate a probability system; that is, we will be able to generate, according to the specified probability laws of the system, all the random quantities of the system as it evolves over time. In Exercises 3 through 9, use simulation to approximate the following integrals; compare your estimate with the exact answer if known. Use simulation to approximate Cov(U, e^U), where U is uniform on (0, 1).

Compare your approximation with the exact answer. Let U be uniform on (0, 1). For uniform (0, 1) random variables U1, U2, .... Find its first 14 values.

Generating Discrete Random Variables (Chapter 4). Suppose we want to generate the value of a discrete random variable X having probability mass function P(X = xj) = pj. To do so, generate a random number U and set X = xj if p1 + ... + pj−1 ≤ U < p1 + ... + pj; since U is uniform on (0, 1), this selects xj with probability exactly pj. It is for this reason that the above is called the discrete inverse transform method for generating X. The amount of time it takes to generate a discrete random variable by the above method is proportional to the number of intervals one must search.
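In sketch form (the function and variable names are ours):

```python
import random

def discrete_inverse_transform(values, probs):
    """Return x_j for the first j with U < p_1 + ... + p_j."""
    u = random.random()
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if u < cumulative:
            return x
    return values[-1]   # guard against floating-point round-off

x = discrete_inverse_transform([1, 2, 3, 4], [0.20, 0.15, 0.25, 0.40])
```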

For this reason it is sometimes worthwhile to consider the possible values xj of X in decreasing order of the pj.

That is, suppose we want to generate the value of X which is equally likely to take on any of the values 1, ..., n. Discrete uniform random variables are quite important in simulation, as is indicated in the following two examples. Example 4b (Generating a Random Permutation): Suppose we are interested in generating a permutation of the numbers 1, 2, ..., n. The following algorithm will accomplish this by first choosing one of the numbers 1, ..., n at random and placing it in the final position, then choosing at random one of the remaining numbers for the next position, and so on. However, so that we do not have to consider exactly which of the numbers remain to be positioned, it is convenient and efficient to keep the numbers in an ordered list and then randomly choose the position of the number rather than the number itself.

That is, starting with any initial ordering P1, P2, ..., Pn, we pick one of the positions 1, ..., n at random and interchange the number in that position with the one in position n; we then pick one of the positions 1, ..., n − 1 at random and interchange the number in this position with the one in position n − 1, and so on (a sketch follows).
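A minimal Python sketch of this interchange scheme (indexing from zero, so position k of the text is index k − 1 here):

```python
import random

def random_permutation(n):
    """Example 4b's scheme: for k = n, n-1, ..., 2, pick a random position
    among the first k and interchange it with the element in position k."""
    p = list(range(1, n + 1))          # any initial ordering works
    for k in range(n, 1, -1):
        i = int(k * random.random())   # uniform over positions 0, ..., k-1
        p[i], p[k - 1] = p[k - 1], p[i]
    return p

print(random_permutation(10))
```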

To generate a random subset, we randomly choose positions in the same fashion; the elements in these positions constitute the random subset. It should be noted that the ability to generate a random subset is particularly important in medical trials. To test its effectiveness, the medical center has recruited volunteers to be subjects in the test. Neither the volunteers nor the administrators of the drug will be told who is in each group (such a test is called double-blind). It remains to determine which of the volunteers should be chosen to constitute the treatment group. Clearly, one would want the treatment group and the control group to be as similar as possible in all respects, with the exception that members of the first group are to receive the drug while those in the other group receive a placebo, for then it would be possible to conclude that any difference in response between the groups is indeed due to the drug.

That is, the choice should be made so that each of the possible subsets of volunteers is equally likely to constitute the treatment group. Remarks: Another way to generate a random permutation is to generate n random numbers U1, ..., Un and order them; the positions of the successive smallest values then determine the permutation. The difficulty with this approach, however, is that ordering the random numbers typically requires on the order of n log n comparisons.

One way to accomplish this is to note that if X is a discrete uniform random variable over the integers 1, ..., m, then it can be generated as X = Int(mU) + 1 for a random number U. While this is easily accomplished by generating n random numbers U1, ..., Un, a more economical approach is possible. The preceding idea can also be applied when the Xi are independent but not identically distributed Bernoulli random variables. Remark on Reusing Random Numbers: Although the procedure just given for generating the results of n independent trials is more efficient than generating a uniform random variable for each trial, in theory one could use a single random number to generate all n trial results.

Thus, we can in theory use a single random number U to generate the results of the n trials as follows: 1. Set k = 1. 2. Generate U. 3. If U < p, call trial k a success and reset U to U/p; otherwise call trial k a failure and reset U to (U − p)/(1 − p). 4. Set k = k + 1; if k ≤ n, go to Line 3. In practice, finite precision ruins this scheme (with p = 1/2, for instance, the update doubles U modulo 1): if the last digit of U is 0 then it will remain 0 in the next transformation. Also, if the next-to-last digit ever becomes 5 then it will be transformed to 0 in the next iteration, and so the last 2 digits will always be 0 from then on, and so on.

Thus, if one is not careful, all the random numbers could end up equal to 1 or 0 after a large number of iterations. The key to using the inverse transform method to generate a Poisson random variable with mean λ is the identity (proved in Section 2) P(X = i + 1) = λ P(X = i)/(i + 1) for i ≥ 0. The algorithm first checks whether the generated value is 0; if not, it computes the successive probabilities in Step 4 by using this recursion.

Thus, the number of comparisons needed will be 1 greater than the generated value of the Poisson; equivalently, the number of searches the algorithm makes is 1 more than the value of X (a sketch follows).
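A minimal sketch of the inverse transform algorithm for a Poisson random variable with mean lam, using the recursion above:

```python
import math
import random

def poisson_inverse_transform(lam):
    """Search upward from i = 0, updating p via P(X=i+1) = lam*P(X=i)/(i+1).

    The number of comparisons is one more than the returned value.
    """
    u = random.random()
    i = 0
    p = math.exp(-lam)         # P(X = 0)
    cumulative = p
    while u >= cumulative:
        p = lam * p / (i + 1)  # recursion for P(X = i + 1)
        cumulative += p
        i += 1
    return i
```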

Remarks: 1. Another way of generating a binomial (n, p) random variable X is by utilizing its interpretation as the number of successes in n independent Bernoulli trials, where each trial is a success with probability p. Consequently, we can also simulate X by generating the outcomes of these n Bernoulli trials. The Rejection Method. Step 1: Simulate the value of Y, a random variable having probability mass function qj. Step 2: Generate a random number U; if U < p(Y)/(c q(Y)), set X = Y and stop; otherwise, return to Step 1. The rejection method is pictorially represented in Figure 4. We now prove that the rejection method works. In addition, the number of iterations of the algorithm needed to obtain X is a geometric random variable with mean c.

Proof: To begin, let us determine the probability that a single iteration produces the accepted value j. Example 4f: Suppose we want to simulate the value of a random variable X that takes one of the values 1, 2, ..., 10 with given probabilities p1, ..., p10. Whereas one possibility is to use the inverse transform algorithm, another approach is to use the rejection method with q being the discrete uniform density on 1, ..., 10.
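A minimal sketch of this rejection scheme. The mass function below is an illustrative assumption (the book's exact values were lost in extraction), chosen so that c = max pj/qj is about 1.2, consistent with the text's remark on the expected number of iterations:

```python
import random

p = [0.11, 0.12, 0.09, 0.08, 0.12, 0.10, 0.09, 0.09, 0.10, 0.10]
q = 1 / 10                        # discrete uniform on 1, ..., 10
c = max(pj / q for pj in p)       # smallest valid c; here 1.2

def discrete_rejection():
    """Propose Y uniform on 1..10; accept with probability p(Y)/(c*q(Y))."""
    while True:
        y = int(10 * random.random()) + 1          # Step 1
        if random.random() < p[y - 1] / (c * q):   # Step 2: accept?
            return y
```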

(If the acceptance test fails, we return to Step 1.) The constant c is taken to be the maximum of the ratios pj/qj, and on average the algorithm requires only c iterations, barely more than one in this example, to obtain X. Next suppose the desired distribution is a mixture of simpler distributions. Then we can simulate X as follows. Step 1: Generate a random number U1 and use it to select one of the component distributions, giving each component probability equal to its mixture weight. Step 2: Generate the value of X from the selected component. This approach to simulating from F is often referred to as the composition method; a sketch follows.
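A minimal sketch of the composition method for a finite mixture (the weights and components below are illustrative assumptions):

```python
import random

def compose(weights, generators):
    """Simulate from F = sum_i weights[i] * F_i: one uniform picks the
    component, and the chosen component's generator produces X."""
    u = random.random()
    cumulative = 0.0
    for w, gen in zip(weights, generators):
        cumulative += w
        if u < cumulative:
            return gen()
    return generators[-1]()   # guard against round-off

# Example: an equal mixture of a point mass at 0 and a uniform over 1..5.
x = compose([0.5, 0.5], [lambda: 0, lambda: random.randint(1, 5)])
```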

Turning to the alias method: we show, for suitably defined two-point mass functions Q(1), ..., Q(n−1), that any mass function P on n points can be expressed as an equally weighted mixture of them. In addition, the vector P(k) arising in the construction has at most k nonzero components, and each of the Q(k) has at most two nonzero components. Before presenting the general technique for obtaining this representation, let us work an example. We start by choosing i and j satisfying the conditions of the preceding lemma. We now define a two-point mass function Q(1), putting all of its weight on 3 and 2, and such that P is expressible as an equally weighted mixture between Q(1) and a second two-point mass function Q(2). In addition, all the mass of point 3 is contained in Q(1).

Hence our initial two-point mass function, Q(1), concentrates on points 3 and 1, giving no weight to 2 and 4. To start, we choose i and j satisfying the conditions of the lemma. We can now easily simulate from P by first generating a random integer N equally likely to be any of 1, 2, ..., n − 1.

The random variable X will then have probability mass function P. That is, we have the following procedure for simulating from P: simulate N, equally likely to be 1, 2, ..., n − 1, and then simulate from the two-point mass function Q(N). Moreover, we can arrange things so that the kth two-point mass function gives positive weight to the value k.
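A minimal sketch of the alias method in table form. The preprocessing builds, for each column, a cutoff and an alias (the second point of its two-point mass function); this is the standard table construction, offered as an equivalent of the book's decomposition rather than a quotation of it:

```python
import random

def build_alias_tables(p):
    """Decompose mass function p (length n) into n columns, each mixing at
    most two outcomes, so that sampling costs two uniforms."""
    n = len(p)
    scaled = [n * pj for pj in p]
    cutoff, alias = [1.0] * n, list(range(n))
    small = [j for j in range(n) if scaled[j] < 1.0]
    large = [j for j in range(n) if scaled[j] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        cutoff[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to column s
        (small if scaled[l] < 1.0 else large).append(l)
    return cutoff, alias

def alias_sample(cutoff, alias):
    """Pick a column uniformly, then one of its (at most) two outcomes."""
    j = int(len(cutoff) * random.random())
    return j if random.random() < cutoff[j] else alias[j]

cutoff, alias = build_alias_tables([0.22, 0.30, 0.35, 0.13])
draw = alias_sample(cutoff, alias)            # outcome index 0, ..., 3
```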

Actually, it is not necessary to generate a new random number in Step 2. That is, first generate X 1 ; then generate X 2 from its conditional distribution given the generated value of X 1 ; then generate X 3 from its conditional distribution given the generated values of X 1 and X 2 ; and so on.

This is illustrated in Example 4i, which shows how to simulate a random vector having a multinomial distribution. Consider n independent trials, each resulting in one of the outcomes 1, ..., r with respective probabilities p1, ..., pr. If Xi denotes the number of trials that result in outcome i, then the random vector (X1, ..., Xr) is said to have a multinomial distribution, with joint probability mass function P(X1 = x1, ..., Xr = xr) = n!/(x1! ... xr!) p1^x1 ... pr^xr, where x1 + ... + xr = n. One approach is to first generate independent random variables Y1, ..., Yn representing the outcomes of the n trials and then count. On the other hand, if n is large relative to r, then it is more efficient to generate X1, ..., Xr directly, as follows.

That is, first generate X1, then X2, then X3, and so on. Because each of the n trials independently results in outcome 1 with probability p1, it follows that X1 is a binomial random variable with parameters (n, p1). Therefore, we can use the method of Section 4 to generate X1. Suppose its generated value is x1.
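The full sequential scheme in sketch form, placed here ahead of the derivation that follows (the crude Bernoulli-counting binomial generator is ours, purely for illustration):

```python
import random

def binomial(n, p):
    """Count successes in n Bernoulli(p) trials (crude but correct)."""
    return sum(random.random() < p for _ in range(n))

def multinomial(n, probs):
    """X_1 ~ binomial(n, p_1); given X_1 = x_1,
    X_2 ~ binomial(n - x_1, p_2 / (1 - p_1)); and so on."""
    counts, remaining_n, remaining_p = [], n, 1.0
    for p in probs[:-1]:
        x = binomial(remaining_n, p / remaining_p) if remaining_p > 0 else 0
        counts.append(x)
        remaining_n -= x
        remaining_p -= p
    counts.append(remaining_n)   # the last count is forced
    return counts

print(multinomial(100, [0.2, 0.3, 0.5]))
```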

Then, given that x1 of the trials result in outcome 1, each of the remaining n − x1 trials independently results in outcome 2 with probability p2/(1 − p1); thus, we can again make use of Section 4 to generate X2. We then use this fact to generate X3, and continue on until all the values X1, ..., Xr have been generated.

Exercises. 3. A deck of cards numbered 1, 2, ..., n is thoroughly shuffled.

Write a simulation program to estimate the expectation and variance of the total number of hits, where a hit occurs when the card numbered i is the ith card dealt. Run the program. Find the exact answers and compare them with your estimates. Another method of generating a random permutation, different from the one presented in Example 4b, is to successively generate a random permutation of the elements 1, 2, ..., k, starting with k = 1 and inserting each new element in a randomly chosen position.

How many random numbers were needed? A pair of fair dice are to be continually rolled until all the possible outcomes 2, 3, ..., 12 have occurred at least once. Develop a simulation study to estimate the expected number of dice rolls that are needed. Suppose that n is very large, and also that each item may appear at many different places on the list. Explain how random numbers can be used to estimate the sum of the values of the different items on the list, where the value of each item is to be counted once no matter how many times the item appears on the list.

Consider the n events A1, ..., An. Let X be a binomial random variable with parameters n and p. Suppose that the random variable X can take on any of the values 1, ..., n. Use the composition approach to give an algorithm that generates the value of X. Explain what the above algorithm is doing in this case and why its validity is clear.

Otherwise, go to 2. Show that the following algorithm accomplishes this. Set up the alias method for generating a binomial random variable with parameters 5 and a given p. Explain how we can number the Q(k) in the alias method so that k is one of the two points to which Q(k) gives weight. Discuss efficient procedures for simulating X1, ..., Xn.

Generating Continuous Random Variables (Chapter 5). Introduction. Each of the techniques for generating a discrete random variable has its analogue in the continuous case.

The early sections of this chapter consider the inverse transform and rejection approaches; a later section presents the polar method for generating normal random variables; and the final sections take up the simulation of Poisson processes. A general method for generating a random variable having a continuous distribution, called the inverse transformation method, is based on the following proposition.

Proposition: Let U be a uniform (0, 1) random variable. Then, for any continuous distribution function F, the random variable X = F⁻¹(U) has distribution F. Applying this with the exponential distribution shows that −log U is exponential with rate 1; that is, the negative logarithm of a random number is exponentially distributed with rate 1 (see Section 2). Exponentials in turn give a handle on Poisson counts: for example, if the fourth event of a rate-1 Poisson process occurred by time 1 but the fifth event did not, then clearly there would have been a total of four events by time 1. The following algorithm can thus be used to generate a pair of exponentials with mean 1.
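In sketch form, the basic building block is a one-liner:

```python
import math
import random

def exponential(rate):
    """Inverse transform: -log(U)/rate is exponential with the given rate
    (U and 1 - U have the same distribution, so either may be used)."""
    return -math.log(random.random()) / rate

pair = (exponential(1.0), exponential(1.0))   # two independent mean-1 draws
```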

[Figure 5: the rejection method for simulating a random variable X having density function f.] The Rejection Method. Step 1: Generate Y having density g. Step 2: Generate a random number U; if U ≤ f(Y)/(c g(Y)), set X = Y; otherwise, return to Step 1. The reader should note that the rejection method is exactly the same as in the case of discrete random variables, with the only difference being that densities replace mass functions.

In exactly the same way as we did in the discrete case we can prove the following result. Theorem: (i) The random variable generated by the rejection method has density function f; (ii) the number of iterations needed is a geometric random variable with mean c.

Because such a random variable is concentrated on the positive axis and has mean 3/2, it is natural to try the rejection technique with an exponential random variable with the same mean.

It turns out that this is always the most efficient exponential to use when generating a gamma random variable. Our next example shows how the rejection technique can be used to generate normal random variables. Once we have simulated a random variable X having density function as in Equation 5, namely the density of the absolute value of a standard normal, we can generate a standard normal Z by letting Z be equally likely to be X or −X. (By how much does the one exceed the other?) Hence, summing up, we have the following algorithm that generates an exponential with rate 1 and an independent standard normal random variable.

Otherwise, go to Step 1. If we want to generate a sequence of standard normal random variables, we can use the exponential random variable Y obtained in Step 3 as the initial exponential needed in Step 1 for the next normal to be generated.
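A minimal sketch of the core of this algorithm (the recycling of the leftover exponential just described is omitted for clarity):

```python
import math
import random

def standard_normal_by_rejection():
    """Generate |Z| by rejection from a rate-1 exponential: accept Y1 when
    Y2 >= (Y1 - 1)**2 / 2, then attach a random sign."""
    while True:
        y1 = -math.log(random.random())   # exponential with rate 1
        y2 = -math.log(random.random())
        if y2 >= (y1 - 1) ** 2 / 2:       # acceptance condition
            return y1 if random.random() < 0.5 else -y1
```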

Hence, on the average, we can simulate a standard normal using only about 1.3 iterations of the rejection step. The sign of the standard normal can be determined without generating a new random number, as in Step 4: the first digit of an earlier random number can be used. That is, digits of earlier random numbers r1, r2, ... can be set aside for this purpose.

This is indicated by our next example. Example 5g Suppose we want to generate a gamma 2, 1 random variable conditional on its value exceeding 5.

Because a gamma(2, 1) random variable has expected value 2, we will use the rejection method based on an exponential with mean 2 that is conditioned to be at least 5. Therefore, we have the following algorithm to simulate a random variable X having density function f. The details, including the determination of the best exponential mean, are illustrated in Section 8.

Polar Coordinates. The Box-Muller approach (see Figure 5) is as follows. Step 1: Generate random numbers U1 and U2. Step 2: Set X = sqrt(−2 log U1) cos(2π U2) and Y = sqrt(−2 log U1) sin(2π U2); then X and Y are independent standard normals. Unfortunately, the use of the Box-Muller transformations is not computationally efficient, the reason being the need to compute the sine and cosine values. A remedy is to set V1 = 2U1 − 1 and V2 = 2U2 − 1, so that (V1, V2) is uniformly distributed in the square of side 2 centered at the origin (Figure 5), and to accept the pair only when it falls inside the inscribed unit circle; it now follows that such an accepted pair (V1, V2) is uniformly distributed in the circle. Summing up, we thus have the following approach to generating a pair of independent standard normals: generate random numbers U1 and U2, form V1, V2 and S = V1² + V2², and accept when S < 1.
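A minimal Python sketch of this polar method (often attributed to Marsaglia):

```python
import math
import random

def polar_method():
    """Return a pair of independent standard normals from a point
    (V1, V2) accepted as uniform in the unit disk; no sines or cosines."""
    while True:
        v1 = 2 * random.random() - 1
        v2 = 2 * random.random() - 1
        s = v1 * v1 + v2 * v2
        if 0 < s < 1:                               # inside the disk
            factor = math.sqrt(-2 * math.log(s) / s)
            return v1 * factor, v2 * factor
```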

Hence it will, on average, require 4/π ≈ 1.27 iterations, and thus roughly 2.5 random numbers, to produce an accepted pair. Turning to the Poisson process: its interarrival times are independent exponentials with rate λ, so one way to generate the process is to generate these interarrival times. So if we generate n random numbers U1, U2, ..., Un, we can take −(1/λ) log Ui as the successive interarrival times. If we wanted to generate the first T time units of the Poisson process, we can follow the preceding procedure of successively generating the interarrival times, stopping when their sum exceeds T. In the algorithm, t refers to time, I is the number of events that have occurred by time t, and S(I) is the most recent event time.
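In sketch form:

```python
import math
import random

def poisson_process(rate, T):
    """Event times S(1) < S(2) < ... in [0, T]: cumulative sums of
    exponential(rate) interarrival times, stopping once the sum exceeds T."""
    times, t = [], 0.0
    while True:
        t += -math.log(random.random()) / rate
        if t > T:
            return times        # len(times) is N(T)
        times.append(t)
```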

The final value of I in the preceding algorithm represents the number of events that occur by time T, and the values S(1), ..., S(I) are the successive event times.

If the simulated value of N(T) is n, then n random numbers U1, ..., Un are generated, and {T U1, ..., T Un} is taken to be the set of event times. To show that the process so constructed has independent and stationary increments, let I1, ..., Ik be disjoint intervals; the claim then follows from the results of Section 2. If all we wanted was to simulate the set of event times of the Poisson process, then this approach would be more efficient than simulating the exponentially distributed interarrival times (a sketch follows).
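A minimal sketch of that alternative, reusing any Poisson generator such as the one sketched in the discrete chapter:

```python
import random

def poisson_process_by_uniforms(rate, T, poisson_gen):
    """Draw N(T) ~ Poisson(rate * T); the event times are then N(T)
    uniforms on (0, T), sorted. poisson_gen(mean) is any Poisson generator."""
    n = poisson_gen(rate * T)
    return sorted(T * random.random() for _ in range(n))
```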

The nonhomogeneous Poisson process allows for the possibility that the arrival rate need not be constant but can vary with time. It is usually very difficult to obtain analytical results for a mathematical model that assumes a nonhomogeneous Poisson arrival process, and as a result such processes are not applied as often as they should be. However, because simulation can be used to analyze such models, we expect that such mathematical models will become more common.

Suppose the intensity function λ(t) is bounded by a constant λ on the interval of interest. Then, by simulating a Poisson process with rate λ and randomly counting each of its events, independently and with probability λ(t)/λ for an event at time t, we can generate the desired nonhomogeneous Poisson process. This can be written algorithmically as follows. The final value of I represents the number of events by time T, and S(1), ..., S(I) are the event times. This thinning is wasteful wherever λ(t) is much smaller than λ; thus, an obvious improvement is to break up the interval into subintervals and then use the procedure over each subinterval.
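A minimal sketch of the basic (single-interval) thinning procedure; the intensity function used in the example call is a placeholder of our own choosing:

```python
import math
import random

def nonhomogeneous_poisson(intensity, lam, T):
    """Thinning: simulate a rate-lam Poisson process on [0, T], keeping an
    event at time t with probability intensity(t) / lam.
    Requires intensity(t) <= lam throughout [0, T]."""
    times, t = [], 0.0
    while True:
        t += -math.log(random.random()) / lam
        if t > T:
            return times
        if random.random() <= intensity(t) / lam:
            times.append(t)

events = nonhomogeneous_poisson(lambda t: 3 + math.sin(t), lam=4.0, T=10.0)
```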

Because of the memoryless property of the exponential and the fact that the rate of an exponential can be changed upon multiplication by a constant, it follows that there is no loss of efficiency in going from one subinterval to the next.

In the algorithm, t represents the present time, J the present interval (that is, the subinterval whose rate currently applies), I the number of events so far, and S(1), ..., S(I) the event times. The final exponential generated for the Poisson process, which carries one beyond the desired boundary, need not be wasted but can be suitably transformed so as to be reusable. The superposition, or merging, of the two processes yields the desired process over the interval. As these random variables are clearly dependent, we generate them in sequence, starting with S1 and then using the generated value of S1 to generate S2, and so on.

The method used to simulate from these distributions should of course depend on their form. In the following example the distributions Fs are easily inverted, and so the inverse transform method can be applied. For the two-dimensional Poisson process (Figure 5), the numbers of points occurring in disjoint regions are independent. The fanning-out technique used there can also be applied to simulate the process over noncircular regions. For example, consider a nonnegative function f(x), and suppose that we are interested in simulating the Poisson process in the region between the x-axis and the graph of f (Figure 5).

To do so, we can start at the left-hand edge and fan vertically to the right by considering the successive areas encountered (see the graph of f in Figure 5). Because the projection on the y-axis of the point whose x-coordinate is Xi is clearly uniformly distributed over (0, f(Xi)), it thus follows that if we now generate random numbers U1, U2, ..., the points (Xi, Ui f(Xi)) constitute the desired process.

The above procedure is most useful when f is regular enough that the above equations can be efficiently solved for the values of Xi. Let X be an exponential random variable with mean 1. Using the result of Exercise 7, give algorithms for generating random variables from the following distributions.

A casualty insurance company has a given number of policyholders, each of whom will independently present a claim in the next month with a given small probability. Write an algorithm that can be used to generate exponential random variables in sets of 3. Compare the computational requirements of this method with the one presented after Example 5c, which generates them in pairs. How can we generate from the following distributions? Which method do you think is best for this example? Briefly explain your answer.

In Example 5f we simulated a normal random variable by using the rejection technique with an exponential distribution with rate 1. Write a program that generates normal random variables by the method of Example 5f.

Let (X, Y) be uniformly distributed in a circle of radius 1. Show that if R is the distance from the center of the circle to (X, Y), then R² is uniform on (0, 1). To complete a job, a worker must go through k stages in sequence. If we let X denote the amount of time that the worker spends on the job, then X is called a Coxian random variable.

Write an algorithm for generating such a random variable. Buses arrive at a sporting event according to a Poisson process with rate 5 per hour. Each bus is equally likely to contain either 20, 21, ..., 40 fans, with the numbers in the different buses being independent. Plot the points obtained.


Readers learn to apply the results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes.

This latest edition features all-new material on variance reduction, including control variables and their use in estimating the expected return at blackjack and their relation to regression analysis. Additionally, the fifth edition expands on Markov chain Monte Carlo methods and offers unique information on the alias method for generating discrete random variables.



