Simulation - Determinism vs. Stochasticity - EdsCave


July 3, 2017

Another key characteristic of a simulation model is whether or not it contains 'random' elements. If a model has no random component, it will produce the same results each time it is run, assuming the same inputs and (in the case of a dynamic model) the same initial conditions are used. If the model contains any random components, however, its behavior can be expected to vary from run to run, even when presented with the same inputs and initial conditions. A model without any random component is described as 'deterministic', while one with random components is described as 'stochastic'.

Why would you deliberately add random behavior to a simulation model? There are many good reasons for doing so, perhaps the best being that the system being modeled behaves randomly itself. For example, consider a simulation of a casino card game such as blackjack. While the vast majority of the rules needed to model the game are clear-cut and deterministic (which hand beats which, when to draw another card, etc.), a fundamental characteristic of the game is that the cards are shuffled into random order before play begins. Casinos make a significant effort to ensure that the cards are shuffled into random order, typically using automatic shuffling machines to do so; failing to achieve a random ordering opens the door to exploitation by sophisticated players. In a simulation model, omitting the random aspect of the game would result in a simulation that gave the same outcome on every run.
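To make the shuffling step concrete, here is a minimal Python sketch of dealing opening blackjack hands from a randomly shuffled deck. The simplified scoring (face cards count as 10, aces always as 11) is an illustrative assumption, not a full implementation of the game's rules:

```python
import random

# Build a 52-card deck; only card values matter for this sketch.
# Ranks 2-10 at face value, J/Q/K as 10, ace as 11 (simplifying assumption).
ranks = list(range(2, 11)) + [10, 10, 10, 11]
deck = ranks * 4

random.shuffle(deck)            # the stochastic element of the model
player = [deck.pop(), deck.pop()]
dealer = [deck.pop(), deck.pop()]
print(sum(player), sum(dealer))
```

Comment out the `random.shuffle(deck)` line and every run deals identical hands, which is exactly the failure described above: a deterministic model of a game whose essential character is random.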

In that other casino, known as Wall Street, stochastic models are often used to help design and price financial instruments such as derivatives. While nobody knows whether an individual stock's price will go up or down over some future period of time, it is often assumed that its prior statistical behavior will continue into the future. This means that if the stock has had a given historical daily volatility (changes in price, whether up or down), that volatility will remain roughly constant into the future. The stock's price is then modeled as a random walk, or a similar type of stochastic process, using the stock's prior statistical behavior as input. While any given simulation run of the random-process model may show the stock's price going up or down, repeated runs can provide an idea of how much it is likely to move over a given future time period, which is critical information when trying to assign a value to options and other derivatives. For the case in which a normal (Gaussian) statistical distribution is assumed (and commonly adopted contract terms are used), analytic methods such as the Black-Scholes pricing formula can be used to help in pricing [Veale 2013]. If one needs to price a derivative sold with contract conditions that don't conform to the assumptions on which the Black-Scholes formula is based, it may be easier to use simulation methods to aid in pricing than to try to develop new analytic models to account for the terms of a non-standard derivatives contract.
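The random-walk idea can be sketched in a few lines of Python. The 2% daily volatility, the one-year (252 trading day) horizon, and the simple multiplicative walk are all illustrative assumptions, not a production pricing model (which would more typically use geometric Brownian motion):

```python
import random
import statistics

def simulate_path(s0, daily_vol, days, rng):
    """One random-walk path: each day's return is drawn from N(0, daily_vol)."""
    price = s0
    for _ in range(days):
        price *= 1 + rng.gauss(0, daily_vol)
    return price

rng = random.Random(42)   # fixed seed so the experiment is repeatable
finals = [simulate_path(100.0, 0.02, 252, rng) for _ in range(1000)]

# Any single path may end up or down; the ensemble shows the likely spread.
print(statistics.mean(finals), statistics.stdev(finals))
```

A single call to `simulate_path` corresponds to one simulation run; the distribution of the 1,000 final prices is what carries the pricing information.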

Another common situation where random behavior is added to a model is to help understand how the system under study varies with respect to its parameters. Consider the process of designing an electronic audio amplifier, such as might be used in any number of consumer products for driving a speaker or headphones. In a well-designed amplifier there is little 'random' behavior; much of what exists shows up as the hiss you may hear when no music is playing, and is something to be minimized in the design process. In this case, the primary use of random components in the model is to aid in understanding how the system works over a range of conditions. Because even a simple amplifier may have a few dozen components with associated properties, it becomes inconvenient or practically impossible to simulate how the system behaves under all possible combinations of ways in which the individual components can vary. Instead of trying to simulate all combinations, or even trying to be clever and picking specific combinations to simulate, a more common approach is to use what is called a Monte Carlo model, in which you run the simulation a large but fixed number of times, assigning random values (within relevant limits) to the model's properties on each run. The results of the resulting hundreds or thousands of runs are then typically plotted over each other. If enough runs are performed relative to the number of parameters being randomly varied, the resulting plot gives the designer a good idea of how the design will perform when mass-produced.
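As a minimal illustration of the Monte Carlo approach, the sketch below uses a two-resistor voltage divider rather than a full amplifier, with assumed 5% component tolerances; both the circuit and the tolerance figure are illustrative assumptions:

```python
import random

def divider_gain(r1, r2):
    """Gain of a resistive voltage divider: Vout/Vin = R2 / (R1 + R2)."""
    return r2 / (r1 + r2)

NOMINAL_R1 = 10_000.0   # ohms
NOMINAL_R2 = 10_000.0   # ohms
TOL = 0.05              # assumed 5% component tolerance
rng = random.Random(1)

gains = []
for _ in range(5000):   # a large but fixed number of runs
    # Assign each component a random value within its tolerance band.
    r1 = NOMINAL_R1 * rng.uniform(1 - TOL, 1 + TOL)
    r2 = NOMINAL_R2 * rng.uniform(1 - TOL, 1 + TOL)
    gains.append(divider_gain(r1, r2))

# The spread of gains approximates what mass production would deliver.
print(min(gains), max(gains))
```

Plotting a histogram of `gains` (rather than just printing the extremes) is the usual way the overlaid results are examined.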

One interesting question is how you get random behavior out of a very deterministic computer. This is most commonly accomplished through the use of pseudo-random number generator (PRNG) algorithms. A PRNG can be viewed as a scrambling function which, when given a number as input, returns a different number whose value is difficult to predict without detailed internal knowledge of the generator. Consider a very simple example of a PRNG called a linear congruential generator [Knuth 1981]:

X[n+1] = (a X[n] + b) mod c


If one chooses appropriate values of a, b, and c, and iteratively applies this formula to X, the result will be a sequence of X values that appears 'random'. To get the process going, however, one must provide an initial value for X (X[0]), known as the seed. For example, using the values a = 123, b = 753, c = 100, and X[0] = 23, the linear congruential method generates the following sequence:

23, 82, 39, 50, 3, 22, 59, 10, 83, 62, 79, 70
On casual examination this sequence certainly appears random, and it would be quite difficult to predict the next number in it (X[12] = 63) without knowing, first, that the sequence was the result of a linear congruential generator and, second, the values of a, b, and c.
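A few lines of Python are enough to implement this generator and reproduce the sequence, using the parameter values given above:

```python
def lcg(seed, a=123, b=753, c=100):
    """Linear congruential generator: X[n+1] = (a * X[n] + b) mod c."""
    x = seed
    while True:
        x = (a * x + b) % c
        yield x

gen = lcg(23)                              # X[0] = 23 is the seed
seq = [next(gen) for _ in range(12)]
print(seq)   # [82, 39, 50, 3, 22, 59, 10, 83, 62, 79, 70, 63]
```

Note that with c = 100 the generator can never produce more than 100 distinct values before the sequence repeats, which is one reason this parameter choice is suitable only for illustration.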

As pseudo-random number generation processes are deterministic, every time one is started using the same seed, it will produce exactly the same sequence. While the ability to repeat a 'random' sequence can be very useful at times, particularly when debugging a simulation model, in general a unique seed will need to be provided for each separate run. This is often accomplished by generating the seed from some external source, such as the time and date maintained by the computer.
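Both uses of the seed can be sketched with Python's standard random module:

```python
import random
import time

# Same seed -> identical 'random' sequence (useful when debugging a model).
rng_a = random.Random(2017)
rng_b = random.Random(2017)
run_a = [rng_a.random() for _ in range(5)]
run_b = [rng_b.random() for _ in range(5)]
print(run_a == run_b)   # True

# Seeding from an external source such as the clock gives each run
# its own sequence.
rng_c = random.Random(time.time())
print(rng_c.random())
```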

Note that while the sequences generated by the linear congruential method may appear to be random, the method is hardly the last word in pseudo-random number generation. How to generate pseudo-random numbers, and how to test the quality of their 'randomness', has been a significant area of research, becoming even more important in the past few decades, as high-quality random number generation is a cornerstone technology supporting digital encryption and security systems.


References:

[Veale 2013] Veale, Stuart R., Derivatives: Demystifying Derivatives and Their Applications, Prentice Hall Press, New York, 2013, pp. 167-172.

[Knuth 1981] Knuth, Donald E., The Art of Computer Programming, Volume 2: Seminumerical Algorithms, 2nd ed., Addison-Wesley Publishing Company, Reading, Massachusetts, 1981, pp. 9-25.



 