Issues in Simulation Modeling




I am beginning to believe that most phenomena in our world are too complex to be modeled with closed-form analytic models. Even if you could identify the variables, the inter-relationships between them are forever changing, and the variables themselves might change. The natural tendency is to turn to computer simulation modeling, where we generate data for the simulation, run the simulation, and analyze the results. From this we can calculate the probability of outcomes and the associated confidence intervals. Computer simulation is widely used in many fields, from the simulation of manufacturing processes, to military planning and econometric forecasting, to the valuation of financial derivatives. On a more pedestrian level, it can help estimate waiting times in queues, estimate the resources required for a project, or support decision-making when there are several options. The exponential increase in the speed and power of today's computers makes simulation less of a chore than it was ten years ago. However, there are still limits to what simulation can do.
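To make this concrete, here is a minimal Monte Carlo sketch in Python. The task durations and the deadline are made-up numbers, purely for illustration: we repeatedly simulate a small project and estimate the probability of overrunning the deadline, together with a confidence interval.

```python
import random
import math

def project_overruns(deadline=30.0):
    """One replication: total duration of three tasks, each with an
    assumed Uniform(5, 12) duration (purely illustrative numbers)."""
    total = sum(random.uniform(5, 12) for _ in range(3))
    return total > deadline

n = 100_000
hits = sum(project_overruns() for _ in range(n))
p = hits / n
# Approximate 95% confidence interval (normal approximation to the binomial)
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"P(overrun) ~ {p:.4f} +/- {half_width:.4f}")
```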


1. Simulation models are usually less 'complete' than closed-form analytic models, out of necessity. No matter how powerful today's computers are, the addition of each variable to the model can result in exponential growth in the simulation's running time and computational load. Thus simulation works well when reduction, simplification and abstraction of a real-life situation do not compromise its realism or make the output inaccurate.


2. Although we can generate the data for our simulation, the choice of the appropriate distribution from which to generate the data is all-important. Choose a distribution that does not fit the real-life situation and you are in trouble right away. For example, the Weibull distribution, with its adjustable parameters, has been found suitable for quality control simulation, but it is not so appropriate for financial market simulation. Financial markets, with their long fat tails [leptokurtic], are not even a good fit for a Gaussian distribution. Just studying the types of distributions and arguing about which is the most appropriate one to use is a major preoccupation. Poisson, Binomial, Student t, Lognormal, etc. all have their merits and demerits.
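As a rough illustration of why the choice matters, the sketch below (assuming SciPy is available) compares the tail probability of a large move under a Gaussian model and under a fat-tailed Student t scaled to the same variance; the t model assigns far more probability to extreme events.

```python
import math
from scipy import stats

df = 3
# Scale the Student t so both models have unit variance, for a fair comparison
t_fat = stats.t(df=df, scale=math.sqrt((df - 2) / df))

for name, dist in [("Gaussian", stats.norm), ("Student t, df=3", t_fat)]:
    # Two-sided tail probability of a 4-sigma-sized move
    print(f"{name:>15}: P(|X| > 4) = {2 * dist.sf(4):.2e}")
```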

3. After generating the data, outliers are conventionally removed. But consider simulating financial markets, where mega-events of 8 or 9 standard deviations happen infrequently, yet have a disastrous effect on the markets when they do [such as the Long-Term Capital Management and Amaranth debacles]. Leaving these outliers in might even make the model more realistic.
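A small sketch of the trade-off, using made-up return data: a mechanical 3-sigma cut makes the series look tamer, but it throws away precisely the crash-sized events that matter most.

```python
import random
import statistics

random.seed(42)
# Hypothetical daily returns: mostly calm Gaussian noise...
returns = [random.gauss(0, 0.01) for _ in range(2500)]
# ...plus a few injected crash-sized shocks (the "mega-events")
returns += [-0.08, -0.09, 0.07]

mu, sigma = statistics.mean(returns), statistics.stdev(returns)
trimmed = [r for r in returns if abs(r - mu) <= 3 * sigma]

print(f"stdev with outliers:     {statistics.stdev(returns):.4f}")
print(f"stdev after 3-sigma cut: {statistics.stdev(trimmed):.4f}")
```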

4. When a simulation has been completed, the data needs to be analysed. The simulation's output data will only produce a likely estimate of real-world events. Methods to increase the accuracy of output data include: repeatedly performing simulations and comparing results, dividing events into batches and processing them individually, and checking that the results of simulations conducted in adjacent time periods "connect" to produce a coherent holistic view of the system.
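The batch-means idea, for instance, can be sketched as follows: one long, autocorrelated output series is cut into batches, and the batch averages, which are approximately independent, are used to build a confidence interval. The AR(1)-style series here is purely illustrative.

```python
import random
import statistics

random.seed(1)
# One long, autocorrelated output series (an AR(1)-style process,
# standing in for e.g. successive waiting times in a queue)
x, output = 0.0, []
for _ in range(10_000):
    x = 0.9 * x + random.gauss(0, 1)
    output.append(x)

# Batch means: cut the run into batches and treat the batch averages
# as approximately independent observations
n_batches = 20
size = len(output) // n_batches
means = [statistics.mean(output[i * size:(i + 1) * size]) for i in range(n_batches)]

se = statistics.stdev(means) / n_batches ** 0.5
# 1.96 is the normal approximation; with only 20 batches a
# t-quantile would give a slightly wider, more honest interval
print(f"mean = {statistics.mean(means):.3f} +/- {1.96 * se:.3f}")
```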

5. As most systems involve stochastic processes, simulations frequently make use of random number generators to create input data that approximates the random nature of real-world events. Computer-generated 'random numbers' are usually not random in the strictest sense, as they are calculated using a set of equations; such numbers are known as pseudo-random numbers. Again, as with the choice of distribution, the choice of random number generator is a long topic. All sorts of ingenious algorithms for random number generation have been devised, and the choice of algorithm can have an effect on the simulation results.
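A linear congruential generator is the textbook example of such an algorithm. The sketch below uses the well-known Numerical Recipes constants and shows that the 'randomness' is fully determined by the seed.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A minimal linear congruential generator (LCG) using the
    well-known Numerical Recipes constants."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # a value in [0, 1)

gen = lcg(seed=12345)
print([round(next(gen), 4) for _ in range(5)])
# Re-running with seed=12345 reproduces the identical sequence:
# convenient for debugging, but nothing here is truly random.
```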

6. A detailed knowledge of the various types of distributions is absolutely essential. Random numbers can be Poisson, Bernoulli, Gamma, Geometric, Binomial, etc., depending on which is more fitting for the phenomenon you are trying to simulate. You also have to know probability theory, especially dependent probabilities, where the probability of an event happening depends on the probability of another event happening. A simulation can be structured like a Markov chain, where the next state depends only on the present state and not on the history, or like a Bayesian network, where there is a whole network of probabilistic dependencies.
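For example, a two-state Markov chain can be simulated in a few lines; the bull/bear regime labels and transition probabilities below are invented purely for illustration.

```python
import random

random.seed(7)
# Transition probabilities depend only on the current state, never on
# how we got there. States and numbers are invented for illustration.
transitions = {
    "bull": {"bull": 0.9, "bear": 0.1},
    "bear": {"bull": 0.3, "bear": 0.7},
}

state, path = "bull", []
for _ in range(20):
    probs = transitions[state]
    state = random.choices(list(probs), weights=list(probs.values()))[0]
    path.append(state)

print(" -> ".join(path))
```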

7. A simulation should also be subjected to sensitivity analysis, i.e. testing how sensitive the model is to a measured change in one of the coefficients of the variables. Those variables with the greatest sensitivity should be noted and their role in the real-life situation reviewed.
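A crude one-at-a-time sensitivity check might look like the following sketch, where model() is a stand-in for a full simulation run and all parameter names and values are hypothetical.

```python
def model(params):
    # Stand-in for a full simulation run; any scalar summary of the
    # simulation's output would work here.
    return params["demand"] * (params["price"] - params["cost"])

base = {"demand": 1000.0, "price": 5.0, "cost": 3.0}
base_out = model(base)

# Bump each input by +10% in turn and measure the output response
for name in base:
    bumped = dict(base, **{name: base[name] * 1.10})
    change = (model(bumped) - base_out) / base_out
    print(f"{name:>6}: +10% input -> {change:+.1%} output")
```

With these toy numbers, price shows the largest response, so it would be the first variable whose real-life role deserves review.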
