Recursive Logistic Equation Background: The logistic equation kx(1-x) is normally used for things like population dynamics. For our example the input x runs from 0 to 1 in increments of 0.1; in this way we have normalized the equation to keep things simple.
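As a concrete illustration, here is a minimal Python sketch (not from the original page, which used Excel) that tabulates y = kx(1-x) over that 0 to 1 range. The value k = 2.0 and the function name logistic are my own choices for the example.

```python
def logistic(k, x):
    """The basic (non-recursive) logistic equation y = k*x*(1 - x)."""
    return k * x * (1.0 - x)

k = 2.0                                # example constant, chosen arbitrarily
xs = [i / 10.0 for i in range(11)]     # 0.0, 0.1, ..., 1.0
ys = [logistic(k, x) for x in xs]

for x, y in zip(xs, ys):
    print(f"x = {x:.1f}  y = {y:.3f}")
print(f"Sum(Y) = {sum(ys):.3f}")       # the running total shown in the plots
```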
(Plots: y vs. x, and Sum(Y).)

It is a pretty tame looking thing, this equation and the curves it generates, that is, until we use feedback. Once we start feeding the output of kx(1-x) back into itself repeatedly, interesting things happen for different values of k. Recursive feedback makes things more interesting, and that is where we are going next.

Now the main story: The recursive logistic equation provides some unique insight into how chaos can emerge from simple math. kx(1-x) is an innocent looking equation; who would ever expect chaotic behavior from it? But keep feeding its result back into itself and you get interesting behavior for various values of k. Put x[n] = k*x[n-1]*(1 - x[n-1]) in a program and you get the following results for various regions of k. Bifurcations occur when the oscillations split from 1 to 2 to 4 and so on, until there are many separate superimposed oscillations. This is similar to a dripping water faucet: at first there is just a single drip period; increase the rate and it drips at beats of superimposed periods; finally it becomes a chaotic flow. This is also the basic behavior of laminar flow, such as slowly rising smoke from a cigarette or incense, breaking into a turbulent mass of rolling smoke. There is a point where it goes from laminar to turbulent, much like the recursive logistic equation goes unstable right around k = 3.57. After k = 3.57 it goes downright chaotic, not just too many superimposed oscillations to read. At k = 4 it acts as a random number generator. Above k = 4 the equation blows up, growing into very large numbers very quickly, even for values only slightly over 4 such as k = 4.00001. The point is that it does not take complex math to generate complex behavior, and this example shows it plainly. Other examples of this include cellular automata and Turing machines. Good information on cellular automata can be found in the book A New Kind of Science by Stephen Wolfram.
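The recursive feedback described above can be sketched in a few lines of Python (this is my own reconstruction, not the original program). The seed x0 = 0.5, the iteration count, and the sample k values are my own choices, picked to land in the steady, period-2, period-4, and chaotic regions described in the text.

```python
def iterate_logistic(k, x0=0.5, n=200):
    """Repeatedly feed the output of k*x*(1-x) back into itself."""
    x = x0
    history = []
    for _ in range(n):
        x = k * x * (1.0 - x)
        history.append(x)
    return history

# Last 4 iterates after settling: one repeated value (steady state),
# two alternating values (period 2), four (period 4), or no pattern (chaos).
for k in (2.8, 3.2, 3.5, 3.7, 4.0):
    tail = iterate_logistic(k)[-4:]
    print(f"k = {k}:  " + "  ".join(f"{v:.4f}" for v in tail))
```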
See the Excel plots link below for the behavior of the recursive logistic equation for various k. This just scratches the surface of what is out there on the logistic equation, which originated in the study of wildlife populations.

Behavior of the equation for regions of k
Dependence on the constant k and the initial condition i: It is interesting to look at how the recursive logistic equation responds to varying both the multiplication constant, referred to as k, and the initial condition, referred to as i. The term i is the initial seed value that is put into the equation, and it can vary from 0 to 1. The cases 0 and 1 are not interesting, so we will narrow the study down to the range 0.1 to 0.9 in steps of 0.1. The value k will be varied from 1 to 4 in steps of 0.5.
The following data show the output value at iterations 5 and 6 (t = 5, t = 6) for various k and i. What can be seen is that some combinations settle down quickly in value and others do not. For low values of k there is a settling to steady state; for medium values there is oscillatory behavior that settles to a steady state; for high values there is oscillatory and chaotic behavior in which the outputs at t = 5 and t = 6 can be far apart.
The last grouping of the data is the difference between the output at t = 5 and t = 6. Clearly, for the higher values of k the function output has wide variations, whereas for lower values of k the output has already settled to its final value by t = 5 and 6.
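The table described above can be regenerated with a short Python sketch (a reconstruction, not the original spreadsheet); the helper name logistic_at is my own.

```python
def logistic_at(k, i, t):
    """Value of the recursive logistic equation after t iterations from seed i."""
    x = i
    for _ in range(t):
        x = k * x * (1.0 - x)
    return x

ks = [1.0 + 0.5 * j for j in range(7)]              # k = 1.0, 1.5, ..., 4.0
seeds = [round(0.1 * j, 1) for j in range(1, 10)]   # i = 0.1, 0.2, ..., 0.9

for k in ks:
    for i in seeds:
        x5 = logistic_at(k, i, 5)
        x6 = logistic_at(k, i, 6)
        print(f"k = {k:.1f}  i = {i:.1f}  "
              f"t5 = {x5:.4f}  t6 = {x6:.4f}  diff = {x6 - x5:+.4f}")
```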
Correlation: Separating Randomness From Deterministic Chaos

Chaos is deterministic. The simple logistic equation can be used as a random number generator, yet its output is not random; it is deterministic chaos. What looks random to the naked eye can sometimes have underlying chaos, and in some cases this is easy to prove. Recall the iterations 5 and 6 shown above. What if we plotted all the output values in a way that accentuates the relationship between successive output steps? The way to do this is correlation: plot the output at time (T) on the X axis and the output at (T+1) on the Y axis, and the underlying chaos will reveal itself in the pattern that appears.
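This return-map (correlation) test is easy to check numerically. The sketch below is my own illustration, using k = 4 where the map is fully chaotic: every (x[t], x[t+1]) pair lands exactly on the parabola y = kx(1-x), while pairs drawn from Python's random generator do not.

```python
import random

def logistic_series(k, x0, n):
    """n successive outputs of the recursive logistic equation."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(k * xs[-1] * (1.0 - xs[-1]))
    return xs

k = 4.0
series = logistic_series(k, 0.3, 50)
pairs = list(zip(series, series[1:]))           # (x[t], x[t+1]) points

# Deterministic chaos: every point sits on the parabola y = k*x*(1-x).
max_err = max(abs(y - k * x * (1.0 - x)) for x, y in pairs)
print("max distance from parabola (logistic):", max_err)

# A truly random sequence shows no such structure: points scatter.
rand = [random.random() for _ in range(50)]
rand_err = max(abs(y - k * x * (1.0 - x)) for x, y in zip(rand, rand[1:]))
print("max distance from parabola (random):  ", rand_err)
```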
The first plot below shows an XY plot for the above values plus another 40, for a total of 50 points. The curve is a parabola, clearly showing deterministic chaos. If the process were truly random, the points would be uncorrelated and the plot would be a random scatter. The second plot shows a truly random scatter of 50 values in the XY plane. Of course, in real life I wouldn't expect things to be as simple as this example. Some systems, like the stock market or the weather, might have deterministic patterns that are hard to pick out.

Another type of XY analysis, commonly used in electrical engineering, is to plot the input of a system on the X axis and its output on the Y axis. This is what would be called a linearity and phase plot. If the system is linear and has no phase shift, the result is a line at a 45 degree angle. Nonlinearity shows up as a bend in the line; phase shift causes the plot to open up, producing an ellipse, then a circle at a 90 degree phase shift, and more complicated Lissajous figures when the frequencies differ.

My Additional Random Thoughts

Another interesting idea related to chaos is the question of when and where the randomness gets into the system to begin with. Is it when a die finally lands on its number, or when it leaves the thrower's hand? It is actually closer to when it leaves the hand, the initial conditions; from that point the so-called butterfly effect kicks in. So it is almost moot to yell at the die, "come on 6, come on 6," or whatever. Relatedly, the observation of the outcome occurs when the first observer, human or instrument, actually observes. When the die finally lands in its resting place, in essence its wave function collapses to an observable state. It is interesting to extrapolate this to the famous Schrödinger's Cat experiment: whatever instrument actually interacts macroscopically with the radioactive particle that leads to the demise of the cat is the actual observer. It is at the interface between the quantum and the macroscopic that the observation occurs, much like a thrown die finally coming to a halt on a stable (point) attractor, the line where chaos becomes static stability.

Original Build Date: 07-18-2005  Last updated 12-23-2007