Category Archives: Mathematics

Journal Club: Deterministic nonperiodic flow (Edward Lorenz)

I have been reading James Gleick’s Chaos and I must confess that I am very impressed with the book so far. I am beginning to realize some of the practical applications of non-linear dynamics to analog circuit design; more on that later. What has been very interesting is the slow change in the mentality of the scientific world, from the notion that a small change in a system’s initial conditions only warrants a small change in the output, to the reality that small changes in initial conditions can generate wildly different results. One of the pioneers in this field was the late Edward Lorenz. He discovered that a slight change (less than 1%) in the initial conditions of his deterministic weather model, which was numerically integrated, would cause the outcome to diverge from the unperturbed simulation to the point that the two weather systems were completely different after several days. The error in his integrator could not account for this disparity, so he worked through some analytic computations and found that simple differential equations can have very complex behaviors that are highly dependent on initial conditions. He published his results in the Journal of the Atmospheric Sciences, a paper that is well worth looking over. For those who are not mathematically inclined, the introduction and conclusion alone should provide some insight into the paper. Additionally, this is the paper where the often-duplicated Lorenz attractor, or butterfly attractor (figure 2), makes its first appearance.
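To make the sensitivity concrete, here is a minimal sketch of my own (not from the paper) that integrates Lorenz’s three equations with his classic parameters and perturbs one initial condition by one part in a million. The two trajectories visibly separate within a few thousand integration steps.

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz system from the 1963 paper.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def integrate(state, dt=0.01, steps=4000):
    # Crude fixed-step RK4 integration; returns the whole trajectory.
    traj = [state]
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-6]))  # perturb by one part in a million

# The maximum separation between the two runs, sampled every 1000 steps,
# grows by orders of magnitude: deterministic, yet wildly divergent.
print(np.abs(a - b).max(axis=1)[::1000])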

( 1963lorenz-deterministic-nonperiodic-flow )

Book review: The Philosophy of Space and Time

It took me a few months to finally read this book, but it was well worth it. I have been reading it prior to sleep, as it was so full of information that it was difficult to read more than ten pages without taking a break to think about all of the new ideas. Furthermore, the information was presented in such an accessible manner that even those who are not specialists in relativity, topology or physics can appreciate the message.

I selected this book because I figured the topic was far enough away from electrical engineering that it could give a new perspective on understanding what is implied by measuring time and distance. Sure enough, this book provided many insights into the nature of our universe through the relation of time and space measurement. I will avoid summarizing the book; however, I will mention that it would be a pleasant read for those interested in non-Euclidean coordinates and the effects of gravitational fields. The book is extremely well written and reads much like a lecture series, where the audience does not need to be able to carry out all of the steps of each operation but acquires a taste for the process and a deeper appreciation. From the point of view of technical written English, this was one of the most understandable books on a physical subject that I have read in some time.

More free textbooks

books.jpg

While looking for ways to escape multivariate calculus purgatory in the final weeks of the semester, I came across Open Math Text. This is a collection of math books (in PDF and LaTeX) that are openly available for distribution and are aimed at general scholars. A quick look at the collection will show that most of the books are authored by Dr. David Santos, a professor at the Community College of Philadelphia. It seems that he has written and made available more books, in multiple languages, than the number of scholarly papers that most researchers publish at full universities.

While looking at his personal page, I found another open textbook collection called Textbook Revolution. The obvious downside is that these publications may not go through the same levels of review as textbooks printed at conventional publishers, however, it is nice to know that there is a group of people actively working to make affordable textbooks available.

Journal Club: A Mathematical Theory of Communication by Claude Shannon

shannon.jpg

As promised before, I have finally worked through the majority of this paper, enough to give a brief introduction and discussion.

The key point of this paper is to demonstrate the importance of statistical analysis and its application to determining information generation and transmission capacity. The measure H, or entropy, can be thought of as the amount of variance, or uncertainty, in a communication system. This lets us define the theoretical capacity of a communication system given the known statistical properties of its constituents, as well as apply the analysis to practical systems.

The concept of information entropy deals with the uncertainty in the expected value of this information. Although it is rooted in statistical mechanics, it can be seen that highly predictable information has low variance, and therefore lower entropy, compared to more random information. From this measure of information entropy, we can determine the number of bits necessary to efficiently encode the information, or, to put it another way, how many symbols we can transmit per bit (assuming a digital communication medium). Although the case of a uniform probability distribution over all symbols is easiest to analyze and leads to the highest entropy, most practical applications have particular statistical distributions for symbol/information generation. Shannon goes to great lengths to demonstrate this with the English language, noting that the selection of letters, or even words, is highly structured and far from random. This structure is a measure of the redundancy of the information, so that if I typ like ths, you cn stil undersnd me. (Spammers have been rediscovering this fact for years.)
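A tiny sketch of my own (not from the paper) may help here: Shannon’s entropy is H = -sum(p * log2(p)) over the symbol probabilities p. A uniform 27-symbol alphabet (26 letters plus space, as Shannon uses) maximizes H, while a skewed distribution — the toy numbers below are illustrative, not real English statistics — lowers it, which is exactly the redundancy described above.

import math

def entropy(probs):
    # Entropy in bits per symbol for a discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1.0 / 27] * 27          # 26 letters + space, equally likely
print(entropy(uniform))            # ~4.75 bits/symbol, the maximum

# A heavily skewed toy distribution (sums to 1, not real letter frequencies):
skewed = [0.18, 0.12, 0.10, 0.08] + [0.52 / 23] * 23
print(entropy(skewed))             # ~4.28 bits/symbol, noticeably lower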

Once the information entropy for each of the circuits involved in the communication system is determined, the channel capacity can be determined in the form of symbols per second given a finite certainty and a raw channel bit-rate. Shannon gives a fine example of a digital channel operating at 1000 bits/s with a 1% error rate, leading to an effective bit rate of ~919 bits/s to account for error correction. Some communication system examples are given which I will not discuss in depth; however, I will try to reiterate the important steps in efficient communication design. Although Shannon gives a mathematical formulation for determining the theoretical limit of channel throughput, it is up to the designer to create a system which comes close to that limit. To do this, it is imperative to know the statistical properties of all of the sub-systems involved and the noise that may be present; only then can efficiency be achieved.
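The 919 bits/s figure is easy to check yourself (my arithmetic, not Shannon’s text verbatim): for a binary symmetric channel the capacity per raw bit is 1 - H(p), where H(p) is the binary entropy of the error probability p.

import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits.
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

raw_rate = 1000.0   # bits per second
p_error = 0.01      # 1% of bits are flipped
effective = raw_rate * (1 - binary_entropy(p_error))
print(effective)    # ~919 bits/s, matching the figure in the paper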

The paper is by far more in-depth than this introduction, and the math is not too hard; if nothing else, it is worth a look-over for the commentary on the statistical nature of the English language. As always, feel free to post a comment to discuss something about the paper, add something, or correct a mistake I have made. As a small bonus, I am adding Shannon’s patent for PCM-encoded voice/telephone service for those who like to read those types of things.

( 1948shannon-a-mathematical-theory-of-communication.pdf )
( 1946shannon-communication-syste-memploying-pulse-code-modulation-patent.pdf )

Micro Journal Club: the Central Limit Theorem

normal_distribution_pdf.png

This very small introduction to the Central Limit Theorem is probably worthwhile before the Shannon paper. The main point is that as we take more and more samples from a random variable with a fixed mean and variance, the distribution of the sample mean approaches a normal (Gaussian) distribution. That is, regardless of the distribution of the random variable, if it meets the criteria, it will behave like a normally distributed random variable in the limiting case. The typical engineering application of this theorem is making the assumption that some measured quantity is normally distributed and using that assumption to define things like confidence limits and so forth. The requirements for this assumption are that the process is second-order stationary, meaning the mean and variance do not change in the window of observation, and that the number of samples approaches infinity. The requirement for a large number of samples can sometimes be loosened, since the residual differences between the sample distribution and a normal distribution can sometimes be determined. The requirement for a stationary process cannot. For example, it would be foolish to apply Gaussian statistics to a random walk (Brownian motion).
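Here is a minimal demonstration of my own (not from the linked notes): averages of samples drawn from a decidedly non-Gaussian distribution (uniform on [0, 1]) look more and more Gaussian as the number of samples per average grows, with the spread shrinking as 1/sqrt(12n).

import numpy as np

rng = np.random.default_rng(0)

def sample_means(n_samples, n_means=100000):
    # Means of n_samples uniform draws, repeated n_means times.
    return rng.uniform(0.0, 1.0, size=(n_means, n_samples)).mean(axis=1)

for n in (1, 2, 10, 100):
    m = sample_means(n)
    # The mean stays at 0.5, the standard deviation tracks sqrt(1/(12n)),
    # and a histogram of m approaches the Gaussian bell curve.
    print(n, m.mean(), m.std(), np.sqrt(1.0 / (12 * n)))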

The key message is that the normal/Gaussian assumption is typically a good one, as long as the statistical nature of the random variable under investigation is constant throughout the period of observation and the number of samples is large.

( sec_4_f.pdf ) ( Image is from Wikipedia )

Free lectures on mathematics from Dr. Lawrence C. Evans

pasta2_07232006.jpg

I am participating in a summer reading course on stochastic differential equations and subsequently ran across lecture notes from Dr. Evans entitled “An introduction to stochastic differential equations“. They give a quick introduction to statistics and Brownian motion, followed by stochastic integrals including the Ito formula. Finally, stochastic differential equations are introduced and their applications are given. I have only looked over the first half in detail and found it to be pretty reasonable. Furthermore, Dr. Evans has a larger set of available publications, which include lecture notes and surveys. The semi-official book for the course is “Elementary Stochastic Caculus with Finance in View” by Mikosch (typo is reproduced from inside the front cover). A review of the book will follow later when I read more of it.
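For the curious, here is a minimal Euler-Maruyama sketch of my own (not from Dr. Evans’s notes) for geometric Brownian motion, dX = mu*X dt + sigma*X dW, which is a standard first example once stochastic integrals are in hand.

import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n=1000):
    # One sample path of dX = mu*X dt + sigma*X dW on [0, T].
    dt = T / n
    x = x0
    path = [x]
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x = x + mu * x * dt + sigma * x * dW
        path.append(x)
    return np.array(path)

print(euler_maruyama()[-1])  # terminal value of one realization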

Why pasta? It reminds me of a stochastic sample set. Image was found on Musable Gourmet.

( lawrence_evans_sdes.pdf )