Category Archives: Journal Club

Journal Club: Deterministic nonperiodic flow (Edward Lorenz)

I have been reading James Gleick’s Chaos and I must confess that I am very impressed with the book so far. I am beginning to realize some of the practical applications of non-linear dynamics to analog circuit design; however, more on that later. What has been very interesting is the slow change in the mentality of the scientific world, from the notion that a small change in a system’s initial conditions only warrants a small change in the output, to the reality that small changes in initial conditions can generate wildly different results. One of the pioneers in this field was the late Edward Lorenz. He discovered that a slight change (less than 1%) in the initial conditions of his deterministic weather model, which was numerically integrated, would cause the outcome to diverge from the unperturbed simulation to the point that the two weather systems were completely different after several days. The error in his integrator could not account for this disparity, so he worked through some analytic computations and found that simple differential equations can have very complex behaviors that are highly dependent on initial conditions. He published his results in the Journal of the Atmospheric Sciences, in a paper that is well worth looking over. For those who are not mathematically inclined, the introduction and conclusion alone should provide some insight into the paper. Additionally, this is the paper where the often-reproduced Lorenz attractor, or butterfly attractor (figure 2), makes its first appearance.
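To get a feel for this sensitivity, here is a minimal Python sketch (my own, not from the paper) that integrates the classic Lorenz equations with his original parameters and compares two trajectories whose initial conditions differ by far less than 1%; the step size and print interval are arbitrary choices for illustration.

import numpy as np

def lorenz_deriv(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations (dx/dt, dy/dt, dz/dt)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    """Crude fixed-step RK4 integration, returning the full trajectory."""
    traj = [state]
    for _ in range(steps):
        k1 = lorenz_deriv(state)
        k2 = lorenz_deriv(state + 0.5 * dt * k1)
        k3 = lorenz_deriv(state + 0.5 * dt * k2)
        k4 = lorenz_deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

# Two trajectories whose initial conditions differ only in the sixth decimal place.
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.000001]))

# The separation grows roughly exponentially until it saturates at the size of
# the attractor, at which point the two "weather systems" are unrelated.
print(np.linalg.norm(a - b, axis=1)[::500])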

( 1963lorenz-deterministic-nonperiodic-flow )

Journal Club: A Mathematical Theory of Communication by Claude Shannon

shannon.jpg

As promised before, I have finally worked through the majority of this paper, enough to give a brief introduction and discussion.

The key point of this paper is to demonstrate the importance of statistical analysis and its application to determining information generation and transmission capacity. The measure H, or entropy, can be thought of as the amount of variance, or uncertainty, in a communication system. This lets us define the theoretical capacity of a communication system given the known statistical properties of its constituents, as well as apply the analysis to practical systems.

The concept of information entropy deals with the uncertainty in the expected value of this information. Although it is rooted in statistical mechanics, it can be seen that highly predictable information has low variance, and therefore lower entropy, than more random information. From this measure of information entropy, we can determine the number of bits needed to encode the information efficiently, or, to put it another way, how many symbols we can transmit per bit (assuming a digital communication medium). Although the case of a uniform probability distribution over all symbols is the easiest to analyze and yields the highest entropy, most practical sources have particular statistical distributions for symbol/information generation. Shannon goes to great lengths to demonstrate this with the English language, noting that the selection of letters, or even words, is highly structured and far from random. This structure is a measure of the redundancy of the information, so that if I typ like ths, you cn stil undersnd me. (Spammers have been rediscovering this fact for years.)
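As a quick illustration (my own sketch, not from the paper), the entropy of a discrete source is H = -Σ p_i log2(p_i) bits per symbol, and a skewed, more predictable distribution needs fewer bits on average than a uniform one:

import math

def entropy(probs):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four symbols, uniformly distributed: maximum entropy of 2 bits/symbol.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

# Four symbols with a skewed (more predictable) distribution: lower entropy,
# so an efficient code needs fewer bits per symbol on average.
print(entropy([0.7, 0.15, 0.1, 0.05]))     # ~1.32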

Once the information entropy for all of the circuits involved in the communication system is determined, the channel capacity can be determined in the form of symbols per second, given a finite certainty and a raw channel bit rate. Shannon gives a fine example of a digital channel operating at 1000 bits/s with a 1% error rate, leading to an effective rate of ~919 bits/s once the redundancy needed to handle errors is accounted for. Some communication system examples are given which I will not discuss in depth; however, I will try to reiterate the important steps in efficient communication design. Although Shannon gives a mathematical formulation for determining the theoretical limit on channel throughput, it is up to the designer to create a system which comes close to that limit. To do this, it is imperative to know the statistical properties of all of the sub-systems involved and the noise that may be present; only then can efficiency be achieved.
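Shannon’s 1000 bits/s example follows directly from the binary entropy function: the capacity of a binary symmetric channel with error probability p is C = R·(1 − H(p)). A minimal check of the arithmetic (my own sketch):

import math

def binary_entropy(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(raw_rate, p):
    """Capacity of a binary symmetric channel given a raw bit rate and error probability p."""
    return raw_rate * (1 - binary_entropy(p))

# Shannon's example: 1000 bits/s raw rate with a 1% error probability.
print(bsc_capacity(1000, 0.01))   # ~919 bits/s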

The paper is far more in-depth than this introduction and the math is not too hard; if nothing else, it is worth a look for its commentary on the statistical nature of the English language. As always, feel free to post a comment to discuss something about the paper, add something, or correct a mistake I have made. As a small bonus, I am adding Shannon’s patent for PCM-encoded voice/telephone service for those who like to read those types of things.

( 1948shannon-a-mathematical-theory-of-communication.pdf )
( 1946shannon-communication-syste-memploying-pulse-code-modulation-patent.pdf )

Micro Journal Club: the Central Limit Theorem

normal_distribution_pdf.png

This very small introduction to the Central Limit Theorem is probably worthwhile before the Shannon paper. The main point is that as we take more and more samples from a random variable with a fixed mean and variance, the distribution of the sample mean (or sum) approaches a normal (Gaussian) distribution. That is, regardless of the distribution of the underlying random variable, if it meets the criteria, the averaged quantity behaves like a normally distributed random variable in the limiting case. The typical engineering application of this theorem is to assume that some measured quantity is normally distributed and use that assumption to define things like confidence limits and so forth. The requirements for this assumption are that the process is second-order stationary, meaning the mean and variance do not change over the window of observation, and that the number of samples approaches infinity. The requirement for a large number of samples can sometimes be loosened, since the residual differences between the sample distribution and a normal distribution can sometimes be determined. The requirement for a stationary process cannot. For example, it would be foolish to apply Gaussian statistics to a random walk (Brownian motion).
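To see the theorem in action, here is a quick simulation of my own (not from the linked notes): averaging samples from a heavily skewed distribution produces values that behave very much like a Gaussian once the number of samples per average is large.

import numpy as np

rng = np.random.default_rng(0)

# Draw from an exponential distribution: heavily skewed, nothing like a Gaussian.
# Average n samples at a time; the Central Limit Theorem says the distribution
# of these averages approaches a normal distribution as n grows.
n, trials = 100, 10000
means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# For a normal distribution, ~68% of values fall within one standard deviation
# of the mean; check how closely the sample means match that figure.
mu, sigma = means.mean(), means.std()
within_one_sigma = np.mean(np.abs(means - mu) < sigma)
print(mu, sigma, within_one_sigma)   # mu ~1.0, sigma ~0.1, fraction ~0.68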

The key message is that the normal/Gaussian assumption is typically a good one, as long as the statistical nature of the random variable under investigation is constant through the period of observation and the number of samples is large.

( sec_4_f.pdf ) ( Image is from Wikipedia )

Journal Club: Power-constrained, high-frequency circuits for the IBM POWER6 microprocessor

power6.JPG

The inaugural paper for the Journal Club is titled “Power-constrained high-frequency circuits for the IBM POWER6 microprocessor” by Brian Curran et al. and is published in the November 2007 issue of the IBM Journal of Research and Development. I have much respect for the whole POWER micro-architecture; consequently, I am interested in learning a little bit about the design methodology which led to a near-5GHz core logic clock rate. The IBM design team responsible for the POWER6 applied a three-pronged strategy to achieve this performance goal: cutting-edge technology, manual circuit optimization, and thorough testing.

The processor was designed at a 65nm manufacturing node, so various technologies needed to be employed to keep leakage current to a minimum and thereby maintain an acceptable power usage. The first was silicon-on-insulator (SOI), which reduces back-gate current due to parasitic capacitances and eliminates CMOS latch-up. The processing steps to implement SOI are well understood; however, extra care must be given to the design layout, as it is no longer possible to bias the back-gate by connecting the whole substrate to a fixed potential. Another technological advance was the use of dielectrics with low relative permittivity between traces to further reduce transmission line effects and the associated propagation delay of interconnects. Since less energy is stored in the dielectric material between interconnects, this also reduces power consumption.

From a design standpoint, the goal of the team was to distribute the clock properly and to keep the latency of the core logic circuits below “13 FO4”. Propagation delays, loading, and transmission line effects play a very important role in the 5GHz regime. It was very interesting to see how multiple layers of buffers and clock delays were included to guarantee that clock pulses would be synchronized across various cells while maintaining an adequate slew rate. The 13 FO4 latency means that each processing cycle had to be accomplished in the time it would take a signal to propagate through a chain of thirteen inverters, each driving a load of four identical inverters (a fanout of four). This is the criterion which allowed for a 5GHz core logic clock rate. It was mentioned that threshold voltages were tuned, probably through ion implantation, to minimize leakage while maximizing speed.
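As a rough sanity check (my own back-of-the-envelope numbers, not figures from the paper), if one assumes an FO4 inverter delay somewhere around 15ps for a 65nm process, a 13 FO4 cycle budget lands right around 5GHz:

# Hypothetical FO4 delay for a 65nm process (an assumption for illustration,
# not a figure from the paper); actual values depend on process, voltage, and temperature.
fo4_delay_ps = 15.0

# A 13 FO4 budget gives the minimum clock period and the corresponding clock rate.
cycle_time_ps = 13 * fo4_delay_ps
clock_rate_ghz = 1000.0 / cycle_time_ps
print(cycle_time_ps, clock_rate_ghz)   # 195 ps -> ~5.1 GHz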

Simulations, being the last major piece of the paper, were less interesting since they relied mostly on proprietary tools. The piece that may be important for readers was the iterative cycle of debugging and performance tuning: going from the schematic overview, to transmission line calculations, to back-annotation, to placing and routing made good sense.

Please feel free to contribute your thoughts on this paper, my interpretation, or another paper that would be an interesting read in the comments section. Next, let’s look at Claude Shannon’s paper titled ‘A Mathematical Theory of Communication’, as suggested by Adam. As the full paper is quite long, we may want to look at only the first thirty pages in detail. Those who want to brush up on their mathematics before attempting the paper should start on page thirty-two.

2007curran-power-constrained-high-frequency-circuits-for-the-ibm-power6-microprocessor.pdf