For almost every digital circuit designer out there, signal integrity problems come up as frequencies increase, board sizes shrink and IC pin impedances change. (Every designer except myself, that is; I work with very low frequency analog circuits!) When signal integrity becomes so poor that transmission errors reach unacceptable levels, the efficient digital designer may venture into the analog domain and start looking at transmission line models of their digital traces. For those who prefer to think of everything as analog, this could be the point at which to “prove” that ones and zeros exist only in the digital designer’s mind and do not represent physical reality (although thinking of voltages as high/low is sometimes more efficient).
Sometime in the 1970s, Motorola introduced digital emitter-coupled logic (MECL) circuits. I don’t know if Motorola was first; however, they had plenty of expertise on the subject. What made ECL useful in those days was the incredibly fast switching rate of these logic circuits. ECL works very similarly to standard bipolar junction transistor based designs; however, the transistors in ECL are always partially conducting. The high and low logic levels are determined by different points along the devices’ load curves, which made them faster than saturating BJT logic, which had to go from completely off to completely on to switch. ECL devices were (and still are) much faster than comparable CMOS devices since CMOS depends on the relatively slow thermal generation of carriers to create the conduction region. The point is that fast digital circuits are not that new, and we are facing some of the same transmission line problems as thirty years ago as we scale dimensions and voltages down and increase the operating frequency. If the operating frequencies of interest are such that wavelengths (in the conducting metal trace) are comparable to the length of the trace, transmission line models must be employed. This matter of wavelength can be a tricky question, as it can readily be shown that the wavelength of a 60Hz signal in a thick copper conductor is about 5cm (with a phase velocity of only 3.22m/s).
Now that we believe our traces can act like transmission lines, we are faced with the problem of matching impedances. In the simplest of cases, we have only the driving logic (generator), the trace and the receiving logic (termination). From the driving side, the output impedance of the device should closely match the trace impedance, typically something like 30-70 ohms. If the output logic is not matched to the trace, a reflection will not occur at the driving logic in the strict sense; however, the signal traveling down the conducting trace will already be deformed. Now that we have a packet of current traveling down the trace, as specified by the generator, any mismatch in impedance between the trace and the termination logic will result in a reflection, which will further deform the other current packets traveling down the conductor. This problem can easily happen when CMOS logic (nearly infinite input impedance) is coupled with low output impedance logic and the transmission frequency is gradually increased. The problem becomes more complicated when there are multiple terminations on a given conductor segment, as each impedance mismatch generates a reflection, and so forth.
Besides the MECL Design Handbook, Altera provides a few application notes on signal integrity and high speed design which include termination practices. Typically, introducing a resistor in series or in parallel (to ground) is all that is required to mostly match impedances and give adequate performance; the most important concept is knowing when and where to use these terminating resistors. Although some devices come with various termination options built into the die, most still don’t, so it is good to know when a properly placed resistor network can save a lot of shielding attempts and speed up the debugging process.
Over a decade ago, I remember printing out and reading a text by Aleph1 entitled Smashing the Stack for Fun and Profit. Back then, stack-based buffer overflows were a hot topic, and the tide was turning as programmers began to realize that null termination of strings was not a security measure and that bounds checking was becoming necessary for the security-minded programmer.
The issue was that many people were used to using a function like strcpy() to copy a string from one memory location to a buffer allocated on the stack. The strcpy() function simply starts copying from the supplied address and stops when it reaches a null character, without knowing how much space was allocated for the string at the destination. As a result, parts of the stack that were not allocated to the “local” variable, like the return address of a function, could be overwritten with arbitrary values. With a properly crafted string, even executable code could be placed somewhere on the stack and the return address overwritten so that this code would be executed, for fun and profit as they say. Programmers became wiser and started using strncpy() instead, which only copies a fixed maximum amount of data and therefore guarantees that the allocated space will not be exceeded. Furthermore, most modern operating systems can now mark the memory dedicated to the stack as non-executable, so the above routine would be foiled. Individuals have found some ways around these security features; however, the stack smashing exploit (as described by Aleph1) has mostly been considered a thing of the past.
I use the term mostly since Nintendo has preserved the knowledge and allowed practice of this exploit with their release of the latest Zelda game for the Wii. Through a cleverly crafted save file, the name of the main character’s horse can contain a string like the one described above and lead to execution of arbitrary code. There are a few tricks to maintaining the integrity of the save file; however, after a decade the above exploit lives on, almost in the same form as described by Aleph1.
(Although the picture is not from the Twilight Princess game, it is a good game nonetheless.)
To follow up the last post on resistor selection, here is the Agilent Technologies Impedance Measurement Handbook. I found this handbook to be quite useful and well written, as it covers everything from the basics of measurement problems to examples of both low frequency and RF impedance measurements. The authors focus on the often overlooked parasitic properties of common system components as well as of the measurement systems themselves. They outline methods to construct test structures and procedures that minimize these parasitics and go on to give practical examples. For obvious reasons, all of the test equipment in the handbook is made by Agilent; however, other brands can be used just as well.
In all of my undergraduate and graduate career, I have had very little introduction to passive component selection. Most of what I know came at the expense of poor designs and from Bob Pease’s book on troubleshooting analog circuits. It should be known that resistors typically have small associated capacitances and inductances which can lead to strange circuit effects; however, this application note focuses mostly on selecting the right resistor based on power rating.
As circuits get smaller, using something as small as a 0201 format resistor becomes fairly attractive. The downside (there are always downsides!) is that as the package size goes down, so does the power rating. Other factors, like enclosure and cooling, can change the rated dissipation limit. Although the app note covers many of these factors, it also provides a useful “shortcut” sheet on the seventh page to simplify the possible de-rating calculations for a particular circuit design. This information is still missing from many third-year EE circuit design courses, so hopefully this PDF will provide an adequate supplement.
A pair of OLPC XO laptops arrived at our lab today. Given the hype and drama surrounding their debut, I decided to give them a small run through to see how usable the machine was. As the box was opened, I was very surprised at the tiny size of the device. Since this was designed for children, the size seemed fairly appropriate. The trouble started when I tried to use the device. The keyboard was about 30% smaller than a typical laptop keyboard and was covered with a single piece of rubber. This made the keys unresponsive and fairly hard to type on quickly. Furthermore, the mouse track pad had very poor response and was a total pain to use at points. Finally, the machine seemed very underpowered and took about ten seconds to start up a terminal with nothing else running.
From a positive perspective, the laptop’s user interface was fairly intuitive and well labeled. The included video capture software worked on par with a typical 1.3MP camera phone and seemed to capture video smoothly. The device had no problem associating with our wireless network; however, there was some difficulty getting it on the VPN. The number of ports is pretty good (three USB ports, audio, etc.) and the battery life seems to be on par with a typical portable machine. The $180 price tag is a bit higher than the original $100 target; however, I foresee the price gradually dropping as components get cheaper. Eventually adding a touchscreen would not be a bad idea.
To conclude, this laptop seems very appropriate for young kids. The keyboard looks like it could resist liquids and debris, and the device looks durable. I didn’t check whether any parental controls are available, as I doubt any parent would want to let their 5-year-old sit behind a computer all day long. As for adult use, it is better to spend a little more money and get a subcompact laptop from ASUS or a budget laptop from Dell. The size of the machine and the lack of ports (ethernet, parallel, serial) make it less attractive from a hacking perspective.
According to Look Around You, an investigative scientific program appearing on the BBC’s Channel 3, a new atomic element that may revolutionize semiconductor fabrication has been successfully formulated in laboratory conditions. This element is Intelligent Calcium (see above) which may replace sodium ion implantation in the near future and thereby increase both digital and analog circuit performance.
From a design standpoint, ion implantation is one of the crucial steps in integrated circuit manufacturing, as it allows the designer some freedom to set the threshold voltage of a MOSFET as well as to negate some of the potential problems with manufacturing. The basic idea is that by applying a positive or negative voltage at the gate terminal, we can attract either negative or positive charges (pairs of which are constantly being thermally generated) to the “top” of the device, respectively. If enough of these charges accumulate, we can form a conducting channel through the substrate. By implanting immobile ions in the gate oxide region, we can change the voltage at which this channel begins to form and thereby the required bias for transistor operation. It is not hard to imagine that some chemical process steps may add undesired ions at the silicon-oxide interface, in addition to dangling bonds in the oxide, so this same technique may sometimes be used to balance the parasitic ion concentration due to processing and return the device to the designed activation threshold.
Typically, the positive ion of choice is sodium. Ions are generated from an electrically heated metal source and are then accelerated by electromagnetic fields until they impact the target. Upon impacting the crystal lattice, the sodium loses momentum and typically does not move from its resting position unless the device is severely heated (it can happen!). The sodium’s only action is to interact with the charges around it and modulate the effective threshold voltage of the device. The main downside is that the sodium ion cannot ‘decide’ when to act, so its effects are constant throughout time.
This is where the concept of intelligent calcium comes in. Unlike the ‘dumb’ sodium, the intelligent calcium’s higher atomic weight gives it greater flexibility in its charge configuration and thereby more freedom to ‘decide’ when to act as a 2+ valence ion and when to pretend to be neutrally charged. By using intelligent calcium as the positive ion throughout an integrated circuit, a calcium network is formed where each atom becomes a node and can communicate with both adjacent and far-away atoms to get a general feel for the situation and the activity of the device. It can then modulate its charge to increase (or decrease) the individual transistor thresholds as needed. From an analog perspective, the transconductance of the device goes up tremendously, as does the frequency response (due to intelligent calcium’s rapid activation). From a digital perspective, the speed of information propagation in the intelligent calcium network exceeds the mobilities of both holes and electrons, even in a strained silicon lattice. For this reason, the transistors adjust their thresholds in advance of the gate voltage changes and thereby increase their switching speeds. This in turn translates to quicker gates and overall quicker devices.
The future is bright for intelligent calcium, as it has many desirable properties for semiconductor fabrication. Scientists are presently pushing the bleeding edge of technology as they investigate the possibility of using the intelligent calcium network as a means of communication between transistors and as a total replacement for the metal interconnects. Progress is slow; however, I have full confidence that I will one day have the opportunity to image a metal-less, intelligent calcium powered device in the weekly IC Friday column.