For almost every digital circuit designer out there, signal integrity problems come up as frequencies increase, board sizes decrease, and IC pin impedances change. (Every designer except myself, that is; I work with very low frequency analog circuits!) When signal integrity becomes so poor that unacceptable transmission error rates are reached, the efficient digital designer may venture into the analog domain and start looking at transmission line models of their digital traces. For those who prefer to think of everything as analog, this could be the point of argument to “prove” that ones and zeros only exist in the digital designer’s mind and do not represent physical reality (although thinking of voltages as high/low is sometimes more efficient).
Sometime in the 1970s, Motorola introduced digital emitter coupled logic (MECL) circuits. I don’t know if Motorola was first; however, they had plenty of expertise on the subject. What made ECL useful in those days was the incredibly fast switching rate of these logic circuits. ECL works very similarly to standard bipolar junction transistor based designs; however, the transistors in ECL are always partially conducting. The high and low logic levels are determined by different points along the devices’ load curves, which made them faster than saturating BJT logic, which had to go from completely off to completely on to switch. ECL devices were (and still are) much faster than comparable CMOS devices, whose outputs must fully charge and discharge the load capacitance on every transition. The point is that fast digital circuits are not that new, and we face some of the same transmission line problems as thirty years ago when we scale dimensions and voltages down and increase the operating frequency. If the operating frequencies of interest are such that wavelengths (in the conducting metal trace) are comparable to the length of the trace, transmission line models must be employed. This matter of wavelength can be a tricky question, as it can be readily shown that the wavelength of a 60Hz signal in a thick copper conductor is about 5cm (with a phase velocity of only 3.22m/s).
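That 60Hz figure is easy to sanity-check with the standard good-conductor skin-depth formula (a quick sketch; the copper resistivity is the usual textbook value of 1.72e-8 Ohm-m):

```python
import math

# Skin depth in a good conductor: delta = sqrt(2*rho / (omega*mu)).
rho = 1.72e-8              # resistivity of copper, ohm*m
mu = 4 * math.pi * 1e-7    # permeability of free space, H/m
f = 60.0                   # frequency, Hz
omega = 2 * math.pi * f

delta = math.sqrt(2 * rho / (omega * mu))  # skin depth, m
wavelength = 2 * math.pi * delta           # wavelength inside the conductor, m
v_phase = wavelength * f                   # phase velocity, m/s

print(f"skin depth  = {delta * 1000:.2f} mm")     # ~8.5 mm
print(f"wavelength  = {wavelength * 100:.1f} cm")  # ~5.4 cm
print(f"phase speed = {v_phase:.2f} m/s")          # ~3.2 m/s
```

The wavelength inside the conductor comes out around 5cm, with a phase velocity of roughly 3.2m/s, matching the numbers above.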
Now that we believe that our traces can act like transmission lines, we are faced with the problem of impedance matching. In the simplest of cases, we have only the driving logic (generator), the trace and the receiving logic (termination). From the driving perspective, the output impedance of the device should closely match the trace impedance, typically something like 30-70 Ohms. If the driver is not matched to the trace, no reflection occurs at the driving logic in the strict sense; however, the signal launched down the conducting trace will already be distorted. Once we have a packet of current traveling down the trace, as specified by the generator, any mismatch in impedance between the trace and the termination logic will result in a reflection, which will further deform the other current packets traveling down the conductor. This problem can easily happen when CMOS logic (nearly infinite input impedance) is coupled with low output impedance logic and the transmission frequency is gradually increased. The problem becomes more complicated when there are multiple terminations on a given conductor segment, as each impedance mismatch generates a reflection, and so forth.
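The size of each reflection falls out of the usual voltage reflection coefficient, Γ = (Z_load − Z_0)/(Z_load + Z_0). A minimal sketch (the 50 Ohm trace and 1 MOhm CMOS input are assumed example values):

```python
def reflection_coefficient(z_load, z_line):
    """Voltage reflection coefficient at a termination."""
    return (z_load - z_line) / (z_load + z_line)

z0 = 50.0  # assumed trace impedance, ohms

# Matched termination: no reflection.
print(reflection_coefficient(50.0, z0))   # 0.0
# High-impedance CMOS input (approximated as 1 Mohm): near-total reflection.
print(reflection_coefficient(1e6, z0))    # ~0.9999
# Low-impedance load: inverted partial reflection.
print(reflection_coefficient(10.0, z0))   # ~-0.667
```

The CMOS case is why the mismatch described above bites: nearly the entire incident wave bounces back toward the driver.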
Besides the MECL Design Handbook, Altera provides a few application notes on signal integrity and high-speed design which include termination practices. Typically, introducing a resistor in series or in parallel (to ground) is all that is required to mostly match impedances and give adequate performance; the most important concept is knowing when and where to use these terminating resistors. Although some devices come with various termination options built into the die, most still don’t, so it is good to know when a properly placed resistor network can save a lot of shielding attempts and speed up the debugging process.
In my battle with transfer function estimation, I have been dealing with many noise problems lately and have come across this application note from TI regarding the calculation of noise figures for basic op-amp circuits. The noise figure deals with the ratio of the circuit’s signal-to-noise ratio (SNR) at the input versus the output. The article derives noise analysis equations for thermal noise in resistive elements and for the rms noise of the active device, and goes on to quantify the noise figure as a function of temperature, resistances and op-amp parameters. This can be useful in determining performance properties of circuits given a set of passive components and can be used to define a “best case scenario”.
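As a rough illustration of the idea (not the full derivation in the note), the noise factor can be written as the total input-referred noise power divided by the thermal noise of the source resistance alone. The sketch below ignores feedback-network noise, and the op-amp noise densities are assumed example values:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

def noise_figure_db(rs, en, inoise, temp=290.0):
    """Noise figure of an op-amp stage driven from source resistance rs.

    en: op-amp input voltage noise density (V/rtHz)
    inoise: op-amp input current noise density (A/rtHz)
    Feedback-network noise is ignored for simplicity.
    """
    source = 4 * k * temp * rs                  # thermal noise density of rs alone
    total = source + en**2 + (inoise * rs)**2   # total input-referred density
    return 10 * math.log10(total / source)

# Assumed example values, roughly a low-noise bipolar op-amp:
print(noise_figure_db(rs=1e3, en=3e-9, inoise=1e-12))   # ~2.1 dB
```

An ideal (noiseless) op-amp gives a noise figure of 0 dB; everything above that is degradation added by the active device.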
This is all well and good, but one might ask what noise has to do with system identification (transfer function estimation)? The simple answer is that the frequency-domain transfer function can be determined by passing “noise” through a system and comparing the spectral properties of the output versus the input. The idea is that white noise has a flat spectrum (over infinite time), so the transfer function can be accurately determined for all frequencies (again, given infinite time). If infinity is too long a time to wait, one trick is using something called periodic random noise to give a well defined spectral distribution in finite time. A Gaussian random number generator can create white noise; however, an inverse Fourier transform is used to generate the periodic noise.
Essentially, enough sinusoids are added together to cover the frequency range of interest with equal amplitudes and randomized phases that are distributed over +/- pi. The amplitude will relate to the desired resulting rms value for the noise and the number of summed sinusoids. The frequencies of choice should line up with the sampled frequencies in the following FFT that will be computed to compare the spectra of the input and output signals. The signal will now look like noise and will be “random”, however, all of the frequency domain components will maintain their amplitude and phase through the whole procedure leading to less variance in the FFTs.
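A minimal sketch of this recipe using an inverse FFT (the period length and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024                   # samples per period (also the FFT length)
k = np.arange(1, n // 2)   # excite every FFT bin except DC and Nyquist

# Equal amplitudes, phases uniformly random over +/- pi.
half_spectrum = np.zeros(n // 2 + 1, dtype=complex)
half_spectrum[k] = np.exp(1j * rng.uniform(-np.pi, np.pi, k.size))

# Inverse FFT of a conjugate-symmetric spectrum yields a real, noise-like signal.
x = np.fft.irfft(half_spectrum, n)

# Every excited bin keeps exactly the amplitude we assigned.
mags = np.abs(np.fft.rfft(x))[1 : n // 2]
print(mags.min(), mags.max())   # both 1.0
```

Since the signal is periodic, averaging FFTs over whole periods reduces measurement noise without smearing the amplitudes or phases, which is exactly the low-variance property mentioned above.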
( slyt094.pdf )
It is true that I hold current sources in high regard due to their limitless applications in the biosciences. Current sources are at the heart of electroplating systems for electrode manufacture, stimulation of tissue, and imaging. This is only a small part of the reason that I find this 1973 application note so appealing. The most important message, to me, is the complete walk-through from the basic governing equations based on ideal op-amps, to non-ideal characteristics, to error propagation.
In this case, the current source becomes less important than the design process. One of the subtle issues that can be seen from the equations is that resistor matching can degrade circuit accuracy just as much as op-amp quality. For this reason, it may be beneficial to spend an extra $0.05 on more accurate resistors and save $1.00 on an op-amp. The converse is also true: you can spend an extra $1.00 on a more precise op-amp to save $0.05 on passive components. The overall error of your complete circuit may end up being comparable in the two cases.
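To put a number on the resistor-matching point, consider the classic worst-case CMRR bound for a four-resistor difference amplifier, a common building block in op-amp current sources (a general illustration, not the specific circuit in the note):

```python
import math

def worst_case_cmrr_db(gain, tolerance):
    """Worst-case CMRR of a four-resistor difference amplifier
    limited purely by resistor matching (ideal op-amp assumed).

    Uses the standard bound CMRR ~= (1 + gain) / (4 * tolerance).
    """
    cmrr = (1 + gain) / (4 * tolerance)
    return 20 * math.log10(cmrr)

# Unity-gain difference amp:
print(worst_case_cmrr_db(1.0, 0.01))    # 1% resistors  -> ~34 dB
print(worst_case_cmrr_db(1.0, 0.001))   # 0.1% resistors -> ~54 dB
```

Many inexpensive op-amps already exceed 80 dB of CMRR on their own, so with 1% resistors the passives, not the op-amp, set the accuracy floor.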
( an587-d.pdf )
Although Wolfson Microelectronics produces some fine integrated circuits, their application note section is somewhat out of the way and doesn’t like to be linked to directly. This didn’t stop me from looking around and finding some potentially useful app notes:
A.C. Coupling Capacitor Selection
Recommended Output Filters for Wolfson Audio DACs
Class D Headphone Filter Component Selection
Issues When Grounding D.C. Coupled Headphone Outputs
The main reason that I was looking there in the first place was that I was getting excessive noise when coupling a portable audio player to an audio system I am working on and couldn’t figure out why. When I took everything apart, I found that the output stage of the audio device was being pulled up to a higher voltage than expected by the coupling on the input stage, thereby biasing the input stage of the audio amplifier incorrectly. After some careful circuit modifications, signal integrity was restored with fairly good low frequency response. At this point, my audio circuit experience is still minimal; I hope to post some designs once I get something worthwhile going.
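For reference, the coupling capacitor and the amplifier’s input resistance simply form a first-order high-pass filter, which is what sets the low-frequency response. A quick sketch with assumed example values:

```python
import math

def hp_cutoff_hz(c_farads, r_ohms):
    """-3 dB corner of an AC-coupling capacitor into a resistive input."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Assumed example: 1 uF coupling cap into a 10 kohm amplifier input.
print(hp_cutoff_hz(1e-6, 10e3))   # ~15.9 Hz
```

Halving either the capacitance or the input resistance doubles the corner frequency, eating into the bass response.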
As per my previous post, I have started working out the waveforms to interface a Sony DualShock 2 controller. I decided to go with a Cypress PSoC instead of a standard 8051 because it has a built-in SPI controller, sparing me most of the bit-banging. The downside is that I don’t have a C compiler built into the development suite, but that is all right; I am pretty good with assembly. In the process of setting this board up and testing a few things, I found the embedded systems section of Wikibooks quite useful. The whole electrical engineering section looks pretty good. The pages can be edited by anonymous readers just as easily as Wikipedia; however, I have not found anything terribly wrong in the limited time I spent looking at the site. Overall, it seems like a decent place to learn for beginners and to contribute for experts.
Here is a short application note (AN-404) from Analog Devices that deals with high performance analog and digital layout on the same printed circuit board. The specific example deals with the AD1845 and CS4231 codecs and demonstrates some ideas for clean power and ground plane separation, among other things. The application note provides some handy numbers, such as a “ballpark” estimate of PCB trace inductance of 1nH/mm. Another helpful feature is that the note prioritizes the various pins of the codec (on page six) to optimize noise management.
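The 1nH/mm figure makes it easy to estimate when a trace’s inductance starts to matter. A quick sketch (the trace length and frequency are assumed example values):

```python
import math

L_PER_MM = 1e-9   # ballpark trace inductance from the app note, H/mm

def trace_reactance_ohms(length_mm, freq_hz):
    """Inductive reactance of a PCB trace using the 1 nH/mm estimate."""
    return 2 * math.pi * freq_hz * L_PER_MM * length_mm

# A 50 mm trace at 100 MHz is far from a negligible impedance:
print(trace_reactance_ohms(50, 100e6))   # ~31.4 ohms
```

Thirty-odd ohms of reactance in a supposedly “zero-impedance” power or ground connection goes a long way toward explaining why decoupling capacitors must sit close to their pins.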
While on the subject of PCB design, here is a nice tutorial covering the various dielectric materials used in printed circuit board fabrication. Its main goal is to give an overview of the properties of the materials so the designer has better judgement of which to use for higher performance RF boards and which is most economical for medium-speed digital designs.
( an-404.pdf )
What do you do when you need to mount a ball-grid array (BGA) package on a circuit board without sophisticated equipment? One popular option is to create something called a “reflow oven”, which is able to control your circuit board’s temperature with respect to time. The idea behind reflow soldering is to apply a thin layer of solder paste (solder with flux) over the exposed pads on a printed circuit board, then place all of the surface-mount components on that side, and then heat the board so the solder melts and the components become electrically attached. This is pretty much the only method for attaching components whose pads are completely on the underside, making them inaccessible to soldering irons. The temperature profile is fairly standardized (here, here and here) and consists of first removing any excess moisture from the packages, then ramping up to the temperature required to melt the solder, then cooling off in a safe manner that prevents component or joint damage. It should be noted that these temperature profiles aim to limit the time components spend at elevated temperatures (>250C) to minimize the risk of heat damage.
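One way to reason about such a profile is to write it down as time/temperature breakpoints and check numbers against it. The breakpoints below are illustrative values in the spirit of the standard lead-free profile, not from any specific datasheet (217C is the usual SAC-alloy liquidus):

```python
# A rough lead-free reflow profile as (seconds, deg C) breakpoints.
profile = [(0, 25), (90, 150), (180, 180), (210, 217),
           (240, 245), (270, 217), (330, 100)]

def max_ramp_rate(points):
    """Largest temperature slope between breakpoints, deg C per second."""
    return max(abs(t2 - t1) / (s2 - s1)
               for (s1, t1), (s2, t2) in zip(points, points[1:]))

def time_above(points, threshold):
    """Seconds spent above a temperature threshold (linear interpolation)."""
    total = 0.0
    for (s1, t1), (s2, t2) in zip(points, points[1:]):
        lo, hi = min(t1, t2), max(t1, t2)
        if hi <= threshold:
            continue                  # segment entirely below the threshold
        if lo >= threshold:
            total += s2 - s1          # segment entirely above the threshold
        else:
            # only the fraction of the segment above the threshold counts
            total += (s2 - s1) * (hi - threshold) / (hi - lo)
    return total

print(max_ramp_rate(profile))     # ~1.95 C/s, below typical ramp limits
print(time_above(profile, 217))   # 60 s above liquidus
```

The same two checks, maximum ramp rate and time above liquidus, are the numbers most component datasheets call out when specifying reflow limits.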
What I am proposing is something much simpler: let’s use a hot plate to heat the PCB and achieve the same sort of reflow process. The main disadvantage is that the process is much less controlled, and the dimensions of the board must be small enough to fit on the hotplate. The primary benefit is simplicity. I am fortunate enough to have a hotplate with a thermocouple on its surface that can measure the surface temperature with some accuracy; an alternate measurement method will be required for other types. Some kind of infra-red measurement would probably work well.
The idea is that we first apply solder paste to the board, where necessary. In this example, I am mounting a MICROSMD8 package where there is ample solder on the board and the chip to achieve a connection. It is often a good idea to put some no-clean flux on the board in any case. Everything is first pre-heated for ten minutes at 50-80C to drive off some of the moisture. The assembly is then heated to about 230C. At this point, the chips should already be aligned over the target pads. The reason for this temperature is that, unlike in an oven, the top surface of the PCB is exposed to air, which creates a thermal gradient. We need to control the heat so that the solder on the top surface just barely melts. This can be seen when watching the PCB under a microscope or with a magnifying glass, as the solder becomes very shiny when it melts. As the solder melts on the chips and PCB, surface tension will pull the chip into alignment. The whole assembly can then be slowly cooled and tested electrically. When populating larger projects, it is best to put on the larger chips first and then place something to act as a heat-sink on top. I have had success with larger DSP chips where I placed inverted bolts on top to radiate away some of their heat while adjusting other components. Finally, don’t forget that a cold PCB looks the same as a hot one, so be careful to avoid burning yourself.
( an081.pdf ) ( an353.pdf ) ( xapp427.pdf )
I have recently stumbled upon a power-related mistake that I made which may be educational. The basic setup is that I designed an analog amplifier with some digital controls and did not power the digital circuitry correctly which then resulted in some very infrequent errors. The power supply for my amplifier was +/- 5V and a ground reference. This was just fine for the analog circuitry, however, some digital potentiometers needed a single +5V supply to operate. Not wanting to contaminate the reference ground with current from the digital components, I decided to create a second “power” ground for the circuit. I did this by placing a 7805 +5V linear regulator between the +5V and -5V power rails and called the new position the power ground. I also placed some LEDs in series with resistors between the power rails and the power ground. Finally, I hooked up the digital electronics between the +5V rail and the newly created power ground.
At first this may seem to be a reasonable idea, since the 7805 regulator should provide a voltage close to the ground reference. The problem is that a linear regulator essentially acts as a series impedance between the power rail and the load (see On Semi’s linear regulator guide or the one from National). To be more specific, the 7805 in the previously described configuration acts as a variable impedance between the +5V power rail and power ground, which varies to ensure that the voltage between the power ground and -5V is maintained at 5 volts. So if the load (digital circuitry) is between the +5V rail and the power ground, the regulator cannot really do its job, since it cannot drive current between its output and the -5V (its ground) rail. The right way to solve this specific problem would have been to use a 7905 negative 5 volt regulator, which would provide the same approximate voltage for power ground and would have no problem driving the current to the -5V rail. The reason the circuit worked most of the time but failed sporadically was the LEDs between +5V, power ground and -5V. The current that the digital components typically required was small compared to the LED current, and was therefore easily sourced by the LEDs, not the regulator.
( hb206-d.pdf )
Last week, I wrote an entry pointing out some methods to help get your SPICE simulation to converge, and promised a guide through all the necessary steps to create a simulation with a non-standard device. Luckily, the fine folks at Texas Instruments have already written such a guide. It is designed to work with the Orcad/Cadence suite and walks the user through all the steps, from downloading a SPICE model from ti.com, to changing the appearance of the schematic symbol, to creating a simulation profile and running the simulation. Although it is geared towards Texas Instruments, the ideas are generic enough to apply to practically any vendor’s models.