Category Archives: Signal

Impedance measurement handbook from Agilent

agilent.JPG

To follow up the last post on resistor selection, here is the Agilent Technologies Impedance Measurement Handbook. I found this handbook to be quite useful and well written, as it covers everything from the basics of measurement problems to examples of both low-frequency and RF impedance measurements. The authors focus on the often-overlooked parasitic properties of common system components as well as of the measurement systems themselves, outline methods to construct test structures and procedures that minimize these parasitics, and give practical examples. For obvious reasons, all of the test equipment in the handbook is made by Agilent; however, other brands can be used just as well.

( 5950-3000.pdf )

Amplifier noise app note from TI/periodic random noise

arnold.jpg

In my battle with transfer function estimation, I have been dealing with many noise problems lately and have come across this application note from TI regarding the calculation of noise figures for basic op-amp circuits. The noise figure is the ratio of the circuit's signal-to-noise ratio (SNR) at the input to that at the output. The article works through the derivation of noise analysis equations for the thermal noise in resistive elements and for the RMS noise contributions of the active device, and goes on to quantify the noise figure as a function of temperature, resistances, and op-amp parameters. This can be useful in determining the performance properties of a circuit given a set of passive components, and can be used to define a “best case scenario”.
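To give a feel for how these terms combine, here is a minimal Python sketch of a spot noise-figure estimate. It assumes a simplified model in which the op amp's voltage and current noise are lumped at the input and compared against the source resistor's thermal noise; the part values are made up for illustration, not taken from the app note.

```python
import math

k = 1.380649e-23  # Boltzmann constant [J/K]

def noise_figure_db(en, inn, rs, temp=290.0):
    """Spot noise figure [dB] for an amplifier with input-referred
    voltage noise en [V/rtHz] and current noise inn [A/rtHz],
    driven from a source resistance rs [ohm] at temperature temp [K]."""
    source = 4.0 * k * temp * rs            # source thermal noise density [V^2/Hz]
    total = source + en**2 + (inn * rs)**2  # add the amplifier's contributions
    return 10.0 * math.log10(total / source)

# Hypothetical op amp: 10 nV/rtHz, 1 pA/rtHz, with a 1 kohm source
print(noise_figure_db(10e-9, 1e-12, 1e3))  # ~8.6 dB
```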

This is all well and good, but one might ask what noise has to do with system identification (transfer function estimation). The simple answer is that the frequency-domain transfer function can be determined by passing “noise” through a system and comparing the spectral properties of the output to those of the input. The idea is that white noise has a flat spectrum (over infinite time), so the transfer function can be accurately determined for all frequencies (again, given infinite time). If infinity is too long a time to wait, one trick is to use something called periodic random noise to produce a well-defined spectral distribution in finite time. A Gaussian random number generator can create white noise; the periodic noise, however, is generated with an inverse Fourier transform.

Essentially, enough sinusoids are added together to cover the frequency range of interest, with equal amplitudes and with phases randomized over +/- pi. The amplitude is set by the desired RMS value of the noise and the number of summed sinusoids. The chosen frequencies should line up with the sampled frequencies of the FFT that will later be computed to compare the spectra of the input and output signals. The resulting signal looks like noise and is “random”; however, all of the frequency-domain components maintain their amplitude and phase through the whole procedure, leading to less variance in the FFTs.
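Here is a short numpy sketch of the construction; the FFT length, bin range, and unit-RMS scaling are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096              # samples per period (= FFT length used later)
k_lo, k_hi = 10, 400  # FFT bins spanning the band of interest

# Equal-magnitude bins with random phases over +/- pi
spectrum = np.zeros(n, dtype=complex)
phases = rng.uniform(-np.pi, np.pi, k_hi - k_lo)
spectrum[k_lo:k_hi] = np.exp(1j * phases)

# Mirror the bins with conjugate symmetry so the time signal is real
spectrum[n - k_hi + 1:n - k_lo + 1] = np.conj(spectrum[k_lo:k_hi][::-1])

x = np.fft.ifft(spectrum).real
x /= np.sqrt(np.mean(x**2))  # scale to unit RMS (pick your own level)
```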

( slyt094.pdf )

Journal Club: A Mathematical Theory of Communication by Claude Shannon

shannon.jpg

As promised before, I have finally worked through the majority of this paper, enough to give a brief introduction and discussion.

The key point of this paper is to demonstrate the importance of statistical analysis and its application to determining information generation and transmission capacity. The measure H, or entropy, can be thought of as the amount of variance, or uncertainty, in a communication system. This lets us define the theoretical capacity of a communication system given the known statistical properties of its constituents, and apply the analysis to practical systems.

The concept of information entropy deals with the uncertainty in the expected value of this information. Although it is rooted in statistical mechanics, it can be seen that highly predictable information has low variance, and therefore lower entropy, than more random information. From this measure of information entropy, we can determine the number of bits necessary to efficiently encode the information, or to put it another way, how many symbols we can transmit per bit (assuming a digital communication medium). Although the case of a uniform probability distribution for all information symbols is easiest to analyze and leads to the highest entropy, most practical applications have particular statistical distributions for symbol/information generation. Shannon goes to lengths to demonstrate this with the English language, noting that the selection of letters, or even words, is highly structured and far from random. This structure is a measure of the redundancy of information, so that if I typ like ths, you cn stil undersnd me. (Spammers have been rediscovering this fact for years.)
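In code form, the entropy calculation is essentially one line. The skewed distribution below is an invented example, not one of Shannon's: a uniform four-symbol source needs the full 2 bits per symbol, while a predictable one needs fewer.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits/symbol
print(entropy_bits([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits/symbol
```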

Once the information entropy for all of the circuits involved in the communication system is determined, the channel capacity can be determined in the form of symbols per second, given a finite certainty and a raw channel bit rate. Shannon gives a fine example of a digital channel operating at 1000 bits/s with a 1% error rate, leading to an effective bit rate of ~919 bits/s to account for error detection. Some communication system examples are given which I will not discuss in depth; however, I will try to reiterate the important steps in efficient communication design. Although Shannon gives a mathematical formulation for determining the theoretical limit of channel throughput, it is up to the designer to create a system which comes close to that limit. To do this, it is imperative to know the statistical properties of all of the sub-systems involved and of any noise that may be present; only then can efficiency be achieved.
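That ~919 bits/s figure is easy to reproduce: for a binary channel with symmetric errors, the capacity is the raw rate scaled by one minus the binary entropy of the error probability.

```python
import math

def binary_entropy(p):
    """Entropy [bits] of a binary source with probability p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

rate, p_err = 1000.0, 0.01
print(rate * (1 - binary_entropy(p_err)))  # ~919.2 bits/s
```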

The paper is by far more in-depth than this introduction, and the math is not too hard; if anything, it is worth a look-over for its commentary on the statistical nature of the English language. As always, feel free to post a comment to discuss something about the paper, add something, or correct a mistake I have made. As a small bonus, I am adding Shannon's patent for PCM-encoded voice/telephone service for those who like to read those types of things.

( 1948shannon-a-mathematical-theory-of-communication.pdf )
( 1946shannon-communication-syste-memploying-pulse-code-modulation-patent.pdf )

National Semi application note on practical uses of undersampling

undersampling.JPG

If I had to sum up this application note in a phrase, I would reiterate that the minimum sampling rate needed to adequately capture a signal depends not only on the frequency content, but also on the signal bandwidth. To demonstrate, we can look at GSM-based mobile communications, which operate at around 1700MHz in the US. Even though the frequency content is high, each GSM channel is only 200kHz wide, so we can use a relatively slow ADC and a bit of good design. The typical trick employed in RF equipment is to set up an oscillator running at the center frequency (~1700MHz) of the desired GSM channel and multiply it by the incoming RF signal (also ~1700MHz). As with a Fourier transform, the DC component of the result represents the power at the oscillator frequency, and the adjacent frequencies are shifted to center around DC, showing up as “beats”. This new signal has much lower frequency content, on the order of 200kHz, and therefore allows slower ADCs to be used, with a focus on economics (cheaper handsets) and higher accuracy (better reception).
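A scaled-down simulation shows the idea; to keep the run time sane, the sketch below substitutes a 1.7MHz "carrier" for the real 1.7GHz one, and all of the signal parameters are illustrative choices of mine.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20e6                           # simulation sample rate [Hz]
fc = 1.7e6                          # stand-in carrier / local oscillator [Hz]
t = np.arange(int(2e-3 * fs)) / fs  # 2 ms of signal

# Narrowband "channel": a single 50 kHz tone offset from the carrier
rf = np.cos(2 * np.pi * (fc + 50e3) * t)

lo = np.cos(2 * np.pi * fc * t)     # oscillator at the channel center
mixed = rf * lo                     # products at 50 kHz and ~3.4 MHz

# Low-pass to keep only the beat near DC (the downconverted channel)
b, a = butter(4, 200e3 / (fs / 2))
baseband = filtfilt(b, a, mixed)    # a 50 kHz tone at half amplitude
```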

The application note presents a similar type of trick, except this time, digital undersampling is involved. The idea is that unfiltered frequency content that is outside of the Nyquist band will be aliased into the Nyquist band and still provide meaningful information as long as it has narrow bandwidth and it is the only frequency content coming in. To use the previous example, if we can set up a well-tuned bandpass filter to center around the GSM channel of choice, we can run an ADC at 400kHz and expect the higher-frequency content to be aliased in.
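The folding arithmetic itself is simple: a tone at frequency f lands at its distance from the nearest integer multiple of the sample rate. In the sketch below, a hypothetical channel centered at 1700.1MHz folds neatly into the 0-200kHz Nyquist band of a 400kHz ADC; the exact center frequency is my choice for illustration.

```python
def alias(f, fs):
    """Apparent frequency of a tone at f [Hz] sampled at fs [Hz]."""
    r = f % fs
    return min(r, fs - r)  # fold into [0, fs/2]

fs = 400e3  # ADC rate from the example above [Hz]
# Channel center -> 100 kHz, band edges -> DC and 200 kHz
for f in (1700.0e6, 1700.1e6, 1700.2e6):
    print(f"{f / 1e6:.1f} MHz -> {alias(f, fs) / 1e3:.0f} kHz")
```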

On a final note, I have to apologize for my negligence in keeping up the ‘Journal Club’. I have not forgotten about discussing Shannon’s work and plan to write a post about it at the earliest convenient time.

I2C manual from NXP (Philips)

i2c.JPG

I found this surprisingly well-written manual for the I2C serial communication protocol today. In short, this is a fairly popular message-based protocol that can be found in many embedded systems in the consumer electronics, test and automation, and automotive fields. There are low-speed alternatives, and the structure of the protocol is fairly user-friendly, making it a good option for hobbyists. There are also schematics available on-line for RS-232 and USB to I2C adapters, like this open-source platform.

( an10216_1.pdf )

Some comments on taking a power spectrum of a time series

matlab-macro.jpg

I have been noticing more and more the tendency of tutorials or help information on the fast Fourier transform (FFT) to completely ignore signal windowing/enveloping/tapering. The sample code typically starts out by generating a time series made up of one or more sinusoids, with possible random noise included. The code then takes an FFT of the data and displays the power spectrum. This simple method works well for a small class of signals whose properties are not changing over the time bin and whose values go to zero at the start and end of the time bin. In all other cases, there is some degree of spectral leakage, or unnecessary broadening of spectral peaks, and potentially additional spectral noise. The typical solution to this problem is to subdivide the whole time series into overlapping time bins, apply some kind of window function, and only then perform an FFT. Care should be taken to normalize the resulting FFT by the area of the window function so that accurate power values are preserved. Things get more complicated if the time series under analysis deals with point processes, something which may be described later. The image above is a μblog original and may be used freely.
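For reference, here is a short scipy sketch of the overlapped, windowed approach; the signal and segment parameters are arbitrary demo values, and scipy's Welch estimator handles the window-power normalization internally.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
# Demo signal: a tone buried in white noise
x = np.sin(2 * np.pi * 123.4 * t) + 0.5 * np.random.randn(t.size)

# Hann-windowed, 50%-overlapped segments, averaged into one PSD;
# normalization by the window keeps the power levels accurate.
f, pxx = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)
```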

( an014-understanding-fft-windows.pdf )

How to duplicate certain RF proximity cards

prox-assembled-standing.jpg

With RFID-type devices becoming more and more ubiquitous in our society, it is good to know some of the advances being made in security research so as to avoid a false sense of security. I came across Jonathan Westhues’ site, which outlines his experiences duplicating certain identification devices. It is important to note that the duplicated devices are of the identification-only type and do not have any built-in security mechanisms; however, these are accessible initial steps. Hopefully this will motivate me to do something with the TMS3705A-based RFID reader I built following a sample design.

Maxwell is here

youngjamesclerkmaxwell.jpg

I have finally obtained Maxwell’s works and will be posting both volumes (~40MB each). The reason that historic electromagnetic works have been posted recently is that I am tasked with giving a presentation on electromagnetic scattering off a perfectly conducting sphere. Once I get things sorted, I will post a link to all of the papers that I worked with, all of which are considered public domain under current U.S. copyright law.

( 1873MAXWELL-a-treatise-on-electricity-and-magnetism-vol1.pdf )

( 1873MAXWELL-a-treatise-on-electricity-and-magnetism-vol2.pdf )