Beamforming | Parity Augmentation | Digital Signal Processing | Hyperspectral Imaging | Matrix Sparsification | Sub-Optimal Decoding | Random Forests

Beamforming
Beamforming uses a microphone array and signal processing to deduce a signal's direction of arrival. An array of sixteen microphones is set up, and a narrow-band signal is captured. Based on the signal's frequency, phase shifts are applied to the sixteen channels; this is repeated for several candidate angles of arrival. The angle that produces the most constructive interference among the sixteen channels is the direction of arrival.
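The phase-shift sweep described above can be sketched as a delay-and-sum beamformer. The parameters here (uniform linear array, half-wavelength spacing, a 30-degree source) are illustrative assumptions, not the actual lab setup:

```python
# Minimal delay-and-sum sketch: assumed 16-mic uniform linear array with
# half-wavelength spacing and a noiseless narrowband source at 30 degrees.
import cmath, math

M = 16                       # number of microphones
d_over_lambda = 0.5          # element spacing in wavelengths
true_angle = math.radians(30)

# Simulated narrowband snapshot: each mic sees a phase set by the geometry.
snapshot = [cmath.exp(2j * math.pi * d_over_lambda * m * math.sin(true_angle))
            for m in range(M)]

def beam_power(theta):
    """Steer the array toward angle theta and return the output power."""
    steer = [cmath.exp(-2j * math.pi * d_over_lambda * m * math.sin(theta))
             for m in range(M)]
    return abs(sum(s * w for s, w in zip(snapshot, steer))) ** 2

# Sweep candidate angles; the most constructive combination marks the DOA.
angles = [math.radians(a) for a in range(-90, 91)]
doa = max(angles, key=beam_power)
print(round(math.degrees(doa)))   # -> 30
```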
PRESENTATION: BEAMFORMING (uses MS Powerpoint)
Parity Augmentation
Parity augmentation is a form of LDPC decoding based on the goal of even parity (the sum of the bits, mod 2, equaling 0). By using information from the other bits to estimate the probability that a given bit is 1 or 0, the probability of satisfying parity increases (hence the name parity augmentation). This alternative approach, based on extrinsic probabilities and normalization, is computationally less complex than the original belief propagation method proposed by Robert Gallager, the inventor of LDPC codes. Other decoding variants were also explored, such as moving in the direction of the gradient, decoding only specific rows of the parity check matrix, and averaging values before making decisions.
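The even-parity probability at the heart of this approach has a closed form due to Gallager: for independent bits with P(bit i = 1) = p_i, the check is satisfied with probability 1/2 + (1/2)·∏(1 − 2p_i). A minimal sketch with illustrative values:

```python
# Probability that independent bits satisfy even parity (sum mod 2 == 0),
# via Gallager's closed form: P(even) = 1/2 + (1/2) * prod(1 - 2*p_i).
from functools import reduce

def prob_even_parity(p):
    """p is a list of P(bit i = 1) values for the bits in one check."""
    prod = reduce(lambda acc, pi: acc * (1 - 2 * pi), p, 1.0)
    return 0.5 + 0.5 * prod

# A check whose bits are all reliably 0 is almost certainly satisfied...
print(prob_even_parity([0.01, 0.01, 0.01]))   # -> 0.970596
# ...while one uninformative bit (p = 0.5) makes parity a coin flip.
print(prob_even_parity([0.5, 0.01, 0.01]))    # -> 0.5
```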
PAPER: LDPC DECODING BY PARITY AUGMENTATION AND MAXIMIZATION (DSP Workshop Jan 2009)
POSTER: LDPC DECODING BY PARITY AUGMENTATION AND MAXIMIZATION (DSP Workshop Jan 2009)
PRESENTATION: PARITY AUGMENTATION
Digital Signal Processing
In an effort to introduce engineering students to DSPs much sooner in the curriculum, labs from graduate-level classes were rewritten into tutorial and demonstration formats. Undergraduate students will be able to gain insight into the potential applications of DSPs. These will most likely be used in the systems course.
PAPER: DSP LAB PROJECT WRITE-UP
Hyperspectral Imaging
Hyperspectral imaging is a DSP study analyzing end-members in data captured with a hyperspectral camera.
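End-member analysis rests on the linear mixing model: each pixel's spectrum is modeled as an abundance-weighted sum of end-member spectra. A toy sketch with two made-up end-members (illustrative 4-band reflectances, not real camera data):

```python
# Linear mixing model: pixel = a * endmember1 + (1 - a) * endmember2.
# Hypothetical 4-band reflectance spectra for illustration only.
vegetation = [0.05, 0.10, 0.60, 0.80]   # assumed end-member 1
soil       = [0.30, 0.35, 0.40, 0.45]   # assumed end-member 2

a = 0.7                                  # abundance of vegetation in the pixel
pixel = [a * v + (1 - a) * s for v, s in zip(vegetation, soil)]

# Recover the abundance by least squares over the bands (closed form for a
# single unknown under the sum-to-one constraint: pixel - s = a * (v - s)).
num = sum((p - s) * (v - s) for p, v, s in zip(pixel, vegetation, soil))
den = sum((v - s) ** 2 for v, s in zip(vegetation, soil))
print(round(num / den, 3))   # -> 0.7
```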
Matrix Sparsification
In LDPC codes, every 1 in the parity check matrix represents a connection between a check node and a variable node. Increasing the sparsity of the matrix (reducing the number of 1s) makes LDPC decoding faster and more reliable. Matrix sparsification is difficult under GF(2) operations, which follow a different set of arithmetic rules than the reals.

The belief propagation (BP) decoder commonly used for LDPC codes performs very poorly on dense parity check matrices. In this paper, we explore finding a sparse matrix that is row-space equivalent to a given matrix, with the intent that the sparser representation may be used with a BP decoder. In general the sparsification problem is hard (NP-complete), so the sparsest representation is not expected to be found; but the matrices found are sparse enough to be used, with some loss of performance, with the BP decoder. Testing starts with a sparse matrix (as a control), transforms it into a dense equivalent matrix, and then recovers a new sparse matrix.
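The GF(2) row operations involved can be sketched as a greedy pass that XORs one row into another whenever that lowers its weight. Since GF(2) row operations preserve the row space, the code is unchanged; the 3x6 matrix and the greedy rule are illustrations, not the paper's algorithm:

```python
# GF(2) row operations (XOR of one row into another) preserve the row space,
# so they preserve the code while changing the matrix's density.
def row_weight(r):
    return sum(r)

def xor_rows(a, b):
    return [x ^ y for x, y in zip(a, b)]   # addition in GF(2) is XOR

H = [[1, 1, 0, 1, 0, 0],                   # toy parity check matrix
     [1, 1, 1, 0, 1, 0],
     [0, 0, 1, 1, 0, 1]]

# Greedy sparsification pass: replace row i with row_i XOR row_j whenever
# the replacement has strictly lower weight; repeat until nothing improves.
changed = True
while changed:
    changed = False
    for i in range(len(H)):
        for j in range(len(H)):
            if i == j:
                continue
            cand = xor_rows(H[i], H[j])
            if row_weight(cand) < row_weight(H[i]):
                H[i] = cand
                changed = True

print([row_weight(r) for r in H])   # -> [3, 2, 3] (was [3, 4, 3])
```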
PAPER: APPROACHES TO MATRIX SPARSIFICATION FOR LDPC DECODING (DSP Workshop Jan 2009)
PRESENTATION: MATRIX SPARSIFICATION
Sub-Optimal Decoding
In the past, large constraint-length codes have been handled by stack algorithms or sequential decoding, which typically do not produce the soft outputs desirable in modern iterative decoding frameworks. Motivated by an approximate posterior equalizer, we present a suboptimal decoder that employs a similar decomposition for binary convolutional codes observed in additive white Gaussian noise. This results in mixed-field arithmetic (GF(2) and R). Because of the GF(2) (convolutional code) arithmetic, the central limit theorem does not apply; instead, we invoke Gallager's lemma to compute the distribution of the interfering terms. Under various independence assumptions (resulting in different complexities), the channel posterior probability can be approximated, yielding a soft-output decoder. Initial results indicate effective performance when dependencies among rows are incorporated. The method may also extend to arbitrary binary linear codes.
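The GF(2) side of the mixed-field arithmetic can be illustrated with a rate-1/2 convolutional encoder using the textbook (7, 5) octal generators, an assumed example rather than the paper's exact code:

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)
# in octal. All sums are in GF(2) (mod 2); in the mixed-field setting the
# channel then adds real-valued Gaussian noise to the +/-1-mapped outputs.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                          # two memory bits
    out = []
    for b in bits:
        window = [b] + state                # current bit plus shift register
        out.append(sum(x & t for x, t in zip(window, g1)) % 2)  # GF(2) sum
        out.append(sum(x & t for x, t in zip(window, g2)) % 2)
        state = [b, state[0]]               # shift the register
    return out

coded = conv_encode([1, 0, 1, 1])
print(coded)   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```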
PRESENTATION: SUB-OPTIMAL DECODING
POSTER: A CONTROLLABLE COMPLEXITY SOFT-OUTPUT SUBOPTIMAL CONVOLUTIONAL DECODER (DSP Workshop Jan 2009)
PAPER: A CONTROLLABLE COMPLEXITY SOFT-OUTPUT SUBOPTIMAL CONVOLUTIONAL DECODER (DSP Workshop Jan 2009)
Random Forests
Classification and regression trees are statistical tools designed for approximating, or "learning," a complex method, function, or algorithm. A forest is a collection of trees whose outputs are averaged to produce an even more accurate result. These machine learning techniques are very powerful and robust. Applying them to LDPC decoding shows promising results: after training, a tree or forest can determine the correct bit with reduced decoding time and complexity.
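The averaging idea can be illustrated with a hand-rolled toy forest of decision stumps voting on a noisy bit; this is a sketch of the tree-averaging principle, not the decoder studied here:

```python
# Toy forest illustration: each "tree" is a decision stump that thresholds a
# noisy observation of a +/-1 (BPSK-mapped) bit, trained on a bootstrap
# resample; the forest averages the stumps' votes.
import random
random.seed(1)

def sample(n):
    """Noisy observations of random bits: +1/-1 plus Gaussian noise."""
    bits = [random.randint(0, 1) for _ in range(n)]
    obs = [(2 * b - 1) + random.gauss(0, 1.0) for b in bits]
    return obs, bits

def train_stump(obs, bits):
    """Pick the threshold that best separates the training sample."""
    best_t, best_acc = 0.0, -1.0
    for t in obs:
        acc = sum((o > t) == b for o, b in zip(obs, bits)) / len(bits)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Train a small forest, one stump per bootstrap resample of the training set.
train_obs, train_bits = sample(200)
forest = []
for _ in range(25):
    idx = [random.randrange(len(train_obs)) for _ in range(len(train_obs))]
    forest.append(train_stump([train_obs[i] for i in idx],
                              [train_bits[i] for i in idx]))

def forest_predict(o):
    """Averaged vote: predict 1 when most stumps say the bit is 1."""
    return sum(o > t for t in forest) / len(forest) > 0.5

test_obs, test_bits = sample(1000)
acc = sum(forest_predict(o) == b for o, b in zip(test_obs, test_bits)) / 1000
print(round(acc, 2))   # most bits recovered despite the channel noise
```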