Wednesday, April 29, 2009

EEG Analysis Using MATLAB, EEGLAB, and ICA


This quarter has me working in an EEG lab, analyzing the results of EEG experiments with a MATLAB plugin called EEGLAB. I thought I'd write a bit about what a lab monkey like myself does in the name of science and lab experience.

Electroencephalography (EEG) is one of two basic methods of functional brain imaging, the other being fMRI. Whereas fMRI is a fairly good measure of where things are happening in the brain, EEG is an excellent measure of when things are happening. This is because EEG measures the electrical fields associated with neuronal activity--mostly the summed postsynaptic potentials of aligned cortical neurons--and those fields register at the scalp essentially instantaneously. Only the coordinated activity of a lot of physically aligned neurons creates a signal large enough to be measured, so EEG doesn't register all brain activity, but researchers continue to use it because it's very cheap and has that great temporal resolution.

My involvement begins when a subject shows up for their experiment. We're running a 32-electrode experiment right now, which means that we have to apply 32 electrodes to the subject. The bulk of these are fitted into a snug cap, but we do have to apply 6 by hand. Each electrode is just a small disk of metal connected to a wire and generally surrounded by a plastic housing. To get the best reading possible, each electrode has to make an electrical connection to the scalp. To do this, we fill the electrode housing with an electrolytic gel and make some scratches to the skin below each electrode. And that's the fun part, because generally we have to scratch the skin through a hole in the center of the electrode using the end of a hypodermic needle: we do the scratching by feel alone. It's a little intimidating for your first few goes. We gauge each connection by the impedance between that electrode and a reference electrode, in our case the right mastoid electrode.

After that 45-minute investment, our only remaining task for the experiment is to project enough motivation into the subject to keep him or her from falling asleep or zoning out during the experiment; such things add noise to our readings. The subject, on the other hand, still has another 40-50 minutes of repetitive stimulus-response tasks, which they complete from a comfy recliner, buttons in hand, staring at a computer screen, in a soundproof, electro-shielded booth. (Although, as a sticky note on the door warns the researchers, the booth is not completely soundproof. That was probably some hard-won information; we experimenters tend to talk about the subject.)

In our experiment, the subject is presented with a control stimulus and an experimental stimulus. Our goal is to evoke event-related potentials (ERPs), record them via EEG, and then find differences between the two sets of ERPs. When the experiment concludes, we remove the cap and loose electrodes, vainly try to mop the 23 spots of translucent white gel out of the subject's furiously messy hair, point them to the bathroom, close the door, and try once again to reverse engineer this hack job of a brain that God neglected to code comment. (Note that the steps following 'close the door' are a little figurative; science is a highly distributed process that occurs over weeks or years.)

When we compile our data, it's in the form of a series of measurements, each about one second long. Each measurement period, which we term an 'epoch', contains one stimulus presentation. We time-lock the epochs by setting each stimulus presentation at time zero. The basic idea is that we want to take an average of epoch activity for each of the stimulus conditions, control and experimental, and then subtract--yes, literally subtract--one from the other. The result is the average difference between ERPs with respect to the two conditions. Based on the number of trials and the magnitude of the difference, we can determine whether the difference is statistically significant. And hopefully the nature of the difference means something to the scientist embarking on the investigation.
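
For the curious, that computation is only a few lines of MATLAB. Here's a minimal sketch with made-up array names and placeholder data; the sizes (31 channels, 256 samples, 100 trials), the sampling rate, and the channel/sample indices are assumptions for illustration, not our lab's actual numbers.

    % Stand-in epoched data: channels x samples x trials. Real data would
    % come from EEGLAB; randn is just a placeholder so the sketch runs.
    controlEpochs = randn(31, 256, 100);
    experimentalEpochs = randn(31, 256, 100);

    % Average across trials (the 3rd dimension) to get each condition's ERP.
    erpControl = mean(controlEpochs, 3);           % channels x samples
    erpExperimental = mean(experimentalEpochs, 3);

    % The difference wave: literally one average minus the other.
    differenceWave = erpExperimental - erpControl;

    % Plot the difference at one (arbitrary) channel, time-locked to onset.
    fs = 256;                                      % assumed sampling rate, Hz
    t = (0:size(differenceWave, 2) - 1) / fs;      % one-second epoch, onset at t = 0
    plot(t, differenceWave(10, :));
    xlabel('Time (s)'); ylabel('Amplitude (\muV)');

    % A per-timepoint significance check (needs the Statistics Toolbox):
    % compare the two conditions' trials at one channel and one sample.
    [h, p] = ttest2(squeeze(experimentalEpochs(10, 128, :)), ...
                    squeeze(controlEpochs(10, 128, :)));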

There are a few complications in this process. Chief among them is that the power of the noise in an EEG is significantly higher than that of the signal; the signal-to-noise ratio is poor. Muscle activity produces its own electrical signals, much stronger than those of brain activity. Blinks, for example, can register as high-amplitude waves across several channels. There are two simple solutions to this problem, and one complicated one.

The simple answers are these: throw away the trials that are corrupted by muscle activity, or just let the--hopefully independently distributed--noise get averaged out across trials. Both solutions work, but they require a lot more data before we can get a meaningful average difference between test conditions. And because it's much more difficult to recruit and prep test subjects than it is to get hopeful young research assistants willing to pour countless hours into lab work, scientists longed for a way to clean these noise sources out of individual epochs while preserving the brain activity signal.

And this method is called independent component analysis (ICA). I'll use the example of eye blink noise. What if we could identify the location, form, and amplitude of the waves generated by an eye blink? If we could do that, then we would know what the eye blink signal adds to each of the electrode readings, given that we know where each electrode is located. Once we know that, it's easy to subtract the eye blink signal from each electrode. This is what ICA does, though in a more general sense. We give it 31 channels of data (one of the 32 channels is a reference channel or ground), and it tries to come up with the most likely set of 31 sources of cranial activity. Each source is just a notional group of neurons working together to produce some relatively regular signal. Of course, there are many more than 31 actual sources, but because we only have 31 channels, our model's best guess will contain at most 31 components. I'll post more about the details of ICA later.
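
In EEGLAB terms, the decomposition looks roughly like this. pop_runica, pop_topoplot, and the EEG.icaweights/EEG.icasphere fields are real EEGLAB names; the rest is a sketch assuming an epoched dataset is already loaded in the variable EEG.

    % Fit the ICA decomposition on the loaded dataset.
    EEG = pop_runica(EEG, 'icatype', 'runica');

    % The unmixing matrix maps channel data to component activations;
    % each row of 'activations' is one component's time course.
    unmixing = EEG.icaweights * EEG.icasphere;     % components x channels
    activations = unmixing * EEG.data(:, :);       % flatten epochs into one long record

    % Each component also has a scalp projection showing where it
    % registers on the head; pop_topoplot draws these as head maps.
    pop_topoplot(EEG, 0, 1:31);                    % 0 = plot components, not channels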

So, the method is this: use ICA to transform each epoch from a record of the activity of 31 channels into a record of the activity of 31 components. We then identify epochs containing clearly anomalous data--massive noise across many components, a component's signal drifting towards positive or negative values due to some electrode problem, and so on--and delete those epochs from the record, leaving a less irregular set of data. This new channel data is more comprehensible to the ICA algorithm--because it's asked to account for fewer oddities--so we run ICA one more time. Then we identify which components correspond to noise--blinks, horizontal eye movements, muscle tension, electrode drift, etc.--and which correspond to brain activity. When we ultimately compute our averages, they are at the channel level, but with the noise components filtered out of the signal. The process leaves some noise behind and unfortunately eliminates some signal, but on balance it's a great boon.
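
Strung together in EEGLAB, the whole procedure looks something like the sketch below. The function names are EEGLAB's own; the epoch and component indices are placeholders you'd pick by actually inspecting the data.

    % First ICA pass on the full epoched dataset.
    EEG = pop_runica(EEG, 'icatype', 'runica');

    % Mark clearly anomalous epochs by inspection (massive noise across
    % components, drifting signals, etc.), then drop them.
    badEpochs = [4 17 52];                         % placeholder indices
    EEG = pop_rejepoch(EEG, badEpochs, 0);         % 0 = skip the confirmation prompt

    % Second ICA pass on the cleaner data.
    EEG = pop_runica(EEG, 'icatype', 'runica');

    % Identify the noise components (blinks, eye movements, muscle,
    % drift) from their scalp maps and time courses, then subtract them
    % out; the channel data is rebuilt without them.
    noiseComponents = [1 5 9];                     % placeholder indices
    EEG = pop_subcomp(EEG, noiseComponents, 0);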

And that is how I help add to the base of human knowledge.

Science!

Sunday, April 19, 2009

Vegan Pancakes


Pancakes are the undisputed champions of cooking ease and excellence. Though I was sadly ignorant of their greatness until a couple of months ago, I've come around. So here's my recipe, scaled to serve one and one third real men.

1 cup flour
1 tsp baking powder
1 tbsp sugar (or more to taste)
1 tbsp soy powder (or use soy milk instead of water)
1 tbsp oil
1 cup water

Optional/Flavors
1 tsp vanilla
1/4 cup oatmeal
1/2 banana
Cinnamon
Chocolate chips

Pancakes are a quick bread, so you want to combine all of the dry ingredients and mix them well before adding the wet ingredients. And then you want to cook them up while the baking powder is still producing carbon dioxide.

Dollop about 1/4 to 1/3 cup onto a greased pan. Cook on low to medium heat. The rule for pancakes is to cook them until they get all bubbly and then flip them to the other side. Then you cook that until it's done. You'll have to experiment with batter thickness; too thick and they're tough to cook all the way through. I suggest topping with margarine and peanut butter. Try making a single huge pancake for extra lulz.

Cheers.

Approximate Dietary Veganism


I'm a poor student who loves cooking his own food. So I thought I'd share some of my secrets.

Among other things, I'm an approximate dietary vegan. Before I start laying out recipes, I thought I'd make a few notes on the what, why, and how.

I'm a dietary vegan in that I avoid eating dairy and eggs, but I don't avoid wearing them; leather is a pretty superior material, and if we're going to be killing the cows anyway we might as well use their skins. Yes, I realize that the leather market in effect subsidizes the meat market, but I'm comfortable with my position. I have ethical issues with the production of dairy and eggs--among other things they necessitate the killing of calves and male chicks--but I'm fine with honey. I also feel that dairy is generally unhealthy to eat.

I'm an approximate dietary vegan in that I'll sometimes eat things with dairy or eggs in them. My general rule is that if I'm at a restaurant and the dairy or egg isn't a "featured ingredient", then I'll eat the dish. The logic behind this is that a lot of foods have incidental amounts of dairy, and if no one drank milk, etc., then these incidental amounts would probably be replaced by some substitute. So grilled cheese sandwiches are out, but bread with a bit of milk in it is fine. Though on some special occasions I will have something with cheese on it.

I like to think I'm 100% vegetarian. But between you and me, gentle reader, I have a couple of exceptions. I don't eat meat, but if a dish is made with broth, is ethnic, and is very difficult to find without the broth, I'll sometimes look the other way. Examples would be Kimchi or Phở. A good contrasting example is Pad Thai; it's easy enough to find Pad Thai without the oyster/fish sauce if you look around, and I make a point of asking whether it's vegetarian when I order.

The reasons for my vegetarianism are purely ethical. Unlike dairy, meat, I think, is very healthy to eat in moderation. I absolutely miss the taste and satisfaction of meat. It's a little sad knowing that I'll probably never again eat anything as good as filet mignon. The most I can hope for in my lifetime, and I do hope for it, is an in vitro grown hamburger.

That said, I honestly don't care if you agree with me or not. I'm not writing this to persuade you, and I hold nothing against those who eat meat. I have friends who eat meat, often at the table with me, and I don't judge them. I just wanted to make this reference post in case anyone cares to know what I eat or why I eat it.

Cheers.

Wednesday, April 8, 2009

Inverse Inference in fMRI


I attended a nice talk by UCLA's Russell Poldrack yesterday, on the topic of inverse inference in fMRI studies.

A bit of background for those unfamiliar with fMRI: it's a technique that measures blood oxygenation via MRI. The idea behind fMRI studies is that we can assign participants a task, which should require them to perform particular cognitive operations, which are going to be at least somewhat localized in the brain, the neurons of which will require more oxygen, changing the oxygenation of the surrounding blood, a change which MRI can measure. From this we can associate the performance of certain tasks with blood oxygenation changes in particular areas of the brain. And it's not too far a step to roughly equate those oxygenation changes with increased neural activity, and thus to associate the performance of particular tasks with increased use of particular brain areas.

Now, the inference gets a little fuzzier when we try to map task performance to cognitive operations. So, while we might notice increased activity in a particular area of the brain while a subject is singing a song, it would be an unfounded assertion to say that that area of the brain is responsible for song recall, or song production, or anything that specific. Further evidence can help us narrow in on statements like these, but it's tricky business because of our dim understanding of the brain.

The first half of the talk was about the inverse of these--often ill-founded--inferences. That is, the idea that because a certain task or stimulus activates a particular part of the brain, the subject must be undergoing cognitive operations that have been previously associated with that area. Prof. Poldrack used this NY Times article as an example of this type of flawed inverse inference. Here's a good example from that article:

6. In Rudy Giuliani versus Fred Thompson, the latter evokes more empathy.

There is much discussion this year about “authenticity,” as politicians strive to be credible and real. On this front, Mr. Thompson may have an advantage over Mr. Giuliani. When our subjects viewed photos of Mr. Thompson, we saw activity in the superior temporal sulcus and the inferior frontal cortex, both areas involved in empathy. When subjects viewed photos of Mr. Giuliani, these areas were relatively quiet.


Notably, a group of neuroscientists, psychologists, etc., wrote in to complain about the flawed logic in the article. Their letter was published by the NY Times. And apparently the group that conducted this research is a commercial outfit.

The second half of the talk was about establishing an ontology of cognitive processes. Using such an ontology--as well as a centralized database of fMRI data and metadata--Poldrack suggested we might be able to form more conclusive ties between areas of the brain and cognitive processes. He's heading up a project called the Cognitive Atlas, an attempt to build such an ontology wiki-style. A few people raised concerns about how accurate a collaborative effort could be, but he suggested that the value would lie in reaching either a consensus or well-founded competing theories, which could then be tested with machine learning techniques against the previously mentioned fMRI database.

Cheers.

Sunday, April 5, 2009

MATLAB in Linux


Here's a note for those looking for a bit of fit and finish when using MATLAB in Linux from a window manager. You can associate a menu item with MATLAB and place it in your toolbar, but if you're launching MATLAB from anything other than the command line, you need to add the "-desktop" option; otherwise, no window will show up. If you're calling "matlab -desktop" and still nothing comes up, make sure your license manager is running: try calling MATLAB from the command line to see if you get an error.
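
For example, a menu entry along these lines works in most freedesktop-compliant environments; the icon name and categories are assumptions you'd adjust to your install. Save it as something like ~/.local/share/applications/matlab.desktop.

    [Desktop Entry]
    Type=Application
    Name=MATLAB
    Comment=The MathWorks MATLAB
    Exec=matlab -desktop
    Icon=matlab
    Terminal=false
    Categories=Development;Science;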

You can get a MATLAB icon by searching Google Images for "matlab icon".