Wednesday, April 30, 2014

A better user interface for viewing hits and errors in real-time

I've written a simple Python script for scanning the 'ardulines' file and producing a continuously updated display of the subject's hits and errors.
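The script itself isn't reproduced here, but its core is just a tail-follow loop over the log file. Here's a minimal sketch; the line format (`TRL <n> <side> <outcome>`) is invented for illustration, since the real ardulines format isn't shown in this post.

```python
import re
import time

# Hypothetical line format -- the actual ardulines format is not shown in
# this post, so this regex is an illustrative assumption.
TRIAL_RE = re.compile(r"TRL\s+(\d+)\s+(LEFT|RIGHT)\s+(HIT|ERROR|SPOIL)")

def parse_trial(line):
    """Return (trial_number, correct_side, outcome), or None for non-trial lines."""
    m = TRIAL_RE.search(line)
    if m is None:
        return None
    return int(m.group(1)), m.group(2), m.group(3)

def follow(path):
    """Yield parsed trials from the log, waiting for new lines as they arrive."""
    with open(path) as f:
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)   # log hasn't grown yet; poll again shortly
                continue
            trial = parse_trial(line)
            if trial is not None:
                yield trial
```

In practice each parsed trial would trigger a redraw of the display.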

Here's a screenshot of the display:

The trial number is on the x-axis. Only the last 80 or so trials are shown, to avoid crowding the points.

The top 8 rows are different varieties of "go left" trials. The bottom 8 rows are different varieties of "go right" trials. Red is error, green is correct. (Rarely, the subject refuses to answer at all, producing a black "spoiled" trial.) The row labels on the left show the individual percent correct for each variety of stimulus.

One thing that I noticed in the behavior was a very strong "stay"-bias. That is, if a subject went left on the previous trial, he's likely to go left on the next trial ... regardless of what the correct answer was on either trial! This is obviously a non-optimal solution to the problem. With time, subjects typically learn to suppress this poor strategy.
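Before fitting any model, a quick way to put a number on this is the fraction of trials on which the choice simply repeats the previous one. This would be roughly 0.5 if choices were unbiased and serially independent; a strong stay bias pushes it well above that.

```python
def stay_fraction(choices):
    """Fraction of trials on which the subject repeated its previous choice.

    `choices` is the ordered sequence of responses, e.g. ["L", "L", "R", ...].
    """
    pairs = list(zip(choices, choices[1:]))  # (previous, current) for each trial
    if not pairs:
        return float("nan")
    return sum(prev == cur for prev, cur in pairs) / len(pairs)
```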

In the meantime, it's useful to be able to diagnose which strategy the subjects are using. To do this, I fit an ANOVA to the subject's choice on each trial (LEFT or RIGHT), as a function of the correct response, the previous response, and a bias term reflecting a constant preference to go one direction or the other.

These values are displayed (admittedly, a bit tersely) in the title of the figure above. The factors are called "rewside" (correct side); "prevchoice" (previous choice); and "Intercept" (side bias). For each factor, I've shown the fraction of explainable variance (EV) in the subject's response that is encoded by that factor, as well as the coefficient and p-value for each factor.
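In case it's useful, here's a self-contained sketch of that kind of fit using plain least squares in NumPy (this is not my actual analysis code, and I've left out the p-values, which need a proper F-test). The EV numbers here come from the drop in explained sum of squares when each factor is removed from the model, which is one of several conventions:

```python
import numpy as np

def fit_choice_model(choice, rewside, prevchoice):
    """Regress choice (coded 0=LEFT, 1=RIGHT) on the rewarded side and the
    previous choice, plus an intercept capturing a constant side bias.

    Returns (coefficients, EV fractions), both keyed by factor name. EV is
    the reduction in explained sum of squares when a factor is dropped,
    normalized so the fractions sum to 1.
    """
    y = np.asarray(choice, dtype=float)
    X = np.column_stack([np.ones_like(y), rewside, prevchoice])
    names = ["Intercept", "rewside", "prevchoice"]

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_full = np.sum((y - X @ coef) ** 2)

    ev = {}
    for j, name in enumerate(names):
        Xr = np.delete(X, j, axis=1)               # refit without this factor
        cr, *_ = np.linalg.lstsq(Xr, y, rcond=None)
        ev[name] = np.sum((y - Xr @ cr) ** 2) - resid_full
    total = sum(ev.values())
    ev = {k: (v / total if total > 0 else 0.0) for k, v in ev.items()}
    return dict(zip(names, coef)), ev
```

On data where the subject tracks the rewarded side perfectly, essentially all of the explainable variance lands on "rewside"; a pure stay strategy would instead load it onto "prevchoice".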

This subject is doing okay (60% correct); fortunately, the rewarded side is the largest determinant of his choice. Stay bias accounts for a small fraction of the variability, and there is no side bias.