Mixture model analysis of analogue report data

This code relates to my 2009 Journal of Vision article [1] on the distribution of recall errors in working memory tasks. I've made the relevant MATLAB functions available for download here [zip]. If you're looking for my more recent analysis code taking a non-parametric approach to swap errors, click here.

This code is released under a GNU General Public License: feel free to use and adapt these functions as you like, but please give credit if they contribute to your work by citing:

Bays PM, Catalao RFG & Husain M. The precision of visual working memory is set by allocation of a shared resource. Journal of Vision 9(10): 7, 1-11 (2009)

If you refer to this code directly, please also include the URL of this webpage, or my homepage bayslab.com.

Fig. 1 | The colour report task.
The following functions were developed for analyzing errors on recall tasks where both stimuli and responses are chosen from a circular parameter space. A typical example is the colour report task developed by Wilken & Ma [2], illustrated in Fig. 1.

The circular parameter space in this case is represented by a 'colour wheel'. On each trial a memory array is presented consisting of a set of items with colours chosen at random from the wheel. After a retention period, one array location is indicated (cued) and the participant clicks on the wheel to indicate the colour they recall at that location.

The data available for analysis on each trial therefore consist of a response value (the colour at the location of the click), a target value (the colour of the cued item), and a set of non-target values (the colours of each of the uncued items). The following MATLAB functions require that each of these values be given in radians, in the range [−π, π].

Responses coded in degrees can be converted to radians as follows:

X = wrap(X_deg/180*pi)

The wrap function constrains the resulting values to this range. It should also be used whenever response values are added or subtracted; for example, the error on a trial can be calculated as the deviation between the response value X and the target value T:

E = wrap(X-T)

will ensure the resulting error measure is still in the range [−π, π].

A possible source of confusion arises with tasks testing recall of orientation. If the task involves reporting the remembered orientation of a bar, for example, responses might be coded in the range 0°−360°, but the range of unique orientations is only 0°−180°. Orientation values should therefore be mapped onto radians thus:

X = wrap(X_deg/90*pi)
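The distributed wrap is a MATLAB function (included in the zip). For illustration only, an equivalent in Python — a sketch, assuming wrap maps angles into [−π, π] as described above — might look like:

```python
import numpy as np

def wrap(x):
    """Map angles in radians into the range [-pi, pi]."""
    return np.mod(np.asarray(x) + np.pi, 2 * np.pi) - np.pi

# Colour responses coded in degrees (0-360) -> radians
X = wrap(np.array([350.0, 10.0]) / 180 * np.pi)

# Orientation responses (unique range 0-180 degrees) -> radians
X_ori = wrap(np.array([170.0]) / 90 * np.pi)

# Response error: deviation of the response X from the target T
T = wrap(np.array([5.0, 5.0]) / 180 * np.pi)
E = wrap(X - T)   # stays in [-pi, pi]
```

Note that a response of 350° and a target of 5° yield an error of −15°, not +345°, which is the point of wrapping every addition and subtraction.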

Bias and Precision
It is common in error analysis to distinguish two independent measures of performance: bias and precision. The bias (or 'accuracy') measures systematic deviation of responses from the target value; the precision measures trial-to-trial variability in the response error, with higher precision corresponding to lower variability.

In a Euclidean (non-circular) response space, bias is typically measured by the mean error, and precision can be measured by the inverse of the standard deviation of error (inverse variance is also commonly used). The function JV10_error can be used to calculate equivalent precision and bias estimates for responses in a circular parameter space:

[P B] = JV10_error(X,T)

Fig. 2 | Examples of error distributions with specified bias and precision

The inputs X and T are (n×1) vectors of responses and target values respectively, in the range [−π, π]. Output P returns the precision (inverse of circular s.d., corrected for chance*), and B returns the bias (circular mean).

Bias and precision for some example error distributions are illustrated in Fig. 2. Note that these performance measures are non-parametric: their calculation makes no assumptions about the shape of the error distribution (e.g. Gaussian versus mixture, see below).

* This function uses Fisher's definition of sample mean and s.d. for circular data [3]. While asymptotically unbiased, Fisher's formula systematically underestimates the population s.d. when the distribution is close to uniform, even with very large n. The JV10_error function incorporates a correction ensuring that the expected precision estimate for uniformly distributed responses ('guessing') is zero.
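The computation behind JV10_error can be sketched in Python as follows. This is an illustration, not the distributed code: in particular, the chance correction here is estimated by Monte Carlo, which is an assumption — the exact correction implemented in JV10_error may differ.

```python
import numpy as np

def circ_stats(E):
    """Fisher's circular mean and s.d. of a sample of angles E (radians)."""
    v = np.mean(np.exp(1j * E))           # mean resultant vector
    mean = np.angle(v)                    # circular mean (bias, B)
    sd = np.sqrt(-2 * np.log(np.abs(v)))  # circular s.d.
    return mean, sd

def precision_bias(X, T, n_sims=200):
    """Bias B and chance-corrected precision P of the errors wrap(X - T)."""
    E = np.mod(X - T + np.pi, 2 * np.pi) - np.pi
    B, sd = circ_stats(E)
    # Correction: subtract the precision expected for uniformly distributed
    # ('guessing') responses with the same n -- estimated here by Monte
    # Carlo, an assumption about the exact correction used in JV10_error.
    rng = np.random.default_rng(0)
    n = len(E)
    chance = np.mean([1 / circ_stats(rng.uniform(-np.pi, np.pi, n))[1]
                      for _ in range(n_sims)])
    return 1 / sd - chance, B
```

With this correction, tightly clustered errors yield a large positive precision, while purely random responding yields a precision near zero rather than a small positive value.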

Von Mises distribution
The Von Mises distribution is a circular analogue of the familiar Gaussian (or normal) distribution. It is parameterized by a mean, μ, and concentration, κ.

y = vonmisespdf(x,mu,K)

returns the Von Mises probability density function evaluated at values of x in the range [−π, π]. Note that setting K = 0 will return a uniform distribution.

r = randvonmises(n,mu,K)

returns n random samples from a Von Mises distribution (method of Best & Fisher [4]). Conversion between the Von Mises concentration parameter and circular standard deviation is achieved with the functions:

sd = k2sd(K)
K = sd2k(sd)
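For readers working outside MATLAB, Python equivalents of these functions might look like the following sketch. The numerical inversion in sd2k is an implementation detail not specified above, and scipy's built-in sampler stands in for the Best & Fisher routine:

```python
import numpy as np
from scipy.special import i0, i0e, i1e
from scipy.stats import vonmises

def vonmisespdf(x, mu, K):
    """Von Mises density at x; K = 0 gives the uniform density 1/(2*pi)."""
    return np.exp(K * np.cos(x - mu)) / (2 * np.pi * i0(K))

def k2sd(K):
    """Concentration K -> circular s.d.: sqrt(-2*log(I1(K)/I0(K)))."""
    return np.sqrt(-2 * np.log(i1e(K) / i0e(K)))  # scaled Bessels avoid overflow

def sd2k(sd, lo=1e-6, hi=1e6):
    """Invert k2sd numerically (k2sd decreases monotonically in K)."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)     # bisect in log space
        if k2sd(mid) > sd:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# n random samples; scipy's sampler stands in for Best & Fisher's method
r = vonmises.rvs(kappa=2.0, loc=0.0, size=5)
```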

Mixture modeling
Bays et al. [1] proposed a probabilistic mixture model [5] to account for the distribution of errors on recall tasks. According to this model, the overall response distribution comprises a mixture of three different components:
Fig. 3 | Mixture model components

(1) Target responses: the observer correctly reports the feature value (e.g. colour) of the cued item (with some variability κ),

(2) Non-target responses: the observer mistakenly reports the feature value of one of the other, uncued items held in memory (with the same variability κ),

(3) Uniform responses: the observer generates a random response unrelated to either cued or uncued items.

Each of these components has a corresponding probability density function, illustrated in Fig. 3 for a 3-item memory array (one target value, T, and two non-target values, N).
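In symbols, writing φ_κ for the Von Mises density with mean zero and concentration κ, and N₁ … N_m for the m non-target values, the model response distribution can be written:

```latex
p(\hat{x}) \;=\; p_T\,\phi_\kappa(\hat{x}-T) \;+\; \frac{p_N}{m}\sum_{i=1}^{m}\phi_\kappa(\hat{x}-N_i) \;+\; p_U\,\frac{1}{2\pi}
```

where p_T, p_N and p_U are the mixture probabilities of target, non-target and uniform responses (summing to one), and the non-target probability is shared equally among the m uncued items.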

The contribution made by each of these components to the overall response distribution can be estimated with the function JV10_fit. The function uses an EM algorithm [6, 7], with a range of initial parameter values, to fit the mixture model described above to input data:

[B LL] = JV10_fit(X, T, NT)

where X is an (n×1) vector of responses, T is an (n×1) vector of target values, and NT is an (n×m) matrix of non-target values. The output B contains 4 values corresponding to maximum likelihood estimates of the parameters of the mixture model:

B = [K pT pN pU]

where K is the concentration parameter of the Von Mises distribution describing response variability, and [pT pN pU] are mixture parameters indicating the estimated probability of target, non-target and uniform responses, respectively. LL returns the log likelihood of the fitted model. If the input NT is omitted, the function will fit a mixture of target & uniform components only. Note that the fitting function is unlikely to provide reliable parameter estimates for n < 50 trials, and may fail altogether for small n.
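The E- and M-steps of such a fit can be sketched as follows. This is a simplified, single-start Python illustration of the three-component model, not the distributed code: JV10_fit uses a range of initial parameter values and should be preferred for real analyses, and the concentration update here (Fisher's approximation for inverting A(κ) = I1(κ)/I0(κ)) is one standard choice rather than necessarily the one used in JV10_fit.

```python
import numpy as np
from scipy.special import i0

def vm_pdf(x, mu, K):
    """Von Mises density with mean mu and concentration K."""
    return np.exp(K * np.cos(x - mu)) / (2 * np.pi * i0(K))

def A1inv(R):
    """Approximate inverse of A(K) = I1(K)/I0(K) (Fisher, 1993)."""
    if R < 0.53:
        return 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        return -0.4 + 1.39 * R + 0.43 / (1 - R)
    return 1 / (R**3 - 4 * R**2 + 3 * R)

def fit_mixture(X, T, NT, n_iter=500):
    """EM fit of the 3-component model; returns B = [K pT pN pU] and LL.
    Single fixed start -- JV10_fit uses a range of initial values."""
    n, m = NT.shape
    K, pT, pN, pU = 5.0, 0.6, 0.2, 0.2
    for _ in range(n_iter):
        # E-step: responsibility of each component for each response
        wT = pT * vm_pdf(X, T, K)                   # target, (n,)
        wN = (pN / m) * vm_pdf(X[:, None], NT, K)   # non-targets, (n, m)
        wU = pU / (2 * np.pi) * np.ones(n)          # uniform, (n,)
        Z = wT + wN.sum(axis=1) + wU
        rT, rN, rU = wT / Z, wN / Z[:, None], wU / Z
        # M-step: mixture weights are the mean responsibilities
        pT, pN, pU = rT.mean(), rN.sum(axis=1).mean(), rU.mean()
        # Concentration from the responsibility-weighted mean resultant
        # of deviations from target and non-target values
        dev = np.concatenate([X - T, (X[:, None] - NT).ravel()])
        w = np.concatenate([rT, rN.ravel()])
        R = np.abs(np.sum(w * np.exp(1j * dev))) / np.sum(w)
        K = A1inv(R)
    # Log likelihood of the fitted model
    wT = pT * vm_pdf(X, T, K)
    wN = (pN / m) * vm_pdf(X[:, None], NT, K)
    LL = np.sum(np.log(wT + wN.sum(axis=1) + pU / (2 * np.pi)))
    return np.array([K, pT, pN, pU]), LL
```

Because cosines and complex exponentials are periodic, the deviations need no explicit wrapping inside the fit; wrapping matters only when reporting errors.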

Update Feb 2017: Note that an extension is now available to the JV10_fit function that returns trial-by-trial estimates of the probability that responses were drawn from each of the three mixture components. The new code is available here.


[1] Bays PM, Catalao RFG & Husain M. The precision of visual working memory is set by allocation of a shared resource. Journal of Vision 9(10): 7, 1-11 (2009) [pdf]

[2] Wilken P & Ma WJ. A detection theory account of change detection. Journal of Vision 4, 1120-1135 (2004)

[3] Fisher NI. Statistical analysis of circular data. Cambridge University Press (1993)

[4] Best DJ & Fisher NI. Efficient simulation of the von Mises distribution. Applied Statistics 28, 152-157 (1979)

[5] McLachlan GJ & Peel D. Finite mixture models. Wiley (2000)

[6] Bilmes JA. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. International Computer Science Institute Technical Report ICSI-TR-97-021 (1997)

[7] Dhillon IS and Sra S. Modeling data using directional distributions. Technical Report TR-03-06, Department of Computer Sciences, University of Texas (2003)