Tasks

Classes representing each task

class tasks.SyntheticData

Object representing synthetic data

cumreward_param_plot :
Plots the cumulative reward against model parameters. Useful for determining the relationship between reward acquisition and model parameters for a given task.
plot_cumreward :
Plots the cumulative reward over time for each subject.
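The quantity plot_cumreward draws can be sketched as follows (the (nsubjects, ntrials) layout of the reward array and the helper name are assumptions for illustration, not the package's own method):

```python
import numpy as np

def cumulative_reward(rewards):
    """Cumulative reward over trials for each subject.

    `rewards` is assumed to be an (nsubjects, ntrials) array of trial
    outcomes; plot_cumreward would draw one curve per row of the result.
    """
    return np.cumsum(np.asarray(rewards), axis=1)
```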

class tasks.bandit(narms=2, rewards=[1, 0], rprob='stochastic', rprob_sd=0.025, rprob_bounds=[0.2, 0.8])

Simple one-step bandit task.

narms
: int
Number of arms
rewards
: ndarray(shape=(2))
The first entry is the magnitude of the reward, if gained; the second entry is the magnitude of the loss.
rprob
: {ndarray(shape=(narms)), ‘stochastic’}
Probability of reward for each arm of the task. One can either specify the probabilities for each arm or enter ‘stochastic’, which varies the reward probabilities by a Gaussian random walk (with standard deviation rprob_sd, bounded by rprob_bounds).
simulate(nsubjects, ntrials)
Runs the task on simulated subjects.
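Under the ‘stochastic’ setting, the arms' reward probabilities drift by a bounded Gaussian random walk across trials. A minimal sketch of such a simulation (the random-choice policy and the function name are illustrative assumptions, not the package's actual simulate()):

```python
import numpy as np

def simulate_bandit(nsubjects, ntrials, narms=2, rewards=(1, 0),
                    rprob_sd=0.025, rprob_bounds=(0.2, 0.8), seed=0):
    """Simulate subjects on a bandit whose reward probabilities drift
    by a Gaussian random walk clipped to rprob_bounds. Subjects choose
    arms uniformly at random purely for illustration."""
    rng = np.random.default_rng(seed)
    lo, hi = rprob_bounds
    choices = np.zeros((nsubjects, ntrials), dtype=int)
    outcomes = np.zeros((nsubjects, ntrials))
    for s in range(nsubjects):
        # Initialize each arm's reward probability within the bounds
        p = rng.uniform(lo, hi, size=narms)
        for t in range(ntrials):
            a = rng.integers(narms)  # random policy (illustration only)
            rewarded = rng.random() < p[a]
            choices[s, t] = a
            outcomes[s, t] = rewards[0] if rewarded else rewards[1]
            # Gaussian random walk on reward probabilities, kept in bounds
            p = np.clip(p + rng.normal(0.0, rprob_sd, size=narms), lo, hi)
    return choices, outcomes
```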
class tasks.ortho_gng(rewards=[1, 0, -1])

Model of the orthogonalized go/no-go task from Guitart-Masip et al. (2012)

rewards : list
Outcome values for reward, neutral, and punishment trials, respectively (default [1, 0, -1]).

[1] Guitart-Masip, M. et al. (2012) Go and no-go learning in reward and punishment: Interactions between affect and effect. Neuroimage 62, 154–166
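In the orthogonalized task, go/no-go responding is crossed with win/avoid valence, and feedback is probabilistic. A sketch of outcome delivery for one trial (the helper name, the condition encoding, and the 80% feedback validity are assumptions made here for illustration):

```python
import random

# Outcome values following the rewards list: reward, neutral, punishment
REWARD, NEUTRAL, PUNISH = 1, 0, -1

def gng_outcome(condition, action, p_valid=0.8, rng=random):
    """Deliver one trial's outcome in an orthogonalized go/no-go sketch.

    `condition` is a pair ('go'|'nogo', 'win'|'avoid') giving the correct
    response and the trial valence; `action` is 'go' or 'nogo'. With
    probability p_valid the feedback follows the correct/incorrect
    mapping; otherwise it is inverted.
    """
    correct = action == condition[0]
    good = (rng.random() < p_valid) == correct  # probabilistic feedback
    if condition[1] == 'win':
        return REWARD if good else NEUTRAL
    return NEUTRAL if good else PUNISH
```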

class tasks.twostep(ptrans=0.7, rewards=[1, 0])

Model of the two-step task (Daw et al. 2011).

ptrans
: ndarray
Probability of the common transition from the first-step state (state 0) to second-step state 1 or 2, depending on the choice made at the first step; rare transitions occur with probability 1 - ptrans.
simulate
Generates synthetic data from the task.

[1] Daw, N.D. et al. (2011) Model-based influences on humans’ choices and striatal prediction errors. Neuron 69, 1204–1215
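The first-step transition structure can be sketched as follows (the helper name and the mapping of action 0 to common destination state 1, action 1 to state 2, are assumptions for illustration, not the package API):

```python
import numpy as np

def twostep_transition(action, ptrans=0.7, rng=None):
    """One first-step transition of a two-step task sketch.

    Action 0's common destination is state 1 and action 1's is state 2;
    the rare (other) transition occurs with probability 1 - ptrans.
    """
    if rng is None:
        rng = np.random.default_rng()
    common = rng.random() < ptrans
    if action == 0:
        return 1 if common else 2
    return 2 if common else 1
```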