Dataset Open Access

# Data supplementing the article Schomaker, J., Walper, D., Wittmann, B.C., & Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.

Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C.; Einhäuser, Wolfgang

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Schomaker, Judith</dc:creator>
<dc:creator>Walper, Daniel</dc:creator>
<dc:creator>Wittmann, Bianca C.</dc:creator>
<dc:creator>Einhäuser, Wolfgang</dc:creator>
<dc:date>2017-03-13</dc:date>
<dc:description>These data supplement the article Schomaker, J., Walper, D., Wittmann, B.C., &amp; Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.

Use is free for academic purposes, provided the aforementioned article is appropriately cited.

The directory contains the following files:

stimuli.tar.gz - stimuli used in this study; note that this is based on the MONS database, but some deviations from the final version of the database do exist.

ratings.mat contains the variables
arousal - mean arousal rating
valence - mean valence rating
valence2 - squared mean valence rating (after subtracting midpoint)
motivationalValue - mean motivation rating
motivationalValue2 - squared mean motivation rating (after subtracting midpoint)

All variables are 104x3, where the first dimension is the stimulus number and the second dimension is the motivational ground truth (aversive, neutral, appetitive).
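The squared variables are simple transforms of the mean ratings. A minimal sketch of the relation (the actual scale midpoint depends on the rating scale used in the article; the value below is purely illustrative):

```python
def squared_deviation(mean_rating, midpoint):
    """Square of a mean rating after subtracting the scale midpoint,
    as described for valence2 and motivationalValue2."""
    return (mean_rating - midpoint) ** 2

# hypothetical example: mean valence 6.5 on a scale whose midpoint is 5
print(squared_deviation(6.5, 5.0))  # 2.25
```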

Experiment 1

fixationsExperiment1.mat contains the variables fixationX, fixationY, fixationDuration, fixationOnset, fixationInitial, which contain, for each fixation, the horizontal and vertical coordinates, the duration, the onset time relative to trial onset, and whether it is the initial fixation. All variables have dimensions 16x104x3x50, where the first dimension is the observer, the second the scene, the third the condition and the fourth a counter of fixations. Whenever there are fewer than 50 fixations, the remainder is filled with NaN.
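When working with one observer/scene/condition cell, the NaN padding along the last dimension must be dropped before computing statistics. A minimal sketch in pure Python (the list below stands in for one 50-element slice of the MATLAB array; values are hypothetical):

```python
import math

def valid_fixations(cell):
    """Drop the NaN padding used when a trial has fewer than 50 fixations,
    keeping only the real fixation values of that cell."""
    return [v for v in cell if not math.isnan(v)]

# hypothetical cell: three real fixation durations, padded to length 50
cell = [210.0, 185.0, 240.0] + [float('nan')] * 47
print(len(valid_fixations(cell)))  # 3
```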

boundingBoxesExperiment1.mat contains, for each critical object, the bounding box: the x,y coordinates of the upper-left corner and the width and height, as variables boundingBoxX, boundingBoxY, boundingBoxW, boundingBoxH, respectively. Note that these are given in the eyetracker coordinates of experiment 1 (full display 1024x768, presentation in the center) and will therefore not match the coordinates of the images in the archive or the bounding box coordinates of experiment 2. Dimensions are 104x3, representing scene number and condition, respectively.
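Relating fixations to critical objects amounts to a point-in-rectangle test, provided fixation and bounding box share the same coordinate frame (here, the 1024x768 eyetracker coordinates of experiment 1). A minimal sketch with hypothetical values:

```python
def in_bounding_box(fx, fy, bx, by, bw, bh):
    """Test whether fixation (fx, fy) falls inside a bounding box given by
    its upper-left corner (bx, by), width bw and height bh.
    Both points must be expressed in the same coordinate frame."""
    return bx <= fx < bx + bw and by <= fy < by + bh

# hypothetical fixation and box, both in 1024x768 screen coordinates
print(in_bounding_box(512, 400, 480, 360, 100, 80))  # True
print(in_bounding_box(100, 100, 480, 360, 100, 80))  # False
```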

figure2.m generates figure 2 of the article from these data.

dataForExperiment1.Rdata contains the data frame data, which contains for each fixation the values of the predictors used in the model of table 1. This is computed from the MATLAB data listed above, together with the peak values of the AWS salience in the object.

table1.R computes and prints the models for table 1 of the article.

Experiment 2

fixationsExperiment2.mat contains fixation data for experiment 2. Variable names as in experiment 1. Dimensions are 18x99x3x3x50, where the first dimension is the observer, the second the image number, the third the visual condition, the fourth the motivational condition and the fifth the fixation count. Since only one visual condition was shown to each observer per motivational condition, there is an additional variable 'hasData', which is 1 if the image was presented to the observer in this condition and 0 otherwise. Since fixations can fall outside the image and are therefore excluded, there is also an additional variable fixationNumber that keeps a correct count of the fixation number within the trial.
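In practice, analyses of experiment 2 must mask cells by hasData before dropping the NaN padding. A minimal sketch in pure Python, where the nested lists stand in for slices of the MATLAB arrays (all values hypothetical):

```python
import math

def observed_fixations(fixation_x, has_data):
    """Collect fixation x-coordinates only from cells that were actually
    shown to the observer (hasData == 1), dropping NaN padding.
    fixation_x: one list of fixation values per visual condition;
    has_data: the matching 0/1 flags."""
    out = []
    for cell, shown in zip(fixation_x, has_data):
        if shown:
            out.extend(v for v in cell if not math.isnan(v))
    return out

# hypothetical data: conditions 1 and 3 were shown, condition 2 was not
fixation_x = [[300.0, float('nan')], [999.0, 999.0], [512.0, 640.0]]
has_data = [1, 0, 1]
print(observed_fixations(fixation_x, has_data))  # [300.0, 512.0, 640.0]
```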

boundingBoxesExperiment2.mat contains bounding box data for experiment 2 in image (and fixation) coordinates. Notation as for experiment 1, but coordinates refer to the image and eyetracking coordinates used in experiment 2 and can therefore occasionally differ from those of experiment 1.

figure3and4.m generates figures 3 and 4 of the article from these data files.

dataForExperiment2.Rdata contains the data frame data, which contains for each fixation the values of the predictors used in the models of tables 2 and 3. This is computed from the MATLAB data listed above, together with the peak values of the AWS salience in the object. The fields imgMot and imgVis contain the motivational ground truth and the salience manipulation, respectively.

table2.R uses the Rdata file to compute the models for table 2 of the article and print summary results.

table3.R uses the Rdata file to compute the models for table 3 of the article and print summary results. Note that the computation can take substantial time; results might deviate slightly depending on the exact versions of R and its libraries.

</dc:description>
<dc:identifier>https://zenodo.org/record/377004</dc:identifier>
<dc:identifier>10.5281/zenodo.377004</dc:identifier>
<dc:identifier>oai:zenodo.org:377004</dc:identifier>
<dc:relation>doi:10.1016/j.visres.2017.02.003</dc:relation>
<dc:relation>pmid:28279712</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:source>Vision Research 133 161-175</dc:source>
<dc:subject>attention</dc:subject>
<dc:subject>motivation</dc:subject>
<dc:subject>eye movements</dc:subject>
<dc:subject>natural scenes</dc:subject>
<dc:subject>vision</dc:subject>
<dc:subject>psychophysics</dc:subject>
<dc:subject>salience</dc:subject>
<dc:title>Data supplementing the article Schomaker, J., Walper, D., Wittmann, B.C., &amp; Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.</dc:title>
<dc:type>info:eu-repo/semantics/other</dc:type>
<dc:type>dataset</dc:type>
</oai_dc:dc>