Published March 13, 2017 | Version v1
Dataset | Open Access

Data supplementing the article Schomaker, J., Walper, D., Wittmann, B.C., & Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.

  • 1. Justus Liebig University Giessen, Department of Psychology and Sports Science
  • 2. Chemnitz University of Technology, Institute of Physics, Physics of Cognition

Description

These data supplement the article Schomaker, J., Walper, D., Wittmann, B.C., & Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.

Use is free for academic purposes, provided the aforementioned article is appropriately cited.

The directory contains the following files:

stimuli.tar.gz - the stimuli used in this study; note that these are based on the MONS database, but deviate in places from the final version of the database.

ratings.mat contains the variables:
      arousal - mean arousal rating
      valence - mean valence rating
      valence2 - squared mean valence rating (after subtracting the midpoint)
      motivationalValue - mean motivation rating
      motivationalValue2 - squared mean motivation rating (after subtracting the midpoint)

All variables are 104x3, where the first dimension is the stimulus number and the second dimension the motivational ground truth (aversive, neutral, appetitive).
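
For orientation only, a minimal MATLAB sketch (not part of the archive) that loads the ratings and averages them per motivational condition; the variable names and dimension order are taken from the description above:

    % Illustrative sketch: load the rating matrices and compute per-condition means.
    load('ratings.mat');   % arousal, valence, valence2, motivationalValue, motivationalValue2

    % Each variable is 104 x 3: rows = stimuli, columns = aversive / neutral / appetitive.
    meanValence = mean(valence, 1);   % 1 x 3 vector of condition means
    fprintf('mean valence (aversive, neutral, appetitive): %.2f %.2f %.2f\n', meanValence);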


Experiment 1

fixationsExperiment1.mat contains the variables fixationX, fixationY, fixationDuration, fixationOnset, and fixationInitial, which contain, for each fixation, the horizontal and vertical coordinates, the duration, the onset time relative to trial onset, and whether it is the initial fixation. All variables have dimensions 16x104x3x50, where the first dimension is the observer, the second the scene, the third the condition, and the fourth a counter of fixations. Whenever there are fewer than 50 fixations, the remaining entries are filled with NaN.
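
As an illustration (not part of the archive), a MATLAB sketch that extracts the fixations of one observer on one scene in one condition and discards the NaN padding, assuming the dimension order described above:

    % Illustrative sketch: fixations of one observer / scene / condition.
    load('fixationsExperiment1.mat');   % fixationX, fixationY, fixationDuration, ...

    obs = 1; scene = 1; cond = 2;                    % example indices
    x   = squeeze(fixationX(obs, scene, cond, :));   % 50 x 1, NaN-padded
    y   = squeeze(fixationY(obs, scene, cond, :));
    dur = squeeze(fixationDuration(obs, scene, cond, :));

    valid = ~isnan(x);                               % entries that are actual fixations
    fprintf('%d fixations, mean duration %.1f\n', sum(valid), mean(dur(valid)));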


boundingBoxesExperiment1.mat contains, for each critical object, the bounding box (x,y coordinates of the upper left corner, width, and height) as the variables boundingBoxX, boundingBoxY, boundingBoxW, and boundingBoxH, respectively. Note that these values are given in the eye-tracker coordinates of experiment 1 (full display 1024x768, presentation in the center) and will therefore match neither the coordinates of the images in the archive nor the bounding box coordinates of experiment 2. Dimensions are 104x3, representing scene number and condition, respectively.
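
The following MATLAB sketch (again for illustration, not part of the archive) tests which fixations of one example trial fall inside the critical object's bounding box; both files use the eye-tracker coordinates of experiment 1, and the exact in/out criterion used in the article's analyses may differ:

    % Illustrative sketch: which fixations land inside the object's bounding box?
    load('fixationsExperiment1.mat');
    load('boundingBoxesExperiment1.mat');   % boundingBoxX/Y/W/H, each 104 x 3

    obs = 1; scene = 1; cond = 2;
    x = squeeze(fixationX(obs, scene, cond, :));
    y = squeeze(fixationY(obs, scene, cond, :));

    bx = boundingBoxX(scene, cond);  by = boundingBoxY(scene, cond);
    bw = boundingBoxW(scene, cond);  bh = boundingBoxH(scene, cond);

    onObject = x >= bx & x <= bx + bw & y >= by & y <= by + bh;   % NaN entries come out false
    fprintf('%d of %d fixations on the critical object\n', sum(onObject), sum(~isnan(x)));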


figure2.m generates figure 2 of the article from these data.


dataForExperiment1.Rdata contains the data frame 'data', which holds, for each fixation, the values of the predictors used in the model of table 1. It is computed from the MATLAB data listed above together with the peak values of the AWS salience within the object.


table1.R computes and prints the models for table 1 of the article.

 

Experiment 2

fixationsExperiment2.mat contains the fixation data for experiment 2. Variable names are as in experiment 1. Dimensions are 18x99x3x3x50, where the first dimension is the observer, the second the image number, the third the visual condition, the fourth the motivational condition, and the fifth the fixation count. Since only one visual condition was shown to each observer per motivational condition, there is an additional variable 'hasData', which is 1 if the image was presented to the observer in this condition and 0 otherwise. Since fixations can fall outside the image and are therefore excluded, there is also an additional variable fixationNumber that keeps a correct count of the fixation number within the trial.
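
For illustration (not part of the archive), a MATLAB sketch that loops over observers for one image and one combination of conditions, assuming that hasData shares the first four dimensions of the fixation arrays as described above:

    % Illustrative sketch: use hasData to skip cells that were never shown.
    load('fixationsExperiment2.mat');   % fixationX, ..., hasData, fixationNumber

    img = 1; vis = 1; mot = 3;          % example image, visual and motivational condition
    for obs = 1:size(fixationX, 1)
        if ~hasData(obs, img, vis, mot)     % this observer did not see this combination
            continue;
        end
        x = squeeze(fixationX(obs, img, vis, mot, :));
        fprintf('observer %2d: %d fixations on the image\n', obs, sum(~isnan(x)));
    end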

boundingBoxesExperiment2.mat contains the bounding box data for experiment 2 in image (and fixation) coordinates. Notation is as for experiment 1, but the coordinates refer to the image and eye-tracking coordinates used for experiment 2 and can therefore differ occasionally from those of experiment 1.


figure3and4.m generates figures 3 and 4 of the article from these data files.

dataForExperiment2.Rdata contains the data frame 'data', which holds, for each fixation, the values of the predictors used in the models of tables 2 and 3. It is computed from the MATLAB data listed above together with the peak values of the AWS salience within the object. The fields imgMot and imgVis contain the motivational ground truth and the salience manipulation, respectively.

table2.R uses the Rdata file to compute the models for table 2 of the article and print summary results.

table3.R uses the Rdata file to compute the models for table 3 of the article and print summary results. Note that the computation can take substantial time; results might deviate slightly depending on the exact version of R and its libraries used.

 

Notes

This work was supported by the German Research Foundation (DFG; SFB/TRR 135 "Cardinal Mechanisms of Perception", project B1).

Files

readme.txt

Files (912.1 MB)

Name Size Download all
md5:ae172eb63581a78fd5f53c8d5bbc73d1
4.0 kB Download
md5:59855bea210e8a60327fafe48eca6574
2.3 kB Download
md5:535504fb6a7252320e98d189a7cb6371
51.2 kB Download
md5:e67916a8fdb32e2a71f9ac299ec767a9
48.5 kB Download
md5:d1e02edfd2767e8259a49203c438143c
5.0 kB Download
md5:a129636874e55fe1bd05b70ed03db748
6.1 kB Download
md5:6dc32aab7896d726c9cfc86a65b95eaf
881.4 kB Download
md5:05de6138596c0d10f446a5bbd3e49e02
1.2 MB Download
md5:2f2aca728d8bad9479c46eb5870ac8e5
5.8 kB Download
md5:0b0e02d2054c711400d9aa97a6612e15
4.4 kB Preview Download
md5:ec6ac71393ba9b190b09c917f017e36f
909.9 MB Download
md5:dfb3b5235a5b934029d51682b869e26f
3.0 kB Download
md5:808e114466579689a0d626471b62482f
3.0 kB Download
md5:c3e1ac779ec9073a0aaaf4e16942283a
3.1 kB Download

Additional details

Related works

Is supplement to: 10.1016/j.visres.2017.02.003 (DOI); PMID 28279712