345946
doi
10.5281/zenodo.345946
oai:zenodo.org:345946
Methfessel, Philipp
Chemnitz University of Technology, Physics of Cognition Group & Cognitive Systems Lab
Bendixen, Alexandra
Chemnitz University of Technology, Cognitive Systems Lab
Dataset supplementing the article Einhäuser, W., Methfessel, P., & Bendixen, A. (2017). Newly acquired audio-visual associations bias perception in binocular rivalry. Vision Research, 133, 121-129.
Einhäuser, Wolfgang
Chemnitz University of Technology, Physics of Cognition Group
doi:10.1016/j.visres.2017.02.001
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Rivalry
Cross-modal integration
vision
audition
associative learning
Optokinetic nystagmus
No-report paradigm
<p>This dataset supplements the publication<br>
Einhäuser, W., Methfessel, P., & Bendixen, A. (2017). Newly acquired audio-visual associations bias perception in binocular rivalry. Vision Research, 133, 121-129. doi: 10.1016/j.visres.2017.02.001</p>
<p>Use is free for scientific purposes, provided the aforementioned reference is appropriately cited.<br>
Description of files<br>
- conditionsByObserver.csv<br>
contains, for each of the 16 observers, the color and grating drift direction that had been coupled to either the low-pitch or the high-pitch tone<br>
column 1: observer number<br>
column 2: color associated with low-pitch tone<br>
column 3: color associated with high-pitch tone<br>
column 4: drift direction associated with low-pitch tone<br>
column 5: drift direction associated with high-pitch tone</p>
<p>- conditionsByObserver.mat contains the same information as matlab variables (as four vectors/cell arrays with one entry per observer)</p>
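For quick inspection outside Matlab, the CSV can be read with a few lines of Python. The excerpt below is hypothetical (the color and direction values are made up for illustration); only the column layout follows the description above.

```python
import csv
import io

# Hypothetical two-observer excerpt of conditionsByObserver.csv;
# the real file has one row per observer (16 rows). The color and
# direction values below are invented for illustration only.
SAMPLE = """1,red,green,left,right
2,green,red,right,left
"""

# Field names follow the column description above (columns 1-5).
FIELDS = ["observer", "color_low", "color_high", "drift_low", "drift_high"]

conditions = [dict(zip(FIELDS, row)) for row in csv.reader(io.StringIO(SAMPLE))]
```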
<p>- toneByBlockAndTrial.csv<br>
contains the conditions for all 18 rivalry trials (6 rivalry blocks with 3 trials each) for each observer<br>
column 1: observer number<br>
column 2: block number<br>
column 3: trial number<br>
column 4: tone (low [pitch], high [pitch], none) played in this trial<br>
Note that, due to a technical error, for observer #16 block 6 was presented first, followed by blocks 1,2,3,4,5; for all other observers the blocks were used in the order given (1,2,3,4,5,6).</p>
<p>- toneByBlockAndTrial.mat contains the same information as a 16x6x3 matrix named toneByBlockAndTrial; tones are coded numerically (1 = low pitch, 2 = high pitch, 3 = none)</p>
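To illustrate the 1-based (observer x block x trial) indexing convention of the 16x6x3 matrix, here is a small Python sketch. The dataset's own scripts are Matlab; the array below is randomly generated stand-in data, not the real matrix, which would be loaded e.g. via scipy.io.loadmat.

```python
import numpy as np

# Stand-in for the 16x6x3 toneByBlockAndTrial matrix (observer x block x
# trial), filled with random codes. Codes: 1 = low pitch, 2 = high pitch,
# 3 = none, as described above.
rng = np.random.default_rng(0)
tone_by_block_and_trial = rng.integers(1, 4, size=(16, 6, 3))

TONE_LABELS = {1: "low", 2: "high", 3: "none"}

def tone_for(observer, block, trial, data):
    """Return the tone label for 1-based (observer, block, trial) indices."""
    return TONE_LABELS[int(data[observer - 1, block - 1, trial - 1])]

label = tone_for(1, 1, 1, tone_by_block_and_trial)
```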
<p>- eyeTraces.mat contains three cell arrays of dimensions 16x6x3 (observer x rivalry block x rivalry trial) called xEye, oknGain, and timeSinceTrialStart;</p>
<p>o each entry of xEye contains the horizontal eye position for the respective trial in eye-tracker coordinates (these correspond to screen pixels, except that, due to the setup configuration, (1/1) is the upper-right rather than the upper-left corner and values increase from right to left)</p>
<p>o oknGain contains the gain computed from these eye positions.</p>
<p>o timeSinceTrialStart contains the time in seconds since onset of the trial</p>
<p>For all variables, the sampling rate is 500 Hz; in eye-tracker coordinates, the speed of the grating is 240 units/ms. Blinks were removed from both eye-data variables; fast phases were removed from the gain data. Removed samples were set to NaN in the eye-data variables.</p>
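oknGain is provided precomputed in the dataset. The sketch below is not the authors' code; it merely illustrates, in Python, the standard gain definition (slow-phase eye velocity divided by grating velocity), using the sampling rate and grating speed stated above and a synthetic trace that tracks the grating perfectly.

```python
import numpy as np

# Illustrative sketch of an OKN gain computation (not the authors' code):
# gain = horizontal eye velocity / grating velocity.
FS = 500.0                       # Hz, sampling rate of the eye traces
GRATING_SPEED = 240.0 * 1000.0   # 240 units/ms expressed as units/s

def okn_gain(x_eye):
    """Per-sample gain from a horizontal eye-position trace (NaN-aware:
    NaN samples, e.g. removed blinks, propagate into the gain)."""
    velocity = np.diff(x_eye) * FS   # units/s
    return velocity / GRATING_SPEED

# Synthetic 1-s trace in which the eye tracks the grating perfectly,
# so the gain should be ~1 throughout.
t = np.arange(0, 1, 1 / FS)
x = GRATING_SPEED * t
gain = okn_gain(x)
```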
<p>- The Matlab functions figure1d.m, figure2.m, figure3.m and figure4.m compute raw versions of the aforementioned paper's figures from the data files to exemplify their usage.</p>
<p>[Note: In the originally published version of the article, the first two means and their standard errors of section 3.3 were stated incorrectly. All figures and statistical analyses are based on the correct data.]</p>
The work was supported by the German Research Foundation (DFG) through SFB/TRR-135 (B4).
Zenodo
2017-03-06
info:eu-repo/semantics/other
787081
1579893770.250849
4324
md5:e63865fe09f323faec3ca30846fd5e32
https://zenodo.org/records/345946/files/figure2.m
314
md5:55704b5f1a7a6851b7c5664113894b5d
https://zenodo.org/records/345946/files/toneByBlockAndTrail.mat
3360
md5:9243fea259f3e5623ecf02e99508b857
https://zenodo.org/records/345946/files/toneByBlockAndTrail.csv
2605
md5:aeb6fde4bf4adf6951682bb82f94f822
https://zenodo.org/records/345946/files/readme.txt
3348
md5:912b9859aef82195c4b0b3c3487d7c58
https://zenodo.org/records/345946/files/figure4.m
288
md5:bca13f83121e7d6442ec88d245130f04
https://zenodo.org/records/345946/files/conditionsByObserver.csv
505
md5:4e3f3becf5d578b4b7af38fbb21e11c9
https://zenodo.org/records/345946/files/conditionsByObserver.mat
3951
md5:d949b58abd72346362ea710021eeda91
https://zenodo.org/records/345946/files/figure3.m
42741921
md5:5e1d528b8e95d04011ce7c7e49c9fe97
https://zenodo.org/records/345946/files/eyeTraces.mat
957
md5:c273a8a135a6b0e50548a9e074a48f00
https://zenodo.org/records/345946/files/figure1d.m
public
10.1016/j.visres.2017.02.001
Is supplement to
doi
isVersionOf
doi
Vision Research
133
121-129
2017-03-06