803975
doi
10.1145/3041710.3041713
oai:zenodo.org:803975
user-eu
Giorgos Saloustros
Institute of Computer Science, FORTH (ICS)
Manolis Marazakis
Institute of Computer Science, FORTH (ICS)
Angelos Bilas
Institute of Computer Science, FORTH (ICS) and Department of Computer Science, University of Crete, Greece
Iris: An optimized I/O stack for low-latency storage devices
Anastasios Papagiannis
Institute of Computer Science, FORTH (ICS) and Department of Computer Science, University of Crete, Greece
doi:10.1007/978-3-319-46079-6_44
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
NVM
I/O
storage systems
low latency
protection
European Union (EU)
Horizon 2020
Euratom
Euratom research & training programme 2014-2018
<p>System software overheads in the I/O path, including VFS and file system code, become more pronounced with emerging low-latency storage devices. These overheads currently constitute the main bottleneck in the I/O path and limit the efficiency of modern storage systems. In this paper we present a taxonomy of current state-of-the-art systems for accelerating accesses to fast storage devices. Furthermore, we present Iris, a new I/O path for applications that minimizes system software overheads in the common I/O path. The main idea is the separation of the control and data planes. The control plane consists of an unmodified Linux kernel and is responsible for data plane initialization and for the normal kernel processing path for non-file-related operations. The data plane is a lightweight mechanism that provides direct access to storage devices with minimal overhead and without sacrificing strong protection semantics. Iris requires neither hardware support from the storage devices nor changes to user applications. We evaluate our early prototype and find that, on a single core, it achieves up to 1.7x and 2.2x higher random read and write IOPS, respectively, compared to the XFS and EXT4 file systems. It also scales with the number of cores: using 4 cores, Iris achieves 1.84x and 1.96x higher random read and write IOPS, respectively. For sequential reads we provide similar performance, and for sequential writes we are about 20% better than other file systems.</p>
A previous version of this paper appeared in Michela Taufer, Bernd Mohr, Julian M. Kunkel (Eds.): High Performance Computing, LNCS 9945, ISC High Performance 2016 International Workshops ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P3MA, VHPC, WOPSSS Frankfurt, Germany, June 19–23, 2016, Revised Selected Papers.
Zenodo
2016-12-01
info:eu-repo/semantics/article
803974
user-eu
award_title=European Exascale System Interconnect and Storage; award_number=671553; award_identifiers_scheme=url; award_identifiers_identifier=https://cordis.europa.eu/projects/671553; funder_id=00k4n6c32; funder_name=European Commission;
1579539456.124203
399892
md5:1cf018bdb0d6aabeb112fd38739ae10a
https://zenodo.org/records/803975/files/iris_paper_osr16_draft.pdf
public
10.1007/978-3-319-46079-6_44
Is new version of
doi
ACM SIGOPS Operating Systems Review - Special Topics
50
3
3-11
2016-12-01