Published October 27, 2025 | Version v1
Poster | Open

Reproducibility of Code-Understanding Large Language Models: A 217-Paper Analysis of ICSE '23 & SC '24

Description

Reproducibility is essential to scientific progress: without it, results cannot be validated, built upon, or trusted. In machine learning and high-performance computing, complex software stacks mean that missing containerization, incomplete documentation, or unclear environment specifications can easily undermine the credibility of a workflow and slow down collaboration. To evaluate the reproducibility of HPC/AI research papers in computer science and data science, a structured scorecard was used to assess the content of each technical paper, the quality of its documentation, and the reproducibility of its environment. This framework is informed by recent reproducibility benchmarks and HPC/AI research challenges.
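The scorecard itself is not reproduced on this record page. As a rough illustration only, the sketch below shows one way such a three-dimension rubric could be represented; the field names, the 0-2 scoring scale, and the unweighted total are assumptions, not the instrument actually used in the poster.

```python
from dataclasses import dataclass

# Hypothetical rubric covering the three dimensions named in the description,
# each scored 0-2 (absent / partial / complete). Illustrative only; not the
# authors' actual scorecard.
@dataclass
class ReproducibilityScorecard:
    paper_id: str
    technical_content: int = 0      # methods and results described in enough detail
    documentation_quality: int = 0  # README, build/run instructions, licensing
    environment: int = 0            # containers, dependency pins, hardware specs

    def total(self) -> int:
        """Unweighted sum across the three dimensions (assumed aggregation)."""
        return self.technical_content + self.documentation_quality + self.environment


if __name__ == "__main__":
    card = ReproducibilityScorecard(
        paper_id="ICSE23-042",  # hypothetical identifier
        technical_content=2,
        documentation_quality=1,
        environment=0,
    )
    print(card.paper_id, card.total())  # -> ICSE23-042 3
```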

Files

Gateways2025_paper_14.pdf (307.0 kB)
md5:691d34eee96d41a296b77dae984d1f90