Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
M.Tech. Project, IISc, 2021
Part of the requirements for the course E0243: Computer Architecture
M.Tech. Project, IISc, 2022
Part of the requirements for the course E0253: Operating Systems
M.Tech. Project, IISc, 2022
Part of the requirements for the course E0358: Advanced Techniques in Compilation and Programming for Parallel Architectures
Published in USENIX ATC, 2025
Accelerates input preprocessing by caching intermediate results across epochs
Recommended citation: Jha, Keshav Vinayak. (2025). "HyCache: Hybrid Caching for Accelerating Input Preprocessing Pipelines in DNN Training." USENIX ATC 2025. https://www.usenix.org/conference/atc25/presentation/vinayak
The end-to-end training performance of deep neural networks (DNNs) depends not only on the time spent training the model weights but also on the time spent loading and preprocessing the training data. Recent advances in GPU hardware have made training substantially faster. As a result, the bottleneck has shifted to the CPU-based input pipeline, which must fetch and transform each sample through multiple stages before it can be consumed by the GPU.
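The sketch below illustrates the general idea of caching intermediate preprocessing results across epochs: the output of an expensive, deterministic stage is materialized once and reused in later epochs, while a cheap per-epoch stage still runs every time. The stage names, the in-memory dictionary cache, and the toy data are illustrative assumptions, not the HyCache implementation described in the paper.

```python
# Illustrative sketch: cache the output of an expensive, deterministic
# preprocessing stage so later epochs skip recomputation. Stage names,
# cache layout, and data are hypothetical, not the paper's implementation.

def decode(sample):
    # Stand-in for an expensive, deterministic transform
    # (e.g. image decode or tokenization).
    return [x * 2 for x in sample]

def augment(sample):
    # Stand-in for a cheap transform that must rerun every epoch
    # (e.g. random augmentation); kept deterministic here for simplicity.
    return [x + 1 for x in sample]

cache = {}  # sample_id -> cached intermediate (decoded) result

def preprocess(sample_id, raw_sample):
    # Reuse the decoded result if it was materialized in an earlier epoch.
    if sample_id not in cache:
        cache[sample_id] = decode(raw_sample)
    return augment(cache[sample_id])

if __name__ == "__main__":
    dataset = {0: [1, 2, 3], 1: [4, 5, 6]}  # toy in-memory "raw" dataset
    for epoch in range(2):
        batch = [preprocess(i, raw) for i, raw in dataset.items()]
        print(f"epoch {epoch}: {batch}")
```

In this toy version the expensive stage runs only in the first epoch; subsequent epochs pay only the cost of the per-epoch stage plus a cache lookup, which is the source of the speedup when preprocessing dominates the input pipeline.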