Jiangnan Yu
I am a second-year Ph.D. student at The Hong Kong University of Science and Technology (HKUST), advised by Prof. Yuan Xie. My research interests are in Computer Architecture and Machine Learning, with a focus on designing efficient hardware accelerators for sparse computation, in-memory computing, and large language models.
Before joining HKUST, I received my M.S. and B.S. in Microelectronics from Fudan University, where I developed a strong foundation in VLSI design and computer architecture.
GitHub | Email: jyucr@connect.ust.hk
Research Interests
My research lies at the intersection of Computer Architecture and Machine Learning. I am passionate about designing efficient computing systems through hardware-software co-design to accelerate emerging AI workloads.
- Sparse Computing: Efficient accelerators for sparse general matrix multiplication (GEMM) with online sparsity prediction
- In-Memory Computing: Digital stochastic computing in memory, ReRAM-based acceleration for edge AI
- LLM Acceleration: Multi-chiplet architectures with HBM-PIM for large language model inference
- Hardware Design Automation: NoC generators, DNN accelerator simulators
News
- [Jan. 2026] Started this personal website!
- [2026] [Paper] Our work DSCIM on digital stochastic computing in memory accepted to DATE 2026!
- [2025] [Paper] Our work McPAL on a multi-chiplet HBM-PIM architecture for LLMs accepted to DAC 2025!
- [2025] [Paper] Our work DIRC-RAG on edge RAG acceleration accepted to ISLPED 2025!
- [2024] [Paper] Our work FullSparse, a sparse-aware GEMM accelerator, published at CF 2024.
Education
- Ph.D. in Electronic and Computer Engineering, The Hong Kong University of Science and Technology (HKUST), 2024 - Present
Advisor: Prof. Yuan Xie
- M.S. in Microelectronics, Fudan University (复旦大学), 2021 - 2024
- B.S. in Microelectronics, Fudan University (复旦大学), 2017 - 2021
Publications
(* denotes equal contribution)
- Scalable Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper and Local Attention Reusable Engine for Irregular-Pruned NN
  Jiangnan Yu, et al.
  IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), 2026 [Accepted]
- DSCIM: Digital Stochastic Computing in Memory Featuring Accurate OR Accumulation for Edge AI Models
  Jiangnan Yu*, et al.
  Design, Automation and Test in Europe Conference (DATE), 2026 [Accepted]
- McPAL: Scaling Unstructured Sparse Inference with Multi-Chiplet HBM-PIM Architecture for LLMs
  Shiwei Liu, Jiangnan Yu, et al.
  Design Automation Conference (DAC), 2025 [Accepted]
- DIRC-RAG: Accelerating Edge RAG with Robust High-Density and High-Loading-Bandwidth Digital In-ReRAM Computation
  Jiangnan Yu*, et al.
  International Symposium on Low Power Electronics and Design (ISLPED), 2025
- FullSparse: A Sparse-Aware GEMM Accelerator with Online Sparsity Prediction
  Jiangnan Yu, Yang Fan, et al.
  ACM International Conference on Computing Frontiers (CF), 2024
- TPNoC: An Efficient Topology Reconfigurable NoC Generator
  Jiangnan Yu, et al.
  Great Lakes Symposium on VLSI (GLSVLSI), 2023
- NNASIM: An Efficient Event-Driven Simulator for DNN Accelerators with Accurate Timing and Area Models
  X. Yi, Jiangnan Yu, et al.
  IEEE International Symposium on Circuits and Systems (ISCAS), 2022
Misc
- Hobbies: I enjoy running and hiking in my free time.