News

  • Nov. 2024 Congrats Chen-Fong, paper @ ACM TECS 2024
  • Nov. 2024 Congrats Pao-Ren, paper @ HPCA 2025
  • Sept. 2024 Congrats Aldo, won 2nd Prize in NXP's MCX MCU Design Contest
  • May 2024 Congrats Yu-Yuan, won the Hon Hai Technology Award
  • Mar. 2024 Congrats Ching-Jui, paper @ ACM TACO 2024
  • Oct. 2023 Congrats Yu-Yuan and Fong, papers @ NeurIPS 2023 and HPCA 2024
  • Sept. 2023 Congrats Meng-Hsien, paper @ ASP-DAC 2024
  • July 2023 Congrats Fong, won the Synopsys TinyML Competition Enterprise Award
  • June 2023 Congrats Tsung Tai, received a Google Silicon Research Grant
  • Sept. 2022 Congrats Tsung Tai, papers @ ASP-DAC 2023 and ISOCC 2022
  • Mar. 2022 Congrats Ching-Jui Lee and Yu Xuan Zhou, poster @ DAC 2022
  • Feb. 2021 Congrats Heng Chun, Research Creativity Award from the National Science Council, Taiwan
  • Nov. 2020 Congrats Tsung Tai, paper @ HPCA 2021
  • Oct. 2020 Congrats Meng-Hsien et al., won the MakeNTU Innovation VIA Prize
  • EC 516, Engineering Building 3, 1001 University Road,
    Hsinchu 300, Taiwan
    886-03-5712121#54723
    ttyeh@cs.nycu.edu.tw
    LAB: EC 619

    Welcome

    I am an assistant professor in the Department of Computer Science at National Chiao Tung University. My research spans computer architecture, computer systems, and programming languages.

    I am always looking for undergraduate and graduate students excited about research in computer architecture, systems, and programming languages. If you are interested in working with me, please email me a copy of your resume or CV.

    Research

    My research aims to design high-throughput, low-latency, and energy-efficient domain-specific accelerators and their systems. [video]


    My research domains:
    · Computer Architecture
    · Computer Systems
    · Memory and Storage Systems
    · Domain-specific Accelerators (GPU, Neural Processing Units)

    Bio

    I received my Ph.D. in Electrical and Computer Engineering from Purdue University in 2020, advised by Timothy G. Rogers. I obtained my M.S. from the Institute of Information Systems and Applications, National Tsing Hua University, Taiwan. Previously, I worked at AMD Research, Purdue University, and Academia Sinica. My research was nominated for the Best Paper Award at PPoPP 2017, and I was a recipient of the Lynn Fellowship at Purdue University. To learn more about me, please see my CV.

    Selected Publications

  • [HPCA] EDA: Energy-Efficient Inter-Layer Model Compilation for Edge DNN Inference Acceleration, Bo-Ren Pao, I-Chia Chen, En-Hao Chang, Tsung Tai Yeh, in the 31st IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2025. Acceptance rate: 21%
  • [HPCA] TinyTS: Memory-Efficient TinyML Model Compiler Framework on Microcontrollers, Yu-Yuan Liu, Hong-Sheng Zheng, Yu-Fang Hu, Chen-Fong Hsu, Tsung Tai Yeh, in the 30th IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2024. Acceptance rate: 75/410 = 18.3%
  • [HPCA] Deadline-Aware Offloading for High-Throughput Accelerators, Tsung Tai Yeh, Matthew D. Sinclair, Brad Beckmann, Timothy G. Rogers, in the 27th IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2021. Acceptance rate: 63/258 = 24.4% [slide] [video]
  • [ASPLOS] Dimensionality-Aware Redundant SIMT Instruction Elimination, Tsung Tai Yeh, Roland Green, Timothy G. Rogers, in the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020. Acceptance rate: 86/476 = 18%
  • [PPoPP] Pagoda: Fine-Grained GPU Resource Virtualization for Narrow Tasks, Tsung Tai Yeh, Amit Sabne, Putt Sakdhnagool, Rudolf Eigenmann, Timothy G. Rogers, in Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), 2017. Acceptance rate: 39/132 = 22%. Best Paper Nominee
    Grant Funding

  • Designing an Integration of the TinyML Accelerator on the Microcontroller, Synopsys, 10/2023 - 9/2024
  • Enhancing the Memory Usage and Data Reuse of the SRAM Buffer on the Edge TPU, Google Silicon, 06/2023 - 06/2024
  • Concurrent Flash Translation Layer on Solid State Device, Phison, 03/2023 - 03/2024
  • Designing High-Performance, Low-Power, Edge Computing Accelerator Architecture and System, NSTC111-2221-E-A49-131-MY3, NSTC, 08/2022 - 07/2025
  • Accelerating Ray Tracing on CGRA-based Architecture, MediaTek MARC, 01/01/2022 - 07/01/2023
  • Designing an In-storage Accelerator for Deep Neural Networks, Phison, 03/01/2022 - 02/28/2023
  • Optimizing Domain-Specific Accelerator Hardware Resource Utilization, 109-2222-E-009-009-MY2, MOST, 11/01/2020 - 10/31/2022

    Invited Talk

  • Optimizing Multi-Chip-Module Packaging Architecture for Multi-Tenant DNNs, NTHU CS, Graduate Student Seminar, 12/1/2021
  • Tesla Dojo Architecture, 9/25/2021, [video]
  • Accelerating Machine Learning Through Software and Hardware Optimization, Taichung First Senior High School, 6/16/2021, [slide]
  • When GPU Architecture Designs Meet Machine Learning, NTHU CS, Advanced Computer Architecture Class, 1/7/2021, [slide]
  • Scaling Performance and Energy Efficiency Through Domain-Specific Accelerators, NTHU/NCU CS, Graduate Student Seminar, 11/18/2020 and 11/25/2020, [slide]
  • When Microprocessor Designs Meet Machine Learning, NCTU CS, Junior Seminar, 11/16/2020, [slide]
  • When Designs of Computer Architecture Meet Machine Learning, NCTU CS, Freshman Seminar, 11/11/2020, [slide]