Hardware Design Automation for Efficient Deep Learning
Mar 04, 2019
The mismatch between the skyrocketing processing demands of AI and the end of Moore's Law highlights the need for co-design of efficient ML algorithms and domain-specific hardware. Dr. Han introduces recent AutoML work on learning optimal pruning and quantization strategies and neural-network architectures for a target hardware platform, and on automating analog circuit design. Dr. Han also presents his Temporal Shift Module (TSM) for efficient video understanding, which offers 8x lower latency and 12x higher throughput than 3D-convolution-based methods while maintaining top accuracy.
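The core idea behind TSM is that shifting a small fraction of a layer's channels forward or backward along the time axis lets an ordinary 2D CNN exchange information between neighboring frames at essentially zero extra compute. A minimal NumPy sketch of that shift operation, assuming a clip tensor of shape (T, C, H, W); the function name and the shift fraction default are illustrative, not Dr. Han's exact implementation:

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels along the temporal axis.

    x: activations for one video clip, shape (T, C, H, W).
    shift_div: 1/shift_div of the channels shift toward the past,
               another 1/shift_div toward the future; the rest stay put.
               (The TSM paper uses 1/8 in each direction by default.)
    Out-of-range time steps are zero-padded, as in the offline TSM variant.
    """
    T, C, H, W = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                    # shift: frame t sees t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]    # shift: frame t sees t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]               # remaining channels unchanged
    return out
```

Because the shift itself is just memory movement, the per-frame 2D convolutions that follow it can fuse temporal context without any 3D-convolution cost, which is where the latency and throughput gains come from.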
About the Speaker
Song Han is an assistant professor in the EECS Department at the Massachusetts Institute of Technology (MIT) and director of the HAN Lab. Dr. Han's research focuses on energy-efficient deep learning and domain-specific architectures, including a current engagement with the Samsung Advanced Institute of Technology (SAIT). Dr. Han proposed "Deep Compression," which has been widely influential in industry. Prior to joining MIT, Song Han earned his PhD at Stanford, advised by NVIDIA Chief Scientist Bill Dally, with internships at Facebook, Google, and Apple along the way. He was the co-founder and chief scientist of DeePhi Tech (Beijing), founded on his PhD thesis and acquired by Xilinx in 2018.