There Is No One-Size-Fits-All in Machine Learning at the Edge

  • Jim McGregor, TIRIAS Research

Many silicon and system architectures are emerging for edge computing. These solutions range from standard logic to dedicated neural processing units (NPUs) and in-memory processing units. While all can serve as inference engines, there are tradeoffs among performance, power consumption, manufacturing complexity, cost, and form factor. The choice also depends on the machine learning task(s) to be performed; as a result, the software model has a significant impact on the choice of machine learning solution. This presentation will discuss the different approaches and which is most appropriate for a given application and set of system requirements.

  • Date: Tuesday, October 16
  • Time: 3:30 PM - 4:20 PM
  • Location: Executive Ballroom 210G
  • Session Type: Conference Session
  • Pass Type: All-Access Pass
  • Secondary Track: System Design Methodology