Many silicon and system architectures are emerging for edge computing. These solutions range from standard logic devices to dedicated neural processing units (NPUs) and in-memory processing units. While all can serve as inference engines, there are tradeoffs among performance, power consumption, manufacturing complexity, cost, and form factor. The choice also depends on the machine learning task(s) to be performed; as a result, the software model has a significant impact on the choice of machine learning solution. This presentation will discuss the different approaches and their most appropriate uses by application and system requirements.