With the growing adoption of deep learning techniques in AI, accelerating neural network models on GPUs brings significant benefits.
Tune in to this webinar to hear two experts from NVIDIA and Microsoft discuss how to accelerate model inferencing from cloud to the edge, covering:
- An overview of ONNX and ONNX Runtime
- How to achieve faster and smaller model inference with ONNX Runtime
- How to deploy ONNX models at scale with Azure ML services