Opening the platform up means developers and AI experts can develop cross-device AI tools
AWS has released its Neo-AI code as an open source project, encouraging developers and other AI experts to contribute to the platform.
The company explained that usually, ensuring a machine learning model works across a variety of hardware platforms (especially those running on edge networks) is difficult because there are so many factors and limitations to consider.
Even on less complicated devices, there are so many software variations that it can be tricky to make sure machine learning works across all of them. As a result, manufacturers and vendors are limited in which companies they can work with to provide the machine learning tools they require.
With AWS’s Neo-AI, machine learning models built with TensorFlow, MXNet, PyTorch, ONNX, and XGBoost are automatically optimised and converted into a common format that works on a wider variety of devices. The models also run faster because Neo-AI uses a compact runtime, which consumes far fewer resources than a full framework typically would.
Resource constraints on edge devices therefore matter far less, because Neo-AI shrinks the footprint a model needs to run. Neo-AI currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm arriving later in the year.
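To give a sense of the workflow, here is a minimal sketch of compiling a trained model through the managed SageMaker Neo API via boto3, then loading the compiled artifact on a device with the compact open source DLR runtime. The job name, bucket paths, role ARN, and target device are illustrative placeholders, not values from the article:

import boto3

sm = boto3.client("sagemaker")

# Ask Neo to compile a trained MXNet model for a Raspberry Pi 3 target.
sm.create_compilation_job(
    CompilationJobName="resnet50-neo-demo",                        # hypothetical name
    RoleArn="arn:aws:iam::123456789012:role/NeoCompilationRole",   # placeholder ARN
    InputConfig={
        "S3Uri": "s3://my-bucket/models/resnet50.tar.gz",          # placeholder path
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',           # input name and shape
        "Framework": "MXNET",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/compiled/",            # placeholder path
        "TargetDevice": "rasp3b",                                  # example edge target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)

# On the device itself, the compiled artifact is loaded with the compact
# DLR runtime (the open source neo-ai-dlr project) rather than the full framework.
import numpy as np
from dlr import DLRModel

model = DLRModel("/opt/model/compiled", dev_type="cpu")            # extracted artifact
output = model.run({"data": np.random.rand(1, 3, 224, 224).astype("float32")})

Because the device only needs the small DLR library and the compiled artifact, the full TensorFlow or MXNet stack never has to be installed on the edge hardware.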
“To derive value from AI, we must ensure that deep learning models can be deployed just as easily in the data center and in the cloud as on devices at the edge,” said Naveen Rao, general manager of the artificial intelligence products group at Intel.
“Intel is pleased to expand the initiative that it started with nGraph by contributing those efforts to Neo-AI. Using Neo, device makers and system vendors can get better performance for models developed in almost any framework on platforms based on all Intel compute platforms.”
Source: https://www.cloudpro.co.uk/business-intelligence/7908/aws-releases-neo-ai-code-to-the-open-source-world (Accessed on January 28, 2019)