
Microsoft Build 2023: How every Windows 11 developer can be an AI developer

Microsoft is dedicated to giving developers powerful tools to accelerate the development of AI-driven apps in this new era. Whether developers are targeting x86/x64 or Arm64 platforms, Microsoft aims to simplify the integration of AI-powered experiences into Windows apps across both cloud and edge environments.

During last year’s Build conference, Microsoft introduced Hybrid Loop, a development pattern that enables hybrid AI scenarios spanning Azure and client devices. Today, Microsoft announced the realisation of that vision: ONNX Runtime serves as the gateway to Windows AI, complemented by Olive, a toolchain designed to streamline optimising models for Windows and other devices. Through ONNX Runtime, third-party developers now have access to the same tools Microsoft uses internally to run AI models on Windows and other devices, whether on CPU, GPU, NPU, or in hybrid setups with Azure.
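In practice, running a model with ONNX Runtime in Python comes down to creating an `InferenceSession` with an ordered list of execution providers (CPU, GPU via DirectML, NPU, and so on). The sketch below shows only the provider-selection step; the model path and the preferred-provider list are illustrative assumptions, not details from the announcement.

```python
# Sketch: choose ONNX Runtime execution providers in preference order,
# always keeping the CPU provider as a universal fallback.

def pick_providers(preferred, available):
    """Keep only the preferred providers that are actually available,
    preserving preference order, and always end with the CPU fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen


if __name__ == "__main__":
    # With onnxruntime installed, `available` would come from
    # onnxruntime.get_available_providers(); it is hard-coded here so
    # the sketch stays self-contained.
    available = ["DmlExecutionProvider", "CPUExecutionProvider"]
    providers = pick_providers(
        ["QNNExecutionProvider", "DmlExecutionProvider"], available)
    print(providers)
    # An app would then pass this list when creating the session, e.g.:
    # session = onnxruntime.InferenceSession("model.onnx", providers=providers)
```

Because ONNX Runtime falls through the provider list at session creation, the same app code can run on machines with a GPU, an NPU, or neither.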

Notably, ONNX Runtime now supports a unified API that lets developers run models either on the device or in the cloud, enabling hybrid inferencing scenarios: apps can use local resources whenever possible and switch to cloud-based processing when necessary. With the preview of the Azure Execution Provider (Azure EP), developers can connect to models deployed in AzureML or use the Azure OpenAI service. By specifying the cloud endpoint and defining criteria for when to use the cloud, developers gain control over cost and user experience. Azure EP lets an app choose at runtime between a larger cloud model and a smaller local model, providing flexibility and optimisation.
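The hybrid pattern ultimately reduces to a runtime routing decision: serve a request from the small local model when it suffices, and call the larger cloud model otherwise. The criteria below (prompt length, connectivity) and all names are hypothetical, purely to illustrate the kind of "criteria for cloud usage" the announcement describes.

```python
# Sketch of a hybrid local/cloud routing policy. The specific criteria
# and thresholds are invented examples; in a real app they would be
# whatever cost/latency/quality trade-offs the developer defines.

def choose_backend(prompt_tokens, online, local_limit=512):
    """Route short prompts to the small on-device model; send longer
    ones to the larger cloud model when the device is online."""
    if prompt_tokens <= local_limit or not online:
        return "local"   # small on-device model (also the offline fallback)
    return "cloud"       # larger model behind the Azure endpoint


if __name__ == "__main__":
    print(choose_backend(100, online=True))    # short prompt stays local
    print(choose_backend(2000, online=True))   # long prompt goes to the cloud
    print(choose_backend(2000, online=False))  # offline: local fallback
```

Keeping the policy in one small function makes the cost/experience trade-off explicit and easy to tune without touching the inference code.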

Moreover, developers can optimise their models for diverse hardware targets with the help of Olive, an extensible toolchain incorporating state-of-the-art techniques for model compression, optimisation, and compilation. ONNX Runtime itself runs across multiple platforms, including Windows, iOS, Android, and Linux, so developers can carry their Windows AI investments across all their app platforms.
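Olive is driven by a declarative workflow configuration that names an input model and the optimisation passes to apply. The sketch below mimics that shape as a Python dict; the field names and pass types here are assumptions for illustration, so consult the Olive documentation for the actual schema.

```python
import json

# Hypothetical Olive-style workflow config: an input model plus a
# sequence of optimisation passes (quantisation shown as an example).
# Field names are illustrative and not guaranteed to match Olive's
# real configuration schema.
olive_config = {
    "input_model": {
        "type": "ONNXModel",
        "config": {"model_path": "model.onnx"},
    },
    "passes": {
        "quantize": {"type": "OnnxQuantization"},
    },
    "engine": {
        "output_dir": "optimized_models",
    },
}

if __name__ == "__main__":
    # Serialise to the JSON form a workflow runner would consume.
    print(json.dumps(olive_config, indent=2))
```

Under these assumptions, the config would be saved to a JSON file and handed to Olive's command-line workflow runner, which applies each pass and emits an optimised model per hardware target.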

Both ONNX Runtime and Olive significantly contribute to expediting the deployment of AI models within apps. By reducing engineering efforts and enhancing performance, ONNX Runtime simplifies the creation of exceptional AI experiences on Windows and other platforms.

Microsoft’s commitment to empowering Windows 11 developers with AI capabilities underscores its stated dedication to fostering innovation and enabling a broader community of developers to harness the potential of artificial intelligence. With these powerful tools in their hands, developers can seamlessly integrate AI functionality into their apps, paving the way for a more intelligent and enriched user experience in the Windows ecosystem.

