
Smart Cities: Edge Computing



Application US20190138908


Published 2019-05-09

Artificial Intelligence Inference Architecture With Hardware Acceleration

Various systems and methods of artificial intelligence (AI) processing using hardware acceleration within edge computing settings are described herein. In an example, processing performed at an edge computing device includes: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform. Further operations to analyze input data, perform an inference operation with the AI model, and coordinate selection and operation of the hardware platform for execution of the AI model are also described.
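
The abstract describes a dispatch pattern: an edge device receives a request naming an AI model, identifies a hardware platform able to run an instance of that model, and causes execution there. The following Python sketch is a rough illustration of that flow only; the names AIRequest, HardwarePlatform, and EdgeAIDispatcher are hypothetical and do not appear in the application, and the toy accelerator stands in for real inference hardware.

# Minimal sketch (hypothetical names) of the flow described in the abstract and
# mirrored in independent claims 1 and 13: obtain a request for an AI operation,
# identify an AI hardware platform for an instance of the model, execute there.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AIRequest:
    model_id: str                      # which AI model the caller wants to use
    input_data: List[float]            # data to run the inference on
    constraints: Dict[str, float] = field(default_factory=dict)  # e.g. max latency


@dataclass
class HardwarePlatform:
    name: str                          # e.g. "cpu", "gpu", "fpga-accelerator"
    supported_models: List[str]        # model IDs this platform can execute
    run: Callable[[str, List[float]], List[float]]  # executes a model instance


class EdgeAIDispatcher:
    """Coordinates selection and operation of a hardware platform for an AI model."""

    def __init__(self, platforms: List[HardwarePlatform]):
        self.platforms = platforms

    def identify_platform(self, request: AIRequest) -> HardwarePlatform:
        # Identify, based on the request, an AI hardware platform capable of
        # executing an instance of the requested AI model.
        for platform in self.platforms:
            if request.model_id in platform.supported_models:
                return platform
        raise LookupError(f"no platform supports model {request.model_id!r}")

    def execute(self, request: AIRequest) -> List[float]:
        # Cause execution of the AI model instance using the selected platform.
        platform = self.identify_platform(request)
        return platform.run(request.model_id, request.input_data)


if __name__ == "__main__":
    # Toy accelerator: doubles each input value to stand in for real inference.
    toy = HardwarePlatform(
        name="toy-accelerator",
        supported_models=["demo-model"],
        run=lambda model_id, data: [2 * x for x in data],
    )
    dispatcher = EdgeAIDispatcher([toy])
    print(dispatcher.execute(AIRequest(model_id="demo-model", input_data=[1.0, 2.5])))

In a real edge deployment the platform list would enumerate heterogeneous accelerators (as in claim 24's "plurality of hardware accelerators"), and selection could also weigh the request's constraints rather than only model support.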



Specification length: much longer than average



USPTO Full Text Publication

3 Independent Claims

  • 1. A computing device adapted for artificial intelligence (AI) model processing, the computing device comprising: communication circuitry to receive a request for an AI operation using an AI model; and processing circuitry configured to: process the request for the AI operation; identify, based on the request, an AI hardware platform for execution of an instance of the AI model; and cause execution of the AI model instance using the AI hardware platform.

  • 13. A method for artificial intelligence (AI) model processing with an AI hardware platform, the method comprising a plurality of operations executed with at least one processor and memory of a computing device, and the operations comprising: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform.

  • 24. At least one non-transitory machine-readable storage medium, comprising a plurality of instructions adapted for artificial intelligence (AI) model processing with an AI hardware platform, wherein the instructions, responsive to being executed with processor circuitry of a computing machine, cause the processor circuitry to perform operations comprising: obtaining a request for an AI operation; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the instance of the AI model using the AI hardware platform; wherein the computing device is implemented as an edge gateway or edge switch within an edge computing platform, and wherein the AI hardware platform comprises an accelerator operable as one of a plurality of hardware accelerators within the edge computing platform.