
Patent US10633007
Stradvision, Inc.

Autonomous Driving Assistance Glasses That Assist In Autonomous Driving By Recognizing Humans' Status And Driving Environment Through Image Analysis Based On Deep Neural Network

A method for providing safe-driving information via eyeglasses worn by the driver of a vehicle is provided. The method includes steps, performed by a safe-driving information analyzing device, of: (a) if a visual-dependent driving image corresponding to the perspective of the driver from a camera on the eyeglasses, acceleration information, and gyroscope information from sensors are acquired, inputting the visual-dependent driving image into a convolution network to generate a feature map; inputting the feature map into a detection network, a segmentation network, and a recognition network so that the detection network detects an object, the segmentation network detects lanes, and the recognition network detects the driving environment; and inputting the acceleration information and the gyroscope information into a recurrent network to generate status information on the driver; and (b) notifying the driver of information on a probability of a collision, lane departure information, and the driving environment, and giving a safe-driving warning based on the status information on the driver.
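The pipeline described above, a shared convolution backbone feeding three task-specific heads for object detection, lane segmentation, and environment recognition, plus a recurrent network over the glasses' accelerometer and gyroscope readings, can be illustrated with a minimal PyTorch sketch. All module names, layer sizes, and class counts below are illustrative assumptions; the patent does not disclose these implementation details.

```python
# Minimal sketch of the multi-branch architecture described in the abstract.
# Module names, layer sizes, and class counts are illustrative assumptions,
# not the networks disclosed in US10633007.
import torch
import torch.nn as nn

class ConvBackbone(nn.Module):
    """Shared convolution network turning the driver's-perspective image into a feature map."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, image):
        return self.layers(image)                             # (B, 64, H/4, W/4) feature map

class PerceptionHeads(nn.Module):
    """Detection, segmentation, and recognition branches that share the feature map."""
    def __init__(self, num_classes=10, num_envs=5):
        super().__init__()
        self.detect = nn.Conv2d(64, 4 + num_classes, 1)       # box offsets + object class scores
        self.segment = nn.Conv2d(64, 2, 1)                    # per-pixel lane / not-lane scores
        self.recognize = nn.Sequential(                       # whole-image driving-environment label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_envs))
    def forward(self, fmap):
        return self.detect(fmap), self.segment(fmap), self.recognize(fmap)

class DriverStatusRNN(nn.Module):
    """Recurrent network over acceleration + gyroscope sequences for the driver's status."""
    def __init__(self, num_statuses=4):
        super().__init__()
        self.rnn = nn.LSTM(input_size=6, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, num_statuses)
    def forward(self, imu_seq):                               # (B, T, 6): 3-axis accel + 3-axis gyro
        out, _ = self.rnn(imu_seq)
        return self.head(out[:, -1])

if __name__ == "__main__":
    image = torch.randn(1, 3, 256, 256)                       # visual-dependent driving image
    imu = torch.randn(1, 50, 6)                               # 50 timesteps of accel + gyro readings
    fmap = ConvBackbone()(image)
    boxes, lanes, env = PerceptionHeads()(fmap)
    status = DriverStatusRNN()(imu)
    print(boxes.shape, lanes.shape, env.shape, status.shape)
```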

Specification Length: Much More than Average


3 Independent Claims

  • Claim 1. A method for providing safe-driving information via assistance glasses worn by a driver, comprising steps of:
    (a) if at least one visual-dependent driving image corresponding to perspective of the driver taken from at least one camera installed on the assistance glasses worn by the driver of a vehicle, acceleration information and gyroscope information from one or more sensors installed on the assistance glasses are acquired, a safe-driving information analyzing device performing
      (i) a process of inputting the visual-dependent driving image into a convolution network, to thereby allow the convolution network to generate at least one feature map by applying convolution operation to the visual-dependent driving image, and a process of inputting the feature map respectively into a detection network, a segmentation network, and a recognition network, to thereby allow the detection network to detect at least one object located on the visual-dependent driving image by using the feature map, allow the segmentation network to detect one or more lanes on the visual-dependent driving image, and allow the recognition network to detect driving environment corresponding to the visual-dependent driving image,
      (ii) a process of inputting the acceleration information and the gyroscope information into a recurrent network, to thereby allow the recurrent network to generate status information on the driver corresponding to the acceleration information and the gyroscope information; and
    (b) the safe-driving information analyzing device performing
      (i) a process of notifying the driver of information on an estimated probability of a collision between the vehicle and the object via an output unit of the assistance glasses by referring to the object detected by the detection network, a process of notifying the driver of lane departure information on the vehicle via the output unit by referring to the lanes detected by the segmentation network, and a process of notifying the driver of the driving environment detected by the recognition network via the output unit, and
      (ii) a process of giving a safe-driving warning to the driver via the output unit by referring to the status information on the driver detected by the recurrent network.
  • Claim 10. A safe-driving information analyzing device for providing safe-driving information via assistance glasses worn by a driver, comprising: at least one memory that stores instructions; and at least one processor configured to execute the instructions to perform or support another device to perform:
    (I) if at least one visual-dependent driving image corresponding to perspective of the driver taken from at least one camera installed on the assistance glasses worn by the driver of a vehicle, acceleration information and gyroscope information from one or more sensors installed on the assistance glasses are acquired,
      (I-1) a process of inputting the visual-dependent driving image into a convolution network, to thereby allow the convolution network to generate at least one feature map by applying convolution operation to the visual-dependent driving image, and a process of inputting the feature map respectively into a detection network, a segmentation network, and a recognition network, to thereby allow the detection network to detect at least one object located on the visual-dependent driving image by using the feature map, allow the segmentation network to detect one or more lanes on the visual-dependent driving image, and allow the recognition network to detect driving environment corresponding to the visual-dependent driving image,
      (I-2) a process of inputting the acceleration information and the gyroscope information into a recurrent network, to thereby allow the recurrent network to generate status information on the driver corresponding to the acceleration information and the gyroscope information, and
    (II) (II-1) a process of notifying the driver of information on an estimated probability of a collision between the vehicle and the object via an output unit of the assistance glasses by referring to the object detected by the detection network, a process of notifying the driver of lane departure information on the vehicle via the output unit by referring to the lanes detected by the segmentation network, and a process of notifying the driver of the driving environment detected by the recognition network via the output unit, and
      (II-2) a process of giving a safe-driving warning to the driver via the output unit by referring to the status information on the driver detected by the recurrent network.
  • Claim 19. Assistance glasses for providing a driver with safe-driving information, comprising: the assistance glasses wearable by the driver; one or more sensors, including a camera for taking at least one visual-dependent driving image corresponding to perspective of the driver, an acceleration sensor, and a gyroscope sensor, which are installed on the assistance glasses; and an output unit for providing the driver with the safe-driving information, of the assistance glasses; wherein the assistance glasses includes a safe-driving information analyzing device for performing
    (I) (I-1) a process of inputting the visual-dependent driving image, acquired from the camera, into a convolution network, to thereby allow the convolution network to generate at least one feature map by applying convolution operation to the visual-dependent driving image, and a process of inputting the feature map respectively into a detection network, a segmentation network, and a recognition network, to thereby allow the detection network to detect at least one object located on the visual-dependent driving image by using the feature map, allow the segmentation network to detect one or more lanes on the visual-dependent driving image, and allow the recognition network to detect driving environment corresponding to the visual-dependent driving image,
      (I-2) a process of inputting acceleration information acquired from the acceleration sensor and gyroscope information acquired from the gyroscope sensor into a recurrent network, to thereby allow the recurrent network to generate status information on the driver corresponding to the acceleration information and the gyroscope information, and
    (II) (II-1) a process of notifying the driver of information on an estimated probability of a collision between a vehicle of the driver and the object via the output unit of the assistance glasses by referring to the object detected by the detection network, a process of notifying the driver of lane departure information on the vehicle of the driver via the output unit by referring to the lanes detected by the segmentation network, and a process of notifying the driver of the driving environment detected by the recognition network via the output unit, and
      (II-2) a process of giving a safe-driving warning to the driver via the output unit by referring to the status information on the driver detected by the recurrent network.
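
The step (b) / (II) notification logic recited in the claims above reduces to a simple decision layer over the four network outputs. The sketch below illustrates that layer under assumed thresholds and message formats; the OutputUnit class, the collision_threshold parameter, and the status labels are hypothetical stand-ins for the output unit and status information defined in the claims.

```python
# Minimal sketch of the step (b) / (II) notification logic. Thresholds, message
# texts, and the OutputUnit interface are assumptions for illustration, not the
# output unit defined in the claims of US10633007.
from dataclasses import dataclass

@dataclass
class PerceptionResult:
    collision_probability: float   # from the detection network + vehicle motion
    lane_departure: bool           # from the lanes found by the segmentation network
    driving_environment: str       # from the recognition network, e.g. "rain", "night"
    driver_status: str             # from the recurrent network, e.g. "drowsy", "alert"

class OutputUnit:
    """Stand-in for the glasses' output unit (display overlay, speaker, or vibration)."""
    def notify(self, message: str) -> None:
        print(f"[NOTIFY] {message}")
    def warn(self, message: str) -> None:
        print(f"[WARNING] {message}")

def report_safe_driving_info(result: PerceptionResult, out: OutputUnit,
                             collision_threshold: float = 0.5) -> None:
    # (b)(i) / (II-1): collision probability, lane departure, driving environment
    if result.collision_probability >= collision_threshold:
        out.notify(f"Collision risk {result.collision_probability:.0%} with detected object")
    if result.lane_departure:
        out.notify("Lane departure detected")
    out.notify(f"Driving environment: {result.driving_environment}")
    # (b)(ii) / (II-2): safe-driving warning based on the driver's status
    if result.driver_status != "alert":
        out.warn(f"Driver status '{result.driver_status}': take a break or refocus")

if __name__ == "__main__":
    report_safe_driving_info(
        PerceptionResult(0.72, True, "night driving in rain", "drowsy"), OutputUnit())
```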

