How exactly does the Kopernikus AVP Type 2 technology work?

  • Images are extracted from the camera streams and fed into the inference networks.

  • The network can detect, identify and track a vehicle of interest as well as all surrounding objects.

  • The network is trained to estimate object position and orientation, which are projected onto a global map.

  • The result is sent to the vehicle with a timestamp.

  • This information is fused with onboard odometry to optimize localization for the local planner.

  • The vehicle uses this information to follow the trajectory it received for the driving task and dynamically corrects its position to stay on track.
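The pipeline above can be sketched in simplified form. Everything below — the homography values, the message fields, and the complementary-filter fusion — is an illustrative assumption, not the actual Kopernikus implementation:

```python
import time
import numpy as np

def project_to_map(H, pixel_xy):
    """Project an image-plane detection onto the global map via a
    ground-plane homography H (3x3)."""
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]

# Example homography: a pure scale/translation mapping pixels to map
# metres (a real H would come from camera calibration).
H = np.array([[0.01, 0.0, 5.0],
              [0.0, 0.01, 2.0],
              [0.0, 0.0, 1.0]])

# 1) A detection from the inference network, in pixel coordinates.
detection_px = (400.0, 300.0)

# 2) Project the estimated position onto the global map.
map_xy = project_to_map(H, detection_px)

# 3) Send the result to the vehicle with a timestamp.
message = {"position": map_xy.tolist(), "timestamp": time.time()}

# 4) On the vehicle: fuse the infrastructure fix with onboard odometry.
# A simple complementary filter stands in for the real fusion here.
def fuse(odometry_xy, infra_xy, alpha=0.7):
    return [alpha * o + (1 - alpha) * i
            for o, i in zip(odometry_xy, infra_xy)]

fused = fuse([9.1, 5.2], message["position"])
```

The fused estimate is what the local planner would consume to keep the vehicle on its trajectory.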


The Kopernikus AVP Type 2 technology has several competitive advantages: it is based on proven, low-cost sensors, and the use of A.I. enables us to understand exactly what we are seeing.


Details on sensor technology:

  • Main sensors are PoE cameras (H.264 streaming)

  • RADAR or LIDAR may be fused for narrow passages or specific tasks

  • Proven & reliable sensors for all weather conditions and temperatures

  • Minimal sensor cost allows for multiple redundancy and cost-efficient scalability

  • Positioning accuracy of ~10 cm with cameras, down to 1 cm with LIDAR

  • Large coverage area per camera

  • Automatic calibration by reference objects
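Automatic calibration by reference objects can be sketched as follows: markers with surveyed map positions are re-detected in the image, and a ground-plane homography is estimated from the correspondences. The direct linear transform (DLT) below is a standard technique used for illustration; the point values are made up, and this is not necessarily the calibration method Kopernikus uses:

```python
import numpy as np

def estimate_homography(pixel_pts, map_pts):
    """Estimate a 3x3 ground-plane homography from >= 4 reference
    points with known pixel and map coordinates (DLT, h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (X, Y) in zip(pixel_pts, map_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
        b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        b.append(Y)
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Reference objects: markers whose map positions are surveyed once;
# their pixel positions are re-detected whenever re-calibration runs.
pixel_pts = [(100, 100), (500, 100), (500, 400), (100, 400)]
map_pts   = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

H = estimate_homography(pixel_pts, map_pts)

# Any new detection can now be projected into map coordinates.
p = H @ np.array([300.0, 250.0, 1.0])
print(p[:2] / p[2])  # -> approx (2.0, 1.5), the rectangle centre
```

Because the reference objects never move, each camera can re-run this estimation on its own, which is what makes the calibration automatic.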


Details on A.I. technology:

  • Visual discrimination of static and dynamic objects

  • Detection and protection of pedestrians, children, cyclists, etc.

  • Dynamic re-planning of the route around static obstacles

  • Waiting for the route to clear when blocked by dynamic objects

  • Flexible path planning with alternative waypoints

  • Low-cost A.I. compute units (GPUs) 

  • Overprovisioning and separation of GPUs

  • Made for mixed traffic conditions