Today, almost all autonomous vehicle companies targeting Level 5 autonomy rely on a sensor suite of LiDARs and cameras, calibrated and synchronized with one another, to perceive the world around the vehicle.
Although 2D camera data is used to teach autonomous vehicles to find their way from Point A to Point B, it comes with its own set of drawbacks.
State-of-the-art semantic segmentation models need to be tuned for memory consumption and frames-per-second (FPS) throughput before they can be used in time-sensitive applications like autonomous vehicles. Here we study models like FCN, SegNet, ENet, and ICNet.
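As a minimal sketch of what "tuning for FPS" involves, the snippet below times repeated forward passes of a model and reports throughput. The `dummy_segmentation` function is a hypothetical stand-in for a real model's inference call (e.g. an FCN or ENet forward pass); the warm-up loop and run count are illustrative choices, not values from any of the papers discussed here.

```python
import time

def measure_fps(infer, frame, n_warmup=3, n_runs=20):
    """Time repeated forward passes and return frames per second."""
    for _ in range(n_warmup):
        infer(frame)  # warm-up: fill caches, trigger any lazy initialization
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(frame)
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Hypothetical stand-in for a segmentation model's forward pass:
# assigns each pixel one of three "classes" based on its values.
def dummy_segmentation(frame):
    return [[max(px) % 3 for px in row] for row in frame]

# A small synthetic 64x64 RGB-like frame.
frame = [[(r, c, r + c) for c in range(64)] for r in range(64)]
fps = measure_fps(dummy_segmentation, frame)
print(f"{fps:.1f} FPS")
```

In practice one would swap `dummy_segmentation` for the real model's inference call and measure on the target hardware, since FPS depends heavily on input resolution and the device running the model.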
An understanding of open datasets for urban semantic segmentation helps clarify how to proceed when training models for self-driving cars. We explore datasets like Mapillary Vistas, Cityscapes, CamVid, KITTI, and DUS.