LiDAR (Light Detection And Ranging)
What is LiDAR?
Take a look at the image below. It’s a vehicle, but what’s that weird, whirling can-like thing sitting on top like a helmet? That’s what we call a LiDAR: it spins round, firing invisible laser beams in all directions, catching the reflections, and measuring how long the beams take to return so it can figure out what obstacles are nearby and how far away they are.
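To make that arithmetic concrete, here’s a minimal Python sketch of the round-trip calculation a LiDAR performs for every pulse (the function name and the 200-nanosecond example are purely illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds):
    """Half the round trip gives the one-way distance to the obstacle."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that comes back after 200 nanoseconds hit something ~30 m away.
print(range_from_time_of_flight(200e-9))  # ~29.98
```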
The basic concept of LiDAR is exactly the same as radar and sonar. With radar, a jet plane fires out a beam of coded radio waves and listens for a return beam reflected off some nearby object (say, another plane about to crash into you); the time the beam takes to return tells it how far away the object is. With sonar, you do the same underwater, only using sound waves (because ordinary light and radio waves don’t travel very far through water). In everyday, on-land situations, like driving down the street or navigating through a building, reflected laser light turns out to be a better source of information than either radio waves or sound, and that’s why LiDAR has become so popular: it’s simple and reliable, if a little costly.
So you can use LiDAR data to build a real-time map of the streets through which a self-driving car is trying to navigate.
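As a rough idea of how that works, here’s a hedged sketch, assuming the sensor reports each return as a range plus azimuth and elevation angles, of turning one spin’s worth of returns into 3D map points:

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one spinning-LiDAR return (polar form) into a Cartesian point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# One full spin of a single laser channel, one return per degree; stacking
# many spins and many channels gives the real-time point cloud of the street.
point_cloud = [lidar_return_to_xyz(12.5, az, -2.0) for az in range(360)]
```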
Why is LiDAR needed?
Look around you. What you see is a 3D colour map of your immediate environment that your brain has built using the light rays taken in by your eyes. If you were a robot with a couple of digital cameras stuck on your head, you could build yourself a map of a room in much the same way, but it wouldn’t be anything like as informative or useful. For example, as a robot you wouldn’t be able to figure out that one object is nearer than another, or that the rotating thing with wings hanging from the ceiling in the middle of the room is a fan. As a human, you know these things because your brain processes visual information using a lifetime of experience of what that rotating thing with wings actually means. But robots don’t have the same encyclopaedic life experience to draw on, which means they’re at a natural disadvantage when it comes to “seeing” the world. They are like Aamir Khan’s character in the movie ‘PK’, who comes from another world and does not know how this one works.
That’s why autonomous robots and self-driving cars often prefer to look at the world in a different way, using LiDAR systems instead of cameras. Where a camera-based eye snaps an instant 2D photo of a scene that then has to be processed and interpreted to work out what it’s looking at, LiDAR makes millions of depth measurements in all directions simultaneously, and it’s often quicker and easier to turn that data into a map you can use for navigation in real time.
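As a deliberately toy illustration of that last step, one simple way to turn a pile of LiDAR points into a navigable map is a 2D occupancy grid; the grid dimensions below are made up for the example:

```python
def occupancy_grid(points_xy, cells=200, cell_size_m=0.5):
    """A cells x cells boolean map (here 100 m x 100 m); True = obstacle."""
    grid = [[False] * cells for _ in range(cells)]
    for x, y in points_xy:
        col = int(x / cell_size_m) + cells // 2   # the car sits at the centre
        row = int(y / cell_size_m) + cells // 2
        if 0 <= row < cells and 0 <= col < cells:
            grid[row][col] = True                 # a laser return landed here
    return grid

# True cells are things to steer around; False cells are free space.
```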
This is how a LiDAR can sense and see.
LiDAR in Self-driving Cars
Autonomous vehicles need to monitor everything, fixed or moving, in their immediate environment. Autonomous cars are hot: a multi-billion-dollar business opportunity that could transform mobility. Beyond everyday use, self-driving cars could expand transportation options for the elderly and disabled, and ease business travel by guiding drivers in unfamiliar locales. Perhaps most important, they could reduce accidents caused by things like drunk driving and over-speeding.
One of the most popular LiDAR sensors on the market is the high-powered Velodyne HDL-64E, seen below mounted on Homer.
For levels 4 and 5 of vehicle automation, automotive companies have to rely on all three types of ADAS sensors, i.e. vision-, RADAR- and LiDAR-based sensors. The three sensor modules complement each other to provide complete driver assistance.
Vision-based systems assist in high-visibility conditions, providing parking assistance, recognizing traffic signs, identifying road markings and more.
RADAR-based systems perform in low-visibility conditions and cover a relatively longer range.
LiDAR-based systems are highly accurate at detecting objects and recognizing 3D shapes in the vehicle’s surroundings, even at longer distances, with a 360-degree field of view. LiDAR’s 3D mapping capability also helps in differentiating between cars, pedestrians, trees and other objects, while calculating and sharing details of their velocity in real time (a simple version of that calculation is sketched below).
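One minimal, illustrative way to get that velocity (real systems use far more sophisticated tracking) is to follow an object’s point-cluster centroid across two consecutive scans:

```python
def centroid(points):
    """Mean (x, y, z) of an object's point cluster in one scan."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def velocity_between_scans(cluster_t0, cluster_t1, dt_s):
    """Velocity vector in m/s of one tracked object between two scans."""
    c0, c1 = centroid(cluster_t0), centroid(cluster_t1)
    return tuple((c1[i] - c0[i]) / dt_s for i in range(3))

# A cluster that moved 0.4 m forward between scans 0.1 s apart is doing 4 m/s.
print(velocity_between_scans([(10.0, 2.0, 0.5)], [(10.4, 2.0, 0.5)], 0.1))
```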
Why can’t we use cameras instead of LiDAR in self-driving cars?
First, let’s take a look at LiDAR.
Advantages of LiDAR
One of the primary advantages of LiDAR is accuracy and precision. The reason Waymo is so protective of its LiDAR system is its accuracy. The Drive reported that Waymo’s LiDAR is so incredibly advanced that it can tell which direction pedestrians are facing and predict their movements. It can also recognize the hand signals that bicyclists use, predicting which direction a cyclist will turn.
LiDAR gives self-driving cars a three-dimensional image to work with. LiDAR is extremely accurate compared to cameras because the lasers aren’t fooled by shadows, bright sunlight or the oncoming headlights of other cars.
It also saves computing power: it can immediately tell the distance to an object and the direction of that object. Very few car manufacturers actually deploy LiDAR in consumer vehicles. Volvo has said it will incorporate LiDAR in the 2022 Volvo XC90, and Audi has deployed front-facing LiDAR in some vehicles, like the A8 and A6. I gave the example of the A8 in my previous blog on the five levels of automation in cars.
Limitations of LiDAR
Cost is one of the major disadvantages of LiDAR. Google’s system originally ran upwards of $75,000. Today, start-ups have brought the cost of LiDAR units down to below $1,000 in the case of Luminar, and Velodyne has even introduced a more limited LiDAR called the Velabit.
Interference and jamming are another potential issue as LiDAR systems roll out more broadly. If a large number of vehicles are all generating laser pulses at the same time, it could cause interference and potentially “blind” the vehicles. Manufacturers will need to develop methods to prevent this.
LiDAR also has the limitation that many systems cannot yet see well through fog, snow and rain; an autonomous vehicle might interpret a mass of falling snowflakes as a wall in the middle of the road.
LiDAR doesn’t provide information that cameras typically can, like the words on a sign or the colour of a stoplight.
LiDAR systems are also currently very bulky, since they require spinning laser units to be mounted around the vehicle.
Tesla CEO Elon Musk is not a fan of LiDAR in vehicles and was very blunt at Tesla’s Autonomy Day 2019 for investors, where he said, “Anyone relying on LiDAR is doomed.” In the video below, check out the details Elon discussed regarding LiDAR.
https://www.youtube.com/watch?v=HM23sjhtk4Q&t=622s
Now, Why Cameras?
Cameras in autonomous driving work much the same way our human vision works, and they use technology similar to that found in most digital cameras today.
As Elon Musk puts it, “This whole road
system is meant to be navigated with passive optical, or cameras, and so once
you solve cameras or vision, then autonomy is solved. If you don’t solve
vision, it’s not solved.”
Why are cameras so popular? First, cameras are much less expensive than LiDAR systems, which brings down the cost of self-driving cars, especially for end consumers. They are also easy to incorporate, since video cameras are already on the market: Tesla simply buys an off-the-shelf camera and improves it rather than inventing an entirely new technology.
Another advantage is that cameras aren’t blinded by weather conditions such as fog, snow and rain in the way LiDAR can be: whatever a normal human can navigate, so can a camera-based system.
Finally, cameras can easily be incorporated into the design of the car and hidden within its structure, making them more appealing for consumer vehicles.
Limitations of Cameras
Cameras are subject to the same issues humans face when lighting conditions change in a way that makes the subject matter unclear. Think of situations where strong shadows or bright lights, from the sun or oncoming cars, cause confusion. It’s one of the reasons Tesla still incorporates a radar at the front of its cars for additional input.
Cameras are also relatively “dumb” sensors, in that they only provide raw image data back to the system, unlike LiDAR, where the exact distance and location of an object is provided. That means camera systems must rely on powerful machine learning (neural network, or deep learning) computers that can process those images to determine exactly what is where, similar to how our human brain processes the stereo vision from our eyes to determine distance and location.
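To make the stereo idea concrete, here’s a hedged sketch of the similar-triangles formula behind two-camera depth estimation; the focal length, baseline and disparity values are purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres from the pixel offset between the two camera views."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or a bad match")
    return focal_px * baseline_m / disparity_px

# e.g. a 700-pixel focal length, 12 cm camera spacing, 14-pixel offset:
print(depth_from_disparity(700, 0.12, 14))  # the object is ~6.0 m away
```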
For a long time, neural networks and machine learning systems just weren’t powerful enough to handle the large amounts of data coming from cameras and process everything in time to make driving decisions. Now, however, neural networks are becoming sophisticated enough that they can potentially handle real-world inputs better than LiDAR.
Conclusion
So, if you ask for my point of view: I am fully in support of Elon Musk and the cameras, because they are reliable, easy to source and cheap. What Elon Musk said about giving the car human-like vision makes more sense to me than LiDAR, because the systems that help cameras visualize and measure can keep on being refined and updated.
To know and understand how Tesla incorporates and uses cameras, check out the upcoming blogs.