Autopilot vs. Autonomous
Lynnette Reese | Mouser
People have misconceptions about autonomy. Autopilots have been used in aircraft for decades. In newer planes and at properly equipped airports, the autopilot can even be used for landing, typically in dense fog, with the help of radar equipment on both the plane and the ground, and the landing isn't gentle. (Pilots occasionally perform a "greased in" landing, a touchdown so gentle you don't even feel it. Be sure to congratulate your pilot if this ever happens.)
Autopilot is usually activated shortly after takeoff and disengaged shortly before landing. The autopilot controls the trajectory of the aircraft without requiring the pilot to perform constant "hands-on" control. However, the autopilot does not replace the human pilot; it frees the pilot to focus on higher-value tasks such as monitoring the weather, communicating with air traffic control, managing navigation, monitoring the aircraft's systems, making announcements, and, in the old days, perhaps flirting with flight attendants.
The autopilot relies on sensors that report the aircraft's speed, altitude, and how turbulence is affecting the vehicle's orientation to maintain the flight path input by the pilot. The autopilot does not perform collision avoidance. In this way, it is analogous to cruise control in an automobile. In contrast, collision avoidance is a critical capability of autonomous automobiles.

Over a decade ago, a pilot friend of mine, a former military flier then working as a first officer, told me a story that illustrates how autopilot can get pilots in trouble. He said that it's very hard to stay awake on a red-eye flight: the cabin is dark with only a few twinkling LEDs and stars, the temperature is cool, and there's a deep, steady noise like a large fan would make. Combined, they make the cockpit a perfect sleep-inducing workplace. The story was about a pilot and first officer on a red-eye from New York to L.A. Somehow both had fallen asleep. They overshot the airport and were well out over the ocean with a planeload of people when a low-fuel alarm woke them up. Unplanned incidents like this require a report to the FAA explaining the situation. He said that they landed the plane safely at the airport, but both pilots were sacked. So autopilot is not the same thing as autonomous.
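The autopilot's job in the story above is pure trajectory-holding: measure the current state, compare it with the pilot's target, and command a correction, with no awareness of anything outside the flight plan. This toy proportional controller for altitude hold sketches the idea; the function name and gain are hypothetical, not any real avionics:

```python
def altitude_hold_correction(target_alt_ft, current_alt_ft, gain=0.1):
    """Illustrative proportional altitude-hold loop (not real avionics).

    Commands a climb (+) or descent (-) rate, in ft/min, proportional to
    how far the aircraft is from the target altitude. Note there is no
    input for other aircraft: like cruise control in a car, this loop
    holds a setpoint and performs no collision avoidance.
    """
    error = target_alt_ft - current_alt_ft
    return gain * error
```

Real autopilots layer many such loops (pitch, roll, heading, airspeed), but the principle is the same: they hold the trajectory they were given, and nothing more.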
Tesla vehicles are not fully autonomous. The company calls Tesla's self-driving capabilities "Autopilot" precisely because they are not capable of autonomous driving. For example, a Tesla currently cannot drive on roads where it cannot identify the lane unless it is following another vehicle, and it's the driver's responsibility to decide whether following that vehicle is a good idea. Teslas equipped with "Autopilot hardware" and the "Tech package" have Traffic-Aware Cruise Control (TACC), but they are currently not able to decide whether they should enter an intersection; that responsibility falls solely on the driver. Per the Tesla Model S owner's manual, "Traffic-Aware Cruise Control is designed to slow down Model S if needed to maintain a selected time-based distance from the vehicle in front, up to the set speed. Traffic-Aware Cruise Control does not eliminate the need to watch the road in front of you and to apply the brakes if needed...Traffic-Aware Cruise Control is primarily intended for driving on dry, straight roads, such as highways and freeways. It should not be used on city streets." However, the Tesla's software automatically gets updated wirelessly so it can strengthen existing autopilot capabilities or add new ones, bringing it slightly closer to becoming an autonomous vehicle (AV), which is technically feasible.
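The manual's phrase "a selected time-based distance" suggests simple time-gap logic: the gap you want, in meters, scales with the lead vehicle's speed. A minimal sketch of that rule, assuming a 2-second gap and hypothetical names throughout (this is an illustration, not Tesla's actual algorithm):

```python
def tacc_target_speed(set_speed, lead_speed, gap_m, time_gap_s=2.0):
    """Illustrative time-gap cruise control (not Tesla's actual code).

    Maintain at least `time_gap_s` seconds of following distance behind
    the lead vehicle, never exceeding the driver's set speed (m/s).
    """
    if lead_speed is None:                  # no vehicle detected ahead:
        return set_speed                    # behave like plain cruise control
    desired_gap = lead_speed * time_gap_s   # meters of gap wanted at this speed
    if gap_m < desired_gap:
        # Too close: slow below the lead's speed to let the gap open up
        return min(set_speed, lead_speed * (gap_m / desired_gap))
    # Gap is adequate: match the lead's speed, capped at the set speed
    return min(set_speed, lead_speed)
```

Notice what this logic never considers: cross traffic, traffic lights, or whether entering an intersection is safe. It only manages the gap straight ahead, which is why that responsibility stays with the driver.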
A fully autonomous automobile is able to decide whether it can safely enter an intersection and how to maneuver around other vehicles, people, and other moving objects. Autonomous cars, such as the Google Self-Driving Car, rely on detailed information about the roads they are driving on before they enter those roads. The Google car generates a 3D map of the area's stationary objects, including the locations of traffic lights, and combines it with existing high-resolution maps of the area to create the data model by which it navigates. Understanding how other objects, including people and other cars, behave, and successfully predicting what they will do, is critical to reaching fully autonomous driving capability. It's reasonable to expect that we can technically get there; however, with an infinite number of possible driving scenarios, an autonomous vehicle will never be flawless. Google's first self-driving car accident in which the car's AV mode was at fault happened earlier this year, and the result was a low-speed fender-bender. It's somewhat amusing, and it portends the seemingly over-cautious nature of AVs, that the Google car was traveling at 2 m.p.h. when it collided with a bus traveling at 15 m.p.h. while trying to re-enter traffic after realizing that its path was blocked by sandbags.
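One basic ingredient of that prediction is judging whether a gap to another road user is closing fast enough to matter: a time-to-collision estimate. A toy version of such a check, with hypothetical names and thresholds (real AV planners use far richer models of other agents' behavior), might look like:

```python
def time_to_collision(gap_m, own_speed, other_speed):
    """Toy time-to-collision: seconds until the gap closes if both
    vehicles hold their current speeds (m/s). Returns None when the
    gap is not closing, i.e. no collision on current trajectories."""
    closing = own_speed - other_speed   # m/s by which the gap shrinks
    if closing <= 0:
        return None
    return gap_m / closing

def safe_to_proceed(gap_m, own_speed, other_speed, min_ttc_s=3.0):
    """Proceed only when the predicted time-to-collision, if any,
    exceeds a safety margin (a hypothetical 3-second threshold here)."""
    ttc = time_to_collision(gap_m, own_speed, other_speed)
    return ttc is None or ttc > min_ttc_s
```

The hard part, of course, is the inputs: a constant-speed assumption fails exactly when another driver does something unexpected, such as a bus not yielding to a car creeping around sandbags.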
Figure 1: The Google car, in AV mode, was approximately at the position occupied by the white van in this Street View when it hit a bus while trying to get around some sandbags blocking its path in the right lane on El Camino Real in Mountain View, CA.
Doubtless this incident has been, or will be, examined in detail to determine why the car didn't see the bus. So while autopilot can appear to be the same as autonomous, each places significantly different responsibilities on the driver.
Figure 2: This Lexus model Google Self-Driving car had a fender-bender when it didn't see a bus. With over a million miles and an otherwise pristine record, it's forgivable in my book, because it's still a better driving record than most humans I know, including me.