Last Updated on September 4, 2024
Dinesh Abeywickrama Ph.D.(Reading), MBCS, MBA, BCS explores the embryonic issue of self-driving cars and looks at some of the security implications associated with this disruptive technology.
Content
- Why are self-driving cars so important?
- How do self-driving cars work?
- Insecurity coupled with security
- Unexpected encounters
- Human-robot conflict
- Lack of sensitivity
- Every cloud has a silver lining
- Policy for protection
Why are self-driving cars so important?
With our busy daily lives, we’re spending much more time inside our cars. And because life is short, every moment matters and we’re under pressure to make every minute count. Hence, the era of driver-less cars will likely transform our lives even more, whether we like it or not.
Travel between destinations without human operational involvement is no longer far-fetched: according to business intelligence, 10 million self-driving cars will be on the road by 2020 (Rouse, 2017). Automakers are already introducing numerous self-directed features in their cars, which carry additional high-margin revenues.
It is projected to be the fastest-growing market for carmakers over the next ten years, especially when we consider existing semi-autonomous features such as self-braking systems, assisted parking, blind-spot monitoring, and lane-keeping assistance, which target the 95 percent of accidents caused by human error.
Most importantly, these semi-autonomous features are not affected by the state of the driver (e.g. tired, angry, sad). They can also scan multiple directions simultaneously, improving road safety overall and reducing auto insurance and health costs.
However, a self-driving car is really a massive computer. So, can we be sure of the safety of such utilities? Who is liable for the risk? Even though self-driving cars may not become mainstream for more than a decade, there are definite considerations that car users should start thinking about now.
How do self-driving cars work?
Self-driving car technologies rest on sensors, connectivity, and software, and promise mobility to those unable to drive due to age or physical impairments (Gupton, 2017). To navigate the car safely, sensors such as radar, ultrasonic sensors and cameras provide the necessary inputs. Google is using 'lidar' (a radar-like technology that uses light instead of radio waves) and going straight to cars without steering wheels or foot pedals.
Connectivity supports the detection of the latest traffic, weather, surface conditions, construction, maps, adjacent cars, and road infrastructure. This information can be used to monitor a vehicle's operating environment to anticipate braking or to avoid dangerous situations.
Software/control algorithms capture data from the sensors and connectivity and make the necessary adjustments to speed, steering, braking and route. Tesla has a software system named 'Autopilot', which employs high-tech camera sensors as its 'eyes', and some of its cars are already on the market.
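The sensor-connectivity-software pipeline described above can be pictured as a simple control loop. The following Python sketch is purely illustrative (the class, thresholds and stopping-distance rule of thumb are assumptions, not any manufacturer's actual algorithm):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    # Distance (metres) to the nearest obstacle ahead, per sensor.
    radar_m: float
    camera_m: float
    ultrasonic_m: float

def fuse(reading: SensorReading) -> float:
    """Conservatively fuse redundant sensors: trust the closest estimate."""
    return min(reading.radar_m, reading.camera_m, reading.ultrasonic_m)

def control_step(reading: SensorReading, speed_kmh: float) -> str:
    """Decide an action from fused sensor input (hypothetical thresholds)."""
    distance = fuse(reading)
    # Simplified stopping-distance rule of thumb: (speed/10)^2 / 2 metres.
    stopping_m = (speed_kmh / 10) ** 2 / 2
    if distance < stopping_m:
        return "brake"
    elif distance < 2 * stopping_m:
        return "slow"
    return "maintain"

# At 60 km/h the rule of thumb gives an 18 m stopping distance,
# so an obstacle fused at 12 m triggers braking.
print(control_step(SensorReading(12.0, 14.0, 15.0), 60.0))  # brake
```

In a real vehicle this loop runs many times per second, and the "action" feeds actuators rather than a print statement; the point here is only the shape of the sense-fuse-decide cycle.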
Insecurity coupled with security
Have you ever thought about what would happen if hackers breached the network connected to a self-driving vehicle and then deactivated key sensors and GPS features? If this happened, hackers could remotely force the vehicle to head to a remote, undisclosed location in order to facilitate the theft of both the vehicle and its contents.
In addition, lives could be at risk. Most importantly, the connected technologies, including laser range finders, ultrasonic devices, wheel sensors, cameras, and inertial measurement systems, can be accessed by hackers (Miller, 2014). At the same time, different types of risk will emerge, such as software bugs, information-system incompatibilities and control failures (Greenberg, 2015).
Unexpected encounters
According to research done at Duke University, it is impossible to code every scenario in advance. For example, self-driving cars can get confused by unexpected encounters, such as when a traffic officer waves vehicles through a red light. As a human, you can recognize body language and other contextual cues of human behavior.
The cars cannot, so handling such a situation is a huge challenge for the computer. Another example is a child about to dart into the road: a self-driving car's artificial intelligence (AI) must be able to recognize and react to such a situation. This is one of the best examples of how advanced technologies are not yet able to create 100 percent secure designs (Hamers, 2016).
Human-robot conflict
How does the car notify a passenger as to whether or not they should take over the task? Moreover, how does the car confirm that the passenger is ready to take over the responsibilities of driving? Scientists are still researching the connection between the human brain and the notifications a person might receive while in passenger mode (Hamers, 2016).
Lack of sensitivity
Can a self-driving car function in the same way regardless of road conditions? It should be able to detect all the road features around it despite bad weather such as fog, lightning, rain, and snow. These sensors must therefore be reliable, accurate and fit for purpose, with enough detail available for the vehicle to keep functioning even in extreme conditions.
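One practical consequence of this requirement is that the car should notice when its redundant sensors stop agreeing, for instance when fog blinds a camera while radar still sees an obstacle. A minimal sketch of such a consistency check (the tolerance value and function are hypothetical illustrations, not a real system's logic):

```python
def sensors_consistent(readings: list[float], tolerance_m: float = 2.0) -> bool:
    """Flag when redundant distance estimates disagree beyond a tolerance,
    as might happen when fog or rain degrades one sensor."""
    return max(readings) - min(readings) <= tolerance_m

# Radar and ultrasonic agree on ~10 m, but a fog-degraded camera
# reports 45 m of clearance: the disagreement should trigger a
# fallback (slow down, hand over, or pull over safely).
print(sensors_consistent([10.0, 10.5, 45.0]))  # False
```

A production system would do far more (per-sensor confidence, weather models, graceful degradation), but the principle is the same: never act on a single sensor's word in conditions where that sensor is known to fail.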
Every cloud has a silver lining
When we consider security implications for self-driving cars, one way to prevent hacking and security breaches would be a centralized token system, which would protect the connection with the vehicle whenever actions were needed. Using this technique, hackers would not only have to reach the network connectivity but would also have to compromise the access token.
Therefore, it would be difficult for them to penetrate two security layers with cross-protection. Furthermore, a sheltered, centralized position within the cloud would be likely to operate as the preeminent interoperability configuration for these communication networks.
But, then again, bearing in mind that today's popular applications (such as those on mobile phones) are regularly hacked, it would be wise for original equipment manufacturers (OEMs), suppliers and technology providers to collaborate holistically on security measures before self-driving vehicles take to the road and the consequences of a hacked car become dire (Miller, 2014).
Policy for protection
Policymakers should target agile innovation in their research and advance the way they use aggregate volumes of data, cumulative patterns, and accumulated incidents.
It is imperative to have a holistic stakeholder collaborative platform connected with an automotive ecosystem that will include insurers, auto manufacturers, technology companies and regulators.
In fact, stakeholder organizations, such as policy institutes, insurers, automobile manufacturers and suppliers, should conduct rigorous pilot research in order to understand what inspires or inhibits adoption in each geographical location and environment.
Their research findings must integrate with the corresponding progress in each jurisdiction and highlight leading technological models that are suitable as default templates for broader roll-out.
Dinesh Abeywickrama Ph.D.(Reading), MBCS, MBA, BCS
Follow me to get the latest updates
Facebook https://www.facebook.com/Samuraidinesh
Twitter https://twitter.com/dineshabeywick
Instagram https://www.instagram.com/samuraidinesh/
LinkedIn https://lk.linkedin.com/in/samuraidinesh
References
- Drefuss, E., 2017. 'Security news this week: a whole new way to confuse self-driving cars.' http://www.wired.com/story/securitynews-august-5-2017/ [Accessed 05 September 2017].
- Greenberg, A., 2015. 'Hackers remotely kill a Jeep on the highway – with me in it.' http://www.wired.com/2015/07/hackersremotely-kill-jeep-highway/ [Accessed 06 September 2017].
- Gupton, N., 2017. 'The Science of Self-Driving Cars.' The Franklin Institute. http://www.fi.edu/science-of-selfdrivingcars [Accessed 03 September 2017].
- Hamers, L., 2016. 'Five challenges for self-driving cars.' http://www.sciencenews.org/article/five-challenges-self-driving-cars [Accessed 06 September 2017].
- Miller, D., 2014. 'Autonomous Vehicles: What Are the Security Risks?' http://www.covisint.com/blog/autonomousvehicles-what-are-the-security-risks/ [Accessed 05 September 2017].
- Rouse, M., 2017. 'Driverless car – definition.' http://whatis.techtarget.com/definition/driverless-car [Accessed 03 September 2017].