AI can only think about what it's programmed to think about and what its sensors tell it. Snow is just one condition AI will struggle with, but a very difficult one. I'm all for self-driving: when the roads are designed to work with it (sensors built into the pavement, cars talking to each other, etc.), it'll be nice to take a trip where you don't have to worry about the task of driving. We're not there yet, and I don't think we're as close as some want us to be.
There are similar systems already in place for aviation, but there are still considerable constraints on automated/autonomous systems for several reasons. One is non-transmitting (aka "uncooperative") objects in the air, which include everything from general aviation aircraft to hang gliders, paragliders, and hot air balloons. Not all of those aircraft are equipped to transmit their own location or flight vectors, and unmanned aircraft aren't yet capable of detecting such objects on their own. Then of course there are also birds, which can really mess up an aircraft when struck.....
All of which is simpler to deal with than ground traffic, due to the greatly reduced numbers and challenges involved. Also, the wildlife that can be hit in the air generally does far less damage to aircraft than animals on the ground do to cars & trucks, so the consequences can be less severe. That of course assumes the life form isn't intentionally trying to create problems, which isn't universally true: in certain areas, individuals will intentionally step in front of oncoming vehicles (for various reasons that may not include attempted suicide). Unfortunately, it's not hard to imagine how the rules of a self-driving car could be exploited by those engaged in criminal activities, since it'd be a lot safer to try corralling a self-driving vehicle (particularly one that refuses to drive over/through people) than one driven by a human.
Establishing those precedents will likely happen.... not sure it will be during my lifetime.....
There are many reasons Level 5+ is attractive..... medical events where the lone occupant is incapacitated being one.
Programming a rules-based complex system is non-trivial.
If a Level 5 vehicle is trying to get an incapacitated person to the ER doorway, and encounters a blocked road just before the entrance, will it choose to drive over a sidewalk to get the person there?
Rules are (usually) there for reasons..... but knowing when to bend or break them is a higher-level reasoning challenge....
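To make the "bend or break the rules" problem concrete, here's a toy sketch of a fixed-priority rule system. Everything here (the function, route fields, and the "emergency overrides sidewalk rule" exception) is invented for illustration; real planners are vastly more complex, but the brittleness is the same: the system can only bend rules its designers anticipated.

```python
# Toy sketch: a hard rule ("never use the sidewalk") with one
# hand-coded exception. All names and fields are hypothetical.

def choose_route(routes, emergency_onboard):
    """Pick the fastest route permitted by a fixed rule set."""
    def allowed(r):
        if r["blocked"]:
            return False
        if r["uses_sidewalk"]:
            # Hard rule: never drive on a sidewalk... unless a
            # medical emergency overrides it AND the sidewalk is
            # sensed to be clear of people.
            return emergency_onboard and r["sidewalk_clear"]
        return True

    candidates = [r for r in routes if allowed(r)]
    # If no rule-compliant route exists, the vehicle just stops.
    return min(candidates, key=lambda r: r["eta_min"]) if candidates else None

routes = [
    {"name": "main entrance", "blocked": True,
     "uses_sidewalk": False, "eta_min": 1},
    {"name": "sidewalk detour", "blocked": False,
     "uses_sidewalk": True, "sidewalk_clear": True, "eta_min": 2},
]
```

With an emergency onboard the planner takes the sidewalk detour; without one it returns `None` and the vehicle is simply stuck, even though a human driver would weigh the situation and act. Any exception not anticipated at design time becomes a dead end.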
Rgds, D.
I agree there definitely are a great many attractions, but as noted there are also a lot of challenges, many of which may seem like "improbable/impossible" corner cases to the casual observer. However, those seemingly "impossible" cases, when applied to large fleets of vehicles, can start becoming pretty routine/regular; "one in a million" generally seems pretty impossible until it's applied to a sample size of tens/hundreds of millions or billions. For consideration: the 737 Max fleet had a fatal accident rate of 4 per million flights, with ~500,000 flights when the fleet was grounded.
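The scale argument is easy to check with back-of-envelope arithmetic. The per-trip probability and fleet size below are invented round numbers, not real statistics; the 737 Max figure is the one quoted above (two fatal accidents in roughly 500,000 flights).

```python
# Illustrative numbers only: p and trips are invented round figures.
p = 1e-6              # a "one in a million" per-trip failure probability
trips = 100_000_000   # trips across a large fleet over some period

expected = p * trips                  # expected number of failures
at_least_one = 1 - (1 - p) ** trips   # chance of at least one failure

print(expected)       # roughly 100 failures expected
print(at_least_one)   # essentially 1.0: all but certain to happen

# The 737 Max figure quoted above: 2 fatal accidents in ~500,000 flights
max_rate = 2 / 500_000
print(max_rate * 1_000_000)  # ~4 per million flights
```

So a failure mode that's "one in a million" per trip still shows up about a hundred times across a hundred million trips, which is why corner cases that never trouble an individual driver become routine events for a large fleet.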
If anything, I expect we'll first see the assistive-driving technologies cause some of the same issues seen in commercial airliners: the automated systems are so good that the human crew gets complacent and stops paying as much attention.... which (as has been seen) can cause additional issues when the automation gives up (or goes wonky in some way) and hands full control back to the human, who is still legally responsible for controlling the aircraft.
So just my own opinion, but I suspect it'll still be another 5-15 years before automated aircraft are routinely permitted to fly without human supervision (even with all the considerable governmental speed/effort being applied to that problem). I suspect that will also occur at least 10-20 years before fully self-driving vehicles become commercially available.
I suspect that will be the case in large part because knowing when/how to bend/break the rules, and the liability/accountability issues involved in that sort of decision making, will be more of a necessity on the ground, where there's a greater probability of having to deal with intentional adverse behavior by humans engaged in criminal or other anti-social activity. For example (a serious question): how do current technologies handle road-raging drivers attempting to run a tech-assisted vehicle off the road? Not something that can just be hand-waved away when designing a self-driving vehicle (the question also applies to occupants of another vehicle throwing things at such vehicles).
It'd be nice to be wrong, but from what I've seen I'm probably still being overly optimistic in my time estimates. Whether I am or not, I'll be pleasantly surprised if I see fully self-driving cars in my lifetime (which could extend out another 60-ish years). ...granted, it seems the more work people are relieved of doing, the more anti-social some seem to become (I've yet to see more technology really solve the anti-social human problems of the world).