Okay, so if it weren't for human intervention, two accidents would have happened within a minute:
https://www.youtube.com/watch?v=GHCQ36Q5zRo&feature=youtu.be...
Considering this feature is out for anyone to use, my question is: who is responsible for the damage? Suppose you have a video like this one clearly showing Autopilot at fault: will Tesla cover the cost of the damage (to both the Tesla and the other party's car)?
The short-term answer is probably going to be "the driver, until this becomes a sufficiently common issue that legislators get involved".
The longer-term answer is probably going to be "AI insurance". The bigger question is whether it will be the vendors or the owners who pay the premiums. My hope is that the onus would be on vendors, since it's _their_ software, but my gut says that the cost will instead fall on either owners or the taxpayer.
The feature is not out for everyone to use. I have FSD and don't have this on my Model 3 yet.
This seems like a profoundly irresponsible move. After selling expensive snake oil FSD hardware for years, Tesla is putting a flawed system into production that people will trust too much and which will inevitably lead to accidents. This could lead to legislation against FSD and set the whole industry back, preventing more responsible players from bringing their superior products to market in a timely fashion. Maybe a conscious play from Tesla to handicap their more technologically developed rivals?
What is amazing is that Tesla is able to scale testing tremendously with this, and that's probably why they'll be the first ones to make it.
Uber, Waymo, etc. have to pay people; Tesla owners do it for free! Millions of miles...
Can't believe this is allowed on the road.
I can't believe we allow untrained humans to drive.
Being allowed to drive is almost a necessity in today’s world, whereas self-driving is not. I don’t think it’s a fair comparison to make. Just as there are safety restrictions on other aspects of vehicles, restrictions could apply to an extremely poor driving algorithm if one were to exist.
We train student drivers under professional or at least careful supervision, and we know the learning curve for humans in general. We test them to specific standards for general driving skills, and only then allow them to drive on public roads. And if they commit specific egregious mistakes we suspend that right.
The AP's learning curve is unknown; we don't train it to any specific standard; we don't expect it to pass any test to validate that it can or should be allowed to "drive" on a public road; and the AP's "right" to drive is neither officially granted nor withdrawn after repeated, serious mistakes. Keep in mind that the AP is "the same driver" across all cars.
Your "argument" has no merit.
Your argument has validity in Germany, where they properly test drivers before they are allowed to drive.
But in countries like the USA, India, or Perú, I think the AI is the safer option.
> same driver
Major rule of functional safety analysis: Look for common-mode failure mechanisms. If the same fault can suddenly manifest in many units, that undermines one level of "random failure" statistics.
And as a comment on Ars pointed out: these are self-reported videos (people presumably post their more flattering clips), so a dozen near-accidents in three hours of footage sets a _floor_ on how bad the driving could be.
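Since the "same driver" point is really a statistics point, here's a rough back-of-the-envelope sketch in Python (fleet size and fault probabilities are made-up assumptions, nothing Tesla-specific) of how a common-mode software defect breaks the independence assumption behind per-unit failure rates:

    # Hypothetical illustration of common-mode vs. independent failures.
    # All numbers below are invented for the sake of the example.
    from math import exp, factorial

    FLEET_SIZE = 1_000_000      # hypothetical fleet
    P_RANDOM = 1e-7             # per-car, per-day probability of an independent (hardware) fault
    P_SHARED_TRIGGER = 1e-7     # per-day probability that a defect shared by ALL cars triggers

    def poisson_pmf(k: int, lam: float) -> float:
        """P(X = k) for X ~ Poisson(lam)."""
        return exp(-lam) * lam**k / factorial(k)

    def poisson_tail(k_min: int, lam: float, k_max: int = 150) -> float:
        """P(X >= k_min), truncated at k_max (terms beyond are negligible here)."""
        return sum(poisson_pmf(k, lam) for k in range(k_min, k_max + 1))

    # Independent faults: failures across the fleet are approximately
    # Poisson(n * p), so 100 cars failing on the same day is astronomically unlikely.
    lam = FLEET_SIZE * P_RANDOM                  # expected failures per day = 0.1
    p_100_independent = poisson_tail(100, lam)

    # Common-mode fault: every car runs the same software, so one triggered
    # defect hits the whole fleet at once. "100+ simultaneous failures" is
    # simply the probability of the single shared trigger.
    p_100_common_mode = P_SHARED_TRIGGER

    print(f"P(>=100 cars fail the same day), independent faults: {p_100_independent:.1e}")
    print(f"P(>=100 cars fail the same day), common-mode fault:  {p_100_common_mode:.1e}")

With independent faults, a mass simultaneous failure is effectively impossible; with a shared defect it's just the chance of the single trigger, and when it fires, every car is affected at once.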