Was the driver asleep or something? The car drove quite a bit on the tracks... sure, blame Tesla all you want (and rightly so), but you can't really claim today that the car has "autopilot" unless you're hunting for a lawsuit. So what was the driver doing?
It's rather reminiscent of the old days of GPS, when people would follow it to the letter, and drive into rivers, go the wrong way up a one-way street, etc.
But seriously, Tesla "autopilot" is nothing more than a cruise control you have to keep an eye on. Which means it's NOT "autopilot." This technology is not ready for the real world. Sooner or later, it's going to cause a major, horrible accident involving dozens of people. Musk has enough connections to avoid any real-world consequences, but maybe enough people will get over their child-like worship of billionaires and stop treating him like he's the next Bill Gates.
Somewhat ironically, autopilot for airplanes was more or less attitude/speed holding for most of its history. More modern systems can now autoland or follow a preprogrammed route (the flight plan plugged into the FMS), but even then, events like TCAS advisories are usually left to the pilots to handle. Autopilots are also expected to hand control back to the pilots in any kind of unexpected situation.
So in a way Tesla's naming here isn't so off; it's the popular understanding of the term "autopilot" that is somewhat off. That said, their system also isn't doing much more than most other Level 2 ADAS systems offer.
On the other hand, Elon loves going off about Full Self Driving mode a lot, and that's absolutely bullshit.
Subways, not trains/trams, which makes sense since they are a mostly closed system. The French one is closed off, with platform doors that slide open so passengers can board the cars. This particular system also runs on pneumatic rubber tires on a rail. I guess for better precision when braking and accelerating?
We-ell, there have been bugs causing train collisions, but there have also been train collisions caused by driver error or some other misfortune, so.
I don't believe that. Based on how far AI has come in recent years, I think it's only a matter of time before someone (other than Tesla) manages to do it well.
The biggest problem with Tesla Autopilot is Elon. Just the fact that he insists on using only camera-based vision because "people only need their eyes to drive" should tell you all you need to know about their AI.
And why do you believe that? Most of us criticizing it do so because we have some idea of what machine learning is and what it simply can't solve. That knowledge isn't hard to come by.
Now, AI may or may not be overhyped, but Tesla's self-driving nonsense isn't AI regardless. It's just pattern recognition, not the neural net everyone assumes it is.
It really shouldn't be legal. This tech will never work because it doesn't include lidar, so it lacks depth perception. Of course humans don't have lidar either, but we have depth perception built in thanks to billions of years of evolution. Computers, on the other hand, don't do well with stereoscopic vision for 3D calculations, and really benefit from being given actual depth information.
If you lack depth perception and higher reasoning skills, you might momentarily mistake a train driving past you for a road. 3D perception would have told the software that the train was a vertical surface, not a horizontal one, and thus a barrier rather than a driving surface.
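To make the stereo-vision weakness concrete: in the standard pinhole stereo model, depth is Z = f·B/d (focal length times baseline over pixel disparity), so at long range the disparity is tiny and a one-pixel matching error swings the depth estimate wildly. A minimal sketch, using made-up camera parameters (not any real car's):

```python
# Illustrative stereo-depth math; focal length and baseline are assumed
# numbers, not taken from any actual vehicle's cameras.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def disparity_for_depth(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Inverse of the above: disparity a feature at depth_m would produce."""
    return focal_px * baseline_m / depth_m

f, B = 1000.0, 0.2  # assumed: 1000 px focal length, 20 cm camera baseline

# A feature 50 m away produces only 4 px of disparity...
d = disparity_for_depth(f, B, 50.0)
print(d)  # 4.0

# ...so a single-pixel matching error moves the depth estimate from
# 50 m to roughly 66.7 m. Lidar gives you that range directly instead.
print(depth_from_disparity(f, B, d - 1.0))
```

This is why stereo matching degrades with distance while lidar's range error stays roughly constant, which is the gap the comment is pointing at.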
> Just pattern recognition it is not the neural net everyone assumes it is.
Tesla's current iteration of self-driving is based on neural networks. The computer vision certainly is; there's no other way we have of doing computer vision that works at all well, and according to this article from last year, the same is true for the decision-making.
Of course, the whole task of self-driving is "pattern recognition"; neural networks are just one way of achieving that.
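To illustrate the point that a neural net *is* a pattern recognizer (a toy example, obviously nothing to do with Tesla's actual stack): a two-neuron network with hand-picked weights that recognizes the XOR pattern is just a couple of matrix multiplies with a nonlinearity in between.

```python
import numpy as np

# Hand-set weights for a minimal 2-input, 2-hidden-unit network that
# computes XOR. Purely illustrative; weights chosen by hand, not trained.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def tiny_net(x: np.ndarray) -> float:
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return float(W2 @ h)              # linear output: equals XOR(x0, x1)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, tiny_net(np.array(x, dtype=float)))
# (0, 0) -> 0.0, (0, 1) -> 1.0, (1, 0) -> 1.0, (1, 1) -> 0.0
```

Pattern in, label out. Whether that counts as "AI" or "just pattern recognition" is a matter of definition, which is really what the two comments above are disagreeing about.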
We have gone from cruise control to cars being able to drive themselves quite well in about a decade. The last percentage points of reliability are of course the hardest, but that's a tremendously pessimistic take.
Yeah, like a digital "ideal line" that the cars can follow.
Maybe even a physical guiding line.
We could even connect all the cars via WLAN (WiFi) to exchange info when they are braking and accelerating. That would increase efficiency.
Maybe we could even connect them physically to have a stronger engine pulling more cars more efficiently.
If we already have an ideal guiding line, we might actually save some asphalt and make the roads more optimised. Use different materials so the tyre particles don't pollute as much.
I don't want to waste any more tax money trying to make one of the least efficient modes of transport more autonomous. Just build an electrified tram if that's what you want.
The United States is simply too large and distributed for everyone to use public transportation. It will never happen, so get used to it and try to optimize what will be part of our future.
I just don't understand how someone can read all the warnings, get a driver's license (implying their knowledge of the rules of the road) and presumably have years of driving experience and magically think it's ok to just stop paying attention.
It doesn't matter if the car is fully promoted as self-driving; it doesn't matter if the laws surrounding it still require you to be present and in control.
It's no different than 1000hp cars, just because the car is marketed as such, doesn't magically make it legal to go 200mph.
A perfect example of why calling it autopilot in the first place was a bad idea. The name misrepresents the feature, which is really just lane keeping and a few other minor things.