  • I am once again begging journalists to be more critical, and not just of tech companies.

    But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

    [...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

    This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not to normal people, who drive almost exclusively during their commutes (probably the most dangerous time to drive, since it's precisely when everyone else is on the road too).
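    To make the denominator point concrete, here's a rough back-of-envelope sketch in Python. The Waymo figures come from the quoted article; every human-baseline number below is a placeholder assumption, which is exactly the problem: the headline ratio swings entirely on whose miles you choose as the denominator.

    ```python
    # Back-of-envelope crash-rate comparison.
    # Waymo figures are from the quoted article; the human baselines are
    # PLACEHOLDER ASSUMPTIONS -- taxi miles, commute miles, and all-driver
    # averages would each give a different rate.

    waymo_crashes = 60          # airbag/injury crashes reported since 2020
    waymo_miles = 50_000_000    # driverless miles over the same period

    lifetime_miles = 700_000    # assumed miles a person drives in a lifetime
    print(f"{waymo_miles / lifetime_miles:.0f} lifetimes of driving")  # ~71

    waymo_rate = waymo_crashes / (waymo_miles / 1_000_000)
    print(f"Waymo: {waymo_rate:.1f} serious crashes per million miles")

    # The conclusion hinges on this choice of baseline:
    for baseline, rate in [("all drivers (assumed)", 4.0),
                           ("taxi drivers (assumed)", 2.0)]:
        print(f"{baseline}: {rate:.1f}/M miles -> {rate / waymo_rate:.1f}x Waymo")
    ```

    Swap in a taxi-mile baseline instead of an all-driver one and the headline ratio changes; that's the whole objection.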

    We also need to know how often humans intervene in the supposedly autonomous operations. The latest data we have on this, leaked a while back, showed that Cruise (a different company) cars are actually less autonomous than taxis, requiring more than one employee per car.

    edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's self-published numbers without knowing how the sausage is made, they can spin their data however they want.

    edit2: Updated to say that journalists should be more critical in general, not just of tech companies.

    • Journalists aren't even critical of police press releases anymore; most simply print whatever they're told verbatim. It may as well be advertising.

      • I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."

      • This is the real issue right here. Journalism and investigative journalism aren't just dead; their corpses have been feeding a palm tree like a pod of beached whales for decades. It's a bizarre state of affairs to read news coverage and come out the other side less informed, even without reading literal disinformation. It somehow seems so much worse that they're not just off-target, but that they don't even understand why or how they're fucking it up.

    • I was going to say they should at least be compared within the same driving areas, since I know Waymo isn't allowed in many areas.

      But you're right, it's even tighter than that.

      • These articles frustrate the shit out of me. They accept both the company's own framing and its selectively released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.

    • @theluddite@lemmy.ml @vegeta@lemmy.world
      To amplify the previous point (taps the sign as Joseph Weizenbaum turns over in his grave):

      A computer can never be held accountable

      Therefore a computer must never make a management decision

      tl;dr A driverless car cannot possibly be "better" at driving than a human driver. The comparison is a category error and therefore nonsensical; it's also a distraction from important questions of morality and justice. More below.

      Numerically, it may some day be the case that driverless cars have fewer wrecks than cars driven by people.(1) Even so, it will never be the case that when a driverless car hits and kills a child the moral situation will be the same as when a human driver hits and kills a child. In the former case the liability for the death would be absorbed into a vast system of amoral actors with no individuals standing out as responsible. In effect we'd amortize and therefore minimize death with such a structure, making it sociopathic by nature and thereby adding another dimension of injustice to every community where it's deployed.(2) Obviously we've continually done exactly this kind of thing since the rise of modern technological life, but it's been sociopathic every time and we all suffer for it despite rampant narratives about "progress" etc.

      It will also never be the case that a driverless car can exercise the judgment a human has to decide whether one risk is more acceptable than another, and then be held to account for the consequences of that choice. This matters.

      Please (re-re-)read Weizenbaum's book if you don't understand why I can state these things with such unqualified confidence.

      Basically, we all know damn well that once driverless cars show some kind of numerical superiority to human drivers(3) and become widespread, no one will be held to account each time one kills, let alone injures, a person. Companies are angling to indemnify themselves from such liability, and even if they accept some of it, no one is going to prison on a manslaughter charge when a driverless car kills a person. At that point it's much more likely to be treated as an unavoidable act of nature, no matter how hard the victim's loved ones reject that framing. How high a body count do our capitalist systems need to register before we all internalize this basic fact of how they operate and stop apologizing for it?

      (1) Pop quiz! Which seedy robber baron has been loudly claiming for decades now that full self driving is only a few years away, and depends on people believing in that fantasy for at least part of his fortune? We should all read Wrong Way by Joanne McNeil to see the more likely trajectory of "driverless" or "self-driving" cars.
      (2) Knowing this, it is irresponsible to put these vehicles on the road, or for people with decision-making power to allow them on the road, until this new form of risk is understood and accepted by the community. Otherwise you're forcing a community to suffer a new form of risk without consent and without even a mitigation plan, let alone a plan to compensate or otherwise make them whole for their new form of loss.
      (3) Incidentally, quantifying aspects of life and then using the numbers, instead of human judgment, to make decisions was a favorite mission of eugenicists, who stridently pushed statistics as the "right" way to reason in order to further their eugenic causes. Long before Zuckerberg's hot or not experiment turned into Facebook, eugenicist Francis Galton was creeping around the neighborhoods of London with a clicker hidden in his pocket counting the "attractive" women in each, to identify "good" and "bad" breeding and inform decisions about who was "deserving" of a good life and who was not. Old habits die hard.
