I've signed a lot of forms that say something like "I certify that the information I have provided is true and accurate". Using ChatGPT doesn't absolve me of that. It shouldn't for them either (but we all know they're held to a different standard).
Devil's advocate: in the case of a monthly report, an LLM is often used like "take these current statistics and update last month's report to include them."
As in... the LLM isn't developing an opinion; it's just presenting the numbers.
Monthly reporting is usually very formulaic. There's no scope for "I propose forming a lynch mob comprised of vigilantes".
This isn't about using LLMs for monthly reports; it's about using them for individual incident reports:
Pulling from all the sounds and radio chatter picked up by the microphone attached to Gilbert’s body camera, the AI tool churned out a report in eight seconds. ...
Oklahoma City’s police department is one of a handful to experiment with AI chatbots to produce the first drafts of incident reports. Police officers who’ve tried it are enthused about the time-saving technology, while some prosecutors, police watchdogs and legal scholars have concerns about how it could alter a fundamental document in the criminal justice system that plays a role in who gets prosecuted or imprisoned.
“take these current statistics and update last month’s report to include them.”
That is literally the worst use case for an LLM: it's something a simple script could do, yet it's hard, dry data the LLM is free to hallucinate with, and people are too lazy to check it over manually.
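For what it's worth, the "simple script" version of that task is just deterministic template filling. A minimal sketch below (the report fields and the stats dict are hypothetical, not from any real department's format); the point is that substitute() fails loudly on a missing field instead of quietly inventing a number:

    # A sketch of the "simple script" alternative: deterministic
    # template filling, no LLM in the loop. Field names are made up.
    from string import Template

    # Last month's report as a template with named placeholders
    # (in practice this might live in a .docx or text file on disk).
    REPORT_TEMPLATE = Template(
        "Monthly Report - $month\n"
        "Total incidents: $incidents\n"
        "Arrests: $arrests\n"
        "Avg response time: $avg_response_min min\n"
    )

    def render_report(stats: dict) -> str:
        # substitute() raises KeyError on a missing field rather than
        # filling the gap with a plausible guess -- the opposite
        # failure mode of an LLM.
        return REPORT_TEMPLATE.substitute(stats)

    if __name__ == "__main__":
        current_stats = {
            "month": "2024-05",
            "incidents": 412,
            "arrests": 37,
            "avg_response_min": 8.2,
        }
        print(render_report(current_stats))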