Lawton City Council to consider lax AI use policy believed to be drafted with AI
The policy itself, in summary, is meant to clarify that AI is to be used as a tool and that anything produced by AI is to be considered a draft until reviewed by a human.
There are typically two main questions people go back and forth on when it comes to artificial intelligence: When should it be used? How much should we trust it? Artificial intelligence has been a hot topic in communities everywhere. While opinions are more relaxed about its use in someone’s personal life, concerns multiply when government officials start to fold it into their routines. Some say there need to be hard limitations; others say we should embrace it because it isn’t going away. In Lawton’s case, the city council is considering an AI use policy of its own.
The policy was originally put on the April 14 council agenda but was removed before the meeting, going mostly unnoticed. It is now back on the agenda for the April 28 meeting; however, the document appears to be unchanged.
According to Lawton’s City Clerk and AI detectors, this policy was likely drafted with artificial intelligence.
“I would assume some amount of AI was used during researching and initial drafting? I say that because it is a tool that's used frequently,” Blazek-Scherler stated. “I just don't know who used what or when.”
Blazek-Scherler’s observation that this is a tool that’s used frequently raises the question: why is it only now becoming the subject of a policy?
The four-page document, only three pages of which contain any actual direction, doesn’t include any specifications or outlines for potential misuse, record retention, or the inputting of personal or private information. Andy Moore is the CEO of Let’s Fix This, an organization focused on encouraging civic engagement throughout the state. Moore said the conversation around AI has only grown with the boom in data centers.
Moore also said he’s had his own handful of discussions about AI being implemented into government routines, and after being sent the policy, he found multiple issues with it. His first concern is the policy’s length: three pages, he said, isn’t enough to cover the issue comprehensively.
“I think it’s a good example of a policy being written without enough public input and enough thoughtful conversation about unintended consequences,” Moore stated.
One of the main concerns around AI, according to Moore, is that the output of the system depends heavily on the input of the user. Many have shared concerns about people using it to back their own agendas. Moore explained that AI systems, like chatbots, are very reassuring to the user, even on points another human might push back on.
“At the end of the day, they have programmed it to lean in and support what the human wants. That’s not the way society functions,” Moore stated.
The City of Lawton has been part of a multitude of controversial conversations around its use of artificial intelligence in matters that directly impact the lives of Lawtonians. Most recently, the council voted to approve facial recognition technology for use by the Lawton Police Department. Previously, citizens have shared concerns over the FLOCK camera systems, the use of AI to create the city’s most recent Homeless Action Plan, and the use of AI to determine the credibility of third-party studies. The latter two examples would’ve been impacted by this policy if it had been in place. These examples are only public knowledge because they were discussed during council meetings.
Additionally, artificial intelligence systems are flawed, a fact pointed out in the policy itself. Analyzed data isn’t guaranteed to come out correctly. To address this, the policy does state, “The City recognizes that artificial intelligence tools may produce unintended bias, unequal outcomes, or provide factually false and incorrect information.” This is where human review comes into play.
The policy states that all AI-generated work will be submitted to City Legal for review. However, Moore said the reliability of human review depends on that person’s trust in AI. A blanket statement in the policy says the city is committed to maintaining compliance with state and federal law; however, like most new technologies, Moore said, AI typically advances faster than legislation can keep up.
“It’s a losing battle all the time,” Moore stated.
In Oklahoma, a handful of bills have been presented to the legislature, such as House Bill 3545, which is meant to implement frameworks for AI-generated content on state agency computers. It lists certain actions that are prohibited, like creating deepfakes for malicious purposes, and actions that are permitted, like providing disclosure for AI-created content that hasn’t been reviewed by a human. The bill has passed the House and is moving through the Senate.
Moving forward, Moore said citizens need to be more aware and advocate for additions to the policy that enhance their ability to know whether the information being presented is credible. Moore said the policy could be improved in a few ways, including guidelines for misuse, restrictions on inputting personal data, and clearer communication with the public about its intended uses, such as which departments have which liberties.
A decision is expected to be made Tuesday, April 28. Dark Roast staff reached out to the City of Lawton multiple times to request an interview with Mayor Booker and Mayor Pro Tem Randy Warren about the policy, but did not hear back.