AI: A Wedge in Public Trust? – PA TIMES Online


The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Parisa Vinzant
May 31, 2024

If you are a public servant, you probably already know that your city, police department and/or county government may be using an artificial intelligence (AI) tool to capture everything you say publicly online on select topics. But if you are a resident, it is unlikely you would know that your public Facebook or other social media conversations on hot-button topics are being continually monitored and converted into anonymized, easy-to-digest “sentiments” for city decisionmakers by ZenCity, an AI software developer with a track record of winning sole-source or non-competitively bid government contracts.

This gap in awareness points to the perennial problem of government opacity, a cloak now being extended to cover the use of emerging technologies like AI. Yet the ethical responsibility of public servants could not be clearer in this instance: the American Society for Public Administration’s (ASPA) Code of Ethics states, under the duty to “promote democratic participation,” that public servants are to “be open, transparent and responsive.” ASPA’s Ethical Practices further guides public administrators to “promote timely and continuing dissemination of information about government activities to the community, ensuring a fair and transparent process and educating citizens to make effective contributions.”

As a former technology and innovation commissioner, I have witnessed the public’s reaction to learning from media reports—rather than the city itself—that Long Beach, CA, was using an AI tool to monitor and synthesize public social media comments into anonymized actionable intelligence without their knowledge or prior input. Many residents felt that they were being spied upon, which only deepened existing trust issues. Others raised concerns about the problematic reliance on public social media interactions that typically skew very negative or positive.

Yet, there is another issue specific to AI to consider: the bandwagon effect. The more public servants read positive testimonials by their peers about specific AI tools, or see trusted associations partnering with AI companies, or learn about AI successes at public sector conferences, the greater the likelihood that implicit permission has been given to approve these technologies without much due diligence.

There very well could be benefits to be had with some of these AI tools. But ever-rising pressure on local public servants to adopt the latest AI tools promising greater efficiency, innovation and cost-effectiveness is leading to a rush to purchase, often without adequate assessment of ethics, accountability and social/racial equity and justice. Assessing any AI tool, including a rigorous risk-to-benefit analysis of the technology, is the necessary first step before purchase.

Seattle’s 2022 Surveillance Technology Determination Report provides a good example of such a technology assessment, although it was completed three years after the city started using the AI tool, ZenCity. It is not uncommon for cities or departments to purchase an emerging technology and only later be forced to justify its use and create governance documents. In some jurisdictions, however, policies and performance metrics for publicly financed emerging tech are never created, yet the technology continues to be used.

Seattle stands out in this regard because it publicly shares its ZenCity aggregated data on its Seattle Police Department’s (SPD) Trust and Safety Dashboard and provides an easy-to-understand explainer. A review of the dashboard’s data shows that average trust and safety scores have trended up between 2020 and 2023 except for the approximately ten-month period following the murder of George Floyd in May 2020, when average trust levels significantly decreased and diverged from higher average levels of safety. It is also interesting to analyze the data by the top public concern(s) reported over time and by geographic area.

Two key factors likely explain the Seattle PD’s relative success in deploying this AI tool. First, SPD developed “better data reporting and tracking protocols” in response to a consent decree with the Department of Justice in 2012. Second, the city has a 20-year history of prioritizing race and social justice in its operations.

Every city is different, but the commonality is that residents should be involved in the creation of cities’ AI governance policies. If that is too much to expect, at minimum, any time an AI tool is used, especially in community engagement efforts, public servants should describe what AI tool was used and how it was used. Pima County, AZ, did this well for its 2024 Priority Climate Action Plan.

It is problematic that the rapid adoption of AI products is largely occurring within the community engagement space without prior, actual engagement with the community. Local public servants already lack a consistent track record of inclusive and equitable community engagement due to budget issues, staff capacity constraints and lack of staff and/or leadership buy-in. Now, with the flashy entrance of AI, there are both threats and opportunities. If used strategically, ethically and equitably, there could be room for the right AI tool as a complement to, not a substitute for, cities’ community engagement initiatives.

In the excitement of hopping on the AI bandwagon, it’s easy to forget that the primary aim of for-profit companies is to promote a product rather than serve the public good.

Pause and reflect, then do it right: create an AI strategy or policy by engaging with residents early, often and in meaningful ways. Being “open, transparent and responsive” not only reaps the benefit of thoughtful public input but also earns the coveted reward of public trust.

Author: Parisa Vinzant, MPA, works as a private and public sector strategist and equity/inclusion consultant. She also provides coaching to ICMA members. She served as a technology/innovation commissioner in Long Beach, CA. Parisa applies an intersectional equity lens in her writing, exploring topics including ethics, education, democracy, technology and community engagement. All views are hers alone. Contact her at: [email protected].
