Microsoft says AI ought to be regulated, transparent

Microsoft Corp. plans a major expansion of artificial intelligence education and job training programs in southeast Wisconsin, along with an expansion of its planned data center complex now under construction in Mount Pleasant.

The work involves upskilling programs for non-traditional workers, a data center technician training program, and training to help business leaders and their technical and engineering teams understand and effectively adopt AI and other emerging technologies. The company says it hopes to train 100,000 workers.

The jobs and training programs aim to bolster Wisconsin businesses' and workers' participation in "the AI economy," with a multi-pronged approach that will create opportunities for students, workers in need of new skills, business leaders and others. The package of programs will be unique to Wisconsin, with a focus on strengthening and building the state's manufacturing base.

On Wednesday, Ryan Harkins, senior director of public policy at Microsoft, spoke at a Metropolitan Milwaukee Association of Commerce event focused on artificial intelligence and public policies. Here are five takeaways from the company's perspective on AI implementation.

Public trust could be enhanced through transparency

For high-risk AI systems, Microsoft has said it supports the development of a national registry that would give the public an overview of a system as deployed and the measures taken to ensure its safe and rights-respecting performance. Public trust in AI systems could be enhanced by demystifying where and when they are in use, according to the company.

We have lagged behind Europe on privacy laws

“It’s time for the United States to catch up. We’re hopeful that Congress will come around and pass a federal law to regulate privacy. We first started asking them to do so in 2005, and nearly 20 years later, we’re still waiting,” Harkins said.

Around 20 states have passed comprehensive data privacy laws.

There are important lessons from social media

In 2010, technologists and political commentators gushed about the role of social media in spreading democracy during the Arab Spring. A few years later, we learned that social media, like so many other technologies before it, would become both a weapon and a tool.

AI has also been used for harmful purposes. There are legitimate concerns that people have raised for years, such as "deep fakes": AI-generated audio and video purporting to be real when they are not.

A labeling requirement should inform people when certain categories of original content have been altered using AI, helping protect against the development and distribution of deep fakes. This will require new laws.

Protections need to be put in place to ensure the benefits of AI are realized while minimizing the risks for harm.

AI can automate mundane tasks and help small businesses

The evolution of AI will create jobs, but it will also eliminate some positions. Tedious, mundane tasks can be automated, freeing individuals to tackle work that is more personally rewarding and comes with higher pay.

Another application for AI is helping small businesses navigate regulations.

New York City, for example, has a “one-stop-shop” chatbot that offers algorithmically generated text responses to questions about the city’s bureaucratic maze of required permits. It includes a disclaimer saying it may “occasionally produce incorrect, harmful or biased” information and shouldn’t be considered legal advice.

The cost of collecting data has dropped dramatically

In the 1980s, it would have cost more than a billion dollars to store every book that had ever been written in the history of humankind. Now, it could be done for less than $1,000. That matters because AI systems depend on massive amounts of data.
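The storage-cost comparison can be sanity-checked with a rough back-of-envelope estimate. The figures below (book count, compressed bytes per book, price per gigabyte then and now) are illustrative assumptions, not sourced data:

```python
# Back-of-envelope estimate: cost of storing every book ever written,
# in the 1980s vs. today. All figures are illustrative assumptions.

BOOKS_EVER_WRITTEN = 130_000_000   # rough order-of-magnitude estimate
BYTES_PER_BOOK = 400_000           # ~400 KB of compressed plain text per book
PRICE_PER_GB_1980S = 100_000.0     # roughly $100,000 per GB of 1980s disk storage
PRICE_PER_GB_TODAY = 0.015         # roughly $0.015 per GB of bulk disk storage

total_gb = BOOKS_EVER_WRITTEN * BYTES_PER_BOOK / 1e9  # ~52,000 GB (52 TB)

cost_1980s = total_gb * PRICE_PER_GB_1980S  # well over a billion dollars
cost_today = total_gb * PRICE_PER_GB_TODAY  # under a thousand dollars

print(f"Total text: ~{total_gb:,.0f} GB")
print(f"1980s cost: ~${cost_1980s:,.0f}")
print(f"Today's cost: ~${cost_today:,.0f}")
```

Even with these crude assumptions, the estimate lands on the same side of both thresholds the article cites: billions of dollars then, hundreds of dollars now, a price drop of roughly seven orders of magnitude.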


Author: Rayne Chancer