The Sierra College Foundation hosted a Community Forum Friday to provide more information about artificial intelligence (AI) and how it will shape the future. Topics discussed included the technology's influence on education and the legal profession, and the need for regulation.
Friday’s panel at Sierra College was moderated by Terry McAteer and panelists included Steve Monaghan, former Chief Information Officer for Nevada County, technology lawyer Eric Little, and California State University, Sacramento professor and AI “guru” Sasha Sidorkin.
About 100 people turned out for the presentation.
Monaghan, who teaches leadership classes, said that AI's effect on the workplace remains uncertain and will vary by profession.
“It’s what’s been out for the last three years, this generative AI that came out with ChatGPT and those technologies that has made it available to the masses, consumerized AI, so that it’s available to your average organization in all sorts of products,” Monaghan said.
He added that people are getting hyped up and there are many who are becoming more disillusioned with the technology while still attempting to better understand it. To put it in perspective, he reminded the audience that in 1994 the world was just getting a grasp on what the internet was and its potential. That, he said, is reflective of what we are all dealing with now in the AI dimension.
With that, he added, the leap from today's AI to whatever comes next will almost certainly take less time than the internet took to go from its advent to the everyday service most people use now.
“Every major software provider is building AI into their applications,” said Monaghan.
Eric Little said his profession as a lawyer has been impacted already.
“AI is becoming much more useful,” Little said. “Not to raise a controversial name, but Elon Musk is going to make a trillion dollars if he sells enough robots over the next 10 years, and he’s pretty good at that sort of thing. When you marry a robot with AI, you’re going to get another technological change, which is going to have dramatic effect.”
Sidorkin agreed with Little and added that AI will affect more than just those in academia or those using it professionally.
“Customer service as an industry is really dying because AI can do a better job than most humans,” Sidorkin said.
Little said that many people are using AI, but only a free or less expensive version of what is available. He believes this could lead users to underestimate AI's potential.
Little said he spends about $600 per month for his AI services. Other panelists said they didn’t shell out quite as much but lean toward services like Claude and ChatGPT.
AI-powered scams are becoming more convincing, Little added, and retirees are often a target.
“I would be very, very cautious about your interactions through the internet, because I see some pretty good scams already, many of them coming from other law firms or (pretending to be) other law firms,” said Little.
Sidorkin added that there is much still to be discovered in the world of AI. Even if the technology were to stop changing entirely, he said, it already carries more than 20 years of innovation; the hardest part is figuring out how to use it effectively.
Little said: “It’s going to have profound impacts on society, both what we see directly in terms of workforce changes, but (also) a post labor economy, if that’s what comes into being. It’s going to change many other things about society, about our culture. It is, I think, enjoyable to sit out there and try to think through what those will be.”
Sidorkin said that although he and his colleagues have not yet seen productivity gains from AI, he is confident they will come.
“The big question is, how much (will it) increase our productivity, economy wise?” he posited. “Even if by one percent, that means that your Social Security, your pension, (is) probably going to be okay, and the dollar is still going to be more or less one dollar.”
As the world marches toward AI ubiquity, Monaghan said he supports open source models, which he feels democratize the technology so that more entities can be involved.
“Open source” means the underlying computer code is freely available to others to modify, analyze and use as they please. This can lead to a more collaborative environment, with multiple groups working together to improve and develop the software — and check for vulnerabilities or dangers.
“The only real company that’s really pushing that is from China, which raises concerns given that it’s a Chinese product, but they’re the only ones really pushing an open source platform,” Monaghan said. “We need a very robust, competitive marketplace with multiple players in it.”
Despite the promise, Sidorkin is concerned that over-regulation of AI may stifle development and potentially jeopardize America’s current status as the undisputed leader in AI innovation.
“I think over-regulation may kill it,” Sidorkin said. “If we (as a) whole country stopped or slowed down, China won’t.”