With ISC West 2026 set to take place this month at The Venetian Expo in Las Vegas, artificial intelligence (AI) will almost certainly be a major part of nearly every conversation at the show. Just about every vendor will either have an AI solution or be working on one.
I believe we are only at the leading edge of where AI can take us, both good and bad. In this article (and in Part 2, coming soon to a security trade publication website near you), I want to explore how we can manage and deploy AI for security and monitoring.
Ethical Uses of AI in Security
When we talk about the ethical use of anything, but especially AI, a number of questions immediately come to mind. Some examples: using AI to write an article on AI, create a presentation, write a song or generate an image. Is it ethical to use a tool to do those things?
Another ethical question: Was the algorithm trained correctly, so that it does not carry a bias of one kind or another? Another test: Is the AI being sold actually capable of doing the job it is advertised as doing? The concept of ethics is far-reaching and means different things to different people.

I think we can all agree that AI is here for the foreseeable future, which means the challenge is going to be: How do we use it correctly? How do we validate that it is working and that it causes no harm?
The alarm and monitoring industry is used to working from standards and from federal, state and local regulations and laws, but AI is largely unregulated in the context of how it is used in security systems. Even if it were regulated, it is moving so fast that no prescriptive approach could keep up. Much like UL standards, AI regulation has to be executed as a performance-based standard.
Models to Put Things in Motion
So how do we do this? What are the rules? How do we evaluate, monitor and administer this new thing that we can barely see, touch or even begin to understand in terms of where it will take us? Fortunately, there are a number of models we can use to do exactly that.
- The EU AI Act (Regulatory/Mandatory)
The first one to examine is the EU AI Act. The law was passed in 2024 and entered into force in August 2024, with specific bans becoming enforceable across the EU in 2025. It is arguably the strictest of the three covered in this article, and it explicitly bans some uses and use cases in the security realm.
It is 144 pages and can be found here if you want to see it or if you do work in the EU: EU AI Act.
The rollout of the EU AI Act started last year. Obligations covering general-purpose AI (large language models), government operations and agencies, as well as high-risk systems in areas such as critical infrastructure, must be met starting in Q3 of this year, with the rest of consumer and product AI following by Q3 of next year.
Just think about that: from virtually no legislation to fully legislated in about four years.
- ISO/IEC 42001 (International Standard)
The second one is ISO/IEC 42001, put together by ISO as an international standard. It is approximately 50 pages, but it is going to cost you to see it; ISO is in the business of writing, publishing and charging for its standards. It can be found at ISO Standard.
It is the primary global standard for an Artificial Intelligence Management System (AIMS). It helps organizations prove they are managing AI risks responsibly (similar to how ISO 27001 works for information security), and a security company or provider can be certified as compliant in managing the risks of AI. The standard follows the same methodology as other ISO standards, so if you are already using one of those, this one is designed to flow into them.
- NIST AI Risk Management Framework (Voluntary/U.S. Standard)
The last one I want to talk about, and the one that those of us in North America will likely be using, is the NIST AI Risk Management Framework. It was put together by NIST (the National Institute of Standards and Technology), is 48 pages long and can be found at NIST AI Framework.
In the next edition of Monitoring Matters, we’ll dive deeper into the NIST framework and explore why it’s the one you’re most likely to be using in your business.