Generative AI has captured the world’s attention. It has become a powerful tool that companies use to drive digital transformation, thereby boosting operational efficiency and enhancing customer experience. For example, when companies use generative AI to strengthen their customer service systems, their systems can instantly understand and respond to customer inquiries while being able to predict and satisfy potential customer needs. Most companies undertake AI development based on open-source large language models (LLM) coupled with sector-specific data for training and inferencing. This approach has inherent cybersecurity risks and challenges that cannot be overlooked.
In view of this, governments around the world have been working to institute AI regulations over recent years. For example, the EU issued the Ethics Guidelines For Trustworthy AI in 2019, setting key ethical requirements that AI systems should follow. In 2021, the EU proposed the AI Act. In 2022, the U.S. released the Blueprint for an AI Bill of Rights and Canada proposed the Artificial Intelligence and Data Act, both aimed to establish core principles for AI development and ensure AI systems’ trustworthiness. In June 2024, Taiwan’s Financial Supervisory Commission published the Guidelines for the Application of AI in the Financial Industry for financial institutions to follow when they utilize AI.
According to OneDegree Global Senior Partnership Manager Frank Liao, there are two types of AI risks. The first is cybersecurity risks. As AI becomes part of the data structure, the system is exposed to new types of cyber threats, such as prompt injection and jailbreak. These types of attacks manipulate prompts to elicit specific output from LLMs. When companies integrate AI into their systems for processing sensitive data, hackers can exploit these attacks to gain unauthorized access to the system, potentially leading to leaks of sensitive or confidential data.
The second is compliance. As generative AI becomes more and more powerful, companies are increasingly relying on AI for decision making and content creation. This gives rise to concerns over whether AI-enabled decision making and content creation are fair, human-centric, and lawful. OneDegree Global’s Cymetrics Vulcan, an LLM verification platform designed to assess AI models for vulnerabilities and compliance with responsible AI standards, effectively helps companies address these risks. Its fully automated verification capability shortens AI red teaming which would usually take 200 hours to just three hours. With Cymetrics Vulcan, companies can ensure their AI systems are protected from cyber threats and compliant with AI regulations before they bring the systems online and this is achieved at a lower cost and in a shorter timeframe.
OneDegree Global utilizes Amazon Bedrock as the core to probe into the risks of customers’ AI models.
As a pioneer in insurance technology, OneDegree Global provides present-day core solutions for insurance companies, brokerage firms, and applications/platforms. It aims to meet emerging market needs to drive innovations, provide all-round protection for consumers, and create revenue streams for insurance companies. In response to rising cyber threats, OneDegree Global helps insurance companies incorporate cybersecurity measures across every element of their process from underwriting to information security audits. Cymetrics is the cybersecurity arm of OneDegree Global. Apart from being ISO27001 and ISO27017 certified, the Cymetrics team has also obtained ISO 42001 Lead Auditor Certification.
Frank pointed out that although AI ethics has been a focus of attention in recent years and AI system developers are adding cybersecurity measures in their LLMs, LLM hallucinations are not uncommon. Not only could LLMs generate inaccurate or fictitious outputs, but they could also be vulnerable to prompt manipulation, resulting in data leakage or ethical violations. If this occurs on an application service a company offers, it could damage the company’s reputation and business, and worse yet, there could be legal consequences.
The Cymetrics LLM verification platform from OneDegree Global helps companies assess the risks of their AI systems in terms of security, privacy, safety, and fairness, ensuring their systems comply with responsible AI standards. Leveraging comprehensive tests and advanced LLM attack techniques, the platform is able to uncover potential risks in customers’ AI systems. OneDegree Global gathers the AI system risk assessment data and performs analysis using its generative AI services built on Amazon Bedrock – a fully managed service from Amazon Web Services (AWS) that makes high-performing foundation models (FMs) from leading AI companies. Based on the analysis results, OneDegree Global further provides suggestions on remedies and improvements.
OneDegree Global’s automated assessment process appeals to Taishin International Bank
A unique feature of OneDegree Global’s LLM verification platform is that its complete set of automated assessment measures can reduce the time it takes to conduct a full assessment from 200 hours to three hours. This allowed OneDegree Global to win a proof-of-concept (PoC) deal with Taishin International Bank. When Taishin was planning its AI financial service system “Taishin Brain,” it selected OneDegree Global’s AI system assessment service to ensure compliance with Taiwan’s Financial Supervisory Commission’s Guidelines for the Application of AI in the Financial Industry. This endeavor made Taishin the first bank in Taiwan to build a responsible AI system.
As part of its business expansion efforts, OneDegree Global joined the Startup Terrace Kaohsiung AWS Joint Innovation Center (JIC) program in 2024. Tapping into AWS JIC resources, OneDegree Global hopes to commercialize more of its innovative solutions and connect with more customers across different industries, thereby building up its presence in the global marketplace.
Frank noted that in view of frequent cybersecurity incidents, companies are growingly paying attention to the protection of data. They are well aware that traditional tools such as firewalls are of no help in the detection of and protection against cyber threats to generative AI applications. OneDegree Global’s biggest advantage is that it can customize a solution for LLM cybersecurity risk assessment to suit different industry characteristics and requirements. Following its participation in the Startup Terrace Kaohsiung AWS JIC program, it has attended multiple match-making events which enabled it to get in touch with potential customers in the finance, manufacturing, and semiconductor sectors. This is believed to benefit the company’s long-term development.
OneDegree Global plans to expand its LLM cybersecurity risk assessment business from Taiwan to other Asia Pacific countries and the Middle East, where AI applications are growing rapidly across finance, healthcare, and government sectors. As these countries place more and more emphasis on AI-driven innovations and have increasing needs for robust information security solutions that comply with regulations, they offer immense market opportunities. To take it a step further, OneDegree Global will also foray into Europe as part of its all-out efforts to capture AI opportunities.
OneDegree Global launches its AI compliance and security verification solution Cymetrics Vulcan
Photo: OneDegree