
Written by Jonathan Brun, CEO
When we first started Nimonik in 2008, we received a feature request from a large mining company. They asked that we create an auditable paper trail of their efforts to stay in compliance with ever-changing laws and regulations. The company did not just want a legal register or updates about laws; they wanted to ensure they were checking all updates, marking changes, and creating evidence of their good intentions and robust efforts to be a good corporate citizen. This commitment to an auditable paper trail, ensuring trust and longevity, is the very foundation of what we now call the Compliance System of Record.
Logging the decision-making process has always been at the core of what Nimonik does. Today, with AI, creating a robust Compliance System of Record is more important than ever. Creating an audit trail is a necessary step to fighting the growing accountability crisis being generated by AI tools and their rapid deployment.
The world of compliance is changing rapidly. Michael Rasmussen, an analyst of Governance, Risk and Compliance (GRC) solutions, has aptly described the shift in compliance solutions. He outlines how most compliance solutions are based on creating workflows and tools for humans to process data, prepare files, and sign off on decisions.
This approach can work, but it is very resource intensive and is often reactive in nature. People are assigned tasks, complete forms, assign follow-up, and scramble to close out items before deadlines approach. Large Language Models (LLMs) are enabling a shift from reactive compliance to proactive compliance. At Nimonik, we agree with Michael that there is a real shift that can, and should, reshape how compliance is handled. Michael explains:
“Most organizations today operate their Governance, Risk and Compliance (GRC) programs like a patient in intensive care: monitored constantly, intervened upon manually, and perpetually one incident away from escalation. This is inefficient, exhausting, and ultimately unsustainable.
A homeostatic GRC system (built on GRC 7.0 – GRC Orchestrate) is different. It is self-aware. It detects weak signals before they become failures. It adjusts behavior within defined tolerances. It escalates only when necessary. Most importantly, it frees leadership to focus on strategic objectives rather than perpetual fire-fighting.” Source
The goal of a homeostatic GRC system is proactive risk management, but achieving it requires a foundation that prioritizes auditable oversight to mitigate the inherent risks of AI.
Risks of an AI-Centric Approach
IBM penned a famous saying in 1979: “A computer can never be held accountable, therefore a computer must never make a management decision”. This is truer than ever, and with the race to deploy AI in production environments, a great deal of risk is being introduced. Traditional methods had risks as well, mostly centered on human error or software breaches, but AI introduces a new category of risk.
The risk of such a radical shift is that too much responsibility is delegated to an AI tool that is ultimately unaccountable. This week, the (in)famous management consulting company McKinsey saw its internal AI tool, Lilli, hacked. The risks of deploying AI tools are very real, and the Lilli hack illustrates the danger of outsourcing too much logic to AI. Pretty much all of their data was exposed in an unencrypted format. Brutal.
What was exposed:
- 46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, was accessible without authentication.
- 728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive, and each file had a direct download URL for anyone who knew where to look.
- 57,000 user accounts. Every employee on the platform.
- 384,000 AI assistants and 94,000 workspaces — the full organisational structure of how the firm uses AI internally. Source
The sheer scale of the breach was remarkable, but this is the quote that stood out to me from the report,
“Silent persistence — unlike a compromised server, a modified prompt leaves no log trail. No file changes. No process anomalies. The AI just starts behaving differently, and nobody notices until the damage is done.”
In a black-box AI system, where humans delegate content creation and analysis to an AI tool, the underlying logic, calculations, and assumptions are often hidden. If that logic is compromised, it is very difficult to identify the error, and there is a good chance it cascades into many downstream errors. This is why it is so critical to pair AI decision-making tools and assistants with a validation process by competent humans who create a paper trail of their decision making throughout the process.
This accountability mandate is increasingly being codified into law, with stricter regulations requiring verifiable human oversight, making systems designed for ‘silent persistence’ unacceptable.
To solve this accountability crisis and safely enable AI augmentation, a new standard for compliance data management is required: The Compliance System of Record.
Compliance System of Record
Nimonik focuses on helping businesses and governments access and manage their regulatory requirements, requirements in engineering standards, and their internal corporate documents. The work of finding, updating, and evaluating requirements in these documents was always done by people. However, the sheer quantity of information is often overwhelming for an organization. A typical regulation has hundreds of requirements, and a typical company is subject to hundreds of documents or more. You can do the math. AI promises to help process and organize this information much more efficiently.
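To make the scale concrete, here is the math with assumed round numbers (purely illustrative, not actual client figures):

```python
# Illustrative scale estimate; these figures are assumptions, not client data.
requirements_per_document = 200   # "hundreds of requirements" per regulation
applicable_documents = 300        # "hundreds of documents or more" per company

total_requirements = requirements_per_document * applicable_documents
print(total_requirements)  # 60000 individual requirements to monitor
```

Tens of thousands of individual requirements is far beyond what a compliance team can review line by line without assistance.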
We can broadly categorize the current AI LLM tools into two categories:
- Agentic AI that can make decisions and take actions, let’s call this Automation, and
- Augmentation AI that assists you in your work, but does not make final decisions.
While Agents can be powerful tools, Nimonik believes their black-box nature and unpredictability make them too high risk to be widely used for critical decision making. We therefore reject Agentic AI for compliance-critical decisions, which would violate the core accountability mandate and introduce unacceptable legal risk, and focus entirely on Augmentation AI as the responsible pathway to leverage AI for compliance.
Nimonik firmly believes that AI and LLMs have tremendous potential to assist engineers and compliance specialists in their work. However, decisions must remain decisions taken by a competent person. To ensure a clear separation between AI-generated content and human-made decisions, Nimonik is focused on ensuring our subscription platform for regulations and standards is in fact a Compliance System of Record.
In the context of compliance, a System of Record must serve as the authoritative source of truth for auditable decision making. A System of Record is defined as,
“A system of record (SOR) is the authoritative, centralized data management system acting as the primary “source of truth” for specific, critical business data. It is the definitive, trusted, and often, the only place where original, up-to-date data (like customer, employee, or financial records) is created, maintained, and validated to ensure data integrity.” Adobe
All organizations have a myriad of software systems and data hosting locations. Given the complexity of the modern organization, this is inevitable. The challenge is becoming clear: with AI, it is now possible to create content rapidly at little cost. As a consequence of this abundant content-generation capacity, more and more content will appear and risk drowning out real human-created information. Companies need to be able to clearly differentiate between AI content, AI recommendations, human content, and human validation. Nimonik is taking an approach where AI output and human validation live side by side, but with clear delineation. Management, auditors, and other parties need to be able to see who created, who vetted, who signed off, and when. Humans must make decisions, not AI.
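One way to make that delineation concrete is to attach provenance to every record. The sketch below is a hypothetical data model, not Nimonik's actual schema, assuming each compliance item carries its origin and any human sign-off:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Origin(Enum):
    HUMAN = "human"   # authored by a person
    AI = "ai"         # generated by an LLM, not yet vetted

@dataclass
class ComplianceItem:
    """A piece of compliance content with explicit provenance."""
    content: str
    origin: Origin
    created_by: str                       # user ID or model name
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    validated_by: Optional[str] = None    # competent person who signed off
    validated_at: Optional[datetime] = None

    def sign_off(self, reviewer: str) -> None:
        """Record human validation; the decision stays with a person, not the AI."""
        self.validated_by = reviewer
        self.validated_at = datetime.now(timezone.utc)

    @property
    def is_trusted(self) -> bool:
        # AI output only counts once a human has validated it
        return self.origin is Origin.HUMAN or self.validated_by is not None
```

An auditor filtering on `origin` and `validated_by` can then see at a glance who created a record, who vetted it, and when.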
Within Nimonik, we are building a Compliance System of Record that allows our customers to track their regulatory requirements across all domains, from EHS to cybersecurity to import and export regulations, and also manage their engineering standards. Companies will need to track the decisions their team members make regarding requirements in regulations and standards, and ensure those decisions, and the justifications for them, were actually made by a competent person. Nimonik therefore tracks and logs all actions of all users, allowing administrators and relevant parties to generate a report that demonstrates consistent and ongoing efforts to stay in compliance.
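A minimal sketch of such an action log, assuming a hash-chained, append-only design (an illustration of the general technique, not Nimonik's implementation): each entry commits to the previous entry's hash, so any after-the-fact edit is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def record(self, user: str, action: str, target: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "user": user,
            "action": action,    # e.g. "reviewed", "signed_off"
            "target": target,    # e.g. a requirement ID
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A compliance report then reduces to a query over `entries`: every review, sign-off, and justification, attributed to a named user with a timestamp.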
The future is fast approaching, and Nimonik is collaborating with its customers to ensure we create an auditable system of record for an organization's efforts to stay in compliance.