Keeping Your Management System ‘Ordinary’ in the Age of AI

by Julius DeSilva

We’re living in an era where every week seems to bring a new AI tool or software promising to “transform” your business. Predictive analytics, digital twins, algorithm-driven risk models: the buzzwords are endless. And while some of these advances do have their place, I argue that companies must not forget their basics. In my previous career as a mariner, even as technology evolved and found its way onto ships, there was still value in a simple visual bearing and the information it could give you.

Call me old-school, but I still believe in systems that are owned by people, not platforms. In fact, I’d argue that now more than ever, we need to protect the ordinariness of our management systems, because that’s where the real strength lies.

Don’t Mistake “Ordinary” for “Outdated”

I’ve worked on ships and in boardrooms, with multinationals and mom-and-pop shops. Across the board, the systems that work best are not the flashiest; they’re the ones that are understood, used, and respected. I’ve used fancy preventive/planned maintenance systems and, elsewhere, a simple Excel spreadsheet with built-in macros. Perhaps surprisingly, the company using the ordinary Excel spreadsheet had the better-maintained equipment.

An “ordinary” system means:

  • Everyone knows their roles and responsibilities.
  • Processes are documented clearly, not buried in folders.
  • Documentation is clear and concise.
  • Records are maintained and can be trusted.

You don’t need artificial intelligence to tell you your maintenance wasn’t done. You need a culture where someone owns the task, completes it, and checks the box honestly.

When the Tool Becomes the Boss

I’ve seen organizations spend small fortunes on digital platforms that promise complete “management system automation.” These platforms often come with dashboards no one reads, workflows no one updates (because they don’t know how to), and training modules people click through just to make them go away. (Let’s be honest, you know how effective your CBT programs are!)

Compare that to a simple 8D form built in Excel. Yes, plain old Excel. When it’s used properly by a team that understands the process, it becomes a great problem-solving tool. No licenses, no AI, no data scientists required.
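For illustration, here’s a minimal sketch of the fields a basic 8D form captures, written in Python rather than Excel purely so it fits on the page. The discipline names (D1 through D8) are the standard 8D ones; the class and field names are my own invention, not any particular form’s.

    from dataclasses import dataclass

    @dataclass
    class EightDReport:
        """Minimal 8D record; mirrors the columns of a simple Excel form."""
        d1_team: list[str]              # D1: the people who own the problem
        d2_problem: str                 # D2: the problem described in specifics
        d3_containment: str = ""        # D3: interim containment action
        d4_root_cause: str = ""         # D4: root cause (e.g., from a 5-Why)
        d5_corrective_action: str = ""  # D5: chosen permanent corrective action
        d6_implemented: bool = False    # D6: action implemented and verified
        d7_prevention: str = ""         # D7: systemic change to prevent recurrence
        d8_closed: bool = False         # D8: team recognized, report closed

        def open_items(self) -> list[str]:
            """Disciplines still pending, so the team (not a platform) drives closure."""
            checks = {"D3": self.d3_containment, "D4": self.d4_root_cause,
                      "D5": self.d5_corrective_action, "D6": self.d6_implemented,
                      "D7": self.d7_prevention, "D8": self.d8_closed}
            return [d for d, done in checks.items() if not done]

That’s the whole tool. The discipline lives in the team working through each field honestly, not in the software holding it.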

If you’re curious, QMII’s Root Cause Analysis workshop teaches this practical approach. And it works because it’s rooted in thinking, not tech.

PDCA: Still the Smartest Loop in the Room

You don’t need AI to plan, do, check, and act. You need discipline. In a world full of reactive fixes and AI-generated insights, PDCA still calls on people to pause, observe, think, and improve. And frankly, we could all use more of that.

A well-run PDCA cycle doesn’t care whether your data comes from a sensor or a clipboard. What matters is how your team reflects, learns, and adjusts. If you want to sharpen that loop, QMII’s ISO 9001 Lead Auditor Training doesn’t just teach clauses. It teaches systems thinking, real auditing skills, and how to see the story behind the numbers.

Use AI? Sure. But Stay in the Driver’s Seat

I’m not against AI. Let me be clear on that. It’s a tool that, when used wisely, can absolutely support your management system. It can help you spot patterns in data and generate useful reports. But that’s exactly the point: AI is a tool, not the system itself, and certainly not the leader of it.

I’ve seen organizations fall into the trap of trusting algorithms more than their own people. They install AI to flag when personnel are not using PPE and to generate solutions from data analysis when errors occur. But no one stops to ask the most important questions: Does this make sense? Is this what’s really happening? Who validated this? Why did the person not use PPE?

The danger is that we start to mistake output for understanding. AI doesn’t know your organizational culture. It doesn’t know that one department always closes their nonconformities just to get them off the list. Only your team, using their judgment and grounded in your process reality, can make those distinctions.

If you’re going to use AI, integrate it into the PDCA cycle. Feed its outputs into your management review. Use it to inform, not to dictate. And perhaps most importantly, teach your team to question it. Train them to ask: Where did this data come from? What assumptions are built into this model? What’s missing from the picture?
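Here’s a minimal sketch of what that questioning can look like in practice. Nothing here is a real platform’s API; the class and field names are hypothetical. The idea is simply that an AI finding carries its provenance questions with it and doesn’t reach the management review until a named person has answered them.

    from dataclasses import dataclass

    @dataclass
    class AIFinding:
        """An AI output wrapped with the questions a reviewer must answer
        before it enters the management review (the 'check' in PDCA)."""
        output: str                # what the model reported
        data_source: str           # Where did this data come from?
        model_assumptions: str     # What assumptions are built into the model?
        known_gaps: str            # What's missing from the picture?
        validated_by: str = ""     # Who validated this?

        def ready_for_review(self) -> bool:
            # The finding informs the review only once a named person signs off;
            # the AI output alone never dictates the decision.
            return bool(self.validated_by)

    finding = AIFinding(
        output="PPE non-compliance up 12% on night shift",
        data_source="camera feed from one gate only",  # partial data: a red flag
        model_assumptions="assumes all hard hats are high-visibility yellow",
        known_gaps="no insight into why the person did not use PPE",
    )
    assert not finding.ready_for_review()  # blocked until a human signs off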

Own Your System. Keep It Ordinary.

There’s something refreshing about an audit checklist that an auditor actually helped write, not an AI-generated one. That’s real ownership. That’s engagement.

Management systems aren’t meant to be high-tech puzzles. They’re meant to be frameworks that help people do their jobs better. They are not a compliance burden; they’re a strategic asset, but only when they belong to the people who use them.

So here’s my message in conclusion: Keep your system ordinary. And make it extraordinary in how well it’s embraced and used.

Can We Trust AI? 

We see Artificial Intelligence (AI) all around us, in uses that are visible to us as well as in uses that are not. It is here to stay, and as we learn to live with it, there remains a concern about whether we can fully trust it. Hollywood may have painted a picture of the rise of the machines that instills fear in some of us: fear of AI taking over jobs, of AI dulling intelligent human beings, and of AI being used for illegal purposes. In this article we discuss the actions organizations can take to build trust in AI so that it becomes an effective asset. The anxiety itself is hardly new; E.M. Forster explored it as far back as 1909 in “The Machine Stops.”

What does it mean to trust an AI system? 

For people to begin to trust AI, there must be sufficient transparency about what information the AI has access to, what the AI is capable of, and what programming its outputs are based on. While I may not be a guru in AI systems, I have been following their development over the last seven to eight years, delving into the several types of AI. IBM has an article outlining these types that may be helpful. I recently tried to use ChatGPT to provide me with information and realized the information was outdated by at least a year. To better understand how we can trust AI, let us look at the factors that contribute to AI trust issues.

Factors Contributing to AI Trust Issues 

A key trust issue arises from the algorithm within the neural network that delivers the outputs. Another key factor is the data those outputs are based upon: knowing what data the AI is using is essential to trusting its output. It is also important to know how well the algorithm was tested and validated prior to release. AI systems are first run against a test data set to determine whether the neural network produces the desired results; the system is then tested on real-world data and refined. AI systems may also carry biases rooted in the programming and the data set. Companies face security and data-privacy challenges when using AI applications, too. Additionally, as stated earlier, there remains the issue of misuse of AI, just as cryptocurrency was misused in its initial phases.
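As a minimal sketch of what “tested and validated prior to release” looks like in practice, the fragment below (using scikit-learn) holds back a test set the model never sees during training and checks the result against a pre-agreed acceptance threshold. The threshold, the model choice, and the synthetic data are illustrative assumptions, not a prescription.

    # Hold back data the model never trains on, then check performance
    # against an acceptance threshold agreed before testing began.
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in for real data
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)  # 20% held out, never seen in training

    model = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    ACCEPTANCE_THRESHOLD = 0.90  # illustrative; set during planning, not after the fact
    print(f"Held-out accuracy: {accuracy:.2%}")
    if accuracy < ACCEPTANCE_THRESHOLD:
        raise SystemExit("Model fails validation; do not release.")
    # A passing score still says nothing about bias: repeat the same check on
    # slices of the data (by site, by shift, by demographic) before trusting it.

The point is not the library; it is that validation happens against data the system has never seen, against criteria set in advance.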

What can companies do to improve trust in AI? 

While there is much organizations must do to address the issues listed above, and it may take a few years to improve public trust in AI, companies developing and using AI systems can take a systems-based approach to implementing them. The International Organization for Standardization (ISO) recently published ISO/IEC 42001, Information technology – Artificial intelligence – Management system. The standard provides a process-based framework to identify and address AI risks effectively, with the commitment of personnel at all levels of the organization.

The standard follows the harmonized structure of other ISO management system requirements standards, such as ISO 9001 and ISO 14001. It also outlines nine control objectives and 38 controls. The controls, based on industry best practices, ask the organization to take a lifecycle approach to developing and implementing AI systems: conducting an impact assessment, system design (including verification and validation), controlling the quality of the data used, and establishing processes for the responsible use of AI, to name a few. Perhaps one of the first things organizations can do to protect themselves is to develop an AI policy that outlines how AI is used within the ecosystem of their business operations.
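By way of illustration only, and certainly not as the standard’s own wording, a first-cut AI policy can start as a plain structured checklist. The entries below echo the lifecycle controls just mentioned; every key and requirement is an assumption of mine, not text from ISO/IEC 42001.

    # Skeleton of an AI policy as a structured checklist. The keys echo the
    # lifecycle controls above; none of these names come from ISO/IEC 42001 itself.
    AI_POLICY = {
        "scope": "which business processes may use AI, and which may not",
        "impact_assessment": "required before any AI system is built or bought",
        "design": "verification and validation evidence required before release",
        "data_quality": "provenance and quality checks on training and input data",
        "responsible_use": "human review of any AI output affecting people or safety",
        "ownership": "a named role accountable for each AI system in use",
    }

    for control, requirement in AI_POLICY.items():
        print(f"{control}: {requirement}")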

Using a globally accepted standard can give customers confidence (and address trust issues) that the organization is using a process-based approach to responsibly perform its role with respect to AI systems.

To learn more about how QMII can support your journey should you decide to use ISO/IEC 42001, or to learn about our training options, contact our solutions team at 888-357-9001 or email us at info@qmii.com.  

-by Julius DeSilva, Senior Vice-President