As I’ve previously written, AI is here and is probably not going away. Hype or no, and regardless of what is coming down the pipe in terms of massive job losses, an age of abundance, or even the AI singularity, the existing tools are already massively powerful.

I routinely use AIs like ChatGPT and Claude as an alternative to search, to troubleshoot problems, and to analyse datasets. At work, my team and I make extensive use of Copilot for Visual Studio, and, as I’ve mentioned previously, these days I rarely physically write code. Increasingly, my value is not the how, it’s the what and why.

Other teams are increasingly using AI tools, baby steps at first, but I have no doubt that once those teams discover the capabilities of these tools (hallucinations and other caveats notwithstanding) adoption will increase. Other organisations are already asking about our position on responsible AI use, and it seems many are feeling their way. Those of us based in the EU are also subject to the EU AI Act, and, while I think this puts the cart before the horse somewhat, in that it regulates an industry that doesn’t exist in Europe yet, here we are…

So, I began putting together some non-exhaustive thoughts. These are yet to be ratified, but should serve as a starting point…

Treat AI as if it were a random person off the street

Treat AI as if it were a human you just met, who is external to your organisation. So, no personal, confidential, commercially or security sensitive data to be given to them.

This means you’re probably not going to be able to use ChatGPT to help organise your financial reports. You may be able to use the AI tools increasingly included in various products (such as Office 365)… but you’re going to need to check the T&Cs. You’re going to need some protection in place: a contract, an NDA, something.

AI generated content is just like other cut-n-pastes

Best efforts should be made to ensure that output (especially but not limited to software) does not infringe copyright.

Nothing really new here: the same rules apply to AI as to the tried and true technique of developers past, namely blindly cutting and pasting things from Stack Overflow. It was a problem then, and it’s a problem now. It’s hard to police in practice, but worth reminding people.

Clearly mark where AI is used

Code or other content that has had significant levels of AI assistance should be clearly commented and marked. Some tools (e.g. Claude Code for GitHub) automatically commit under the agent’s name, but not all do so.

I’m a fan of code comments.
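As a sketch of what that marking might look like in practice, here is one possible convention in Python. The exact wording, the fields, and the function itself are all just illustrative; the point is that the attribution and the human review are recorded right next to the code:

```python
# AI-ASSISTED: initial implementation generated with an AI coding tool,
# then reviewed, tested, and understood by a human before merging.
# (The fields below are placeholders; adapt them to your own process.)
def normalise_scores(scores):
    """Scale a list of non-negative numbers so they sum to 1.0."""
    total = sum(scores)
    if total == 0:
        # Avoid division by zero: fall back to a uniform distribution.
        return [1 / len(scores)] * len(scores)
    return [s / total for s in scores]
```

A comment like that costs nothing to write, and it tells the next reader (or auditor) exactly what they are looking at.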

A corollary, and this is really part of regulatory compliance: any user-facing interaction with an AI (e.g. a chatbot), or any processing of a user’s data by an AI, should be clearly communicated to the user.

AI should be used, but its use should be clearly marked wherever users are interacting with it or their data is being processed by it.

The human must be in the loop

All AI-generated code must be reviewed, tested, and above all understood by a human. Perform code review and change management just as you would for any other open source contribution or a pull request from a new junior hire. AI agents must earn our trust before acting autonomously, and in my opinion we’re not there yet.

Note I highlighted “understood”.

AI can generate very “truthy” looking code, but it’s up to you to make sure you fully understand what it is actually doing. This is part of your value, human!

Looking to the future

Things are moving fast.

While drafting my notes I was asked for a list of approved tools, and to be quite prescriptive. However, this is such a fast moving field that I’m not sure “whitelisting” is a viable approach. New tools come out faster than it’s possible to evaluate them, so neither an “it’s OK unless it’s forbidden” approach nor a “forbidden unless permitted” / “ask IT” approach is going to work. The first would allow potentially unsafe use, and the second would realistically cause people to circumvent the restriction out of frustration.

You want people to work with you, and not against you.

So, rather than come up with a list of commandments, I’ve opted for some guidelines. People will still need to actually think… which might mean I’m doomed here… but rather than try to legislate every possibility, I hope to prompt some thought and at least mitigate the worst outcomes.

Anyway, let me know your thoughts!