At the Cognito client event on the role of AI in communications, I was asked an excellent question: how do you talk to the compliance team about using ChatGPT when they have insisted on a total ban on the tool?

In my answer, I reminded the audience that we’ve been here before and have a playbook for this scenario. Way back in 2010, when I was working for a social media monitoring company, I helped both John Lewis and AMEX launch on social in the UK. These clients, and others, faced several hurdles in managing the use of social media both internally and by their employees.

Cue not bans, but well-thought-out policies and guidelines to protect the interests of both the company and its employees.

The likely reason a company would ban the use of ChatGPT is the realisation that any confidential company or customer information entered into the tool is retained (unless individual privacy settings are changed) and used for future training of the model.

In multiple AI sessions I’ve run this year, clients have been blindsided to learn that a Generative AI tool retains the information entered into it.

Samsung learned this the hard way when some of its teams entered company code into the tool.

So rather than imposing an outright ban, we should work with our internal compliance, risk, and legal teams to develop sensible, workable policies and guidelines, so that everyone can experiment with Generative AI tools and experience their power without putting the company at risk.

Outright bans rarely hold anyway: back in 2010, when social media use was “banned”, employees simply pulled out their personal phones and circumvented the policy.

Have you and your teams been thinking about how to experiment safely with ChatGPT in your workplace?

If you could benefit from a session like the one I ran for Cognito’s clients, please do get in touch.

More information about my sessions on how AI impacts business and society can be found here.