…but is currently approached bottom-up
When we say that the introduction of ChatGPT did not go unnoticed, that’s quite the understatement. In record time, companies were exploring all the opportunities it had to offer. If we ask whether you have used ChatGPT yet, the answer is most probably yes. But if we ask whether your company has a policy for the use of AI yet, it’s a lot less likely that you’ll say yes. That’s because the use of generative AI is currently approached bottom-up.
“Generative AI isn’t introduced in companies in the way we know from other tools and applications, where management and committees discuss ways to solve a problem and come up with a solution. It’s the employee in the office who decides that it might be a good idea to use ChatGPT to write job ads,” says Marcel Leeman, CEO at Textmetrics.
At Textmetrics, we’re not saying that there is anything wrong with that. Taking the initiative should always be applauded. But there is reason to be cautious. Because even though the use of generative AI can add a lot of value, there are some snags to consider as well.
A top-down approach
A huge advantage of ChatGPT and other AI tools is that they are very intuitive. So, let’s say that you’re a recruiter, and you hear of an easy-to-use tool to write your job ads for you. You’re bound to try it.
“As a company, it’s hard to prevent your employees from experimenting with ChatGPT. So, the best thing you can do is think of a top-down approach. Because if employees are going to use generative AI, it’s important to have rules and guidelines,” says Marcel.
There are good reasons why rules and guidelines are important. Marcel continues: “Let’s use the job ad example. Who says that what ChatGPT generates is a good job vacancy? Recruiters might need help writing a good prompt. And they need to be aware of possible inaccuracies and bias. A lot of AI tools have a tendency to make things up and to ignore instructions, such as writing at language level B1.”
Rules and guidelines are also important because of ethics, safety, and privacy. What happens with the input you feed an AI tool like ChatGPT? We know that ChatGPT is not GDPR-compliant. So your employees should not share personal data in their prompts. And what do you do when ChatGPT is widely used in your company, and it turns out that it doesn’t respect your intellectual property rights?
The importance of governance
To tackle all of the above, it’s necessary to invest in AI governance. According to Marcel, there should be a top-down approach to the use of AI tools: “A company policy that employees can refer to when they use tools like ChatGPT. A policy in which you set out how they can use AI tools, what for, and what they should keep in mind when they do so. But also one in which you include an incident response plan. So you know what needs to be done when something goes wrong. It’s the best way to manage risk and limit the possible damage.”
The solution Textmetrics offers
At Textmetrics, we are advocates of using generative AI. But if we go back to the job ad example, we do believe you need more control. That’s why we offer software that might look like generative AI, but offers a lot more. After generating a job ad for you, the software – Textmetrics’ Text Optimizer – checks the generated job ad for several content-related issues, such as readability, credibility, and bias.
You can be sure it’s safe to use, too. Textmetrics’ software is GDPR-compliant and doesn’t learn from your input and data. You don’t even have to feed it much data, since it writes the prompt for you. You just need to let the job ad generator know what job you’re hiring for.