This news article is an AI-generated translation from Dutch to English.
Article written by Kristof Van der Stadt, Editor-in-Chief, Trends DataNews
Anyone seeking to advance their career at the multinational Accenture must demonstrate proficiency in using the company’s artificial intelligence (AI) tools. The consulting firm monitors the login activity of its senior employees on
its AI platforms and links this to opportunities for advancement to
leadership roles. In Belgium, this measure is not currently under consideration.
Accenture Belgium declined to comment on the international reports and limited itself to general statements about customer focus and “a modern workplace for our people that is, of course, AI-driven today”.
Dirk Buyens, professor of human resources management at Vlerick Business
School, sees no legal issues with the approach. “If you look at it purely from a legal perspective, it’s perfectly fine, because you’re allowed to ask employees to behave in a certain way,” he says. He compares it to previous technological
transitions: “A long time ago, people might have also been told, ‘You have to use the mainframe or your PC.’”

The discomfort felt by some is understandable, he believes, but not decisive.
“It gives a bit of a ‘Big Brother is watching you’ feeling, and not without reason. That’s why communication is so important.”
For Buyens, transparency is the key requirement. Anyone who wants to evaluate their employees on AI usage must make that clear in advance. “It cannot
lead to individual sanctions if clarity has not been provided beforehand.”
He calls Accenture Belgium’s silence a missed opportunity to provide that clarity. Still, he understands the strategic logic. In a consulting firm that guides clients through digital transformation, visible AI adoption is more than an internal measure—it’s a matter of credibility.
“Just because you’ve used an AI tool more or less doesn’t mean you’re a better employee.”
— Matthias Feys, ML6
However, there is a European caveat that complicates the discussion.
Buyens points out that what is permissible in the U.S. quickly runs up against the limits of privacy legislation in Europe. When individual login data is used as the basis for disciplinary actions or promotion decisions, GDPR regulations quickly come into play. “Monitoring is allowed, but it cannot lead to individual sanctions unless that is clearly stated from the outset,” he says. The fact that Accenture remains silent on exactly what is being monitored and how detailed that data is makes it difficult for European employees to know where they stand.
Matthias Feys, CTO and co-founder of the Belgian AI company ML6,
calls Accenture’s approach “an idea that lacks nuance.” At ML6,
a similar internal initiative was underway: every engineer had to use a
coding tool by the end of 2025. But the company is deliberately not taking the step of directly linking login frequency to promotions. “No way are we ever
including that in a performance evaluation,” says Feys. “Just because you’ve used an AI tool more or less doesn’t mean you’re a better employee.”
Roos Dumont, AI transformation manager at ML6, points out an even deeper risk. Anyone who uses AI without taking ownership of the underlying reasoning quickly produces work that looks good but is substantively hollow. “If someone with expertise takes a closer look, that house of cards comes crashing down pretty quickly,” she says. ML6 therefore applies the principle of “separating intention from action”: the employee must own and understand the plan, and only then engage AI to generate the necessary output and execute it.
The risk of abuse is also very real. A consultant at Accenture who knows they are not the best in the class but still wants a promotion might, for example, simply use AI as much as possible. That creates the risk that the organization promotes the wrong person. As an alternative to login metrics, ML6 focuses on attitude metrics. Instead of counting how much someone uses a tool, the assessment focuses on whether someone demonstrates the right mindset toward AI: does the person dare to experiment, do they actively seek applications that improve their work, and is AI used to create more mental space rather than to produce the same results faster?
“We expect a cultural mindset where AI offers opportunities,” says Feys. “The first phase was enabling and daring to try. Gradually, we expect people to effectively
integrate that, but we track that growth in a conversation, not on a dashboard.”
“The lazy ones stay lazy and produce bullshit, and those who work well with AI will pull even further ahead.”
— Kathleen Vangronsvelt, Antwerp Management School
Kathleen Vangronsvelt, professor of organizational behavior and human resources at Antwerp Management School, places the measure in a broader context. “This is actually a unilateral change to the psychological contract,” she says, referring to the implicit mutual expectations between the employer and the employee. Just as with the introduction of hybrid work during the COVID-19 pandemic, such a unilateral shift takes time to take hold and requires explicit communication.
Her main objection is not the measure itself, but the method of measurement. “How often people log in is just a proxy,” says Vangronsvelt. “The question is whether that says anything about the quality of usage.” She also warns of a widening digital divide: employees who already have a good grasp of AI are pulling ahead, while others are falling behind or simply logging in mechanically, adding no value. “The slackers stay lazy and produce nonsense, and those who work well with it will pull even further ahead of them.”
According to Vangronsvelt, Accenture’s external communications also fail to address the question of “why.” What exactly are employees supposed to do with these tools? What is the goal? “If people don’t know exactly what the purpose is and the only metric is how often you log in, feelings of unfairness will surface very quickly.” And that is fundamentally bad for both well-being and performance. In her view, training is not an afterthought but a necessary prerequisite. Without that foundation, the measure risks achieving the opposite of what it intends.
Accenture’s approach sends an unmistakable signal: AI is not an option but an expectation. But anyone who truly wants to know whether employees are using AI effectively must dare to look deeper than the logs. Otherwise, the danger is not that people aren’t using AI, but that they’re using it the wrong way, for the wrong reasons, and that the organization then bases its promotion policy on that. The experts are in clear agreement on this.
Accenture’s approach is a more aggressive version of what many organizations are trying to do more quietly: getting employees on board with AI. But how do you do that effectively, without provoking resistance or sending the wrong signals? Dirk Buyens of Vlerick Business School advocates for nudging. Instead of confronting employees with penalties or strict KPIs, he recommends highlighting the attractive side of AI using concrete, company-specific examples. “Demonstrate that, make it tangible, and don’t do it with general examples but very specifically: ‘I was looking for the name of a company, I entered this prompt, it didn’t yield anything, but with this adjustment, I got it right away.’” He suggests combining this with targeted training in prompting and critical thinking. Because blindly trusting AI output is at least as dangerous as not using AI at all. And make sure managers lead the way. “I think it’s important that they set an example in the use of AI,” says Buyens.
Matthias Feys and Roos Dumont of ML6 take a process-oriented approach. The focus isn’t on the tool itself, but on the business process. Which steps take a lot of time or effort? Where can AI make a difference? Only once that’s clear do you look for the tool that fits best. Dumont: “It’s not a matter of: we have Microsoft Copilot, so how are we going to use it in the company? No, we look at what our processes are today and how we can make them AI-ready.” This prevents tools from being deployed to impress rather than to create value.
“All knowledge workers are heading toward an identity crisis”
The discussion about AI use in the workplace has a layer that is often overlooked: how does it affect employees’ professional identity? Matthias Feys calls it the “developer identity crisis.” “You can compare it to an airplane: from the joy of flying to sitting in the control tower. You studied to be a pilot, and suddenly you no longer have to fly yourself.”
The paradox is that the most competent employees, of all people, may experience the most resistance. Kathleen Vangronsvelt points to research indicating that experts sometimes experience greater resistance to AI. Not because they don’t understand the technology, but precisely because they recognize its shortcomings more quickly. “Many experts have a greater reluctance to work with it. If AI then cites an incorrect article, they immediately dismiss it, while you have to understand that it’s a probabilistic machine.”
Dirk Buyens puts it into a broader perspective. The question isn’t just how employees embrace AI, but also what embracing AI means for what they did and who they were. “Ultimately, we’re all going to find ourselves in such an existential crisis to a greater or lesser extent: what can we still do, and how can we distinguish ourselves?” What’s unusual about this transition is that AI is now also affecting the intellectual and creative aspects of jobs—the parts that always seemed protected. That requires something more than an internal email about login statistics.