Instruct AI agents with values to control what is controlling you
- Bas Kemme


Introduction
CEOs of frontier AI companies such as OpenAI and Anthropic, and Heads of AI worldwide, should consider which norms and values the AI agents they build will apply, so they can steer behaviour in the ways they want.
As autonomous agents increasingly run parts of your company, much like a growing group of employees working without direct leadership involvement, it's crucial that they behave in alignment with the values you believe are best for your company.
While defining norms (what’s good and bad) is essential, embedding clear values (what is perceived as better) is what guides behavior when your agents face conflicting demands. What should your AI do when there’s no single "right" answer, when it has to choose between seemingly conflicting alternatives? For example, a customer service agent might choose a personalized solution for an individual (Individualism) or a group discount (Communitarianism), depending on the customer’s cultural context. These choices matter.
In a recent episode of the Lex Fridman Podcast (#459), the conversation centered on DeepSeek, China, Nvidia, xAI, TSMC, Stargate, and AI Mega Clusters. Beyond the technical advances, what stood out was the looming question of how autonomous these AI models are becoming and what values are being embedded into them.
“As AI becomes more powerful, the question isn’t just what it can do, but what it should do—and who decides that."— Lex Fridman
“This is what people, CEO or leaders of OpenAI and Anthropic talk about—autonomous AI models, which is: you give them a task and they work on it in the background.” — Nathan Lambert
These reflections raise a critical issue:
As AI agents become more autonomous, how do we ensure they make decisions aligned with our human values, especially when those values differ across cultures?
Autonomous Agents as Culture-Bearers
Imagine autonomous agents acting within your company, not as tools waiting for commands, but as self-directed actors, working much like teams of employees without direct oversight. Just like human teams, these agents will encounter dilemmas. What happens when there is no single “correct” answer? Which path should the AI choose?
To guide these decisions, we need more than rules; we need values.
Enter the Seven Dimensions of Culture developed by Fons Trompenaars and Charles Hampden-Turner.
This model offers a powerful framework. Its seven dimensions describe how people (and, by extension, agents) navigate dilemmas based on their underlying cultural values. Here are the dimensions, with an example for each; a small sketch of how an organization might encode its preferences follows the list.
Universalism vs. Particularism: Should the same rules apply to everyone (universalism), or should we adapt based on relationships and context (particularism)? Example: Should an AI procurement agent stick strictly to procurement guidelines, or make an exception for a long-term supplier who’s late due to a natural disaster?
Individualism vs. Communitarianism: Should decisions prioritize the individual or the group? Example: Should a benefits chatbot recommend individual rewards for performance or group incentives to foster team cohesion?
Neutral vs. Affective: Should communication remain emotionally neutral, or allow emotional expression? Example: When handling customer complaints, should an agent maintain a calm tone or mirror the customer’s frustration to show empathy?
Specific vs. Diffuse: Should relationships and tasks remain separate, or be integrated holistically? Example: Should an internal HR agent focus solely on performance metrics or take the employee’s personal situation into account?
Achievement vs. Ascription: Should status be based on accomplishments, or assigned based on age, title, or education? Example: Should an AI mentor program recommend leadership training to a young high performer or to a senior manager based on tenure?
Sequential vs. Synchronic Time: Is time linear and task-based, or flexible and parallel? Example: Should a scheduling agent prioritize tasks one after another, or allow overlapping deadlines to match cultural norms?
Internal vs. External Control: Do we control our environment, or adapt to it? Example: Should an AI sustainability planner set aggressive internal targets regardless of market volatility, or adapt them based on external pressures?
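To make this concrete, here is a minimal sketch (in Python) of how an organization might encode its position on these dimensions as a value profile an agent can consult. The dimension names come from Trompenaars’ model; the 0.0–1.0 scale, the field names, and the regional-override idea are illustrative assumptions, not part of the model or of any particular product.

```python
# Hypothetical sketch: an organization's stance on the seven dimensions,
# expressed as a profile an agent can consult. Scale and field names are
# illustrative assumptions, not part of Trompenaars' framework.
from dataclasses import dataclass, field

@dataclass
class ValueProfile:
    # 0.0 leans toward the first pole, 1.0 toward the second pole.
    universalism_particularism: float = 0.2      # mostly apply the same rules to everyone
    individualism_communitarianism: float = 0.5  # balance individual and group interests
    neutral_affective: float = 0.3               # keep tone mostly neutral
    specific_diffuse: float = 0.4                # separate tasks and relationships, with some context
    achievement_ascription: float = 0.2          # favour accomplishments over title or tenure
    sequential_synchronic: float = 0.5           # mix linear and parallel scheduling
    internal_external_control: float = 0.6       # adapt targets to external conditions

    # Optional per-region overrides: a shared core that can still be localized.
    regional_overrides: dict[str, dict[str, float]] = field(default_factory=dict)

    def for_region(self, region: str) -> dict[str, float]:
        """Return the profile with any regional overrides applied."""
        base = {k: v for k, v in vars(self).items() if isinstance(v, float)}
        base.update(self.regional_overrides.get(region, {}))
        return base

# Example: a shared core with a more communitarian stance for one market.
profile = ValueProfile(regional_overrides={"JP": {"individualism_communitarianism": 0.8}})
print(profile.for_region("JP"))
```

The point of the sketch is the design choice: the organization states its preferred position on each dilemma explicitly, and local teams adjust only where the cultural context genuinely differs.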
Why this matters
Autonomous agents are already making trade-offs based on implicit values coded into them. If these values are unexamined, or worse, misaligned, they can lead to unpredictable or even damaging outcomes. But if we embed values deliberately, we can shape agents that behave in ways aligned with our intent, culture, and purpose.
In other words:
"Values are how AI agents resolve dilemmas."
Let’s not wait for AI to “absorb” values by default. Let’s define them clearly, contextually, and globally.
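As a rough illustration of that idea, the hypothetical sketch below turns the value profile from the earlier example into explicit guidance in an agent’s system prompt. The wording, thresholds, and prompt structure are assumptions made for illustration, not a prescribed implementation.

```python
# Hypothetical sketch of "values are how AI agents resolve dilemmas":
# converting a value profile (from the earlier sketch) into explicit
# instructions, so value choices are stated up front rather than absorbed
# by default. Thresholds and wording are illustrative assumptions.
def values_to_instructions(profile: dict[str, float]) -> str:
    guidance = []
    if profile["universalism_particularism"] < 0.5:
        guidance.append("Apply company policies consistently; escalate requests for exceptions.")
    else:
        guidance.append("Weigh the relationship and context before applying a policy strictly.")
    if profile["neutral_affective"] < 0.5:
        guidance.append("Keep a calm, factual tone, even when the customer is upset.")
    else:
        guidance.append("Acknowledge the customer's emotions before solving the issue.")
    return "When facing a dilemma with no single right answer:\n- " + "\n- ".join(guidance)

# Reuses the `profile` object defined in the previous sketch.
system_prompt = (
    "You are a customer service agent for Acme.\n"
    + values_to_instructions(profile.for_region("JP"))
)
print(system_prompt)
```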
Our call to action
I believe in creating a shared core of values for autonomous agents, with the flexibility to localize where appropriate. If you’re a CEO or Head of AI who cares about making agents more aligned, responsible, and culturally aware, we'd love to hear from you.
We’re currently developing a framework using Trompenaars’ model that organizations can adapt. Reach out, or contact Fons Trompenaars directly if you’d like to help shape how AI agents will behave when you're not in the room.