Across large enterprises, AI is moving quickly from experimentation into daily work. That shift is forcing leaders to confront issues they can’t delegate to technology: how performance is measured, how people are supported through change, and how values show up when machines start doing more of the work. Not every company is approaching those questions in the same way.
Some organizations are responding by racing for efficiency. Others are stepping back to define, or reaffirm, what kind of company they want to be as AI becomes more embedded in the organization, and what obligations they still owe the people who make the business run.
In practice, this means senior executives are grappling with the social contract between the company and its employees. As AI takes on more execution, leaders must decide what remains human, what becomes automated, and how much disruption their culture can absorb along the way. These are leadership decisions about trust, accountability, and what kind of organization people are being asked to commit to.
At Ingka Group, the largest IKEA retailer with operations in 32 countries, leaders recognized that tension early and set out to adopt AI in a way that wouldn’t put their culture at risk. The technology would move forward, but not without steady leadership and clear support for their people. IKEA’s approach stands out as one example of how a large enterprise is choosing to let values, not just productivity, shape how AI enters daily work.
At IKEA, the commitment to employees shows up in how senior leaders talk about their people and their responsibilities to them, and it is reinforced explicitly at the executive level. As Chief People & Culture Officer Ulrika Biesert emphasizes, “People have been at the heart of IKEA for over 80 years—and that’s exactly where they’ll stay.” It’s a disciplined approach that’s helping the organization modernize without losing the people who make it work.
What IKEA is doing reflects a deliberate set of leadership choices about how people will be treated as AI changes the nature of work. Those choices are rooted in the company’s values and history, and they shape how far and how fast the technology is allowed to go. Other companies will make different calls, some pushing harder on automation, others moving faster on workforce reduction.
There is no single model for competing in an AI-driven market. A strong culture does not automatically mean preserving every job; it means being clear about how human contribution and machine execution are aligned with the organization’s purpose.
Values as a Filter for AI
As companies embark on their AI transformations, leaders are discovering that technology decisions now carry cultural and ethical weight. Before a new tool is deployed, they must ask not just whether it works, but whether it aligns with the kind of organization they want to lead.
At IKEA, those questions are guided by the company’s core values. These values, including togetherness, simplicity, and care for people and the planet, are treated as practical decision criteria for every AI initiative. They show up in the real questions leaders use to evaluate new technology:
- Does this simplify or complicate the work?
- Does this support co-workers and free up time for more meaningful work?
- Does this align with fairness, inclusion, and sustainability?
That discipline isn’t only internal. Led by Chief Digital Officer Parag Parekh, the company joined the Partnership on AI (PAI) last year to help broaden standards around responsible technology and, in Biesert’s words, to “ensure that AI is developed and applied ethically, in line with our values of inclusiveness and caring for people and the planet.”
That same human-centric, values-first posture guides how Ingka evaluates partners. The company applies a Digital Ethics Group Rule that requires any AI partner or tool to be “robust, auditable, interpretable, fair, inclusive, and sustainable.”
These practices show how companies can apply AI governance not just to manage risk, but to clarify what they stand for.
Training Leaders Before Scaling Tools
As AI moves from pilots into daily work, more organizations are discovering that leadership readiness matters as much as technical readiness. Rolling out tools before leaders know how to explain, govern, and support them often creates confusion long before it creates value.
One of Ingka’s most significant choices was preparing leaders before rolling out technology. During the company’s previous financial year, from 1 September 2023 to 31 August 2024, it trained approximately 30,000 co-workers and around 500 senior leaders on responsible AI so they could discuss the technology with their teams and support co-workers with care as AI changes the way work is done.
This is where some companies fall short: not because employees can’t adapt to new technology, but because leaders talk out of both sides of their mouths. Employees can handle change when expectations are clear. What slows them down is value ambiguity: mixed signals about what the organization stands for, what is changing, and what will not be compromised.
Training leaders first may not be flashy, but it’s one of the most effective cultural stabilizers available to executives navigating fast change.
Learning in Public: A Culture That Doesn’t Pretend to Have the Answers
How leaders behave during experimentation matters as much as the tools themselves. Pretending to have all the answers can erode trust faster than any technical failure.
Ingka has been testing AI in a range of practical areas: improving demand forecasts, supporting remote sales teams, and helping co-workers with everyday writing and planning. The tools vary, from the BILLY chatbot used by thousands of co-workers to the Hej Copilot and the company’s own internal AI assistant (MyAI Porta), which helps with drafting, generating ideas, and easing co-worker workloads. Ingka is also experimenting with a GPT assistant to make digital customer conversations smoother.
What stands out in many of the most effective AI pilots is the openness of leaders during the process, a willingness to acknowledge that not everything will work perfectly the first time. Teams tend to respond better when leaders admit what they do not yet know and commit to learning in real time rather than presenting premature certainty.
That kind of transparency plays a powerful role in keeping people engaged. When people see leaders working through the learning curve instead of delivering a polished rollout, it becomes easier to trust both the technology and the change process around it.
When AI Strategy Includes Environmental Impact
Sustainability is also becoming part of the technology conversation. Leaders are increasingly being asked to consider not just what AI can optimize, but what it costs in energy, data, and environmental footprint.
Ingka Group has also used AI to strengthen its sustainability efforts, particularly in food operations across its retail markets. Using AI-enabled measurement and smart scales, Ingka Group has:
- Reduced food waste by 54%
- Saved more than 20 million meals
Ingka also evaluates energy-efficient model training and responsible data practices, ensuring AI implementation doesn’t increase environmental impact. It’s a continuation of IKEA’s long-standing values-based approach: using AI in responsible and beneficial ways for the many people and the planet.
As more organizations scale AI, choices like these are becoming part of how leaders define what responsible growth looks like in practice.
Five High-Return Leadership Practices for AI-Driven Change
As AI reshapes how work gets done, certain leadership practices are proving especially effective at helping organizations adapt without breaking trust, performance, or culture.
- Build AI literacy in senior leadership before scaling across the workforce.
Executives and managers need a shared, practical understanding of how AI works, what it will change, and what it will not. When leaders are trained first, they can explain what’s happening with credibility, address concerns without fueling anxiety, and anchor decisions in consistent principles. Providing talking points, scripts, and FAQs helps leaders guide teams confidently through uncertainty while supporting employees to upskill and grow.
- Redesign work by studying “tasks within a job,” not job titles.
Breaking roles into micro-tasks allows organizations to see where automation can remove friction, where AI can augment human judgment, and where human contribution remains essential. This makes change feel concrete and personal rather than abstract and threatening, helping employees understand how their daily work will improve rather than disappear.
- Make responsible AI a true governance practice.
Every AI tool or vendor should meet clear standards before it enters the organization. These standards should go beyond compliance to include reliability, interpretability, fairness, inclusion, and sustainability. Simple acceptance criteria and checklists help ensure consistent decisions and prevent governance from becoming an after-the-fact exercise.
- Use everyday conversations as the primary change-management tool.
Short, regular check-ins between managers and employees surface confusion early, build trust, and provide a safe place to discuss how roles are evolving. These micro-feedback loops are often more effective than top-down communications during periods of rapid change.
- Treat pilots as shared learning moments.
Organizations that acknowledge pilots will not be perfect and openly share what they learn reduce fear and increase participation. When leaders model learning in public, teams become more willing to experiment, adapt, and improve alongside the technology.
A Closing Note for Leadership Teams
One of the most striking patterns emerging across companies adopting AI is how steady the human side of the organization can remain when leaders stay close to their people. At Ingka, that steadiness has come from leaders showing up, listening to concerns, and staying connected to the day-to-day realities of work.
Plenty of organizations are moving fast on automation, often prioritizing efficiency and speed above all else. Which transformation models will prove most durable over time remains unresolved. IKEA’s experience illustrates one intentional path: aligning AI adoption with a clearly articulated social contract so change is absorbed with fewer internal shocks.
For leadership teams navigating this wave of technological change, the lesson is not to copy IKEA’s choices, but to be equally explicit about their own: to state their values clearly, make tradeoffs consciously, and lead with consistency as work is redesigned. IKEA offers a useful example of what that kind of clarity can look like in practice.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
