Brantley Fry
Human Capital; Fractional CHRO, COS, CAO
Why organizational design and platform connectivity are the two non-negotiables for a future-proof AI strategy.
The same question is dominating conversations in boardrooms everywhere: What are we doing with AI? Across industries, there’s a clear urgency to act and a growing fear of being left behind. In response, organizations are racing to implement AI solutions—and build AI-native tech stacks—often before they’ve fully considered the implications for their people, workflows, and systems.
There is no doubt that AI can be transformative. But getting there requires companies to rethink how work gets done, how knowledge is created and shared, and how technology decisions are made. At its core, AI implementation is more than a technology shift, and it raises a fundamental design question for organizations: Are we building systems around our people, or reshaping our people around the systems we choose?
That question points to two areas most companies haven't fully addressed yet:

- How to prepare their people, and their organizational design, for AI
- How to evaluate the connectivity of the platforms that work will live in
These aren’t separate challenges, as the knowledge your people build increasingly becomes embedded in your chosen systems. In this article, we’ll explore both sides of that equation, how to bring your people along with intentionality, and how to rethink the way you evaluate AI platforms before the decisions you’re making now become the problems you’re solving later.
When leaders push teams to adopt AI without a clear strategy, it can feel less like an opportunity and more like a disruption. As Birmingham AI puts it, there’s a meaningful difference between AI that happens to your people and AI that happens for them. For employees to embrace this technology, they need to feel valued in the process and understand that AI is here to make their jobs easier, not replace them.
Leaders can’t stop at a directive to “be more productive.” They need a clear strategy for bringing people along, preserving institutional knowledge, and ensuring accountability for AI’s output.
Efficiency is a common justification for AI adoption, but efficiency toward what end? If leadership can’t articulate the business problem they’re trying to solve, the initiative stalls before it starts. This is where the idea of “start with why” becomes more than a familiar leadership concept. It’s a requirement. Has your leadership team actually articulated the business problem AI is supposed to solve, or is the directive still just “be more productive”?
That “why” must be specific enough to communicate up and down the organization, with AI literacy starting at the top. Leaders need enough familiarity with the technology to understand what it can and can’t do, ask hard questions, and make sure investments align with real objectives.
When the “why” is clear, people can get behind it. For example, one of our healthtech SaaS clients had a workflow full of manual, repetitive processes. By identifying which tasks were repetitive and which required human judgment, they automated the former, freeing up more time for client-facing work. The result wasn’t just greater efficiency; it was a more rewarding work environment. Having a clear purpose gave employees a reason to embrace the change rather than resist it.
As the healthtech example above illustrates, AI should be framed as a way to eliminate tedious work so employees can focus on higher-value and more meaningful tasks. People want to understand what's in it for them, and if that isn't clear, adoption slows. Any change management effort ultimately comes down to trust. Employees need to believe this change is being done for them, not to them.
One way to build that belief is to invite employees into the process early. Ask them to look at their own workflows and identify where AI could take repetitive work off their plate. When the ideas come from the people doing the work, adoption feels less like a mandate and more like a shared effort.
Having Human Resources co-own implementation with IT helps support this. Who in your organization owns the people side of AI implementation? If it’s only IT, that’s a gap. IT handles the mechanics of setting up the technology, while HR ensures the “why” is clearly communicated and that employees have a place to raise questions and concerns.
This process only works if the company values are more than something written on a poster. If those values are real, they should shape how AI is introduced, how decisions are made, and how accountability is communicated. And as we’ll see, those values matter not just for how you communicate change, but for what gets built into the systems themselves.
This is also where AI policies come in. Many companies started with policies that simply told employees not to use AI tools. But as AI becomes embedded in operations and platforms, those policies need to catch up by addressing who is accountable for AI-generated output, what data can and can’t be used, and how decisions made with AI assistance are reviewed. The goal shouldn’t be to restrict people, but to give them enough structure to move confidently.
As you work through change management and encourage employees to get curious about how AI can augment their workflows, you also must confront some harder questions. If we’re automating junior roles, what happens to the pipeline that ultimately creates senior-level expertise? Anyone entering the job market can relate to this problem firsthand. Entry-level jobs are getting harder to find.
There’s also the question of who is responsible for signing off on AI’s output. Think of AI as a high-level intern. It’s capable, but it still needs someone to supervise the work. Right now, companies successfully implementing AI are relying on more experienced employees to double-check that the output is valid, and it takes real knowledge to do that well.
That works for now. But if we lose the pipeline that builds that expertise, we eventually lose the ability to validate AI's output. It's a chicken-or-the-egg problem we haven't quite solved yet. And it gets more complicated when you consider where that expertise is actually going: into the systems themselves.
AI is creating new opportunities for how companies manage and share knowledge. Many are already building internal GPTs trained on SOPs, handbooks, and processes to create a shared source of truth. Don’t overlook your company’s values here. If your internal AI tools aren’t trained with your company’s values in mind, the decisions and outputs they produce won’t reflect them, creating ripple effects across the organization.
These tools also change where institutional knowledge lives. The subject matter expert for a business process used to be a person, but increasingly, it’s the machine.
That also means your people's expertise is accumulating inside the systems you choose. And once it lives there, a different set of questions emerges. Where does that knowledge live? Who controls it? And how easily can it move across the systems and platforms in your tech stack?
Those are questions most companies can't answer fully yet, and the reason comes down to how they choose their platforms. Preparing your people for AI is one set of considerations most companies aren't thinking through deeply enough, but there's another that's just as consequential: the way companies have always made software decisions doesn't apply to AI. The two challenges are connected because the knowledge your people build, the processes they document, and the expertise they teach the system all end up living inside the platforms you choose. If those platforms can't talk to each other, that knowledge gets trapped. Most organizations haven't realized this yet because they're still in early, contained rollouts.
For decades, companies bought software by choosing the best-in-class solution for a particular department or use case, then figuring out integration later. With the help of third-party tools and consultants, there was usually a way to make those systems talk to each other.
But that approach doesn’t work for AI. These systems learn from the data and workflows they’re connected to, and they get smarter with more context. If your AI tools sit on different platforms, each one is only learning from its own slice of the business. There’s no real way to carry that knowledge across systems.
That shift fundamentally changes what it means to choose a software vendor. You are no longer just buying a tool for a department. Instead, you’re choosing the platform where your company’s knowledge will live and build over time. Building an AI-native tech stack is a very different kind of commitment than selecting a single SaaS product.
To see why this matters in practice, take something as routine as a sales order. A sales order doesn’t just involve sales. It also touches purchasing, inventory, and accounts payable. In a traditional SaaS setup, those functions might sit in different systems, with people bridging the gaps manually.
AI changes that expectation. If each of those functions runs on a separate platform, each system is only learning from its own piece of the process. This means the sales tool doesn’t understand purchasing constraints, accounts payable doesn’t see the full context behind the order, and nobody has a complete picture of what’s happening across the business.
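To make the sales-order example concrete, here is a deliberately simplified sketch of the same order split across three point solutions versus held in one shared record. All field names and values here are hypothetical, not any vendor's data model:

```python
# Illustrative sketch only: the same sales order as three disconnected
# slices (point solutions) versus one shared record (a connected platform).

# Point-solution view: each system holds only its own slice.
sales_view = {"order_id": 1001, "customer": "Acme", "amount": 5000}
purchasing_view = {"order_id": 1001, "stock_on_hand": 2, "reorder_needed": True}
ap_view = {"order_id": 1001, "invoice_status": "unpaid"}

# Shared-platform view: one record with full cross-functional context.
shared_record = {**sales_view, **purchasing_view, **ap_view}

def needs_attention(order):
    """A question that requires sales, purchasing, and AP context at once."""
    return (
        order["amount"] >= 5000
        and order["reorder_needed"]
        and order["invoice_status"] == "unpaid"
    )

# Only the merged record can answer this; any single slice is missing
# at least one of the fields the question depends on.
print(needs_attention(shared_record))  # True
```

The point of the sketch is that the question itself spans departments: no amount of polish on any one slice lets that system answer it alone.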
Even the best integrations between point solutions allow only a trickle of data to flow between them—it’s like trying to run a company where every department speaks a different language and communicates through a translator. To deliver real AI value, these tools need full context across the business. Integrations simply can’t provide that, and the economics of the AI market are beginning to shift because of it.
There’s a well-known saying in software: there are two ways to make money, bundling and unbundling. The key is getting the timing right. The SaaS era was defined by unbundling, with specialized tools built for specific functions. Now, we see AI pushing in the opposite direction, because value compounds when data is connected.
It helps to think about the current AI landscape in three tiers:

- Foundation model providers, the handful of companies building the underlying models everyone else runs on
- Platform vendors that own the systems of record, and with them the data and context AI learns from
- Point solutions that solve a narrow problem well but depend on models and data they don't own
This is why we believe most point-solution AI vendors won’t survive as independent companies in the next few years. The issue isn’t whether the tools are good. Many are. But if you don’t own the data your AI depends on, the economics become difficult to sustain.
It’s also important to understand the difference between AI-native software and AI layered onto an existing product. When a legacy SaaS vendor adds AI, it often fills a gap in the system. For example, Slack’s AI search is useful, but it’s improving an existing limitation. That’s different from AI-native applications designed from the ground up to deliver service-as-software by replicating what a person or team does, not just augmenting an existing tool.
The question every company needs to answer: Which tier do your current AI vendors fall into, and what is the risk to you if they aren’t still standing in three to five years?
Even once you’ve made the right platform decisions, user adoption has to be gradual. There’s a well-known story from consumer marketing that illustrates this clearly. When Betty Crocker introduced the instant cake mix in the 1940s, it wasn’t an instant success, even though the cake performed well in blind taste tests. As the story is told today, the product had removed too much of the process, and consumers felt like something was missing. When the company changed the recipe to require cracking an egg, giving people a greater sense of involvement, sales took off.
AI adoption works the same way. You can’t hand people a system that does everything and expect them to trust it. They need to stay involved, see how the process works, and build confidence in the output before you hand more over to the system. In other words, give them an egg to crack.
While adoption should be gradual, it shouldn’t be passive. The decisions you make early, especially around vendors and platforms, shape what your AI can become later. And most companies aren’t asking the right questions before those decisions are made. Here are the three we’d start with.
How many AI vendors do you really need?
Having a single vendor isn’t realistic for most companies, but fewer is almost certainly better than more. Most companies don’t feel this tension in the early stages because they’re still working in contained use cases. It’s when they try to scale that disconnected systems become a real problem, and unwinding those decisions isn’t easy.
Who in your organization is thinking about how these tools connect?
In most companies, there’s a real tension between the department leader who wants to move fast and the IT leader who wants a cohesive strategy before committing. Both perspectives are valid, but someone needs to be thinking about the full picture, not just the immediate use case.
If you do end up with multiple vendors, what coordinates knowledge across them?
In AI, we’re starting to see early versions of an “agent of agents,” which is essentially a traffic cop managing context and memory across systems. It’s an emerging concept, not a solved one, but it signals where things are heading. And as these systems become more interconnected, questions of ownership and control become harder to ignore.
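As a rough illustration of what such a coordinating layer might look like, here is a minimal sketch of an orchestrator that routes requests to specialist agents while maintaining shared context across them. Every class and name here is hypothetical, invented for illustration rather than drawn from any real product:

```python
# Hypothetical sketch of an "agent of agents": a coordinator that routes
# requests to specialist agents and keeps context shared between them,
# so no single tool's memory becomes a silo.
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """Context that persists across agents rather than inside one tool."""
    facts: dict = field(default_factory=dict)

    def remember(self, key, value):
        self.facts[key] = value

class SalesAgent:
    domain = "sales"
    def handle(self, request, memory):
        memory.remember("last_order", request)  # write context others can use
        return f"sales processed: {request}"

class PurchasingAgent:
    domain = "purchasing"
    def handle(self, request, memory):
        # This agent can read context written by the sales agent.
        prior = memory.facts.get("last_order", "none")
        return f"purchasing checked constraints for: {prior}"

class Coordinator:
    """The 'traffic cop': picks the right agent and shares memory."""
    def __init__(self, agents):
        self.agents = {a.domain: a for a in agents}
        self.memory = SharedMemory()

    def route(self, domain, request=None):
        return self.agents[domain].handle(request, self.memory)

coordinator = Coordinator([SalesAgent(), PurchasingAgent()])
print(coordinator.route("sales", "order #1001"))
print(coordinator.route("purchasing"))
```

The design choice the sketch highlights is that memory belongs to the coordinator, not to any one agent, which is exactly the property point solutions on separate platforms lack.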
There’s an important question about AI that almost no one is addressing yet. What happens to the accumulated intelligence when a vendor relationship ends?
If AI is handling tasks people used to do, and those people have moved on, the platform becomes the primary source of truth for that knowledge. And it’s not just institutional knowledge we’re talking about: it’s workflow configurations, prompt logic, and performance history, all built up over time in the vendor’s system. Some of it you can take with you. Much of it, you can’t.
The knowledge your people built now lives inside the systems you chose. And if that relationship ends, the question isn’t just who owns the software. It’s what happens to everything your organization has built inside it.
The people side and the platform side of AI adoption may seem like separate challenges, but they lead to the same place. The knowledge your people build ends up inside the platforms you choose. Who owns that knowledge? Who validates the output? Where does the data live, how does the team adapt, and what happens when a vendor relationship ends? These are the questions that will determine whether your AI investment compounds or collapses, and the companies that get ahead of them early will be better positioned than the ones trying to solve for each side independently.
If you’re a CEO or senior leader reading this, here are three things you can do now:

- Articulate the specific business problem AI is supposed to solve, and make sure that “why” is communicated up and down the organization
- Name clear owners for both sides of implementation, with HR co-owning the people side alongside IT
- Audit your current and prospective AI vendors for connectivity, and ask each one what happens to your accumulated knowledge if the relationship ends
What makes all of this more challenging is that AI isn’t a set-it-and-forget-it exercise. As AI moves from early experiments into the core of how companies operate, the implications of these choices compound even further.
None of these is a one-time decision. The people you train on AI today will be teaching those systems how your business works. The platforms you select will determine whether that knowledge grows across the organization or stays locked in silos. If either side is neglected, the other one can suffer. A well-chosen platform doesn’t help if your people don’t trust it, and a well-prepared team can’t do much if the systems they’re feeding their expertise into can’t talk to each other.
This is exactly the kind of work we do at TechCXO. Our fractional executives sit alongside leadership teams to help them think through the people side and the platform side together, from org design and governance to vendor strategy and change management. If any of this hit close to home, we’d love to talk it through.