(00:00-00:21) Daria Rudnik
Don't expect AI to just do everything for you. AI agents have their own roles. 40% of AI initiatives will be cancelled next year because there is a lot of trial and error going on. When AI gives you some output, never take it for granted. What makes a good team is when you have clear roles and a shared purpose.
(00:21-00:44) Avetis Antaplyan
Welcome to another episode of the Tech Leaders Playbook. Today we will be talking about how AI isn't just changing workflows, it's reshaping how leaders think, decide, and define human value inside their organizations. In this episode, we will unpack the psychological impact of AI on our teams, the real risks of cognitive offloading, and what it actually takes to build an AI-empowered, resilient organization.
(00:44-01:02) Avetis Antaplyan
Joining us today is an expert, Daria Rudnik, leadership and team coach, former chief people officer and author focused on human AI collaboration and the future of work. Welcome, Daria. Thank you so much for being here. Thanks for having me here. It's a pleasure to have this conversation with you.
(01:02-01:26) Avetis Antaplyan
Very, very important topic. With the rapid growth of AI, we really need to discuss how this is affecting our teams, right? There's a lot of excitement around it. There's a lot of fear around it, unfortunately. You know, articles are coming out about, oh my God, in a year and a half, where will I be? Will I have a job? And so, first of all, how did you get into this world? Why did you choose to tackle this problem?
(01:28-01:53) Daria Rudnik
Well, it kind of came naturally to me because I work with teams. I help leaders build self-sufficient, strong teams. And at some point, I started noticing, obviously, because more and more teams are using AI in their work, that it actually influences how they collaborate, how they think. I've experienced it myself. I don't know if you have, but working with AI too much, at some point, your mind just goes blank. You stop thinking. Yeah.
(01:53-02:12) Daria Rudnik
You stop thinking, that's right. And I was curious why this was happening and how we can better operate, work, and collaborate with AI. So that's why I started exploring this: how we as human beings work with AI, but also how that impacts our work processes, team dynamics, and how we collaborate.
(02:13-02:32) Avetis Antaplyan
That makes sense. Very important. You served as a chief people officer in tech and telecom, I believe. What early signals did you see that AI would fundamentally change how we work, how teams operate? Well, at that point, there was no AI. But what I see now is...
(02:33-02:55) Daria Rudnik
Well, first of all, it's AI agents joining the workforce, and how they would integrate. There was a lot of fear. Would they replace us? Would they support us? What is a team now? Is a team only humans, or do we see AI agents as part of the team? And there is research that actually shows that
(02:55-03:24) Daria Rudnik
when AI agents are perceived as team members, a different dynamic and a different kind of collaboration happens than when AI is an outsider. Then it's not a tool, it's an outsider, someone, something that's observing how we work. And that influences things, not in a positive way, because it's better when we're all in it together and collaborate together. But also we need to be mindful that
(03:24-03:52) Daria Rudnik
AI outputs actually influence how we then work and operate, even our language. The terms AI gives you, we then use in our team settings and team conversations without even realizing that we do that. So bringing AI in, which is a great tool, very helpful, but being mindful about how it influences the way we think, talk, and collaborate is super critical.
(03:52-04:10) Avetis Antaplyan
Daria, this reminds me of how companies use consultants, for example, right? So when we deploy our consultants at a company, let's say, and if they're treated as an outsider, like you described, where they're like, oh, you're just a consultant. You don't go to all these meetings. We don't share everything with you.
(04:10-04:33) Avetis Antaplyan
It typically stays very arm's length and not a lot of collaboration happens. When they're embedded into the teams as if they're just part of the team, it's night and day. Same thing goes here internally. We hire consultants. We embed them into our existing teams. And that's when the love happens. I'm seeing the same thing with AI agents, right? Treat them as your team members so that you can...
(04:33-04:50) Avetis Antaplyan
The trust can build and collaboration can build, and they're part of our team versus the thing that's going to replace our team. Right? Exactly. Exactly. And like with the consultants, we don't expect them to make decisions on our internal processes.
(04:50-05:17) Daria Rudnik
We expect them to provide guidance, to support, to brainstorm together. But there are certain roles: roles of team members, internal roles, roles of consultants as team members. Same thing with AI. Don't expect AI to just do everything for you. AI agents have their own roles. Daria, what are you seeing from a, I guess, people and culture perspective that leaders are underestimating about AI's impact?
(05:18-05:36) Daria Rudnik
Well, the most obvious thing, and I think many of us see this, is that AI is perceived as a tech shift or tech revolution or tech change. And what we're discussing now is what tools we should use or how we can train people to use those tools,
(05:36-05:57) Daria Rudnik
rather than how that impacts our work processes, because the work process needs to be redesigned, or where AI would fit best. And like you said, fears, how people feel about AI, because there are certain levels of AI implementation. And the first one is psychological acceptance.
(05:57-06:20) Daria Rudnik
Some people love AI, but they don't trust their managers. They don't think AI will take their job; they think their manager will think that AI can take their job. Wow. And if you don't have that level of trust between your team members and your manager, then you have a problem. So building that trust, that you're not bringing in AI to replace people, is the first step.
(06:20-06:49) Avetis Antaplyan
And then the next step: how that impacts how people work with you, how that impacts collaboration, and obviously how that impacts organizational dynamics and organizational processes, how you redefine and redesign them. Daria, in our culture, it's already embedded that we're building AI to make people more efficient, right? But do you recommend leaders actually go and openly say, hey, guys, the AI we're building is not going to replace you? Because then they say, well, why would you say that? Of course it's not.
(06:49-07:19) Avetis Antaplyan
Like, how do you suggest leaders approach this concept so it's organic, but the team knows, hey, this is done for you, not to you, right? We're not trying to build something to eliminate you. We're trying to make you, one, happier by doing less BS stuff, frankly, stuff you're not good at. And two, we're going to help you do more things faster, and fewer of the things you don't like, so you can be more successful. But you would think that's a given. So tell me how you recommend leaders talk about it organically.
(07:20-07:42) Daria Rudnik
That is really a great question. And it's not just about AI. It's a problem many leaders face when they face some unexpected change, like mergers and acquisitions. What do you say to people? You cannot promise them a happy life ever after, because you don't know what's going to happen. And we don't know. And leaders don't know. When they bring in AI, yes, probably...
(07:42-08:03) Daria Rudnik
we will have to let some people go. Or maybe not; you don't know that yet. So instead of overpromising and being overly positive on one side, or being very secretive and not saying anything on the other, be open. Hey, AI is here to stay. We all need to learn how to work with it. Let's learn together.
(08:03-08:23) Daria Rudnik
There might be some consequences, but even if that happens, you're still learning a skill. You'll know how to work with AI. You can find your strengths and amplify them with the help of AI. So be very open and transparent that there's a lot of unknown, we don't know what's going to happen next, but we're here to figure it out together.
(08:23-08:34) Avetis Antaplyan
You said during our conversation that people are already overwhelmed before AI even enters the picture. What are you seeing in terms of fear, fatigue, and resistance to this stuff?
(08:36-08:53) Daria Rudnik
Well, that is true. We live in a world where burnout is on the rise, and it especially impacts leaders and mid-level managers, who are carrying the load of all the organizational discrepancies: poor work processes, unclear decision-making, unclear roles.
(08:53-09:23) Daria Rudnik
They kind of fill it all in, and now they're told: okay, here is another thing, you need to handle AI transformation, go for it. Well, obviously, the most important thing right now is to take a step back and say: okay, what kind of decisions are we trying to make here? What kind of processes do we want to update with the help of AI? What are the things that we're not going to be touching, not going to be doing, not going to be discussing? We can leave those aside, because otherwise someone will break and burn out, or something else will happen.
(09:23-09:45) Daria Rudnik
It's very hard to have this conversation. I understand that, because we're in this constant race, chasing one thing after another, trying not to fall behind. And AI makes the speed even higher. But being able, again, to take a step back and make a decision about how we make decisions will make it easier in the long run.
(09:45-10:11) Avetis Antaplyan
So have a plan in mind of what you're actually trying to achieve, right? Have a goal in mind. Have a goal in mind. Yeah. What are you trying to achieve? Yeah. Why do you think some teams fall into denial and resist and push back while others jump into experimentation and actually utilize it and love it and can't live without it? I mean, our team now can't live without some of the AI that we've bought and built.
(10:12-10:41) Daria Rudnik
I'll tell you a story that kind of illustrates what you're saying. It's a story about human nature. People like change. I mean, we grow up, we learn, we get married, we have kids. There's a lot of change, and people happily go through it. What we don't like is being changed. So when AI comes, people, first, they have some of their favorite processes or tools or whatever they're using.
(10:41-10:43) Daria Rudnik
They have their KPIs they don't want to change.
(10:44-11:13) Daria Rudnik
Or if someone has started to experiment with AI tools, again, they get used to them, they like them, and they don't want to give them up. It's very hard for people to let go of the stuff they already use. So again, transparency is key here. I was working with a company that said, yeah, go ahead, play with AI, do whatever you want, which is probably a good thing at the beginning. But again, some teams were jumping in, playing, building their own tools.
(11:13-11:34) Daria Rudnik
Some teams were slower on that. Some teams were somewhere in the middle. But when the time came to unify and create a unified approach, what kind of tools we're using, what the metrics are, how we measure success, whether it's a good tool or not, they all started to protect their own tools, the ones they'd chosen. Yeah.
(11:35-11:58) Daria Rudnik
Being transparent upfront means saying: hey, now is the stage for experimentation, but then you'll have to let go of some of your tools. We'll have to come up with a solution. You will be part of the conversation. We need your input to define what tools we're using, what the frameworks are, what the KPIs are. But be cautious, and when you try out and choose tools, understand that it's not forever.
(11:59-12:29) Avetis Antaplyan
It can't be a bunch of rogue agents running loose, right? It's almost like if we hire someone, they can't just randomly go hire their friends and then say, oh, no, no, don't worry, these are my guys, I picked and chose them. It's fine for you to experiment, but just know that within a month or two or three or whatever, when we're ready, we're going to downsize, centralize, minimize, and just go with the company-approved tools. Yeah.
(12:29-12:54) Avetis Antaplyan
Okay, that makes sense. I mean, you kind of answered it, but of the leaders you've worked with, which ones have done a really good job of calming AI anxiety, versus those that amplified it and made people really nervous? One thing was, I think, transparency, which was a really good point. Exactly. Yes. Transparency, and bringing multiple people in to have this conversation.
(12:54-13:23) Daria Rudnik
There was a question: should we hire a chief AI officer, or someone responsible for AI transformation? And in most cases, from what I'm seeing, the answer is no. I mean, you probably should have someone who knows AI and how it works. But it's a team, a team working together to define governance for AI, the process, and the whole change management process, including HR,
(13:23-13:39) Daria Rudnik
finance, legal, R&D, whatever, anyone who will be impacted by this change. So being transparent and bringing all those people together for this collaborative work helps. And I'll tell you a story.
(13:39-14:03) Daria Rudnik
Again, there was a company, and they told all their people: go experiment with AI, because they wanted to hear their voices. And one person volunteered: I want to learn more about AI agents. And they said, okay, go ahead. And they learned about AI agents and built an AI agent that saved a lot of money for the company, just because the company was open and transparent and heard everyone who had something to say about it.
(14:03-14:33) Avetis Antaplyan
Wow. Okay. So a good start is just letting their voices be heard. Make sure they're part of the conversation. Make sure they feel secure about the fact that this is going to help them, and be transparent about what it could be in the future. Because that's important too, because it's not all amazing things. Some of it could affect people's jobs, right? I mean, some of the reports from Microsoft and others say, hey, it could take 50% of jobs within the next 18 months. That's kind of scary to people.
(14:33-14:48) Avetis Antaplyan
You know, how do you address it when people are concerned? How do you respond when people come in and go, look, cut the shit. I know what's happening. I read the things. I'm in tech. We know what's coming next. How do you address stuff like that?
(14:50-15:07) Avetis Antaplyan
Because we don't know the future either. So like you said, if we overpromise and don't deliver, now we look like liars, right? But if we don't say the right things and we scare them, they're going to leave before there's even an issue. And I have a story, but I want to hear your thoughts on this.
(15:09-15:28) Daria Rudnik
I'm kind of repeating myself about transparency and honesty. But we've seen situations where companies are overexcited about AI, firing people, letting people go, and then hiring them back, because AI is not working without experts. AI is not working without those people. So it's about trying to find this balance. And I know a lot of
(15:28-15:50) Daria Rudnik
companies, about 80% of companies, I think, according to Stanford research, are using it, but 98% or something are not seeing a return on investment on AI. So again, there's a lot of noise, there's a lot of experimentation, we're still figuring out how it works. So it's better to be transparent and collaborative on that.
(15:50-16:18) Daria Rudnik
Daria, you talked about 80% of companies using some form of AI, yeah? And then you gave a second percentage. What was that? 98% of companies are not seeing a return on investment. Not seeing a return on investment. And Gartner predicts that about 40% of AI initiatives will be cancelled next year. Because, again...
(16:18-16:44) Avetis Antaplyan
There is a lot of trial and error going on, and we still don't see how it increases our performance. We don't see a return on investment yet. Daria, I think one other thing that could be happening, hence why there's not a lot of ROI, is that through AI and automation we've done a really good job of freeing our teams up. But sometimes that's just a...
(16:45-17:07) Avetis Antaplyan
how do I say this nicely, a much-deserved, well-earned break for them, right? It doesn't always lead them to do more things. And so if we set KPIs too early, it's too early. It's like, what, you already set KPIs? We don't even know what this platform can and cannot do. But if you don't set them fast enough,
(17:07-17:27) Avetis Antaplyan
then all of a sudden what happens is the team says, oh, of course, I knew it. Now they just want us to do more work. Any thoughts on that concept? Because I talk to a lot of people, a lot of business owners, a lot of founders, a lot of leaders, and this is what they're struggling with. It's like, well, I spent all this money, built all these tools.
(17:27-17:44) Avetis Antaplyan
And my team is happier because they have to do fewer things, especially things they didn't love doing. But I'm not necessarily getting more work out of that. I thought I would free them up to go and do more of what we need them to do. But they're not necessarily doing that. They're just working a little bit less.
(17:46-18:08) Daria Rudnik
Well, that's not an AI problem. That's a team design and organizational design problem. And what I'm constantly saying, and I'll repeat it again and again, is that having a conversation with your team is not a one-off thing. It's not just that we had this conversation once and then everything goes off somewhere else, everybody doing their own thing. Sure. Sure.
(18:08-18:37) Daria Rudnik
It's not a conversation you do once a year; it's an ongoing conversation. Like with the team I mentioned that had many AI tools across different units: what they're doing now is having regular conversations about KPIs and metrics and frameworks for AI. They agree on something, they go try it out, they see how it's working or not working, and they come back together. It's an ongoing conversation. So when you need to set up the KPIs,
(18:37-19:02) Avetis Antaplyan
you need to have this ongoing conversation with the team. What's working? What's not working? And most importantly, what is the main goal you're trying to achieve? What are you aiming for? I like it. Daria, are you seeing teams start to adopt AI language and suggestions without questioning them? Like they're talking like they're AI now, they're writing like they're AI. And what's the risk there, if you are seeing it?
(19:02-19:23) Daria Rudnik
That's a very interesting question. I hadn't seen that myself, but I just read a report, a piece of research about how teams work with AI, and I found it very interesting. It says that when teams are working with AI and AI suggests frameworks or terminology, teams start to use it,
(19:23-19:40) Daria Rudnik
even if that's not the best option. Kind of by default, they start using it. And when AI is out of the picture, when they're not working with AI, they keep using those words and those frameworks. They use them as a base.
(19:40-20:10) Daria Rudnik
And they don't think about it. They're doing it subconsciously. So that's an interesting phenomenon I recently found out about. I hadn't been looking for it, but I'll be looking out for it now. The research says that humans keep using AI's language even when it's not very productive. So here's what I say about teams using AI. There are three main principles. First is...
(20:10-20:27) Daria Rudnik
Do your thinking first, before working with AI. Think first, have your brain engaged, have your own thoughts coming in and out. Then ask AI. Then, when AI gives you some output, never take it for granted.
(20:27-20:56) Daria Rudnik
Use your brain power, and it's better if you work together as a team and discuss it as a team, because together you can challenge yourselves, challenge each other, and challenge AI's outputs. And the third one is: never let AI make decisions for you. If teams follow that, they're more mindful about the language and the AI outputs, and they're not producing this work slop we're seeing more and more nowadays.
(20:56-21:18) Avetis Antaplyan
Daria, one way to look at it is imagine if AI was your employee, right? And you invited your employee in and you said, hey, I want to run an idea by you. And then whatever they said, you're like, great, that's what we're going to do. That's what it sounds like, right? It's like you bring a junior associate in. You're like, hey, we want to decide on our strategy for 2026. What do you think?
(21:18-21:41) Avetis Antaplyan
They're so well-spoken and so smart. And they say it, they write it out. You're like, this is brilliant. Let's go. Let's execute. The problem is that it's just, you know, it's not there yet. So I see what you mean. So it's like, take a step back, think first, guide it, lead it, let it just be a sounding board versus being the first question, the first idea, the first research, first everything.
(21:41-22:11) Daria Rudnik
Daria, how can leaders create guardrails so that AI enhances intelligence instead of dulling it, beyond what you just said? What else should we do for our teams so they don't over-rely on AI? I like the recent Stanford research, and I see people using this without even knowing the research exists. It's about the human agency scale: Stanford described five levels of human agency. The first one being...
(22:11-22:41) Daria Rudnik
Well, I'm not sure which one is first and which is fifth. But one is human-only: the area where only humans should make decisions. The second is where you can invite AI in, brainstorm with AI. The third, you have more AI collaboration. The fourth, even more automation. And the fifth is mostly automation, mostly AI, but still with human oversight. Humans are always in the loop. Humans are always there.
(22:41-22:56) Daria Rudnik
And the companies I see succeeding with AI implementation use either these five levels or three levels: no AI, human-AI collaboration, and a lot of AI with human oversight. So understanding, for each process,
(22:56-23:21) Daria Rudnik
what level of human-AI collaboration, what level on the scale, you need helps with making the right decision and the right automation. And again, like you said, not having AI agents walking around doing whatever they want, because that might bring some crazy, crazy outcomes. It's not going to work. It's not going to work. Well, something might work, but we don't want that.
(23:22-23:41) Avetis Antaplyan
Yeah. Many companies are experimenting with AI tools. We talked about it. Some of them are brought in by the team; some are brought in by leadership. And they don't unify them, right? They're almost just rogue agents: an agent for this, an agent for that, an agent for this.
(23:41-23:58) Avetis Antaplyan
What does responsible AI implementation actually look like, to unify them and make them the company's voice? Because remember, companies have core values. And if they truly live by them, they can't just have these rogue AI agents, times ten, running around making decisions.
(24:00-24:22) Daria Rudnik
Just today, I heard a story about two AI agents. One wrote a very offensive blog post about some human, and the other commented on that blog post, or wrote some article about it, also offensive. Wow.
(24:22-24:45) Daria Rudnik
How did that happen? Again, those AI agents were doing something without human oversight. So that's the first rule: no AI agent, no AI tool, does anything without a human in the loop. That's the basic ground rule. So that's a very tight guardrail.
(24:45-25:15) Daria Rudnik
Not every step. We don't need to oversee every step, but we need to have some rules for escalation. Of course, AI will probably violate some of the rules; we know that might happen. Again, we're still learning. But when you have those rules for escalation, and AI comes to you and asks a question when a situation is different from anything that came before, then you have a chance to rule on it. Otherwise,
(25:16-25:22) Daria Rudnik
I mean, for now, I don't see another option other than humans overseeing what AI does.
(25:22-25:47) Avetis Antaplyan
But let me give you an alternative, right? We have more and more companies building $100 million businesses with four or five employees, heavily reliant on autopilot. This is the fantasy: if I have four to ten employees, I'm going to build a $100 million business, which was impossible just a few years ago. Now it's possible. Why would I stop that?
(25:47-26:00) Avetis Antaplyan
Why would I stop? Why? If I have human oversight, it slows everything down. And eventually I need more humans to, you know, approve certain things. How do you kind of
(26:01-26:18) Avetis Antaplyan
balance those two things out, right? Again, there's this drive to go on autopilot and get as productive as possible, and then there's the need to prevent stuff like that, or even worse things, from happening. We could reach a point in two years where an AI agent fires an employee,
(26:18-26:46) Avetis Antaplyan
just black and white, looks at the data and says, you know what, it's not working out, Daria. Your numbers aren't there, and it's too late. And this person is eliminated. And this was a phenomenal, loyal employee who was maybe going through something and just struggling that quarter. But the AI decided to fire them. In that case, complete oversight, human-to-human decision-making when it affects a human's life. But how do you keep other things, other tasks, getting done on autopilot?
(26:47-27:04) Daria Rudnik
I mean, again, it depends on the process. If it's something that influences people's lives, whether it's hiring, firing, or literally lives, like if you're building an autopilot for a vehicle, who is responsible if something goes wrong?
(27:04-27:19) Daria Rudnik
So it's about the level of risk. If it's something no one cares about, then no one cares if AI makes a mistake. But if it's something we do care about, well, yes, there is a cost, and a cost we need to pay.
(27:20-27:50) Avetis Antaplyan
So focus on what the worst thing that can happen is. If it's just something small, no problem, we can fix it. But if it could have a huge impact on people's lives, then we have to be really, really careful. Daria, we talked a little bit about how setting KPIs is very tricky, right? Step one is experimentation. Step two, or at least step three, is probably to start setting some KPIs. What metrics actually matter right now? And how should leaders kind of...
(27:50-28:14) Daria Rudnik
iterate as AI evolves? Well, again, the company I mentioned that went experimenting with AI in different departments is now going back and trying to build those KPIs and metrics. And they do it, again, by getting together and understanding what it is they're trying to achieve. It's basically a QA team.
(28:14-28:40) Daria Rudnik
And what level of mistakes can we accept from AI? They make some assumptions, they run some tests, and then they go to the senior leaders and say: here is what we get. We have this level of automation and this level of human judgment, with 40% mistakes from AI. And we have an option with more human oversight and fewer mistakes. Here's the cost for option one and option two; which do we choose?
(28:40-29:05) Daria Rudnik
And then they choose. And then they move on to something else: other frameworks, the use cases tested. Anything they work with, they go try it out, test it, calculate the cost, and make a decision. And then they can change the decision when the situation changes, because AI is evolving and the business is growing, and sometimes they need to reevaluate.
(29:05-29:17) Avetis Antaplyan
Sure. Within two years, you said earlier, you believe AI agents will function as team members, true team member status. How should leaders prepare today for that reality?
(29:18-29:38) Daria Rudnik
Oh, I don't know how to prepare for that. Having a robot on your team is something that I was not thinking about when I was a kid, although I was watching these movies with robots. What makes a good team is when you have clear roles and you have a shared purpose.
(29:38-30:00) Daria Rudnik
When you have a robot on your team, that becomes even more important. Because what is the role of this AI agent on your team? What is the role of other people on your team? How do you work? How do you collaborate? Again, how do you evaluate AI's outputs? How do you give input to AI? At what stage?
(30:00-30:13) Daria Rudnik
Being very clear on how the team works, on the work processes, becomes even more important with AI. Because if you don't give it enough information, or if you give it wrong information, you'll get something you don't want to get.
(30:13-30:40) Avetis Antaplyan
I mean, forget the office. I think in the next three to five years, we will actually have robots and AI inside of our homes. I mean, I'm talking about advanced. I'm not talking about the AI we have today. I mean, I was at my friend David Yang's house. I don't know if you know him, but he's the guy with the AI house. And he was basically describing the future, near future, where you could be cooking alongside robots.
(30:40-31:05) Avetis Antaplyan
you know, a robot, someone literally walking around helping you in your home, which is, I think, scarier than having them in your office and your business. But that's really, really interesting. Daria, tell me a little bit about your book. You have a book, right? Yes, I do. What inspired you? Yeah. Show us so people can check it out. Okay. Tell us what it is and what inspired you to write it.
(31:06-31:25) Daria Rudnik
Well, again, as I mentioned, I love helping leaders build amazing teams because I believe that we spend so much time at work. We need to have fun. Absolutely. We need to be happy at work. And when we are happy at work, we want to bring our whole selves and then create something meaningful for our customers. Again, it's a win-win-win situation for everyone. Sure.
(31:25-31:55) Daria Rudnik
So this book is for leaders. It's even more important now, although there was nothing about AI in the book; it's all about how to structure a good team. Whether or not you have AI agents on your team, there are certain rules and frameworks that will help you build a self-sufficient team, a team that makes decisions on its own. As a leader, you don't have to be involved in every decision, and you have time for more strategic work, like identifying how AI should work in your organization.
(31:55-32:02) Daria Rudnik
So this book is for every team leader who wants to build a more self-sufficient team and have more time for strategic work.
(32:03-32:29) Avetis Antaplyan
That's what I'm most excited about, right? The things that took so much time. Because we're not a big company, we couldn't have a person owning every piece. Now we can speed things up, and we don't need 100 people to get things done. That's what I'm really excited about. And as the team has more resources to do more work, we can free up the thinkers, the strategic people, to be strategic.
(32:29-32:51) Avetis Antaplyan
That's the one challenge, right? Which is go, go, go, go, go. The more work you do, the more opportunities it creates, and the more work it creates. It's a never-ending process. What do you think, Daria: as AI agents become embedded in our teams and grow more and more advanced, what leadership competencies will matter most in two, three, four years?
(32:53-33:12) Daria Rudnik
Oh, that is a great question. I would say it's clarity: clarity about what it is you want to achieve, what the goal is, and clarity in communicating that goal. A lot of problems, with humans and with AI alike, come down to poorly defined tasks. If you don't explain what it is you want,
(33:13-33:29) Daria Rudnik
you don't get what you want. So being very clear on what the goal is, how the work should be structured, and how we work together as a team, and facilitating this human-AI collaboration, will be the most important skills for future leaders.
(33:29-33:46) Daria Rudnik
So it sounds like clarity and communication skills, right? Human-AI collaboration is about how we communicate with humans and how we actually think with AI: whether we challenge AI outputs, how we make decisions about AI outputs.
(33:46-34:04) Avetis Antaplyan
Interesting. Yeah, because again, you have to tell your assistant, your agent, where you're headed so they're not running rogue. What I've learned with the agents we've built here is that they get smarter and smarter. But you've got to be careful, because they become so biased. They just tell you
(34:04-34:27) Avetis Antaplyan
Like an employee that's been with you forever and wants to please you, they'll just tell you what you want to hear. You're like, that sounds great. It sounds like what I would say. It is what you would say, buddy. Which, again, is good because it's me and my brain, and bad because it's not very creative or innovative, versus when you bring in an outsider, they come with so many ideas, so many fresh perspectives, right? So how do you balance those two things out?
(34:28-34:58) Daria Rudnik
It's like when companies hire for culture fit. I always say, okay, be aware of in-group bias. You kind of hire someone who is like me, and when someone is like me, it's easy to accept what they say, because they say the same things I think. Definitely, I agree with that, but we need to challenge ourselves and bring in different people. Again, we need to be very mindful about what AI does. That's why I never just ask AI, tell me, what do you think? I ask, why is this wrong?
(34:58-35:13) Daria Rudnik
Tell me why it's wrong. What are the risks? It's about challenging yourself, asking AI to challenge you, because it can be great at challenging your ideas. It can be great at finding risks and blind spots. Use it.
(35:13-35:40) Avetis Antaplyan
Daria, I had a guest whose episode just went live a couple of days ago. Her name was Kylie, and her focus was that people's boardrooms and leadership teams end up very similar, because we like hiring people like ourselves, people who think like us and move fast and things like that. What you end up with is a team that's either not diverse at all, or very diverse in how they look,
(35:40-36:07) Avetis Antaplyan
but actually not diverse in how they think. And so what you end up with is people who make similar decisions, right? And then when someone slows you down because they're a different thinker, you get them out of the room, and you put in the people you like, because they're fast thinkers. They feel like you, they think like you. It's the same thing with AI. You almost have to force it into a cognitively different way of thinking so it can challenge you.
(36:07-36:27) Daria Rudnik
Otherwise, you're just getting yourself, an army of yourself out there, and that will create all kinds of bias. And it's interesting how much AI actually shows about how we think, how we work, how we collaborate. Things that are important without AI become even more critical with AI, like this diversity and this challenging.
(36:27-36:46) Avetis Antaplyan
Sure. Sure. What's one piece of personal advice you would give to anyone watching this show who feels uncertain about their role in an AI-driven world? Like, what advice do you have for both leaders and regular people at their desks right now, thinking about the future and what it looks like with AI?
(36:46-37:07) Daria Rudnik
Well, the first thing: I think AI is not about tech. And it's not that hard to learn; just go and chat. It's about how we think, how we work, and how we collaborate. So be open to experimentation, but also be mindful and observant: okay, what's happening right now when I'm interacting with AI? What's happening with my team while we're using AI outputs?
(37:07-37:27) Daria Rudnik
And go one step at a time, and we'll see. I believe in a bright future with AI, but it's not a tech revolution; it's a shift in how we work and collaborate together. Why do you think the future is bright with AI? Because I'm an optimist. I believe.
(37:28-37:55) Avetis Antaplyan
That's good. That's good. You are probably half the population; the other half is completely frozen, scared, right? Which is really interesting. What's your favorite book, Daria, besides yours, that's helped you think about leadership or AI or business in a different way? Okay, I'll be honest. I have two books that I've read, I think, five times. One, don't laugh at me, is Harry Potter. And the other one I liked is The Master and Margarita.
(37:56-38:02) Avetis Antaplyan
What is it? Which is the second one? The Master and Margarita, by Bulgakov. Why do you like that one? Tell us about it.
(38:02-38:32) Daria Rudnik
That's a very hard question. I have no idea. It's not so much about leadership, although it's about life and living your passion, whatever it is and wherever it brings you. And having faith in the future, even when everything is dark around you. Wow, that's really good. That's why you're such an optimist. Daria, if you were to put something on a billboard, what would it say right now? One sentence on a billboard.
(38:33-39:02) Daria Rudnik
I would say that the era of heroic leadership is gone. Now it's time for empowered teams. We need more empowered teams, and we don't need leaders who try to do it all themselves, because you can't. People working together, collaborating together, can create great things. Amazing, that's really good. If you had to give one piece of advice to founders or senior leaders right now who are trying to figure out what to do with AI, what would it be?
(39:02-39:25) Daria Rudnik
Go to your team and talk. Talk together, think together. What's the first question we should be asking our teams? What are we trying to achieve? What's our goal? Get everyone's input. It might be different, it might be the same. And then: okay, what's the next step? And then repeat.
(39:26-39:50) Avetis Antaplyan
I like it. I like it. Well, Daria, this was great. Thank you so much. This was a very powerful conversation about what it truly means to lead in the age of AI. These are very interesting times for leaders who want to build resilient, AI-empowered teams while preserving critical thinking and human engagement. This episode was hopefully packed with actionable insights. Daria, thank you so much for joining us. Where can people find you?
(39:51-40:08) Daria Rudnik
Well, first of all, thanks for having me, Avetis. It was a great conversation; I really enjoyed the questions. I'm very open to connections on LinkedIn, so please reach out to me, send me a message. And you can find me at dariarudnik.com, where you'll also find some downloadable materials to assess whether your team is over-relying on AI.
(40:08-40:31) Avetis Antaplyan
Perfect. We'll put that in the show notes so people can find you easily. Daria, thank you so much. Have a wonderful day. Thank you. Bye-bye. And that brings us to the end of another great episode of the Tech Leaders Playbook. I want to thank you for joining us, and I hope you took away some valuable insights to apply in your professional journey. Don't forget to subscribe on your preferred podcast platform so you don't miss out on the next great conversation. I promise it'll be good.
(40:31-40:55) Avetis Antaplyan
If you enjoyed today's episode, we'd appreciate it if you could leave us a review. Your feedback not only helps us improve, but also helps others discover the podcast. Better leaders mean better working environments. Better working environments lead to happier people. Remember, a rising tide lifts all boats. I'm Avetis Antaplyan, and this has been the Tech Leaders Playbook. Keep leading, keep learning, keep giving, and I'll see you on the next one. Until then, stay inspired, my friends.