AI in The New Era
AI Governance for SMEs: Where to Start? | Panel: Rudnik, Burtscher, Zampirolo

This panel discussion brings together leading experts to explore “AI Governance for SMEs: Where Do You Even Start?”


Small and medium-sized enterprises (SMEs) often struggle to implement AI governance—caught between complex compliance frameworks and risky ad-hoc approaches. This session provides practical, real-world guidance on building governance systems that are scalable, effective, and tailored for SMEs.


🎤 Panel Speakers:

Daria Rudnik – Founder & CEO, Aidra.ai | ex-Deloitte

Felicia Burtscher – Senior strategist in AI innovation, ethics, and EU AI regulation; contributor to EU AI Act standards

Giorgio Zampirolo – AI governance advisor helping organizations implement structured, accountable AI adoption


🔍 Key Highlights:

Why AI governance is critical for SMEs

Challenges with traditional enterprise frameworks

Building lightweight, practical governance models

Balancing innovation with ethical responsibility

Creating guardrails without slowing down growth

Translating AI regulation into real-world implementation

Moving from AI experimentation to structured adoption


🎤 Part of the AI in The New Era Conference (April 15, 2026), a global platform for AI innovation, governance, and leadership.

We explore how SMEs can approach AI governance without getting overwhelmed by enterprise-level complexity.

  • SMEs need practical and scalable AI governance, not heavy bureaucracy
  • Effective governance balances innovation, accountability, and ethics
  • Clear guardrails help organizations adopt AI safely and confidently
  • Leaders must move from ad-hoc experimentation to structured implementation
  • AI regulation becomes valuable only when translated into real operational practices
(00:00-00:19) Daria Rudnik
Most small and medium enterprises are either paralyzed by AI governance or ignoring it completely. Why? And is this a technical problem, a cultural one, or a structural one? That's what this conversation is about: AI governance for SMEs. Where do you even start?

(00:19-00:41) Daria Rudnik
We are AI Leaders Compass, and AI Leaders Compass brings together practitioners from across the AI transformation spectrum: governance, strategy, leadership, and human collaboration. Together, we focus on one question: how can organizations adopt AI responsibly and make it work in the real world?

(00:42-01:07) Daria Rudnik
And I want to introduce our speakers today. Felicia Burtscher, senior strategist at the intersection of AI innovation, ethics, and regulation. She has contributed to the harmonized standards for the European AI Act and built certification systems across Europe, helping organizations translate regulation into practical compliance and leadership strategies. Welcome.

(01:09-01:26) Felicia Burtscher
And welcome Giorgio Zampirolo. He's a strategic advisor and AI governance specialist working with boards, executives and institutions to turn AI complexity into clear strategy, responsible governance and informed decision making. He focuses on helping organizations move from experimentation to structured, accountable AI adoption.

(01:27-01:56) Giorgio Zampirolo
And finally, Daria Rudnik. She's a team architect and executive leadership coach, the author of Clicking and co-author of The AI Revolution. She works with leaders globally to build teams that stay strong, self-led, and effective under pressure.

(01:56-02:22) Felicia Burtscher
She helps organizations integrate AI into ways of working without losing human judgment, ownership, and performance. Okay, so let's jump right into the discussion. So, Giorgio, before we talk about how to govern AI, why does governance even matter for a small business? And what's actually at risk if an SME just wings it?

(02:24-02:44) Giorgio Zampirolo
Thank you. That's a great question. First of all, when we talk about AI we are looking at scaling, so operationally improving performance. We have to be careful about scaling the right things. One thing small and medium enterprises risk is to eventually

(02:45-03:07) Giorgio Zampirolo
scale the wrong decisions, so using technology to accelerate the problems rather than the solutions. To solve this, we need to face the problem of responsibility: making clear who's responsible for decisions, which side of the decision-making is given to technology, and which side stays with humans.

(03:07-03:37) Giorgio Zampirolo
So it's about responsibility and governance. That's the core danger we need to face. There are obviously multiple consequences of this situation. One is the legal aspect: if technology is taking decisions, who's actually responsible for the wrong decisions and the problems related to them? And also the risks, because what I see in my work and advisory activities

(03:37-04:04) Giorgio Zampirolo
is a constant attempt to integrate AI without an organized, clear path to governance and management. So the risk of using this technology builds over time and spreads in different directions within the company. What we can see is that the approach to control the situation, to minimize the risk, is to

(04:05-04:33) Giorgio Zampirolo
identify the use cases, to have a conversation around what's being used in the company at the moment, why we're using this technology, and what the results are. So you have cycles of attempts, some measurement of results, and a conversation about trying different approaches. So mapping decisions is critical. Assigning ownership: who's responsible for what?

(04:33-04:59) Giorgio Zampirolo
What is the path, what is the escalation path to talk about the consequences of right and wrong decisions? Structure, therefore, is a key point for small and medium enterprises. So we need to define the decision path, define the rules, eventually discuss the rules, evaluate, and modify, because we need to take into account that technology is changing very fast.

(04:59-05:11) Giorgio Zampirolo
So our process needs to adapt very swiftly to the environment. Guidelines are great, but they are always developing together with the tools and the processes.

(05:12-05:36) Giorgio Zampirolo
So my advice for small and medium enterprises is to transition from experimentation, which is the initial engagement with the technology, to scalable adoption: developing clear guidelines, having decisions mapped out, and rolling these rules out throughout the organization.

(05:37-06:03) Daria Rudnik
And I have a story to support what Giorgio has just said about how important it is to have AI governance and agree on who's doing what. It's a story about a scale-up company that introduced AI at the pilot stage. They were trying different AI tools. But when it came to actually scaling, using AI across the whole organization, and understanding what was really working, they didn't have any...

(06:03-06:27) Daria Rudnik
governance system in place. They didn't understand what they were measuring, or how to tell which tool was working and which wasn't. And they actually got stuck, because the teams using the AI tools they liked said: okay, I like this tool. I picked this tool. I invested time and energy into this tool. I don't want to give it up. I want to keep using it. And I like the KPIs. I like the results.

(06:27-06:50) Daria Rudnik
So to make it work, having governance before the pilot, meaning clear guidelines and rules before the pilot stage, is very important. And the only thing that helped them, and will help any other organization adopting AI amid the fast pace of technological change, is that they have meetings, collaborative discussions: okay, what is it we're trying to achieve?

(06:51-07:04) Daria Rudnik
What is working? What's not working? Having those conversations from the start will really help you build a culture of continuous learning and stay flexible about which tools you're using.

(07:05-07:34) Giorgio Zampirolo
Thank you, Daria. I really like this idea of working in teams and sharing approaches. It's very important to think as a group of different people with different perspectives, because there is not one approach that's going to work. People need to be comfortable, need to be allowed to try and to share their results. So teamwork, communication, and relationships are really key in this environment.

(07:34-07:46) Felicia Burtscher
Felicia, do you want to add something? Okay. Is governance only about protection, or is there also a competitive advantage in getting this right for SMEs?

(07:47-08:05) Giorgio Zampirolo
I believe it's both, because we need to look at the inside environment and the outside environment. And obviously, alignment is very important. So governance is a strong tool to develop the right approach to using AI.

(08:05-08:25) Giorgio Zampirolo
I think the results are shared between the outside and the inside, so I don't see either one prevailing. It's a strategic tool, but at the same time it's a tool to avoid risks and dangers. And with the AI Act, there is also strong regulation coming.

(08:26-08:47) Daria Rudnik
Okay, we know governance is important, but Felicia, please tell us, how do you actually make it happen? How can small and medium enterprises create this governance structure for their organizations? Okay, so first of all, I think one of the biggest mistakes SMEs make is assuming they need a full governance framework from day one.

(08:48-09:08) Felicia Burtscher
I think this is a work in progress and you can start small. In reality, governance starts with awareness and visibility. So the first step, I would say, is simply mapping where AI already exists inside the organization. It might sound trivial, but

(09:09-09:37) Felicia Burtscher
I think it's really powerful. So many SMEs then discover that AI is already embedded in multiple tools: in CRM systems, marketing automation platforms, analytics software, and customer support chat systems where AI is built in. And on top of that, employees often experiment with generative AI tools on their own. So the first governance action

(09:37-09:57) Felicia Burtscher
is essentially a landscape scan: identifying where AI is present, how it is used, and what data actually flows through those systems. And then the second step is really establishing clear ownership,

(09:58-10:18) Felicia Burtscher
because governance fails when responsibility is distributed but accountability is unclear. So SMEs don't need large committees.

(10:18-10:35) Felicia Burtscher
You need one accountable person who understands the AI landscape and can coordinate decisions. So you could have an AI governance lead or AI officer. And the third step then is defining

(10:36-11:00) Felicia Burtscher
simple rules of engagement. For example: what type of data can be used with external AI tools? Which AI systems influence decisions affecting customers or employees? And when is a human review mandatory? I think these three simple principles already create the foundation for governance.
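The three steps Felicia describes, a landscape scan, one accountable owner, and simple rules of engagement, can be captured in something as small as a shared spreadsheet or a short script. Here is a minimal sketch in Python; all field names, the `needs_review` rule, and the example entries are illustrative assumptions, not a standard schema.

```python
# A minimal AI-tool register sketching the three governance steps:
# (1) landscape scan, (2) one accountable owner, (3) rules of engagement.
# Field names and rules below are illustrative, not a standard schema.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                    # the tool discovered in the landscape scan
    owner: str                   # the one accountable person for this tool
    data_categories: list        # what data actually flows through it
    affects_people: bool         # influences decisions about customers/employees?
    human_review_required: bool  # is mandatory human sign-off configured?

def needs_review(record: AIToolRecord) -> bool:
    """Example rule of engagement: any tool touching personal data or
    affecting decisions about people requires mandatory human review."""
    return record.affects_people or "personal" in record.data_categories

register = [
    AIToolRecord("crm-assistant", "ops-lead", ["personal", "sales"], True, True),
    AIToolRecord("draft-helper", "marketing-lead", ["public"], False, False),
]

# Flag tools that the rule says need review but aren't configured for it.
violations = [r.name for r in register if needs_review(r) and not r.human_review_required]
print(violations)
```

Even at this level of detail, the register makes the panel's point concrete: once every tool has an owner and an explicit review rule, the governance conversation has something to anchor on.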

(11:01-11:28) Daria Rudnik
I have a follow-up question. Like Giorgio just said, things are changing, things are evolving. So for small and medium companies, how can they know that they are on track? Six months from now, how would they know that everything's working right or that they need to change something? So I would say the best signal is cultural. So if teams...

(11:28-11:45) Felicia Burtscher
begin asking questions about AI outputs rather than just accepting them automatically, and leaders in the organization discuss risks and opportunities openly.

(11:46-12:15) Felicia Burtscher
I mean, there are AI tools coming out on a daily basis. And when they're introduced through conversation rather than informal experimentation, when you really have an open discussion about that, I think at that point governance has moved from just documentation to real behavior inside the organization.

(12:17-12:47) Daria Rudnik
I think this is a good indicator that something is changing culturally. And apart from this AI landscape in the organization and very clear ownership, is there anything else any SME should do, regardless of size or industry? Yeah, so...

(12:48-13:07) Felicia Burtscher
I mean, as I said already: visibility, really knowing where AI is used inside the organization and what data it uses; then, second, accountability, someone who is responsible for oversight; and

(13:08-13:28) Felicia Burtscher
third, decision clarity: understanding where AI can assist and when humans must decide. I think without those three, governance becomes theoretical rather than operational.

(13:29-13:55) Giorgio Zampirolo
Yes, I think it would be interesting to talk about culture, because it's easy to say we need to change the culture and embrace this technology. But in the end, conversation is key. And the hierarchy within the company might not actually reflect people's experience with AI. So it's interesting to share different approaches, compare the results, and eventually look for help from the people around you.

(13:55-14:22) Giorgio Zampirolo
So this might not reflect the hierarchical structure. I think it's very interesting to open up this discussion and try to support people in opening up and sharing different approaches. This is key, and I think it's never-ending work. Developing a different culture is a big task, so it's very important that all the people are committed

(14:22-14:37) Giorgio Zampirolo
and that the company embraces this change and tries to support open sharing of ideas and experimenting together with different approaches. Thank you, Felicia.

(14:39-15:05) Giorgio Zampirolo
Thank you. I think I have a question for Daria. We spoke about governance, but we need to dive deeper into how to make it stick. How can we actually enforce this governance and use it on a daily basis? We can build the governance structure, but then we have to hand it to real people

(15:05-15:33) Daria Rudnik
with real resistance, real habits, and real fears. So how do you make governance something teams actually live by? Well, that's a great question. And like you both mentioned, Felicia and Giorgio, culture is very important. The way we communicate and collaborate is very important. What I've seen teams fall into is going full scale into AI.

(15:33-15:55) Daria Rudnik
I was working with a customer success team, and they were implementing AI across almost all of their processes. They had their calls recorded. They had transcripts from those conversations uploaded to the CRM. They had AI to generate insights for the CRM and items for the backlog.

(15:55-16:20) Daria Rudnik
They had AI to generate an agenda for the next conversation. And it was cool at the beginning, but at some point they started to feel: okay, what am I doing here? What is actually my role? What am I supposed to be doing when AI is doing all the work? And it's not only that. When they started to have a conversation about what to prioritize in the backlog, what should be a priority, what's more important for our customers,

(16:20-16:38) Daria Rudnik
those people couldn't say. They didn't feel it. They had lost connection with the customer. So when we're using AI in the wrong way, and there is a wrong way to use AI, we lose connection. We lose engagement. We lose productivity. We lose everything.

(16:38-17:07) Daria Rudnik
So what I like helping people and teams with is thinking about AI implementation at three levels. The first level is how I feel about AI, because we know there's a lot of fear. When you say, hey, let's introduce great AI tools, people start to think: okay, am I preparing a replacement for myself? Will my manager think that I am replaceable now that I have that tool working for me, or with me, or instead of me? And so people try to stay away from it.

(17:07-17:22) Daria Rudnik
Some people think and feel that AI is too technical: oh, I'm not a tech person. I'm in finance, I'm in HR, I'm in marketing. I don't know how to use that. I don't feel comfortable using those tools.

(17:22-17:41) Daria Rudnik
So the first layer is building trust, transparency, and clarity around why we are using AI and what we are trying to achieve with it. Will it end up eliminating some of the roles? And if it will, be very clear and open about it. Say, yes, we might have that.

(17:41-18:01) Daria Rudnik
But here is what you get instead. Here are the skills you're going to learn. Here is how you can improve your visibility and your market value as an expert in AI transformation. Here are the great things that will happen to you if you use AI and help us implement it. So the first layer is how people feel about AI.

(18:01-18:24) Daria Rudnik
The second layer is how we think with AI. Like I mentioned with the customer success team, they delegated too much to AI and the brain stopped working. There was a study called Your Brain on ChatGPT, which tells us that if we use the wrong cadence of working with AI, for example, if we ask AI to give us something,

(18:25-18:54) Daria Rudnik
like an article about something, or a presentation, or backlog items, without thinking about it first, our brain becomes disengaged very quickly and we lose ownership of the product. But if we think about it first, create a very rough draft, and then ask AI to support us and help make it better, our brain stays engaged. And with this customer success team, what they started to do is

(18:54-19:14) Daria Rudnik
talk about their feelings and insights from the conversations with their customers and accounts first, and then use AI to generate better transcripts. So they kept this in their brains. The second layer is how we think with AI and how we collaborate with AI. And the final, third layer is how we work with AI:

(19:14-19:32) Daria Rudnik
how we integrate AI across different processes. Like Felicia said, you already have AI in many, many of your tools, because it looks like every tool has AI now. So you have it there. But how do you make sure that

(19:32-19:52) Daria Rudnik
you know the ownership? You know which decisions are made by humans and which are made by AI. How do you know what kind of human-AI collaboration mode you want to use? Are you using AI to brainstorm, to get data, or to

(19:52-20:21) Daria Rudnik
create something for you? So what is the goal, what is the role of AI in your work right now? Think about the work processes of your organization, where AI lands, and what its role is there. So three levels: how people feel about AI, how people think with AI, and how people work with AI across all your work processes.

(20:24-20:42) Giorgio Zampirolo
Very interesting, Daria. I would also add the idea of harmonizing the forces within the company, because what I see many, many times is that some people adapt quicker, while others need a little more time, experience, and small projects.

(20:42-21:07) Giorgio Zampirolo
So it's very important for companies to monitor and support all the different parts of the organization. Training is also very important. Helping people build even just the vocabulary to talk about these tools and share approaches matters a great deal. So I think providing a basic foundation to everybody is key to organizing the work around it.

(21:07-21:23) Giorgio Zampirolo
And expectations are an interesting point. People would love to jump from zero to one hundred with AI and become an AI-first company and a champion of productivity. The reality is that with all new tools, we need to get used to them,

(21:23-21:51) Giorgio Zampirolo
make different attempts, eventually do something wrong, learn from the mistakes, and then try again. This idea of achieving everything easily is not true; it doesn't work and just creates a lot of problems. So it's very important to approach a new tool with the resources, with the training, and with the idea of allowing people to make mistakes, improve, and share their experiences. I just wanted to add this.

(21:52-22:18) Daria Rudnik
We had a great conversation, and I really hope it helped SMEs. We have two questions from the audience, and I want to ask you those questions. So, Felicia, the first question is for you: what are the first practical steps SMEs should take to establish effective AI governance? So, as I already mentioned, take the three actions I described. First, map what you already have.

(22:19-22:31) Felicia Burtscher
So list every AI tool that your team is using today. Second, assign the owner: one accountable person for AI decisions.

(22:32-22:59) Felicia Burtscher
And third, have a really honest conversation. Ask your team, touching upon what you just said: what worries you about how we're using AI today? The answers show exactly where governance needs to start. And this, I would say, is the beginning of the foundation you should start off with.

(23:01-23:29) Giorgio Zampirolo
Thanks. And Giorgio, the second question is for you: how can organizations balance innovation speed with ethical and regulatory responsibilities in AI adoption? Thank you. The idea of speed is very important, so we need to watch out for being in a reactive mode: technologies change, therefore we need to change everything and adapt. The moment we switch off our critical thinking, we

(23:29-23:56) Giorgio Zampirolo
expose ourselves to big dangers. We cannot equate speed with efficiency, because that is not true. Switching off the brain might feel like acceleration, but it just opens us up to huge dangers. What we have to do is keep up the conversation and use all the brainpower we have in the company. Don't assume the leadership team alone is in charge; everybody has a different kind of experience and

(23:56-24:19) Giorgio Zampirolo
a different pool of technology and networks. So we need to work together, share approaches, and do the best with what we have. I don't think there is one formula that helps everybody. Every company is different, every industry is different. So we need to work together as a team. We need to close the cycle

(24:19-24:42) Giorgio Zampirolo
and build this governance structure along the way, and reevaluate. I think repetition and trials, and also measuring KPIs on the results of our approach, are very important, because the idea of adopting something is nice, but we need to assess its effectiveness.

(24:42-25:00) Giorgio Zampirolo
And ethics is once again linked to responsibility. We need to make clear what AI is doing in the company and which side of decisions is still given to humans. And if we cannot answer

(25:00-25:25) Giorgio Zampirolo
who owns the decisions that AI is influencing, this is the start of our work. Until we are clear about who's responsible for a decision and why we took it, something is not working properly. This is the beginning of governance development, and this is the side that small and medium enterprises should work on.

(25:26-25:50) Daria Rudnik
Thank you, Giorgio. Well, we talked about why governance is important and why governance is not bureaucracy. It's not a big document that sits on your desk. It's a culture that you build together through conversations, collaboration, assigning ownership, and having clarity about what tools you already have, what the KPIs are, what the goals are, and who makes and owns those decisions.

(25:51-26:17) Daria Rudnik
We talked about how important it is to have someone like an AI officer who's responsible for all of the AI initiatives and who can monitor the governance and own the process. And we also talked about the three levels of AI adoption: how people feel about AI, how they think with AI, and how they work with AI. Hope you found it helpful. You can find us on LinkedIn; reach out and let's keep this conversation going.