Tech Talks Explores What Responsible AI Adoption Really Looks Like

Written by Kaitlyn Elphinstone | Apr 22, 2026 4:00:32 PM

As AI adoption accelerates, businesses are being pushed to confront harder questions around governance, trust, and accountability.

On Tuesday, 14 April 2026, business leaders, technologists, and curious professionals gathered at Signal House for a “Tech Talks” discussion titled “AI Adoption: Opportunities, Risks, & Realities,” sponsored by SteppingStones Recruitment and Cayman Enterprise City. The discussion examined how organisations are beginning to use AI in practice, the risks they are navigating, and what responsible adoption demands.

As AI moves beyond curiosity and experimentation into everyday business decision-making, organisations are being forced to confront a more practical set of questions. The conversation is no longer just about what AI can do. It is increasingly about where it creates real value, what risks it introduces, and what responsible adoption actually requires.

With perspectives spanning quantitative finance, cybersecurity, AI governance, digital transformation, and human-centred innovation, panellists LeeAnn Janissen, Cristina Spratt, and Mary Davies moved the conversation beyond hype and into the operational realities of adoption. What emerged was a clear picture: responsible AI is not a technology story. It is a leadership and implementation story.

Panellists from left to right: Cristina Spratt, LeeAnn Janissen, and Mary Davies

Why AI Adoption Is Becoming a Leadership Question

One of the clearest takeaways from the evening was that AI adoption is no longer theoretical. Businesses are already seeing productivity gains, process improvements, and new opportunities to work differently. But as organisations become more serious about adoption, the conversation is shifting from possibility to responsibility.

That shift was evident throughout the discussion, particularly in the distinction between experimenting with AI and deploying it in live environments. Leadership teams are increasingly focused on whether they have the people, processes, and safeguards in place to evaluate and use AI-generated output responsibly.

AI Is Powerful, But It Is Not an Authority

Speaking from the perspective of a highly technical operator, LeeAnn Janissen brought a grounded view shaped by decades of experience in quantitative research, capital markets, systems architecture, and cybersecurity. As Head of Research and Portfolio Manager at East Coast Asset Management SEZC, she shared how AI-assisted coding tools had delivered striking productivity gains within her organisation.

What followed, however, was a more important question: how do you trust and safely deploy AI-generated code in a production environment?

That question led her team back to fundamentals. Before AI-assisted work could move into live use, standards had to be defined, testing processes had to be established, and accountability had to remain firmly with human decision-makers.

As LeeAnn put it, “AI is the tool, it’s not an authority.”

That distinction captured one of the evening’s most important themes. AI can move quickly, generate useful work, and accelerate output dramatically, but none of that removes the need for human review, technical validation, or organisational ownership over results.

For businesses looking to scale AI use responsibly, the lesson is straightforward: speed without trust is not enough. Adoption only becomes meaningful when it is supported by review processes, governance standards, documentation, and clearly assigned responsibility.

Governance Comes Before Scale

A central theme of the discussion was that many businesses are starting in the wrong place. Rather than beginning with a tool or vendor, organisations need to start by aligning leadership, clarifying priorities, and understanding the problem they are actually trying to solve.

Cristina Spratt, Founder and Principal Consultant at Tidal Edge Consulting, reflected on how her own thinking around implementation has evolved. What often appears to be an execution challenge is, in fact, a readiness challenge: leadership teams need to define priorities, identify risks, and decide where AI fits within the broader operating model.

One of her most memorable observations framed AI in practical terms: “I see it as an overachieving junior assistant.” Like an ambitious junior team member, AI can be fast and impressive, but it still requires context, review, and oversight.

As Cristina noted, what often fails is not the AI itself, but “the way you implemented it, the way you put the governance around it, and the way that you put the oversight.” The discussion made clear that businesses cannot treat structure and accountability as an afterthought; they are among the conditions that make responsible scale possible.

Free Tools, Hidden Risk, and the Need for Guardrails

Security and data privacy were among the strongest concerns raised during the discussion. As AI tools become more accessible, employees are more likely to use them informally, often without fully understanding how data is processed, stored, or reused. Cristina cautioned businesses against using free consumer tools for company or customer information, warning: “Anytime that you use anything free, you’re paying for it somehow. And free means they’re using your data.”

The broader issue, she suggested, is not simply the risk of free tools, but the need for structure. Businesses need approved systems, clear policies, and practical guardrails that allow staff to use AI productively without creating unnecessary security, privacy, or compliance risks.

Meaningful Adoption Starts with the Right Problem

As excitement around AI continues, organisations can easily fall into the trap of starting with the technology rather than the business need. Cristina encouraged attendees to reverse that logic. Start with the priority. Identify the problem. Then determine whether AI is actually the right tool. In some cases, a straightforward automation may be more suitable, less risky, and easier to implement than a more complex AI-based system.

That sounds simple, but it reflects a much more mature approach to adoption. It shifts the conversation away from novelty and toward value creation. It also helps businesses avoid overcomplicating workflows that do not require advanced tooling in the first place.

LeeAnn reinforced a similar point by encouraging businesses to stop thinking of large language models as all-purpose digital colleagues and instead see them as configurable tools that become more useful when applied to specific tasks and workflows.

For organisations trying to make AI practical, that shift in mindset matters. The strongest use cases are often not the broadest or most ambitious. They are the ones where the task is clear, the boundaries are defined, the risks are understood, and the output can be reviewed in context.

Human-Centred Design Still Matters

While much of the discussion focused on governance, implementation, and operational readiness, Mary Davies brought an important wider perspective to the conversation. As Founder of Vadoar Techworks SEZC, her work sits at the intersection of law, technology, and human-centred innovation, with a focus on identifying where systems fail people and building bridges that work in practice.

Mary’s perspective helped ground the discussion in a broader reality: many of the professionals who need to engage with AI most seriously are not developers. They are decision-makers, advisors, executives, and operators who need frameworks, language, and tools that make these systems intelligible enough to govern responsibly.

Reflecting on the pace of change, Mary observed, “There’s no way that I could have imagined that we would go from there to where we are now in three years.” Her message for businesses was clear: they cannot afford to wait for perfect clarity before becoming more informed, more intentional, and more prepared.

Responsible Adoption Is an Operating Decision

If there was one message that came through most clearly during the Tech Talks discussion, it was that AI adoption is not simply about access to powerful tools. It is about whether an organisation is ready to use them well.

That means making thoughtful decisions about data, acceptable use, accountability, and where AI can genuinely add value. The businesses that benefit most from AI will not necessarily be the ones moving fastest or experimenting most aggressively. They will be the ones asking better questions earlier, building stronger internal foundations, and approaching adoption as an operational decision rather than a technology trend.

Enterprise Cayman is grateful to all three speakers for sharing their expertise so candidly, and to the audience for contributing thoughtful and relevant questions throughout the evening. For those ready to keep learning, Enterprise Cayman’s upcoming Tech Talks sessions, workshops, and other education programmes offer practical opportunities to deepen AI understanding and build skills that can be applied in the workplace, in business, and across future career pathways.

A special thank you to our event partner, SteppingStones, and to all attendees who stayed on after the roundtable to continue the conversation. For more information about our monthly Tech Talks series, please visit https://www.enterprisecayman.ky/tech-talks