
What does it take to build AI responsibly from the beginning?
At the Global AI Governance and Innovation Showcase, Danica Damljanovic shared the journey of Sentient Machines and reflected on the impact of the FCA AI Lab Supercharged Academy.
Her message captured one of the most important lessons from the programme:
AI adoption is not only about learning new tools.
It is about building the judgement to use them responsibly.
Beyond Technical Capability
Many AI conversations focus on what technology can do.
But in regulated financial services, the bigger questions are often different:
- Should it be used?
- How should it be governed?
- What risks does it create?
- Who is accountable?
The Academy challenged participants to think beyond experimentation and explore how AI can be applied in real financial systems.
Building Depth and Structure
Through the programme, participants explored:
- real-world AI use cases
- governance and risk
- responsible adoption
- value creation in financial services
- practical implementation challenges
For companies building AI solutions, this kind of structured learning matters.
It helps move ideas from concept to execution.
It also helps ensure that innovation is grounded in accountability from the start.
The Sentient Machines Journey
Danica’s reflection highlighted the importance of capability-building for companies working in AI.
Responsible AI does not happen by accident.
It requires:
- technical understanding
- regulatory awareness
- clear use cases
- strong governance
- a sense of purpose
This is especially important in financial services, where trust is central to adoption.
The Takeaway
The future of AI in finance will not be shaped by technology alone.
It will also be shaped by the people and companies building it.
Danica’s journey is a reminder that responsible AI starts early, with the right questions, the right structure, and the right commitment to creating value safely.
