
April 10th, 2026
With technology, we have a tendency to opt for what seems beneficial on the surface – if it works, we accept it. If it’s fast, we adopt it. If it’s powerful, we trust it.
But what happens when the systems making decisions about our health, our finances, or our opportunities are fundamentally invisible?
In this podcast conversation, that question sits at the center. Not as a philosophical idea, but as a practical tension shaping how artificial intelligence is being built and deployed today. Tiffany Wang, Head of EMEA at FLock.io, approaches this tension from an unusual angle. She started not as a technologist, but as someone trained to interrogate systems, question assumptions, and look for where things break under pressure.
Her conclusion is simple: you can’t trust what you can’t see.
Tiffany’s path into AI was anything but linear. Trained as a lawyer, she started in environments where precision mattered, but impact often felt distant.
Early on, she noticed a disconnect. Legal frameworks were designed to be correct on paper, but that doesn’t mean they work in reality. “No matter how smart you design a system… if no one is using it, it has no impact,” she reflects.
That realization pushed her closer to business. At PwC, she expected to find more practical applications. Instead, she encountered another layer of abstraction: frameworks that looked sophisticated, but didn’t always solve the underlying problem.
Over time, a pattern emerged. Systems were often built to look right, not necessarily to work under real conditions.
That tension became the throughline of her career. Rather than staying in environments that optimize for structure, she moved toward ones that allow experimentation, ownership, and accountability.
Joining FLock.io wasn’t so much a leap into AI as a continuation of that trajectory. A move toward building systems that don’t rely on belief, but on design.
Tiffany’s perspective on trust didn’t come from theory. It came from watching how language, incentives, and systems can be shaped to create the appearance of reliability.
“As a lawyer, I learned how easily language can be interpreted in different ways,” she explains. That insight changed how she thinks about trust entirely.
At first, trust was instinctive. If someone said something directly, it felt natural to believe it. Over time, that shifted. Not toward cynicism, but toward a more grounded standard.
“I can’t just trust what someone says anymore. I need to see consistent action, especially under pressure.”
This distinction matters. Trust isn’t about perfection. It’s about whether systems and people behave consistently when conditions are difficult.
That same logic now shapes how she approaches technology.
AI is rapidly becoming infrastructure. It influences decisions across healthcare, finance, defense, and governance. Yet most of these systems operate as black boxes.
A small number of organizations control the data, the models, and the rules. Everyone else is expected to trust that those systems behave as intended. For someone who’s spent years examining where systems fail, that model doesn’t hold up.
FLock.io takes a different approach. Instead of centralizing data and asking users to trust the operator, it uses decentralized federated learning. Data stays local. Models are trained collaboratively, without exposing sensitive information.
The shift is subtle, but important.
It moves trust from who controls the system to how the system is designed.
As Tiffany puts it, the goal isn’t to eliminate risk entirely. It’s to create conditions where misuse becomes harder, and verification becomes possible.
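The core mechanic can be sketched with a toy federated-averaging loop. This is an illustrative simplification, not FLock.io's actual protocol: each participant trains on its own data, and only the resulting model parameters, never the raw data, are shared and aggregated.

```python
# Toy federated averaging: illustrative only, not FLock.io's actual protocol.
# Each client fits a local estimator on private data; only the fitted
# parameter (not the data itself) is shared and averaged into a global model.

def local_update(private_data):
    """Train locally: here, simply estimate the mean of the client's data."""
    return sum(private_data) / len(private_data)

def federated_average(client_datasets):
    """Aggregate local parameters, weighted by each client's data size."""
    total = sum(len(d) for d in client_datasets)
    return sum(local_update(d) * len(d) / total for d in client_datasets)

# Three hypothetical clients; their raw data never leaves this scope.
clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
global_model = federated_average(clients)
print(global_model)  # weighted mean of all data, computed without pooling it
```

The point of the sketch is the data flow: the aggregator only ever sees each client's parameter, yet the result matches what training on the pooled data would produce.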
This design choice becomes critical in sectors where trust is fragile and stakes are high.
In healthcare, for example, patients may hesitate to share sensitive information if they believe it could be misused. That hesitation reduces the quality of care and limits the effectiveness of AI-driven insights.
With federated AI systems, that tradeoff changes. Data can contribute to better models without being exposed.
In collaboration with governments, this approach is already being tested. One example involves building secure healthcare data systems where insights can be generated without centralizing patient data.
Another focuses on climate resilience. In regions like the Dominican Republic, where natural disasters disproportionately affect unbanked populations, FLock.io is part of a broader effort to design microinsurance systems that respond faster and more effectively.
The goal isn’t just better models, but better outcomes. Faster support. More equitable access. Systems that reflect real conditions rather than abstract assumptions.
A common misconception is that decentralized or open systems eliminate the need for trust entirely. Tiffany is clear that this isn’t the case. “Trust doesn’t mean no mistakes,” she notes. What changes is where trust lives.
Decentralization doesn’t remove trust; it redistributes it. In a centralized system, you mainly trust one company. In a decentralized one, you trust a combination of open code, transparent rules, incentive design, and governance processes.
At FLock.io, the aim is to reduce blind trust by making more of the system inspectable and by aligning behavior through staking, validation, and penalties for dishonest participation. Trust doesn't disappear; it moves from "trust us" to "here are the rules, here is how they're enforced, and here is the record." When that trust gets tested, the response has to be transparency, auditability, and governance that people can actually see.
With centralized "black box" providers like OpenAI or Google, you trust a single company not to fail or misuse its power. In a decentralized system, if something goes wrong, there is a public record of what happened, so people can inspect it rather than rely on private assurances.
Good behavior is rewarded. Harmful behavior is penalized. Everything is traceable. The system doesn’t assume good intentions. It creates incentives that align with them.
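A stake-and-slash incentive scheme like the one described can be sketched as a simple ledger. All names, amounts, and rules here are hypothetical, assumed for illustration rather than drawn from FLock.io's implementation:

```python
# Hypothetical stake-and-slash ledger: validated work earns a reward,
# dishonest work forfeits part of the stake, and every event is recorded
# so the history stays traceable.

class IncentiveLedger:
    def __init__(self, reward=10, slash=50):
        self.stakes = {}    # participant -> staked balance
        self.history = []   # append-only record of every event
        self.reward = reward
        self.slash = slash

    def stake(self, who, amount):
        """A participant locks up collateral before contributing."""
        self.stakes[who] = self.stakes.get(who, 0) + amount
        self.history.append(("stake", who, amount))

    def submit(self, who, passed_validation):
        """Validators check the contribution; the outcome adjusts the stake."""
        delta = self.reward if passed_validation else -self.slash
        self.stakes[who] += delta
        outcome = "ok" if passed_validation else "slashed"
        self.history.append(("submit", who, outcome, delta))
        return self.stakes[who]

ledger = IncentiveLedger()
ledger.stake("alice", 100)
ledger.stake("bob", 100)
ledger.submit("alice", passed_validation=True)   # honest work: stake grows
ledger.submit("bob", passed_validation=False)    # dishonest work: stake slashed
```

The design choice the sketch highlights is that honesty is the economically rational strategy: cheating costs more than it could gain, and the append-only history means no outcome depends on taking anyone's word for it.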
This is a different way of thinking about trust. Not as something granted, but as something continuously validated.
The trajectory of AI is still being shaped. Centralized models will continue to dominate in many areas, but alternative approaches are emerging, especially where trust, privacy, and accountability matter most.
What stands out in Tiffany’s perspective isn’t just the technology, but the mindset behind it.
Systems shouldn’t rely on belief. They should earn confidence through design. That principle extends far beyond AI. It applies to markets, institutions, and any environment where decisions carry real consequences.
For Demia, this is where things become tangible.
Trust isn’t something you layer on at the end. It’s something you build into the system from the start. Into how data is captured. How it’s verified. How it moves. How it connects to decisions and value.
When that foundation is in place, trust stops being a question and becomes a property of the system itself. Confidence doesn't come from asking people to believe; it comes from building systems where they don't have to.
FLock.io is an AI research and infrastructure company pioneering enterprise-grade federated learning and distributed AI solutions. Its decentralized federated learning architecture and production-ready platforms (AI Arena, FL Alliance, FLock API Platform and FOMO) enable organizations to train and deploy their own custom AI models on local hardware while maintaining full data privacy, model ownership, and regulatory alignment by design.
🎧 Listen to the full episode: