February 20th, 2026
In the latest episode of The Future of Trust, we sat down with Richard Savoie, systems builder and co-founder of Adiona Tech, to explore a pattern most organizations recognize too late: trust rarely breaks loudly – it erodes quietly.
A number that feels off. A recommendation that does not match reality. A system that is technically correct, but still ignored.
Richard has spent his career inside that tension, first in regulated medical devices, where mistakes are not theoretical, and later in logistics, where complexity is accelerating faster than people’s willingness to trust automation.
Instead of treating trust as philosophy, he treats it as engineering.
Richard grew up in southern New Hampshire; his parents separated when he was very young. His earliest memories were shaped by a family structure that looked different from those of many of his peers. When a new father figure eventually entered his life, it brought belonging and stability, but it also left a deeper awareness.
“I always then had this undercurrent of distrust,” Richard shared. “I knew that it could all go away.”
That awareness did not make him cynical. It made him attentive. Stability is never permanent. Systems, like families, need to be built with that reality in mind.
One of the most powerful ideas from our conversation was the distinction between passive system failure and active betrayal. When a system fails because it is limited, incomplete, or poorly designed, you lose trust one way. When a person actively deceives you, it lands differently.
Richard experienced that second kind in a professional partnership that unraveled. Reflecting on it, he did not frame the lesson as “trust less.” Instead, he reframed it as calibration. “It was absolutely gutting,” he said. But the deeper realization followed: “People need to qualify and calibrate trust based on the implications. Build in the level of trust that is appropriate to what the outcome might be.”
In regulated healthcare, a bandage and a pacemaker are not held to the same standard. The consequences differ. Richard argues that in business and systems design, we should think the same way. Not every decision deserves the same level of structural trust.
Richard’s early career in medical devices forced that mindset into practice. One project focused on preventing air from entering an infusion line and automatically clamping the system if air was detected. In that context, trust is not branding. It is survival.
He was asked to design part of the solution without relying on software patches later. That constraint forced rigor. You cannot “fix it in production” when the device is already in a hospital. This is a reminder that some systems require first-principles reliability, not iteration after failure.
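The core of that design discipline is fail-safe behavior: when the system is uncertain, it defaults to the safe state rather than the convenient one. A minimal sketch of that principle, purely illustrative and not the actual device logic (the function and names here are invented for the example):

```python
# Illustrative fail-safe sketch: any detected air OR any doubt about
# the sensor itself defaults the infusion line to the clamped state.

CLAMPED, OPEN = "clamped", "open"

def line_state(air_detected: bool, sensor_ok: bool) -> str:
    """Decide the clamp state for an infusion line.

    Fail safe: clamp when air is detected, and also clamp when the
    sensor reports a fault, because an unverified reading cannot be
    trusted with a patient on the other end.
    """
    if not sensor_ok or air_detected:
        return CLAMPED
    return OPEN

assert line_state(air_detected=True, sensor_ok=True) == "clamped"
assert line_state(air_detected=False, sensor_ok=False) == "clamped"
assert line_state(air_detected=False, sensor_ok=True) == "open"
```

The point is the ordering of the checks: the sensor's own health is validated before its reading is believed, which is exactly the kind of rigor you cannot retrofit with a software patch later.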
After years in healthcare, Richard moved into logistics. Although a different domain, the same trust problem persisted.
Adiona Tech builds digital representations of logistics networks so fleets can decide how to move, when to electrify, and how to optimize delivery at scale. These systems do not just analyze, but also recommend action. And recommendations are where trust either forms or fails.
COVID became a stress test. Delivery demand surged, and routing systems that once felt “good enough” began to crack under scale and time pressure. Operators who waited an hour for route outputs that still required manual overrides quickly lost confidence.
When one person stops trusting the tool, the hesitation spreads and trust failure compounds.
Richard explained a core challenge in routing: at scale, perfect optimization is mathematically impossible. You rely on heuristics and approximations, which is fine, until those approximations ignore human reality.
In dense cities, drivers often park once and complete multiple deliveries on foot. A routing system that ignores that might produce a mathematically sound route that feels obviously wrong to the operator. The result? Resistance.
Small adjustments, such as clustering stops based on realistic walking distances, can dramatically shift perception. Suddenly the system aligns with lived experience.
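One way to picture that adjustment is a greedy grouping pass that runs before routing: stops within walking distance of each other collapse into a single parking event. The sketch below is a minimal illustration under assumed inputs ((x, y) coordinates in meters, a flat walking radius); it is not Adiona's implementation, and `cluster_stops` and `MAX_WALK_M` are names invented for the example.

```python
from math import hypot

MAX_WALK_M = 150  # assumed walking radius from a parked vehicle

def cluster_stops(stops, max_walk=MAX_WALK_M):
    """Greedily group delivery stops so that every stop in a cluster
    lies within max_walk meters of the cluster's anchor (its first
    stop). The router then visits one parking point per cluster
    instead of driving to every individual address.

    stops: list of (x, y) coordinates in meters.
    """
    clusters = []  # each cluster: {"anchor": (x, y), "stops": [...]}
    for stop in stops:
        for c in clusters:
            ax, ay = c["anchor"]
            if hypot(stop[0] - ax, stop[1] - ay) <= max_walk:
                c["stops"].append(stop)
                break
        else:
            # No existing cluster is walkable; park here instead.
            clusters.append({"anchor": stop, "stops": [stop]})
    return clusters

# Three downtown stops plus two across town: two parking events, not five.
stops = [(0, 0), (40, 30), (90, 120), (500, 500), (530, 520)]
print(len(cluster_stops(stops)))  # → 2
```

A production system would use street-network walking distances rather than straight-line geometry, but even this crude version captures why the output suddenly matches what drivers already do.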
When we moved into AI, Richard offered a critical caution: “AI isn’t just a silver bullet. You’re actually adding more question marks into it.”
More complexity does not equal more trust. In many cases, it increases opacity. And opacity erodes confidence.
Looking forward, Richard sees the convergence of autonomous vehicles, robotics, and AI reshaping delivery. But the real challenge is not the highway. It is the sidewalk.
The “last mile” is complex. The “last meter” is even more so. Buildings, parking constraints, foot traffic, elevators, and human unpredictability all sit between vehicle and doorstep. The teams that win will not just automate movement. They will build systems that are explainable, validated, and aligned with physical reality – because autonomy without trust will stall.
If you are designing systems that recommend or automate decisions, a few lessons stood out from this conversation:

- Calibrate trust to consequences: a bandage and a pacemaker are not held to the same standard, and neither should every decision be.
- Build first-principles reliability where iteration after failure is not an option.
- Align recommendations with lived reality; a mathematically sound route that feels obviously wrong breeds resistance.
- Treat added complexity with suspicion: opacity erodes confidence faster than capability builds it.
Trust, in every domain we discussed, ultimately comes back to one thing: proof.
In logistics, in electrification, in climate-critical supply chains, the challenge is not generating more data. It is proving that data is real, traceable, and reliable when financial, regulatory, and operational decisions depend on it.
Adiona’s role in this is clear: building the trust layer that secures operational data at the source, verifies lineage across systems, and turns performance into audit-ready proof. Automation alone will not carry the future forward, but verified performance will. In a world of accelerating complexity, the systems that endure will be the ones that earn the right to be believed.
🎧 Listen to the full episode: