My focus is user-centered applied AI — the discipline of designing AI systems around how people actually think, decide, and work. Not AI as a backend feature, but AI as an interface: one that earns trust, supports judgment, and stays useful when it's uncertain. I've spent five years building this at enterprise scale — at Bosch, at eccenca, and as a freelancer — spanning system architecture, product design, and frontend engineering.
My PhD examined this problem rigorously: how do you get AI genuinely adopted and trusted inside large organizations, rather than merely deployed? That question shapes how I approach every project. I currently develop and design AI features at eccenca, alongside consulting and engineering engagements where design and AI need to move together.
The hard part of AI in 2026 isn't building a model that works — it's building a product that gets used. Adoption fails when people don't trust the outputs, can't tell when the system is wrong, or find the interaction slower than just doing it themselves. These are design problems, and they have to be solved before the architecture is locked in.
I work from user needs outward. What does this person need to decide? What context do they need in order to act? Where should the AI make a suggestion, and where should it stay silent? Where must a human remain in the loop — and how do you design for that without making the system feel broken? Getting these questions right is what separates AI that gets adopted from AI that gets abandoned.