I design AI systems around how people actually decide and act — not as a hidden backend, but as an interface that communicates uncertainty, supports judgment, and keeps humans in control when the stakes are high.
I hold a PhD in Information Systems from Leibniz University Hannover — “Leveraging User-Centered and Neuro-Symbolic Artificial Intelligence in Information Systems” — and have published six international papers on human-AI interaction and knowledge engineering, with more still in the pipeline. Underneath the research sits five years of applied work across corporates like Bosch, Mittelstand companies like eccenca, and startups like MyActivities: agents, knowledge graphs, retrieval pipelines, and the interfaces that make them usable. I stay active in science — reviewing, publishing, collaborating — because the standard I hold for client work comes from the same place.
In 2026, model capability is rarely the main constraint. The difficult part is product adoption. AI gets ignored when users cannot judge output quality, cannot recover from errors, or feel slower with the tool than without it. Those are product design failures, not model failures.
My process starts with user decisions: what must this person decide, what evidence do they need, and where should AI propose versus stay silent? Then we design the handoff between model output and human judgment. Companies that get this right build AI people rely on; companies that skip it build AI people bypass.
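To make the "propose versus stay silent" handoff concrete, here is a minimal sketch in Python. The Suggestion type, field names, and thresholds are illustrative assumptions for this example, not a gate lifted from any specific project.

```python
# Minimal sketch of a propose-vs-stay-silent handoff.
# Suggestion, thresholds, and field names are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    text: str            # what the model proposes
    confidence: float    # calibrated score in [0, 1]
    evidence: list[str]  # sources the user can inspect to judge quality

PROPOSE_AT = 0.75    # below this, the AI stays silent
PREFILL_AT = 0.95    # above this, prefill the field but keep it editable

def handoff(s: Suggestion) -> Optional[dict]:
    """Decide how (and whether) a suggestion reaches the user."""
    if s.confidence < PROPOSE_AT:
        return None  # stay silent: a weak proposal costs more trust than no proposal
    return {
        "text": s.text,
        "evidence": s.evidence,                     # show evidence so the user can judge
        "prefilled": s.confidence >= PREFILL_AT,    # only prefill when confidence is high
        "editable": True,                           # the human keeps the final say
    }

# Example: a mid-confidence suggestion is shown with evidence but not prefilled.
print(handoff(Suggestion("Reorder 40 units", 0.82, ["sales_q3.csv"])))
```

The point of the sketch is the shape of the decision, not the numbers: the thresholds and the evidence shown are exactly what gets designed together with the people who will use the system.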