I bring a sandbox mindset to every collaboration: a boundary-pushing, curiosity-driven environment where assumptions are tested, edge cases are explored, and error becomes insight. This is where human nuance meets system experimentation - and imagination becomes method.
I analyze how both teams and end users mentally model the AI they interact with. Understanding these internal representations lets us bridge intention and experience - aligning what’s built with what’s actually understood.
Behavioral friction often hides in subtle places. I use qualitative and quantitative tools - interviews, psychometric diagnostics, and user behavior mapping - to surface where your product or team is out of sync with its intention. Optimization here is both technical and human.
A bespoke, highly curated coaching container for founders, thinkers, and product leads. We navigate personal and strategic questions through fast-paced, high-trust dialogue - equal parts clarity, co-creation, and confrontation.
When internal misalignments arise, I step in as a neutral guide to mediate tension, reframe breakdowns, and anchor a shared vision. I help teams move from dissonance to coherence, because your internal culture is part of the product you build.
I analyze psychosocial and behavioral risks embedded in product design, especially in emotionally intelligent or socially impactful AI. My assessments identify risks of harm, misuse, and manipulation before they scale, damage trust, or trigger backlash.