Research rigor.
Operator reality.
Dr. Baltrip doesn't lecture about ethics in the abstract. He helps leadership teams build the structures that make AI governance operational: constraints, transparency, accountability, and stop-the-line authority your team will actually use.
Three engagements. One goal.
Each engagement level builds toward the same outcome: governance your leadership team can actually run, not just discuss.
- The Efficiency Trap, Proxy Trap, and Paperclip Maximizer
- The GOLDEN Framework · six disciplines in 45 minutes
- Glass Box, Andon Cord, and Ethical Moat in plain language
- Q&A and audience interaction

- Live use case audit using the Golden Algorithm Scorecard
- Proxy Trap identification exercise
- Glass Box documentation for top-priority systems
- Andon Cord design: triggers, authorities, protocols
- Team accountability matrix

- Full use case audit (Scorecard for every active AI system)
- AI Constitution drafting and review
- Glass Box documentation protocol
- Andon Cord installation with defined triggers
- Rollout rhythm and review cadence
- Executive brief for board presentation
What Dr. Baltrip talks about
Not too technical. Not too abstract. Not another compliance lecture.
Most AI ethics content falls into one of three traps: too technical (developers only), too moralistic (preachy and disconnected from P&L), or too legalistic (focused on avoiding fines instead of building advantage).
Dr. Baltrip brings a rare combination: academic research rigor from the Loyalty Science Lab, and operator reality from working directly with organizations deploying AI systems right now. The result is content that's practically actionable on Monday morning, not aspirationally inspiring on Friday afternoon.
"Ethics is not the brake. It is the steering wheel."
Request Availability
Tell us a bit about your event and audience. We'll respond within 2 business days.