From OTEL to SLMs: Distilling Frontier Model Behaviour from Production Telemetry

Summary

Disclaimer: This summary has been generated by AI. It is experimental, and feedback is welcomed. Please reach out to info@qcon.ai with any comments or concerns.

The presentation discusses methods and strategies for leveraging production telemetry data to enhance AI systems.

  • Key Concepts:
    • Telemetry as a Resource: Production interactions captured in OTEL traces provide valuable training data.
    • Distillation Process: The goal is to distill telemetry into datasets for fine-tuning Small Language Models (SLMs).
    • Platform Capability: Creating a repeatable platform capability that safely rolls out distilled models.
  • Methodology:
    • Extract and filter production data to create training sets for machine learning models.
    • Utilize canary releases and evaluation gates to ensure safe model deployment.
  • Implementation Strategies:
    • Instrument agents with OpenTelemetry from the start to capture the data needed for later processing.
    • Favor auto-instrumentation so user interactions are captured consistently as future training signals.
  • Challenges and Solutions:
    • Observability Pipeline: Grounding faster, self-improving AI systems in detailed observability data.
    • User Feedback Loop: Utilizing user interactions as a continuous feedback mechanism to refine AI behavior.

Ben concludes by emphasizing the importance of maintaining agency over AI tools to ensure they enhance and do not inhibit human capabilities.

This is the end of the AI-generated content.


Your production agents already generate rich training data: every interaction is captured in your OTEL traces. This talk shows how to distill that telemetry into datasets to fine-tune Small Language Models (SLMs) and turn the process into a repeatable platform capability. Learn how to extract, filter, and train on production data, then roll out distilled models safely with evaluation gates, canary releases, and continuous improvement cycles to build faster, cheaper, and self-improving AI systems grounded in your observability pipeline.


Speaker

Ben O'Mahony

Principal AI Engineer @Thoughtworks, Startup Advisor, Speaker, Future Author and Time Magazine’s Person of the Year 2006

Ben is a highly accomplished Principal AI Engineer and Innovator who specializes in building AI Agent Products and Platforms that work. With deep expertise across the entire AI lifecycle, he excels at designing, deploying, and optimizing scalable, cost-effective solutions for enterprise clients. A recognized industry expert, Ben is also a sought-after conference speaker and author of the O'Reilly book: Building AI Agent Platforms (now in pre-release).


Date

Wednesday Dec 17 / 01:20PM EST ( 50 minutes )

Location

Library Reading Room, 3rd Flr
