Building a Scalable AI-Powered Athlete Performance Commentary Platform

Objective

Tully, a sports performance platform, wanted to introduce AI-generated athlete commentary to give coaches faster, more personalized insights. The goal was to do this in a way that could scale to millions of athletes without sacrificing quality or consistency.

Challenge

As Tully’s platform grew, more coaches and programs relied on it to track athlete performance. But analyzing that data and turning it into meaningful feedback was still time-consuming.

They needed a way to use AI to:

  • Deliver high-quality, personalized commentary automatically
  • Maintain consistency and accuracy across thousands (and eventually millions) of athletes
  • Ensure the system could scale without constant redesign

At the same time, they wanted to make sure any AI solution was safe, reliable, and aligned with how coaches actually think and communicate.

Solution

The NLP Logix team designed a scalable AI architecture powered by large language models (LLMs) that could generate athlete commentary in different ways depending on the need.

Instead of relying on a single approach, they created three modes for the Tully team to test out:

  • Single Structure Draft Mode: A structured mode for consistent, predictable feedback. The input data and generation task are split across multiple LLM calls, with more of the work handled upfront; the LLM has less autonomy but produces more stable outputs.
  • Agent Enhanced Structure Draft Mode: An enhanced mode that adds richer insights while keeping some structure. The input data and generation are again split across multiple LLM calls, with an agent that enhances the generated commentary; human review is recommended as part of the workflow.
  • Multi-Draft Mode: A flexible mode that allows the AI to generate more creative, detailed commentary. The full task is assigned to the agent in a single pass, putting all the work on the LLM; this can make validation more challenging and often requires more expensive models.
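The three modes above can be sketched roughly as follows. This is an illustrative outline, not Tully's actual implementation: `stub_llm` stands in for a real LLM provider call (e.g. via AWS Bedrock), and all function and variable names are hypothetical.

```python
def stub_llm(prompt: str) -> str:
    """Placeholder for a real LLM provider call (e.g. AWS Bedrock)."""
    return f"[commentary for: {prompt[:40]}]"

def single_structure_draft(metrics: dict) -> str:
    # Split the work into small, constrained calls (one per metric),
    # then merge with a fixed template: less autonomy, more stable output.
    sections = [stub_llm(f"Summarize {name}: {value}")
                for name, value in metrics.items()]
    return "\n".join(sections)

def agent_enhanced_draft(metrics: dict) -> str:
    # Same structured pass, plus an agent enhancement step; a human
    # reviewer would approve the result before it reaches coaches.
    draft = single_structure_draft(metrics)
    return stub_llm(f"Enrich this commentary with insights:\n{draft}")

def multi_draft(metrics: dict) -> str:
    # Single-pass, full-autonomy generation: hardest to validate and
    # typically needs a more capable (and costlier) model.
    return stub_llm(f"Write detailed athlete commentary from: {metrics}")
```

The trade-off runs in one direction: each mode hands the LLM more of the task at once, gaining flexibility at the cost of predictability and easier validation.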

By using NLP Logix’s LogixForge LLM Kit Accelerator, a reusable integration pattern, they built a system where prompts (instructions to the AI) could be easily swapped and tested across different LLM providers. This made it simple to experiment with different commentary styles without changing the underlying system.
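A swappable-prompt pattern like this might look as follows. This is a minimal sketch of the general idea, not the LogixForge kit itself; the `PromptTemplate` and `Provider` names and the stub provider are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptTemplate:
    name: str
    template: str

    def render(self, **fields) -> str:
        return self.template.format(**fields)

# A "provider" is any callable from prompt text to completion text, so a
# Bedrock-hosted model, another vendor's API, or a local stub all plug in
# the same way.
Provider = Callable[[str], str]

def generate(template: PromptTemplate, provider: Provider, **fields) -> str:
    return provider(template.render(**fields))

# Swapping commentary styles or providers is then a one-line change:
concise = PromptTemplate(
    "concise", "In two sentences, summarize {athlete}'s {metric}.")
local_stub: Provider = lambda prompt: f"[stub completion for: {prompt}]"
print(generate(concise, local_stub, athlete="A. Runner", metric="sprint splits"))
```

Because prompts and providers are decoupled behind one interface, a new commentary style or a new model can be A/B tested without touching the rest of the pipeline.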

The entire solution was built on a cloud-based architecture that integrates with Tully’s existing platform, ensuring it could grow alongside the business.

Results

The project helped Tully move from an idea to a clear, scalable AI strategy.

Key results included:

  • A validated approach to AI-generated athlete commentary
  • A structured framework to ensure quality and consistency
  • An architecture scalable to millions of users
  • Faster experimentation with new AI features and styles

This positioned Tully to offer AI-powered insights as a premium feature while laying the groundwork for future AI capabilities like performance trends and athlete readiness scoring.

Tech Stack

  • AWS (cloud infrastructure)
  • AWS Bedrock
  • Large Language Models (LLMs)
  • Retrieval-Augmented Generation (RAG)
  • Agent-based AI frameworks
  • PostgreSQL
  • React and Node.js integration
  • LogixForge™ LLM Accelerator