A few weeks ago AppLovin announced on their Axon blog “the first version of a fully automated, multi-agent production pipeline that generates interactives for advertisers promoting purchases on their websites.” They describe it as follows:
> A coordinated network of AI agents handles the workflow. Each agent performs a specific task, passes output to the next stage, and validates constraints throughout the process. At a high level:
>
> - Brand context, product imagery, and messaging constraints are gathered automatically
> - Multiple structured concepts are drafted
> - Creative assets are generated and iteratively refined using image and video models
> - Quality checks are performed before delivery
>
> The complexity remains hidden from our users: they receive ready-to-use outputs without having to manage production.
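The announcement is light on implementation detail, but the shape they describe is a classic sequential agent pipeline: each stage transforms a shared job state, hands it to the next stage, and constraints get checked along the way. Here's a minimal sketch of that pattern; every name in it (`AdJob`, the four stage functions) is my own invention rather than anything AppLovin have published, and the stage bodies are stubs where the LLM and image/video model calls would go:

```python
from dataclasses import dataclass, field

# Hypothetical job state threaded through the pipeline -- an assumption,
# not AppLovin's actual data model.
@dataclass
class AdJob:
    brand_context: dict = field(default_factory=dict)
    concepts: list[str] = field(default_factory=list)
    assets: list[str] = field(default_factory=list)

def gather_context(job: AdJob) -> AdJob:
    # Step 1: pull brand context, product imagery, and messaging
    # constraints automatically (stubbed here).
    job.brand_context = {"tone": "playful", "forbidden_claims": []}
    return job

def draft_concepts(job: AdJob) -> AdJob:
    # Step 2: an LLM call would draft multiple structured concepts.
    job.concepts = ["concept-a", "concept-b"]
    return job

def generate_assets(job: AdJob) -> AdJob:
    # Step 3: image/video model calls plus an iterative refinement
    # loop would live here.
    job.assets = [f"asset-for-{c}" for c in job.concepts]
    return job

def quality_check(job: AdJob) -> AdJob:
    # Step 4: final validation before delivery.
    if not job.assets:
        raise ValueError("quality_check: no assets produced")
    return job

def run_pipeline(job: AdJob) -> AdJob:
    # Each agent performs its task and passes output to the next stage;
    # a stage raises if its constraints fail, halting the hand-off chain.
    for stage in (gather_context, draft_concepts, generate_assets, quality_check):
        job = stage(job)
    return job

finished = run_pipeline(AdJob())
```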
This was described as being only the first step:
> This is the first phase. Our immediate priority is to make AI-generated interactives widely available:
>
> - Existing advertisers will receive automatically generated interactives on a regular cadence
> - New advertisers will get them as part of onboarding and on an ongoing basis
> - Soon, all advertisers will be able to directly generate interactives with custom inputs for greater brand control
>
> Further, we are expanding capabilities in two directions:
>
> - Deeper: More templates, greater adaptability, and improved HTML formats.
> - Broader: Extending automation into video generation and playables. Our goal is to automate the full creative stack.
>
> We have already begun limited testing of fully automated video generation, producing videos up to 60 seconds for select advertisers. Early results are encouraging, with some generated assets emerging as top performers.
>
> We anticipate expanding automated video generation to a broader group of advertisers in early Q2.
Today they posted a new note describing their tool for automating video creation for clients, addressing the “Broader” bullet above.