Utilize Amazon Bedrock Data Automation for Contextual Advertising Video Insights


Automating video insights for contextual advertising is a game-changer in the world of digital marketing. The strategy involves matching ads with relevant digital content to provide personalized experiences to viewers. However, implementing this approach for streaming video-on-demand (VOD) content has posed challenges, particularly regarding ad placement and relevance. The traditional manual approach, where a content analyst painstakingly watches content, places ads strategically, and tags the content with appropriate metadata, is time-consuming and impractical at scale.

Thankfully, recent advancements in generative AI, specifically multimodal foundation models (FMs), have revolutionized video understanding capabilities and offer a promising solution to these challenges. Leveraging the Amazon Titan Multimodal Embeddings G1 model and Anthropic's Claude FMs from Amazon Bedrock in custom workflows can significantly enhance contextual advertising solutions.

Enter Amazon Bedrock Data Automation (BDA), a managed feature powered by FMs in Amazon Bedrock designed to extract structured outputs from unstructured content like documents, images, video, and audio. BDA eliminates the need for complex custom workflows and streamlines the process of extracting rich video insights such as chapter segments, text detection in scenes, and classification of Interactive Advertising Bureau (IAB) taxonomies for nonlinear ads, optimizing contextual advertising effectiveness.

Nonlinear ads are digital video advertisements that coexist seamlessly with the main video content, appearing as overlays, graphics, or rich media elements on the video player. Implementing a solution utilizing nonlinear ads involves the following steps:

1. Users upload videos to Amazon Simple Storage Service (Amazon S3).
2. An AWS Lambda function is triggered by each new video, initiating BDA for video analysis.
3. The analysis output is stored in an output S3 bucket.
4. The downstream system, like AWS Elemental MediaTailor, consumes chapter segmentation, contextual insights, and metadata to make informed ad decisions and enhance the viewer’s experience.
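Steps 1 through 3 can be sketched as a Lambda handler that parses the S3 upload event and assembles the request for a BDA video analysis job. This is a minimal sketch: the bucket name and project ARN are placeholder assumptions, and the actual BDA call (made through the `bedrock-data-automation-runtime` boto3 client in a real deployment) is left as a comment so the event-handling logic stands on its own.

```python
OUTPUT_BUCKET = "my-bda-output-bucket"  # assumed output bucket name
PROJECT_ARN = (  # placeholder BDA project ARN
    "arn:aws:bedrock:us-east-1:123456789012:data-automation-project/example"
)

def build_bda_request(event):
    """Extract the uploaded video's location from the S3 event notification
    and build the input/output configuration for a BDA analysis job."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "inputConfiguration": {"s3Uri": f"s3://{bucket}/{key}"},
        "outputConfiguration": {"s3Uri": f"s3://{OUTPUT_BUCKET}/insights/{key}"},
        "dataAutomationProjectArn": PROJECT_ARN,
    }

def lambda_handler(event, context):
    request = build_bda_request(event)
    # In a real deployment, this request would be passed to the
    # bedrock-data-automation-runtime client's async invoke API; exact
    # parameter names may vary by SDK version.
    return request
```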

To facilitate the process in a practical example, a dictionary mapping metadata to local ad inventory files is provided to simulate how MediaTailor interacts with content manifest files and requests replacement ads from the Ad Decision Service.
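Such a mapping could look like the following sketch, where IAB category labels (as they might appear in the BDA analysis output) are mapped to local ad creative files. The category names and file paths here are illustrative assumptions, not values from the sample notebook.

```python
# Hypothetical local ad inventory keyed by IAB taxonomy label.
AD_INVENTORY = {
    "Automotive": "ads/automotive_overlay.mp4",
    "Travel": "ads/travel_overlay.mp4",
    "Food & Drink": "ads/food_overlay.mp4",
}
DEFAULT_AD = "ads/house_ad.mp4"

def select_ad(iab_categories):
    """Return the first ad whose IAB category matches one of the chapter's
    labels, falling back to a default house ad when nothing matches."""
    for category in iab_categories:
        if category in AD_INVENTORY:
            return AD_INVENTORY[category]
    return DEFAULT_AD

# e.g. select_ad(["Sports", "Travel"]) -> "ads/travel_overlay.mp4"
```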

Running the notebooks and following along with the examples requires the following prerequisites:

1. Access to an AWS account with necessary permissions for Amazon Bedrock, Amazon S3, and a Jupyter notebook environment.
2. A Jupyter notebook environment with the appropriate permissions to access Amazon Bedrock APIs for executing the sample notebooks.
3. Installation of third-party libraries like FFmpeg, OpenCV (opencv-python), and webvtt-py before executing the code sections.
4. Usage of the Meridian short film from Netflix Open Content under the Creative Commons Attribution 4.0 International Public License as an example video.

Video analysis using BDA involves three main steps: creating a project, invoking the analysis, and retrieving analysis results. By creating a project with defined analysis types and desired result structures, running the analysis via BDA becomes significantly simpler and more efficient, ultimately enhancing the contextual advertising process.
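The three steps above follow a create/invoke/poll pattern that can be sketched as follows. The client calls are represented by injected callables so the control flow is self-contained; in practice these would wrap the boto3 `bedrock-data-automation` and `bedrock-data-automation-runtime` clients, whose exact parameter names may differ by SDK version. The terminal status strings used here are assumptions.

```python
import time

def run_video_analysis(create_project, invoke_async, get_status,
                       poll_seconds=10, max_polls=60):
    """Step 1: create a project defining analysis types and output structure.
    Step 2: invoke the asynchronous video analysis.
    Step 3: poll until the job reaches a terminal state, then return it."""
    project_arn = create_project()
    invocation_arn = invoke_async(project_arn)
    for _ in range(max_polls):
        status = get_status(invocation_arn)
        if status["status"] in ("Success", "ServiceError", "ClientError"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("BDA analysis did not finish within the polling window")
```

The returned status would carry the S3 location of the analysis output, which the downstream ad-decisioning step then reads.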
