AWS Public Sector Blog
Enhancing content recommendations for educators at Discovery Education with Amazon Personalize
This is a guest post by Scott Beslow, lead software engineer, and Sebastian Garaycoa, data scientist, at Discovery Education. The team at Discovery Education collaborated closely with Amazon Web Services (AWS) to enhance its K12 learning platform with machine learning (ML) capabilities.
Discovery Education (DE) provides standards-aligned digital curriculum resources, engaging content, and professional learning for K12 classrooms. Through its award-winning digital textbooks, multimedia resources, and professional learning network, Discovery Education transforms teaching and learning, creates immersive STEM experiences, and helps improve academic achievement. Discovery Education currently serves approximately 4.5 million educators and 45 million students worldwide, and its resources are accessed in over 140 countries and territories. Discovery Education collaborates with districts, states, and like-minded organizations to empower teachers with customized solutions that support the success of all learners.
The Discovery Education learning platform is known for its expansive treasury of digital resources, which now includes more than 200,000 videos, text-based passages, interactives, audio, podcasts, and images that span all grades, subjects, and critical topics of today. Each month, we add hundreds of new resources—including ready-to-use activities, virtual field trips, video from trusted partners, podcasts, and relevant, curated channels—to excite, engage, and connect students to the real world.
We connect educators to this vast collection of high-quality, standards-aligned content, ready-to-use digital lessons, and professional learning resources via personalized recommendations, which increase efficiency, productivity, and engagement with students.
Within our recently launched enhanced learning platform, we embedded Amazon Personalize to make these recommendations. Amazon Personalize allows developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations, with no ML expertise required.
The “Just For You” area of the Discovery Education platform’s homepage now connects educators with a unique, personalized set of resources based on the grade level they teach, their preferences, and resources they have found useful in the past. These recommendations, powered by Amazon Personalize, change and adapt based on educator behavior and preferences, enhancing our learning platform in a way that is truly adaptive and personalized.
In this post, we describe the architecture used in the application for serving recommendations.
Private, fully managed recommendation system
The primary considerations for building a solution were robustness, scalability, and time to market. By using Amazon Personalize, we incorporated a fully managed ML solution that goes beyond rigid, static, rule-based recommendation systems and trains, tunes, and deploys custom ML models to deliver customized recommendations to educators across school districts, without the burden of building and operating a model ourselves.
Data collection
First, a high-performing recommendation engine needs data, and the interacting application must generate the correct tracking. Explicit information about educators, such as grade, subject area, location, and preference settings, is useful, but implicit evidence gathered from the raw clickstream provides richer signals for the model. We relied on existing persistent data stores that track user clicks, marking an interaction record when a given user interacts with a specific piece of content, along with an event type such as viewing, downloading, or sharing. This allowed us to collect information at scale about how each participant navigates through and interacts with our resources, identifying behavioral patterns and supporting objective, rich insight into the learning experience beyond self-reports and intermittent assessments.
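The shape of an interaction record like the ones described above can be sketched as follows. This is an illustrative helper, not our production code; the uppercase field names follow the Amazon Personalize Interactions dataset schema, and the example IDs are placeholders.

```python
import time

def build_interaction(user_id, item_id, event_type, timestamp=None):
    """Build one clickstream interaction record in the shape the
    Amazon Personalize Interactions dataset expects."""
    return {
        "USER_ID": str(user_id),
        "ITEM_ID": str(item_id),
        "EVENT_TYPE": event_type,                    # e.g. "view", "download", "share"
        "TIMESTAMP": int(timestamp or time.time()),  # Unix epoch seconds
    }

# A "download" event for a hypothetical educator and video resource.
record = build_interaction("educator-123", "video-456", "download", timestamp=1672531200)
```

Records like this, accumulated per user, become the rows of the Interactions CSV imported later in the pipeline.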
Data location
After collecting the correct types of data, we store it in a data warehouse that standardizes, preserves, and stores data from distinct sources, aiding the consolidation and integration of all the data. The data is auto-ingested into Snowflake, a cloud data warehousing service provided as a software-as-a-service (SaaS), where it is subsequently transformed for various reporting and analysis needs. This helped automate the transformation of raw data to analytics-ready data marts for evolving business needs.
Data wrangling and feature engineering
After identifying all data sources and storage locations, we extracted useful features (characteristics, properties, attributes) from the raw data, then cleaned, structured, and enriched it into the desired formats. We created pipelines to copy data to an Amazon Simple Storage Service (Amazon S3) bucket once a week. This pipeline uses scheduled tasks to curate data in Snowflake, create tables, and then copy the data to an Amazon S3 bucket. We started out with basic metadata fields in the Items, Users, and Interactions datasets, created an initial dataset group, and imported the data into Amazon Personalize. We gradually iterated by adding metadata fields to the datasets and creating several dataset groups to help improve the relevance of our recommendations.
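The weekly import step can be sketched roughly as follows. The bucket path, role ARN, and dataset ARN are placeholders, and the real boto3 call is shown as a comment so the sketch runs without AWS credentials; the parameter names match the Personalize CreateDatasetImportJob API.

```python
def import_job_params(dataset_arn, s3_path, role_arn, job_name):
    """Parameters for personalize.create_dataset_import_job, which loads
    a CSV from Amazon S3 into an Amazon Personalize dataset."""
    return {
        "jobName": job_name,
        "datasetArn": dataset_arn,
        "dataSource": {"dataLocation": s3_path},
        "roleArn": role_arn,  # IAM role that lets Personalize read the bucket
    }

params = import_job_params(
    dataset_arn="arn:aws:personalize:us-east-1:123456789012:dataset/example-group/INTERACTIONS",
    s3_path="s3://example-bucket/interactions/interactions.csv",
    role_arn="arn:aws:iam::123456789012:role/PersonalizeS3Access",
    job_name="weekly-interactions-import",
)
# boto3.client("personalize").create_dataset_import_job(**params)
```

One such job runs per dataset (Items, Users, Interactions) after the Snowflake-to-S3 copy completes.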
Model development
Amazon Personalize provides multiple ML algorithms (recipes) purpose-built for various personalized recommendation use cases. Initially, we created several solutions, each with a different recipe. Through experimentation, it quickly became clear that the User-Personalization recipe was best for our use case, and all our subsequent iterations focused on it. The User-Personalization recipe is optimized for personalized recommendation scenarios and can deliver up-to-date personalization without constant retraining. It predicts the items that a user will interact with based on the Interactions, Items, and Users datasets, and it uses automatic item exploration when recommending items: Amazon Personalize automatically adjusts future recommendations based on implicit user feedback. We periodically import data into Amazon Personalize and retrain our solutions. The solution metrics and our own evaluation informed each subsequent iteration, and we iterated until we found a solution that performed well against our quantitative and qualitative criteria.
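As a sketch, creating a solution with this recipe and then a campaign on top of a trained solution version looks roughly like the following. The names and ARNs are placeholders, the recipe ARN is the standard one for User-Personalization, and `itemExplorationConfig` is where the automatic item exploration mentioned above can be tuned.

```python
USER_PERSONALIZATION_RECIPE = "arn:aws:personalize:::recipe/aws-user-personalization"

def solution_params(name, dataset_group_arn):
    """Parameters for personalize.create_solution using the
    User-Personalization recipe."""
    return {
        "name": name,
        "datasetGroupArn": dataset_group_arn,
        "recipeArn": USER_PERSONALIZATION_RECIPE,
    }

def campaign_params(name, solution_version_arn, exploration_weight="0.3"):
    """Parameters for personalize.create_campaign; explorationWeight
    balances exploiting known preferences against exploring newer items."""
    return {
        "name": name,
        "solutionVersionArn": solution_version_arn,
        "campaignConfig": {
            "itemExplorationConfig": {"explorationWeight": exploration_weight}
        },
    }

sol = solution_params("de-recs-solution", "arn:aws:personalize:us-east-1:123456789012:dataset-group/example")
camp = campaign_params("de-recs-campaign", "arn:aws:personalize:us-east-1:123456789012:solution/example/1")
```

Higher exploration weights surface more of the fresh content added each month at the cost of some short-term relevance.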
Model testing and evaluation
After training the model, we ran qualitative checks to supplement the offline metrics. The Amazon Personalize offline solution metrics were our first filter for evaluating solutions; we paid special attention to the precision at 5 and mean reciprocal rank at 25 metrics, with less emphasis on coverage. We also performed our own quantitative evaluation by withholding interactions from Amazon Personalize and recreating versions of the precision and accuracy metrics. For qualitative checks, we downloaded a randomized sample of batch recommendations to see, at a high level, what types of assets were being recommended and whether the mix of asset types (videos, channels, lessons, and so on) was sufficiently diverse. We also analyzed recommendation scores broken down by user and item segments. For example, we compared average recommendation scores for relatively new users versus longer-tenured users. This was less about evaluating the model per se and more about approximating how the solution metrics would evolve as we learn more about the items that users value.
Extract, transform, load (ETL) development
We created an automated workflow that handles importing the data, creating solution versions (trained models), and finally deploying them in a campaign; this provides a consistent mechanism for repeating the process at scale. At the end of each week, we recreate the CSV files that house all of the data in their respective Amazon S3 bucket locations. Once that has completed, an AWS Step Functions state machine is triggered. It uses several AWS Lambda functions that oversee the creation of a new solution version and campaign and, once complete, update endpoints to cut the application over to the new campaign.
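One of the Lambda steps in such a state machine is a status check that tells Step Functions whether training has finished so the next step can create the campaign. A minimal sketch, assuming the handler is given the Personalize DescribeSolutionVersion response (the function name is illustrative):

```python
def check_solution_version(describe_response):
    """Inspect a DescribeSolutionVersion response and report whether the
    new solution version is ready to deploy in a campaign."""
    status = describe_response["solutionVersion"]["status"]
    if status == "CREATE FAILED":
        # Fail the state machine execution rather than deploying a bad model.
        raise RuntimeError("solution version training failed")
    return {"ready": status == "ACTIVE", "status": status}

result = check_solution_version({"solutionVersion": {"status": "ACTIVE"}})
```

Step Functions can loop on this check with a wait state until `ready` is true, then move on to campaign creation and the endpoint cutover.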
Backend service development
After creating a campaign, we use it in our applications to get recommendations. We wrapped the model inference mechanism with a backend service to improve monitoring, stability, and abstraction. We insert the recommended items as a row on our homepage to pique the interest of visitors and prompt them to delve deeper. To do this, we updated our existing Python-based backend APIs to add an additional row, populated by retrieving real-time asset recommendations for that user (the Amazon Personalize GetRecommendations API) via the boto3 software development kit (SDK).
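Stripped of our service wrapper, the core call looks roughly like this. The campaign ARN is a placeholder, `num_results` is our own choice of row length, and a small stand-in client is included so the sketch runs without AWS credentials; in production the client would come from `boto3.client("personalize-runtime")`.

```python
def fetch_recommendations(personalize_runtime, campaign_arn, user_id, num_results=10):
    """Return the recommended item IDs for a user from a Personalize campaign."""
    response = personalize_runtime.get_recommendations(
        campaignArn=campaign_arn,
        userId=str(user_id),
        numResults=num_results,
    )
    # Each itemList entry carries an itemId and a relative score.
    return [item["itemId"] for item in response["itemList"]]

class _FakeRuntime:
    """Stand-in for boto3.client("personalize-runtime") for this sketch."""
    def get_recommendations(self, **kwargs):
        return {"itemList": [{"itemId": "video-1", "score": 0.8},
                             {"itemId": "channel-2", "score": 0.2}]}

items = fetch_recommendations(
    _FakeRuntime(),
    "arn:aws:personalize:us-east-1:123456789012:campaign/just-for-you",
    "educator-123",
)
```

The backend then resolves these item IDs to full asset metadata before rendering the homepage row.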
Production deployment
To enable us to analyze the efficacy of Amazon Personalize recommendations in terms of business metrics, we exposed multiple recommendation strategies (for example, Amazon Personalize vs. an existing recommendation system) to our users in a randomized fashion and measured the difference in performance in a scientifically sound manner (A/B testing). Early in the school year, we launched the recommendations to a randomly selected group of accounts. We randomly selected another set of accounts that did not have access to the Amazon Personalize recommendations as a control group. Our primary metrics of interest were click-through rate and clicks per exposure, which measure the propensity and quantity of clicks on Amazon Personalize and non-Personalize recommendations on the homepage. We also tracked everything that happens after a click on a recommendation, such as the rate at which the user assigns, saves, downloads, or watches the recommendation, as well as whether the recommendation impacts the probability that the user returns to the product within a short period (that is, user retention). After running this A/B experiment for a defined period, we reached a statistically significant result that supported data-driven decision-making. At the end of the six-week experiment, we concluded that the addition of personalized content on our homepage more than doubled user engagement with resources on the homepage. In addition, we found a 100 percent increase in secondary high-value interactions with content, such as assigning, downloading, and sharing.
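For readers who want the flavor of the significance check behind an experiment like this, the standard two-proportion z-test for comparing click-through rates can be sketched as follows. The counts below are illustrative only, not our experimental data.

```python
import math

def two_proportion_z(clicks_a, exposures_a, clicks_b, exposures_b):
    """Z statistic comparing the click-through rates of a treatment
    group (a) and a control group (b) under a pooled-proportion null."""
    p_a = clicks_a / exposures_a
    p_b = clicks_b / exposures_b
    p_pool = (clicks_a + clicks_b) / (exposures_a + exposures_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposures_a + 1 / exposures_b))
    return (p_a - p_b) / se

# Hypothetical counts: 12% CTR for treatment vs. 6% for control.
z = two_proportion_z(1200, 10000, 600, 10000)
significant = z > 1.96  # 95% confidence, one-sided toward treatment
```

With a difference this large, the statistic far exceeds the 1.96 threshold; real experiments also account for account-level randomization when computing variance.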
Learn more
Through our work with AWS, we were able to create a state-of-the-art recommendation engine that suggests videos, channels, images, and more, tailored specifically to each user’s preferences. Our team continues adding, contextualizing, and organizing exciting new content and timely, relevant resources to the platform each month in response to current events and the ever-evolving needs of educators. These resources are aligned to state and national standards and help educators bring the outside world into teaching and learning every day. The pioneering use of machine learning within the Discovery Education platform helps educators spend less time searching for digital resources and more time teaching.
Learn more about Amazon Personalize and the cloud for education.
Additional contributors to this blog include:
- Scott Beslow, lead software engineer at Discovery Education
- Johnny Mercado, senior software engineer at Discovery Education
- Sebastian Garaycoa, data scientist for the research and analysis team at Discovery Education
- Marilyn Bass, cloud ETL engineer at Discovery Education
- Parnab Basak, solutions architect for the service creation team at AWS
- Luis Lopez Soria, senior AI and ML specialist solutions architect at AWS