AWS for Industries
Limiting subscriber churn by leveraging real-time subscriber feedback – Part 1 of 2
When resolving network and service incidents, communication service providers (CSPs) have little visibility into how those incidents are perceived by their subscribers. As a result, the resolution of network incidents is not optimized towards upholding customer satisfaction, leading to reduced effectiveness in limiting churn and an increased cost of first-line support.
This blog series demonstrates a fully serverless approach to building a sentiment analytics and customer engagement solution for CSPs. Given the size of a CSP's subscriber base and the need to capture subscriber sentiment in real time and adapt to unpredictable incident patterns, a cloud-based solution is the best way to limit subscriber churn while containing costs. A combination of natively integrated AWS services, spanning end-to-end from real-time analytics, object storage, customer engagement, and serverless compute to a serverless NoSQL database, enables CSPs to:
- Identify the subscribers experiencing the worst network performance with advanced analytics in real time
- Engage subscribers based on their historical network and service performance
- Capture their feedback to validate and weight their incidents
- Monitor the sentiment of the entire subscriber base in real time and associate it with other dimensions (cell, device, network vendor, and so on)
- Prioritize incident intervention based on subscriber sentiment
This first blog post focuses on step 1 of the above list, covering the real-time ingestion and processing of incidents for the CSP’s entire subscriber base.
Business case
The UK telecommunications industry is facing significant market challenges: competitive forces place downward pressure on prices, while customers demand ever more bandwidth and speed and expect a high standard of customer service. CSP revenues have, in fact, fallen by a fifth in the UK over the last ten years, while globally they fell 5.4 percent from 2019 (1).
Customer churn remains the single greatest challenge to revenue stabilization, ranging from 5 percent to 32 percent per year (2). In a saturated market such as this one, service providers should focus on retaining existing customers rather than trying to attract new ones. The probability of selling to an existing customer is 60–70 percent, while the probability of selling to a potential new customer is 5–20 percent. This issue is compounded by the cost of customer acquisition (COA), which, on average, is five times greater than the cost of retention (3).
In an environment where average revenue per user (ARPU) dipped by 28 percent between 2010 and 2019, and where CSPs saw a 5.3 percent decline in share price while paying out 4.1 percent in dividends (2015–2020), it is clear this situation is not sustainable (4).
A CSP's major asset is its network, and the cost of running that network, from both an opex and a capex perspective, is substantial.
As new generations of network technology come online, their ROI needs to be justified. The industry faces one of its greatest capex demands, driven by the need to upgrade to 5G and FTTP and to implement AI diagnostics and planning capabilities in networks (intelligent network planning).
On top of these capex challenges, CSPs face significant opex due to high network maintenance costs.
The average data use per fixed broadband connection increased by 75 GB per month (31 percent) to 315 GB in 2019, with a key contributing factor being the continued growth in the use of video streaming services (5). As the need for connectivity continues to increase, so do customer expectations. Many customers expect CSPs to provide close to 100 percent availability, good speed and stability, and minimal downtime, all while overall prices are falling.
What can operators do to intelligently combat churn and improve their financial position? How can they recoup the costs of network optimization and upgrades in a market environment where revenues are shrinking and costs are rising?
To respond to these fast-changing customer needs, CSPs need to enhance their dynamic understanding of customers' expectations in near real time.
One of the main problems CSPs face is that their customer service is unable to identify in advance the customers who are most frustrated with their network service, and so cannot differentiate the customer service experience effectively. This likely contributes to churn and revenue loss.
There are two main types of customers: customers who complain if the service they receive does not meet their expectations, and customers who do not complain but are unhappy with their service (the “silent sufferers”). If CSPs are unable to identify these customers in real time, there is an increasing risk that their marketing team is unable to effectively target sweeteners to the customers who are most likely to churn based on poor network service or simply wrong expectations.
Understanding real-time customer feedback has therefore become imperative to reduce churn. The real challenge then becomes how to use it in the most cost-effective way. Fixing network issues carries a high cost of intervention but not every fault in the network impacts service. So how can we identify the most service-impacting faults?
The answer is linking real-time customer feedback with real-time network data.
The use of real-time customer feedback and proactive care solutions will have a transformative effect on customer churn. Integration of proactive care based on real-time customer sentiment will ensure the costs of network optimization are justified by the substantive reduction in customer churn and optimization of spend on network assets.
Solution
By identifying unhappy customers early, whether or not they report an issue, telco companies would be able to:
- Optimize spend to minimize churn by using budget effectively on specific, targeted outbound comms that provide “sweeteners” to customers who are frustrated with their service due to network issues
- Create a framework of service offerings that appropriately target and address actual customer frustration levels—both proactively and reactively
- Optimize costs through the prioritization of resolution of the highest impact faults (that is, those faults creating the highest number of frustrated customers or the greatest intensity of frustration)
The proposed solution aims to help CSPs prioritize incident resolution based on how incidents negatively impact subscribers' sentiment. By prioritizing the resolution of incidents on cells that see the largest drop in overall sentiment, CSPs increase the effectiveness of their operations department in upholding customer satisfaction, contributing to limiting churn and reducing first-line support costs.
The main characteristics of the solution are:
- At scale—the solution scales to support the entire subscriber base
- Real-time—the entire sentiment-gathering process and the actions it spawns are performed in real time
- Pattern sensitive—incidents are not assessed individually; patterns and correlations are evaluated
- Serverless—entirely serverless; you pay only for what you consume, with no upfront commitment
- In-context engagement—engagement with subscribers is established based on their performance history
This first blog post focuses on the first stage of the solution: how network and service incidents are ingested and analyzed (data ingestion), stored (data lake), and processed to track every subscriber's incident performance in real time (event handler and subscriber DB). The section covered is highlighted in the following diagram.
Here are the functional steps of the solution covered in this first blog post:
- Service records (CDR, xDR) are streamed in real time by OSS probe vendors into AWS
- Service incidents are tracked for each unique subscriber (phone number), and their frequency is quantified in real time over variable time windows
- A data lake stores all incidents and the associated frequency data
- Every subscriber's incident profile is kept up to date in an Amazon DynamoDB table
Technical description
The solution demonstrated in this post is built in the Europe (London) Region. You can choose any other AWS Region where the following services are available:
- Amazon Kinesis Data Streams
- Amazon Kinesis Data Analytics
- Amazon Kinesis Data Firehose
- Amazon S3
- AWS Lambda
- Amazon Simple Queue Service (Amazon SQS)
- Amazon DynamoDB
For more information about AWS Regions and where AWS services are available, visit the Region Table.
The following prerequisites must be in place to build this solution:
- An AWS account
- The AdministratorAccess policy granted to the IAM identity you use (for production, you should restrict access as needed)
- Event records about CSP service performance fed by a third-party OSS probe-based monitoring solution
Data Ingestion
The following diagram illustrates the data ingestion block of the architecture. It is tasked with capturing event records from the probe-based monitoring solution, isolating service incidents, computing incident rates for every subscriber in real time, and delivering enriched incident records to the data lake.
Data Sources
This post uses the Kinesis Data Generator tool to simulate event records. There are four types of event records, whose details are provided in the following table:
| Event Record name | Network protocol of origin | Record Structure | Numerosity |
|---|---|---|---|
| Call—Signaling | Call Control | Timestamp, MSISDN, type, cell, status | One record per call |
| Call—Media | RTP | Timestamp, MSISDN, type, cell, status | At least one record per call |
| Video—User Plane | HTTP, Streaming protocols | Timestamp, MSISDN, type, cell, status | One record per streamed video |
| Web Browsing—User Plane | HTTP | Timestamp, MSISDN, type, cell, status | One record per browsed webpage* |
- Event Record name—the name of the record
- Network protocol of origin—Layer 7 network protocol containing the information reported in the event record. The list of options provided is not exhaustive.
- Record Structure—the structure of the event record as seen in this post
- Timestamp—time when the event happened on the CSP network
- MSISDN—subscriber identifier (phone number)
- Type—the type of record (one of the four event record names)
- Cell—cell ID uniquely identifying a 2G/3G/4G/5G cell. This is assumed to be extracted from the network's control plane protocol at either access or core interfaces, correlated with the service event, and enriched into the event record. This logic is implemented by the OSS probe vendor.
- Status—termination status of the service event. For example: failed, success, dropped, etc.
- Numerosity—number of records per service event
* Definition of a webpage is subject to the OSS probe vendor’s interpretation
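For reference, the following minimal Python sketch shows one Call—Media event record following the structure above. The field names, value formats, and status vocabulary are illustrative assumptions rather than the exact output of the Kinesis Data Generator templates referenced below.

```python
from datetime import datetime, timezone

# Illustrative Call—Media event record (field names and values are assumptions)
call_media_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),  # time of the event on the CSP network
    "msisdn": "447700900123",                              # subscriber identifier (phone number)
    "type": "CALL_MEDIA",                                  # one of the four event record types
    "cell": "CELL-0042",                                   # cell ID enriched by the OSS probe vendor
    "status": "dropped",                                   # termination status: success, failed, dropped, ...
}
```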
Copy the code found at GitHub (KDG_CALL_DROP, KDG_CALL_QUALITY, KDG_LOW_BITRATE, KDG_VIDEO_STALLING) for the Kinesis Data Generator to replicate the incident patterns I have used for this blog post.
Amazon Kinesis Data Streams
Four data streams are created, one per event record type. This arrangement spawns four parallel chains of Kinesis services, whose output is written into the data lake. The end-to-end creation process detailed below must be implemented for all four branches independently.
To create a Kinesis data stream, complete the steps listed in Creating a Stream via the AWS Management Console. Enter the number of shards according to your requirements; for this post, Provisioned capacity mode with one shard is selected. Enable server-side encryption.
The following screenshot shows one of the four data streams, capturing the Call—Media event record. The Call—Media stream is used as an example throughout the data ingestion section. Repeat the same procedure for the other streams, namely the Call—Signaling, Video—User Plane, and Web Browsing—User Plane event records.
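If you prefer to script the creation of the four data streams instead of using the console, the following boto3 sketch creates one provisioned stream per event record type and enables server-side encryption with the AWS managed key. The stream names are assumptions for this post.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")

# One data stream per event record type (names are illustrative)
stream_names = [
    "call_signaling_stream",
    "call_media_stream",
    "video_user_plane_stream",
    "web_browsing_stream",
]

for name in stream_names:
    # Provisioned capacity mode with a single shard, as in the console walkthrough
    kinesis.create_stream(StreamName=name, ShardCount=1)

for name in stream_names:
    # Wait until the stream is ACTIVE before enabling server-side encryption
    kinesis.get_waiter("stream_exists").wait(StreamName=name)
    kinesis.start_stream_encryption(
        StreamName=name,
        EncryptionType="KMS",
        KeyId="alias/aws/kinesis",  # AWS managed key
    )
```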
Amazon Kinesis Data Analytics
Create a Kinesis Data Analytics application
To create an Amazon Kinesis Data Analytics application, complete the steps listed in its SQL Developer Guide. Select the SQL runtime option. Enter tags as desired.
Configure source input
To configure a streaming source as input, complete the following steps from the same console as the previous section:
- Select Choose source
- Select Kinesis data stream
- Select the relevant choice from the dropdown menu. Following the flow of the Call—Media event record started in the previous section, the data stream previously considered is set as the Streaming data source on the configuration page of the newly created analytics application.
- Select Disabled in the Record preprocessing with Lambda section
- Select the Create IAM role option in the Access permission section
- Choose Discover schema in the Schema section. Schema is automatically discovered provided event records are being ingested while the discovery takes place.
- Inspect the schema to verify it matches the one shown in the screenshot below.
Add real-time SQL application
To add a real-time analytics application, complete the following steps:
- On the application hub page, choose Go to SQL editor.
- When asked whether you would like to start your application, choose Yes, start application.
- Copy the SQL code found at GitHub (call drop, call quality, low bitrate, video stalling) and follow the instructions contained in the README file. The SQL application processes the input data stream in real time and reports statistics and other relevant information to an output stream. The SQL application performs the following steps:
- It isolates incident records from all event records.
- On an individual subscriber basis (MSISDN), it calculates the incident ratio over a 15-minute window and the incident ratio over a 60-minute window.
- It includes record attributes and calculated incident metrics in an output in-application stream (see the illustrative sketch after this list).
- Any other record structure can be supported by modifying the Amazon Kinesis Data Analytics SQL application code.
- Choose Save and run SQL. Verify that the Output_Stream in-application stream resembles the record structure seen in the screenshot below.
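The SQL code itself is in the GitHub repository. Purely as a rough illustration of the windowing logic it implements, the following plain-Python sketch computes, for each incoming record, the per-MSISDN incident ratio over trailing 15-minute and 60-minute windows. The field names and the set of statuses treated as incidents are assumptions made for this sketch, not the actual SQL application.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOWS = {"ratio_15min": timedelta(minutes=15), "ratio_60min": timedelta(minutes=60)}

# Per subscriber (MSISDN), keep the (timestamp, is_incident) pairs seen so far
history = defaultdict(deque)

def incident_ratios(record):
    """Return the per-subscriber incident ratios for one incoming event record.

    record: dict with 'timestamp' (ISO 8601), 'msisdn', and 'status' fields,
    matching the event record structure described earlier.
    """
    ts = datetime.fromisoformat(record["timestamp"])
    is_incident = record["status"] in ("failed", "dropped")  # assumed incident statuses
    events = history[record["msisdn"]]
    events.append((ts, is_incident))

    # Evict events older than the largest window to bound memory per subscriber
    horizon = ts - max(WINDOWS.values())
    while events and events[0][0] < horizon:
        events.popleft()

    ratios = {}
    for name, window in WINDOWS.items():
        in_window = [inc for t, inc in events if t >= ts - window]
        ratios[name] = sum(in_window) / len(in_window) if in_window else 0.0
    return ratios
```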
Add a destination
To connect an in-application stream to a Kinesis data stream in order to continuously deliver SQL results, complete the following steps in the Destination—optional section:
- Choose Connect new destination.
- On the subsequent page, choose Kinesis data stream.
- Choose the Create new button.
- To create a Kinesis data stream, complete the steps listed in the Developer Guide under Creating a Stream: To create a data stream using the console. Enter the number of shards according to your requirements; for this post, Provisioned capacity mode with one shard is selected. Enable server-side encryption.
- In the In-application stream section, select Choose an existing in-application stream
- Scroll down and select the Output_Stream option
- Choose CSV in the Output format selection
- Select the Create IAM role option in the Access permission section
- Choose Save and continue
Encrypt data at rest
Follow the instructions at How Do I Get Started with Server-Side Encryption? to set encryption at rest for your data in Amazon Kinesis Data Analytics.
Amazon Kinesis Data Firehose
Create an Amazon Kinesis Data Firehose Delivery Stream
To create an Amazon Kinesis Data Firehose delivery stream, complete the steps listed in its Developer Guide. More specifically:
- Type a Delivery stream name
- Choose Kinesis Data Stream as Source
- Select the Kinesis data stream previously created from the dropdown menu
- Choose Disabled in the Transform source records with AWS Lambda section
- Choose Disabled in the Convert record format section
- Choose Amazon S3 as Destination
- Choose Create new to create a new bucket
- Enter a Bucket name, in this post called “event_bucket”
- Leave 5 MiB as Buffer size
- Adjust the Buffer interval to suit your real-time requirements
- Enter the following prefix based on the stream: CALL_DROP/, CALL_QUALITY/, LOW_BITRATE/, VIDEO_STALLING/
- Enable S3 compression as desired
- Enable S3 encryption
- Enable Error logging
- Select the Create IAM role option in the Access permission section
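If you prefer to script the console steps above, the following boto3 sketch creates one delivery stream that reads from an analytics output stream and writes to the event bucket under its incident prefix. The delivery stream name, IAM role, and stream ARNs are placeholders for this illustration, not resources created elsewhere in this post.

```python
import boto3

firehose = boto3.client("firehose", region_name="eu-west-2")

# ARNs below are placeholders for the role, source stream, and bucket created earlier
firehose.create_delivery_stream(
    DeliveryStreamName="call_drop_delivery_stream",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:eu-west-2:AWSaccountnumber:stream/call_drop_output_stream",
        "RoleARN": "arn:aws:iam::AWSaccountnumber:role/firehose_delivery_role",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::AWSaccountnumber:role/firehose_delivery_role",
        "BucketARN": "arn:aws:s3:::event_bucket",
        "Prefix": "CALL_DROP/",                              # one prefix per delivery stream
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",
    },
)
```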
Data lake
The following diagram illustrates the data lake block of the architecture. It receives and stores all the event types (incidents, engagement, feedback, sentiment) over time, constituting the data source queried by the offline analytics section. For every event written to the data lake, one S3 notification is spawned, triggering the event handler section.
Object Storage
The Amazon S3 bucket was already created during the Amazon Kinesis Data Firehose setup in the data ingestion phase.
Amazon S3 Data Structure
In this section, we are defining the structure of the Amazon S3 bucket to store all incident, engagement, feedback, and sentiment records.
- Open the Amazon S3 bucket previously created, in this post called “event_bucket”
- Choose Create folder to create three distinct folders
- The three folders are named
- Engagement
- Feedbacks
- Sentiment
- Choose Enable in the Server-side encryption section
The structure of the Amazon S3 bucket will appear as illustrated below:
The CALL_DROP, CALL_QUALITY, LOW_BITRATE, and VIDEO_STALLING folders contain incident records. These folders map to the four Amazon Kinesis Data Firehose delivery streams and are created automatically as soon as the first record is written.
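As an example of how the delivered objects look, the following short boto3 sketch lists the keys under one of the incident prefixes. The date-based part of each key is generated by Kinesis Data Firehose (by default a UTC YYYY/MM/DD/HH path followed by the delivery stream name and a random suffix); the example key in the comment is illustrative.

```python
import boto3

s3 = boto3.client("s3")

# List the incident records delivered under one of the four prefixes
response = s3.list_objects_v2(Bucket="event_bucket", Prefix="CALL_DROP/")
for obj in response.get("Contents", []):
    # Example key: CALL_DROP/2021/11/03/14/<delivery-stream-name>-1-2021-11-03-14-05-12-<random>
    print(obj["Key"], obj["Size"])
```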
Access and Permission
Navigate to the Permissions section of the S3 bucket and verify that public access is disabled.
Managing storage lifecycle
Manage your storage lifecycle to retain only the required data and only long enough to satisfy business functionality.
Bucket versioning
Consider enabling bucket versioning. With versioning you can recover more easily from both unintended user actions and application failures.
Bucket access logging
Consider logging requests using server access logging in order to enable auditing.
Encryption in transit
Add the following bucket policy by following the steps in Adding a bucket policy using the Amazon S3 console.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::event_bucket",
                "arn:aws:s3:::event_bucket/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
Event Handler
The following diagram illustrates the event handler block of the architecture. It is a serverless event manager pipeline whose objective is to parse events being written to the data lake, extract relevant information, and update the subscriber DB accordingly. The block is triggered by the Amazon S3 event notification.
For illustrative purposes, service instance names are shortened in the diagram above.
Lambda1 is called S3-get-object
Lambda2 is called SQS-Poller
Amazon SQS FIFO queue is called incident_queue.fifo
Lambda1: S3-get-object
Create Lambda function
Complete the steps in Create a Lambda function with the following amendments:
- Type S3-get-object-NEW in the Function name field
- Select Python 3.9 as the runtime version
- Keep the default execution role selection
Upload the code
Complete the following steps:
- Select the newly created Lambda function from the main dashboard where all Lambda functions are displayed
- Beneath the Function overview panel, select the Code tab
- In the Code Source panel, copy and paste the code found at GitHub and follow the instructions contained in the README file.
Set up Trigger
Complete the following steps:
- In the Function overview panel, choose Add trigger
- Select S3 in the Trigger configuration panel
- Select the S3 bucket name previously created in the data lake block from the dropdown menu of the Bucket field (in this post called “event_bucket”)
- Select All object create events in the Event type dropdown menu
- Tick the Enable trigger box
- Choose Add
Set up Destination
No destination is configured for this Lambda function.
Permissions
Following the principle of least privilege, complete the following steps:
- Beneath the Function overview panel, select the Configuration tab, then navigate to the Permissions section (as illustrated below)
- Select the role name within the Execution role panel to open the IAM console page
- Three permissions policies must be enabled, whose JSON representation is reported here, where AWSaccountnumber is to be replaced with your AWS account number:
- Get and List Objects from Amazon S3 bucket previously created, in this post called “event_bucket”
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::event_bucket" } ] }
- Send Message to incident_queue.fifo
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "sqs:SendMessageBatch", "sqs:SendMessage" ], "Resource": ["arn:aws:sqs:eu-west-2:AWSaccountnumber:incident_queue.fifo" ] } ] }
- Lambda Execution Role to log events
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:eu-west-2: AWSaccountnumber:*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:eu-west-2: AWSaccountnumber:log-group:/aws/lambda/S3-get-object-NEW:*" ] } ] }
Environment Variables
Still in the Configuration tab, navigate to the Environment variables section. Configure the following environment variable by replacing AWSaccountnumber with your account number:
- Key = queue_url, Value = https://sqs.eu-west-2.amazonaws.com/AWSaccountnumber/incident_queue.fifo
FIFO SQS: incident_queue.fifo
Create an Amazon SQS queue
Complete the steps in Creating an Amazon SQS queue (console), with the following specifics:
- At point 3, choose FIFO
- At point 4, type the name “incident_queue.fifo”
- At points 5a to 5e, keep the default choices
- At point 5f, Enable content-based deduplication
- At point 6, choose Basic method
- At Define who can send messages to the queue selection, choose Only the queue owner
- At Define who can receive messages from the queue selection, choose Only the queue owner
- At point 7, enable encryption
- At point 8, enable dead-letter queue
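If you prefer to script the console steps above, a minimal boto3 sketch follows. It creates the FIFO queue with content-based deduplication and server-side encryption using the AWS managed key; the dead-letter queue and its redrive policy are omitted for brevity.

```python
import boto3

sqs = boto3.client("sqs", region_name="eu-west-2")

response = sqs.create_queue(
    QueueName="incident_queue.fifo",
    Attributes={
        "FifoQueue": "true",                  # required for .fifo queue names
        "ContentBasedDeduplication": "true",  # as selected in the console steps
        "KmsMasterKeyId": "alias/aws/sqs",    # server-side encryption with the AWS managed key
        # A RedrivePolicy pointing at a dead-letter queue would also be configured here
    },
)
print(response["QueueUrl"])
```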
Why Amazon SQS FIFO Queues
An Amazon SQS FIFO queue is required to ensure that messages are processed in the order in which they were added to the queue. Lambda1 is triggered by an Amazon S3 notification generated when an object is put into Amazon S3. This object contains multiple rows, each row being one incident that happened in a 5-minute window. Lambda1 parses the content of the object and writes one message to the Amazon SQS queue per row. For example, in this screenshot taken from Amazon CloudWatch, one Lambda1 invocation wrote 687 SQS messages.
Among these 687 incidents that happened in a 5-minute window, two incidents might have happened for the same subscriber. It is important that this specific subscriber's record in DynamoDB is updated with the two incidents in the right order.
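The function code used for this post is in the GitHub repository. Purely to illustrate the pattern just described, here is a minimal sketch of such a handler. It assumes CSV rows with the MSISDN in the second column and uses the MSISDN as the FIFO message group ID so that per-subscriber ordering is preserved; both the column position and the grouping choice are assumptions, not necessarily what the published code does.

```python
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = os.environ["queue_url"]

def lambda_handler(event, context):
    """Minimal sketch: parse the new S3 object and enqueue one SQS message per row."""
    for notification in event["Records"]:
        bucket = notification["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(notification["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        for row in body.splitlines():
            if not row.strip():
                continue
            msisdn = row.split(",")[1]   # assumed column position of the MSISDN
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=row,
                MessageGroupId=msisdn,   # keeps per-subscriber ordering in the FIFO queue
            )
```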
Lambda2: SQS-Poller
Create Lambda function
Complete the steps in Create a Lambda function with the following amendments:
- Type “SQS-Poller1-NEW” in the Function name field
- Select Python 3.9 as runtime version
- Keep the default execution role selection
Upload the code
Complete the following steps:
- Select the newly created Lambda function from the main dashboard where all Lambda functions are displayed
- Beneath the Function overview panel, select the Code tab
- In the Code Source panel, copy and paste the code found at GitHub and follow the instructions contained in the README file.
Set up Trigger
Complete the following steps:
- In the Function overview panel, choose Add trigger
- Select SQS in the Trigger configuration panel
- Select the incident_queue.fifo from the dropdown menu of the SQS Queue field
- Type “1” in the Batch size panel
- Tick the Enable trigger box
- Choose Add
Set up Destination
No destination is configured for this Lambda function.
Permissions
Following the principle of least privilege, complete the following steps:
- Beneath the Function overview panel, select the Configuration tab, then navigate to the Permissions section (as illustrated below)
- Select the role name within the Execution role panel to open the IAM console page
- Three permissions policies must be enabled, whose JSON representation is reported here, where AWSaccountnumber is to be replaced with your AWS account number:
- Get and Update Items on Amazon DynamoDB Subscriber_table
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "dynamodb:GetItem", "dynamodb:UpdateItem" ], "Resource": [ "arn:aws:dynamodb:eu-west-2:AWSaccountnumber:table/Subscriber_table" ] } ] }
- Receive from and Delete message on Amazon SQS queue incident_queue.fifo
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "sqs:DeleteMessage", "sqs:GetQueueAttributes", "sqs:ReceiveMessage" ], "Resource": [ "arn:aws:lambda:eu-west-2:AWSaccountnumber:function:SQS-Poller1-NEW", "arn:aws:sqs:eu-west-2:AWSaccountnumber:incident_queue.fifo" ] } ] }
- Lambda Execution Role to log events
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:eu-west-2: AWSaccountnumber:*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:eu-west-2: AWSaccountnumber:log-group:/aws/lambda/SQS-Poller1-NEW:*" ] } ] }
Environment Variables
Still in the Configuration tab, navigate to the Environment variables section. Configure the following environment variable by replacing AWSaccountnumber with your account number:
- Key = queue_url, Value = https://sqs.eu-west-2.amazonaws.com/AWSaccountnumber/incident_queue.fifo
Subscriber DB
The following diagram illustrates the subscriber DB block of the architecture. It is based on an Amazon DynamoDB table, whose objective is to store incidents, engagement, feedback, and sentiment per subscriber. The table is updated following new events and triggers the engagement handler through a DynamoDB stream event.
Amazon DynamoDB
Create an Amazon DynamoDB table
Complete the following steps to create an Amazon DynamoDB table:
- Open the DynamoDB console.
- Choose Create Table.
- In the Create DynamoDB table screen, do the following:
- In the Table name box, enter “Subscriber_table.”
- In the Partition key box, for the Primary key, enter “PhoneNumber.” Set the data type to String.
- In the Table setting, select Use default settings.
- Choose Create.
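If you script the table creation instead of using the console, the following boto3 sketch creates the same table and also folds in the on-demand capacity and NEW_IMAGE stream settings described in the next two sections.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-2")

dynamodb.create_table(
    TableName="Subscriber_table",
    AttributeDefinitions=[{"AttributeName": "PhoneNumber", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PhoneNumber", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",                                   # on-demand capacity mode
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)
```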
Set Capacity
Complete the following steps to set the Capacity to meet your traffic requirements:
- Navigate to the Capacity tab of the Subscriber_table just created
- Choose the on-demand option. This choice will remove any performance bottlenecks, albeit with cost implications.
- Choose Save.
Enable DynamoDB stream
Complete the following steps to enable DynamoDB stream functionality:
- Navigate to the Overview tab of the Subscriber_table just created.
- Choose Manage DynamoDB stream.
- Choose New Image in the Manage Stream panel.
- Choose Enable
Database structure
The Subscriber_table populates itself automatically as soon as the SQS-Poller1-NEW Lambda function starts writing to it. The resulting table structure appears as in the following extract.
Every event pertaining to a single PhoneNumber updates the same item. Events are captured and grouped into lists of map objects, named after the nature of the event (engagement, feedback, incident, sentiment).
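To illustrate how a new incident is appended to a subscriber's item, the following boto3 sketch uses an UpdateItem call with list_append. The attribute names and incident fields are assumptions made for this illustration and may differ from the published SQS-Poller code.

```python
from datetime import datetime, timezone

import boto3

table = boto3.resource("dynamodb", region_name="eu-west-2").Table("Subscriber_table")

# Illustrative incident entry; field names are assumptions for this sketch
incident = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "type": "CALL_DROP",
    "cell": "CELL-0042",
    "ratio_15min": "0.25",
    "ratio_60min": "0.10",
}

# Append the incident to the subscriber's incident list, creating the list if it does not exist yet
table.update_item(
    Key={"PhoneNumber": "447700900123"},
    UpdateExpression="SET #inc = list_append(if_not_exists(#inc, :empty), :new)",
    ExpressionAttributeNames={"#inc": "incident"},
    ExpressionAttributeValues={":new": [incident], ":empty": []},
)
```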
Data retention strategy
The DynamoDB Time to Live (TTL) feature allows you to set an expiration timestamp on the items written to a given DynamoDB table, so that you retain only the required data and only long enough to satisfy business functionality.
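Enabling TTL is a one-off setting on the table. The sketch below does it with boto3 and assumes an expires_at attribute (a Unix epoch timestamp in seconds) is written alongside each item; the attribute name is an assumption for this post.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-2")

# Items whose 'expires_at' attribute (epoch seconds) is in the past are deleted
# automatically by DynamoDB once TTL is enabled
dynamodb.update_time_to_live(
    TableName="Subscriber_table",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
```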
Conclusion
AWS services provide CSPs with the ability to build a customer in-context engagement solution that meets the needs of their evolving operations.
In this first post, we have explored the first section of an end-to-end solution, which tracks every subscriber's incidents in real time. This section constitutes the foundation for collecting and analyzing incident occurrence profiles and provides a single pane of glass for tracking subscribers' quality of service.
The next blog post in this series will explore how CSPs can utilize this foundational layer to validate network and service incidents directly with subscribers in real time. By directly capturing their sentiment following recurring incident patterns, CSPs can prioritize operations with the objective of reducing churn and minimizing the strain on first-line support.
References
- https://www.globenewswire.com/news-release/2020/09/28/2099863/0/en/Global-Telecommunications-Network-Operators-Market-Review-Q2-2020-Capex-Drops-to-10-Year-Low-Revenues-Sink-Amidst-Spread-of-COVID-19-Pandemic.html
- https://www.computerweekly.com/blog/The-Full-Spectrum/How-churn-is-breaking-the-telecoms-market-and-what-service-providers-can-do-about-it
- https://www.invespcro.com/blog/customer-acquisition-retention/
- https://datahub.analysysmason.com/dh/
- https://www.ofcom.org.uk/research-and-data/multi-sector-research/cmr/cmr-2020/interactive
Contribution
- Business Case – Ludovica Chiacchierini and Tom Edwards
- Technical description – Christian Finelli (AWS) and Angelo Sampietro (AWS)
- Intro, Solution, Conclusion – Christian Finelli (AWS), Angelo Sampietro (AWS), Ludovica Chiacchierini, Tom Edwards