AWS Quantum Technologies Blog
Australian Red Cross Lifeblood collaborates with AWS to optimize rostering
This post was contributed by Gili Rosenberg, Yaroslav Kharkov, Kyle Booth, Martin Schuetz, Brajesh Gupt, and Helmut Katzgraber from Amazon Web Services, and Jonathan Odes, Eka Tian, Andrew Clarke, Stephanie Ong, and Aaron Zhao from Lifeblood.
The Australian Red Cross Lifeblood (Lifeblood) is an Australian non-profit that collected more than 1.6 million blood donations in the last year (up 600,000 from 2022). Lifeblood is embarking on a digital transformation, using advanced computing to find innovative solutions to hard problems across the areas that Lifeblood serves, including blood and organ matching, tissue typing, breast milk, and other life-giving biological products.
Collecting blood donations would not be possible without the thousands of Lifeblood nurses across about 100 donor centers. However, ensuring that the donor centers are staffed with the appropriate number of nurses with the right level of expertise while considering other real-world factors is a hard combinatorial optimization problem.
In this post, we’ll tell you about the work Lifeblood did with help from AWS to explore how advanced computing can not only optimize nurse scheduling, but also lower the cost of collecting blood donations. In this proof-of-concept, Lifeblood was able to achieve a theoretical reduction of close to 7% in costs, or a reduction of 44% in the cost per blood donation when demand was doubled – critical for being able to keep up with increasing demand.
The Amazon Quantum Solutions Lab prepares customers – like Lifeblood – for a future with quantum computing while delivering value today – with advanced solutions, including optimization and machine learning.
Optimizing nurse scheduling
Today we’ll discuss the approach we took to optimizing nurse scheduling – a quantum proof of concept and a state-of-the-art classical optimization method based on constraint programming (CP).
We’ll outline our quantum proof of concept, using Amazon Braket, the quantum computing service of AWS.
Then we’ll describe our solution based on state-of-the-art classical optimization, which is able to solve the full industrial-scale problem in several hours.
Applying advanced computing to a manual process
Lifeblood currently uses a semi-manual procedure to produce staffing schedules. The process is time consuming, doesn’t take into account important factors (like anticipated donor-demand volumes), and uses set shifts rather than more granular time slots.
Lifeblood wanted to achieve several objectives:
- Mapping out the next steps in their journey towards becoming quantum-ready.
- Upskilling their workforce, so that they can apply quantum and classical techniques to the real-world problems they’re interested in.
- Gaining business intelligence – could they simulate a change to an operational variable and observe how it affects the cost per collection?
- Driving operational efficiency – albeit with synthetic data, they wanted to understand the optimization techniques required to theoretically reduce the cost per collection.
Lifeblood worked with AWS Professional Services and the Amazon Quantum Solutions Lab, and used Amazon Braket to achieve those goals.
We structured the project into three parallel tracks:
- Quantum proof of concept (POC)
- Classical optimization solution
- Cross-skilling and training
The classical track delivered a classical solution solving the full industrial-scale problem. The quantum POC used the QuEra neutral-atom quantum device on Amazon Braket to solve a simplified version of the scheduling problem. Both tracks delivered a series of learning sessions to transfer knowledge to Lifeblood’s technical staff.
The major goal
The objective of this optimization problem is to schedule donor center staff in a way that minimizes the cost of collecting blood donations from registered donors, while satisfying various constraints. In Figure 1, we provide an illustration of what a nurse schedule for a donor center might look like for a typical two-week period.
To describe the optimization problem in more detail, we must define the decision variables, constraints, and the objective function.
Decision variables: Our primary decision variables represent the assignment of nurses to 30-minute time slots. The time scale (horizon) is given as a number of 2-week periods.
Constraints: The nurses available to each center are employed through a mix of full-time, part-time, and casual contracts, and designated as registered nurses or nursing assistants. It’s also important to track whether a nurse is trained to provide CPR, and whether that nurse can be a session leader. The constraints dictate various requirements about how many nurses – and what type of nurses – should be rostered in each time slot:
- Maximum and minimum ratio of donors per nurse
- Minimum number of nurses of various types in each slot
- Minimum and maximum shift length, and minimum gap between shifts
- Number of days off in a 2-week period, and maximum number of consecutive days on
- Minimum hours per 2-week period
- Opening/closing times
- Requirements for setup slots (before opening) and cleanup slots (after closing)
Objective: The optimization is done over a given demand curve – the number of donors registered for each time slot (see Figure 2). That means the total number of collections is fixed, so minimizing the cost per collection can be accomplished by minimizing the total staffing cost. Base hourly cost is provided. There are also some situations in which a premium is paid, over and above the base cost, including weekend pay, afternoon pay, and overtime pay.
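To make the objective concrete, here is a minimal sketch of how a slot’s staffing cost might be computed. The rates and premium multipliers below are assumptions for illustration only; the post does not disclose Lifeblood’s actual pay structure.

```python
# Toy illustration of the objective: total staffing cost over a roster, with
# premium multipliers for weekend and afternoon slots (all rates assumed).
BASE_RATE = 40.0          # assumed base hourly cost
SLOT_HOURS = 0.5          # 30-minute slots

def slot_cost(n_nurses, weekend=False, afternoon=False):
    """Cost of staffing one 30-minute slot with n_nurses nurses."""
    rate = BASE_RATE
    if weekend:
        rate *= 1.5       # assumed weekend premium
    if afternoon:
        rate *= 1.15      # assumed afternoon premium
    return n_nurses * rate * SLOT_HOURS

roster = [
    (3, False, False),    # weekday morning slot, 3 nurses
    (2, False, True),     # weekday afternoon slot, 2 nurses
    (2, True, False),     # weekend morning slot, 2 nurses
]
total = sum(slot_cost(n, w, a) for n, w, a in roster)
print(total)  # 166.0
```

Since the demand curve fixes the number of collections, minimizing this total is equivalent to minimizing cost per collection.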
Quantum track
Let’s dig into how we model and solve a simplified version of this scheduling problem on quantum hardware. For this track we developed a POC hybrid algorithm (that is, both quantum and classical) geared towards today’s quantum devices.
Analog Hamiltonian Simulation
Analog Hamiltonian Simulation (AHS) is a paradigm in quantum computing that takes a different approach from the more widely used paradigm of programming with quantum circuits. Instead of a sequence of gates, each acting on only a couple of qubits at a time, an AHS program is defined by the time- and space-dependent parameters of the Hamiltonian that the device implements.
These devices can simulate non-trivial states of quantum matter and can solve classical combinatorial optimization problems. You can find out more about running AHS programs on QuEra’s Aquila device on Amazon Braket by checking out our Getting Started documentation and our example notebooks.
The key physical mechanism underpinning the properties of Rydberg atom devices is based on the Rydberg blockade. This phenomenon prevents two atoms that are close together (closer than the blockade radius) from being simultaneously excited. The Rydberg blockade is the key ingredient that makes neutral-atom devices well suited for certain (NP-hard) Maximum Independent Set problems.
The Hamiltonian of the Rydberg atom array represents the cost function, and reads as
H = −Δ(t) ∑ᵢ nᵢ + (Ω(t)/2) ∑ᵢ σᵢˣ + ∑ᵢ<ⱼ Vᵢⱼ nᵢnⱼ.
Here Δ(t) and Ω(t) are (classical) control parameters defining the annealing schedule; σᵢˣ is the X operator between the excited (Rydberg) state and the ground state of the i-th atom; Vᵢⱼ is the van der Waals interaction between Rydberg atoms i and j; and nᵢ is the occupation number for atom i: nᵢ = 0 if the atom is in the ground state and nᵢ = 1 if it is in the excited (Rydberg) state.
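In the classical limit (Ω = 0), the Hamiltonian reduces to a diagonal cost function over bitstrings, and its minimizer is a Maximum Independent Set of the blockade graph. The short brute-force check below illustrates this; the atom positions, detuning, and interaction coefficient are toy values we picked for illustration.

```python
import itertools

# Classical limit (Omega = 0) of the Rydberg Hamiltonian: the energy of a
# bitstring n is E(n) = -Delta * sum_i n_i + sum_{i<j} V_ij n_i n_j.
# Positions, Delta, and the interaction coefficient are assumed toy values.
positions = [0.0, 1.0, 2.0, 3.0]   # four atoms on a line
delta = 1.0                        # detuning: rewards excited atoms
c6 = 100.0                         # van der Waals coefficient, V_ij = c6 / r^6

def interaction(i, j):
    return c6 / abs(positions[i] - positions[j]) ** 6

def energy(bits):
    e = -delta * sum(bits)
    for i, j in itertools.combinations(range(len(bits)), 2):
        e += interaction(i, j) * bits[i] * bits[j]
    return e

# Brute-force the ground state over all 2^4 configurations.
best = min(itertools.product([0, 1], repeat=len(positions)), key=energy)
print(best)  # (1, 0, 0, 1): the excited atoms avoid the blockade penalty
```

The huge nearest-neighbor interaction plays the role of the blockade: any configuration exciting two adjacent atoms is heavily penalized, so the ground state is an independent set of the blockade graph.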
Next, we devised a problem formulation compatible with this Rydberg Hamiltonian, eventually allowing us to solve this problem on Rydberg quantum hardware.
Formulation
To demonstrate a POC quantum algorithm, we used a simplified formulation of the scheduling problem with a granularity of 8 hours, so that each day consists of morning, afternoon, and night shifts. In this simplified formulation, we included only the following hard constraints:
- Maximum and minimum ratio of donors-per-nurse
- Minimum gap between shifts
Geometric embedding and hybrid algorithm
Our approach to solving this scheduling problem with Rydberg atom arrays is based on a geometric embedding in which each atom is associated with one nurse at a particular shift. Atoms in the ground state correspond to nurses that are not scheduled, and atoms in the Rydberg state correspond to scheduled nurses. Because atoms representing consecutive shifts of the same nurse are placed within the blockade radius, they cannot both be excited – so the blockade mechanism automatically enforces the minimum gap between shifts. For simplicity, we assume a minimum gap of one shift.
A very important part of the scheduling problem is the demand coverage constraint. We proposed a new approach based on adding ancilla qubits in close proximity to atoms whose excited state should be penalized. An ancilla atom suppresses the probability of the Rydberg state in the corresponding data qubit, which lies within the blockade radius of the ancilla. Overall, this biases the solution towards fulfilling demand in slots where demand has not yet been satisfied.
Our hybrid algorithm then solves the (simplified) scheduling problem in an iterative approach, combining quantum execution with classical processing. Specifically, the schedule for each nurse is solved separately and the final solution is obtained by combining partial solutions. The candidate solutions are generated by sampling from the quantum processing unit (QPU) for each nurse. While iterating over nurses we dynamically update the shifts where the demand has been satisfied and place the ancilla atoms at the corresponding locations to block scheduling more nurses to these shifts. This iterative hybrid scheme is illustrated in Figure 3.
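The iterative scheme can be sketched classically as follows. This is our own mock-up: the QPU call is replaced by a random sampler, and the slot counts, demand values, and function names are all assumptions made for illustration.

```python
import random

# Classical mock of the iterative hybrid loop (QPU replaced by random sampling).
random.seed(0)

SLOTS = 6                       # e.g. 2 days x 3 shifts
DEMAND = [1, 1, 2, 1, 1, 2]     # nurses needed per slot (synthetic)
N_NURSES = 5

def sample_schedule(blocked):
    """Stand-in for one QPU shot: pick a shift pattern that respects the
    minimum-gap constraint (no two consecutive shifts) and avoids slots
    whose demand is already met (the role the ancilla atoms play)."""
    pattern = [0] * SLOTS
    for t in range(SLOTS):
        if t not in blocked and (t == 0 or pattern[t - 1] == 0) and random.random() < 0.5:
            pattern[t] = 1
    return pattern

coverage = [0] * SLOTS
schedules = []
for nurse in range(N_NURSES):
    # slots whose demand is satisfied get "ancilla-blocked"
    blocked = {t for t in range(SLOTS) if coverage[t] >= DEMAND[t]}
    pattern = sample_schedule(blocked)
    schedules.append(pattern)
    for t in range(SLOTS):
        coverage[t] += pattern[t]

print(coverage)
```

By construction, every sampled pattern respects the minimum gap, and the blocking step prevents over-staffing a slot once its demand is met.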
Due to the stochastic nature of quantum computing, we collect multiple samples from the QPU for each nurse, but during a single pass over all nurses we select only one sample per nurse before proceeding to the next. After a single pass, the demand constraint may still be violated, because the QPU does not – in general – succeed in minimizing the cost function in every sample.
To maximize the shot efficiency of the hybrid algorithm, we used a resampling scheme. After a single pass over all nurses, we cache the QPU samples and attempt to generate a variety of candidate solutions by drawing random samples from the cache. We track the list of feasible solutions and their costs during the resampling phase, and return the minimum cost after the resampling is completed.
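The resampling phase can be illustrated with a small sketch: cached per-nurse samples are combined at random, and the cheapest feasible combination is kept. The cache contents, cost function, and demand below are synthetic stand-ins, not Lifeblood data.

```python
import random

random.seed(1)

# Cached QPU samples per nurse (synthetic): each entry is a shift pattern.
cache = {
    0: [(1, 0, 1), (0, 1, 0)],
    1: [(0, 1, 0), (1, 0, 0)],
}
DEMAND = (1, 1, 1)

def cost(roster):
    # stand-in objective: total number of scheduled shifts
    return sum(sum(p) for p in roster)

best = None
for _ in range(50):
    # assemble a candidate roster by drawing one cached sample per nurse
    roster = [random.choice(cache[n]) for n in sorted(cache)]
    coverage = [sum(p[t] for p in roster) for t in range(len(DEMAND))]
    if all(c >= d for c, d in zip(coverage, DEMAND)):
        if best is None or cost(roster) < cost(best):
            best = roster

print(best)
```

The key point is shot efficiency: the 50 candidate rosters cost no additional QPU shots, because they reuse the cached samples.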
When scaling the problem size by increasing the number of slots and nurses we observed that the demand constraint is only satisfied in a small fraction of samples. However, the solution can often be fixed by performing local moves: for example, adding nurses for the slots with unsatisfied demand, or swapping shifts with the neighboring shift. We added this solution “recovery” step as the final stage of the hybrid algorithm. This improved the success rate of the algorithm significantly – both the fraction of candidate solutions with satisfied demand, and the distance from optimality (the “gap”).
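A minimal version of the "recovery" step, in the same spirit as the local moves described above, might look like the following. The function name and the greedy strategy are our sketch, not the exact implementation used in the project.

```python
def repair(schedules, demand):
    """Greedy 'recovery' sketch: for each slot with unmet demand, assign an
    additional nurse whose minimum-gap constraint still holds."""
    slots = len(demand)
    coverage = [sum(p[t] for p in schedules) for t in range(slots)]
    for t in range(slots):
        while coverage[t] < demand[t]:
            for p in schedules:
                left_ok = t == 0 or p[t - 1] == 0
                right_ok = t == slots - 1 or p[t + 1] == 0
                if p[t] == 0 and left_ok and right_ok:
                    p[t] = 1          # local move: add this nurse to slot t
                    coverage[t] += 1
                    break
            else:
                break  # no nurse can legally cover this slot
    return schedules

# Example: slot 1 has unmet demand; nurse 1 can legally pick it up.
schedules = [[1, 0, 0, 0], [0, 0, 0, 1]]
demand = [1, 1, 0, 1]
repaired = repair(schedules, demand)
print(repaired)  # [[1, 0, 0, 0], [0, 1, 0, 1]]
```

Local moves like this are cheap classical post-processing, which is why they raise the fraction of feasible candidates so dramatically.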
Results and discussion
We implemented the algorithm we just described as a Hybrid Job on Amazon Braket. We’ve summarized our results in Figure 4, showing the (relative) difference in cost between the best solution found with our quantum algorithm, and the optimum in percentage points for different numbers of slots and nurses.
We found the average difference from the optimal values across twelve problem instances (with 10-30 nurses and 6-30 slots) to be below 1%.
We found that the solution “recovery” step in the hybrid algorithm was crucial for achieving high-quality solutions. Adding solution “recovery” increased the fraction of feasible candidate solutions from ~0.1% to 80-90%, which improved the algorithm’s convergence, as a function of the number of resampling iterations, by 1000x.
Classical track
Formulation
We formulated the full industrial-scale optimization problem as a CP model and solved it with the state-of-the-art CP-SAT solver (part of OR-Tools). We used the demand curve shown in the problem statement (see Figure 2) to construct an optimization model. The model has 85k variables and 832k constraints.
Results and discussion
The schedule returned by CP-SAT after an hour showed a reduction of nearly 7% in the cost, compared with the estimated baseline. It had an optimality gap of about 1% – the difference between that solution and a proven lower bound on the cost. The solver guarantees that no schedule exists with a cost below that lower bound.
Next, we checked how the number of nurses rostered in each slot changed over time, compared with the demand (see Figure 2). More nurses are rostered than strictly necessary to cover the demand, due to the minimum-hours constraints. To reduce cost, the solver moved the hours beyond the minimum needed away from weekends and afternoons.
The fact that the number of rostered nurses exceeds the demand (see Figure 2) indicates that there is room for additional demand. This motivated us to increase the demand and check how this affected the cost (see Figure 5, left axis). We saw that the cost increased by only 12% when the demand was doubled (a 100% increase), so the cost per collection dropped drastically – a 44% reduction, since doubled demand is served at 1.12/2 = 0.56 of the original cost per collection (Figure 5, right axis).
Wrapping up
These two tracks, coupled with learning sessions, defined the beginning of Lifeblood’s quantum learning journey. We demonstrated a theoretical cost reduction of 7% – and a 44% reduction in cost per collection when doubling the demand. Although based on synthetic data, this hints at the theoretical efficiency gains and business intelligence Lifeblood might attain by putting an advanced scheduling solution in place.
If you have complex scheduling problems – or combinatorial optimization problems in general – and want to understand how state-of-the-art quantum and classical optimization could apply to your business, today or in the future, reach out to the Amazon Quantum Solutions Lab to start a conversation.