Research

Harmony [PDF]
Computing Frontiers, May 2013

Harmony is a technique for extracting the multiprocessor scheduling policy from commodity operating systems. This technique combines high-level synthetic workloads with low-level instrumentation to fingerprint an operating system's multiprocessor scheduling policy. Harmony can be used to detect simple aspects of multiprocessor scheduling policy, such as the expected latency in detecting processor load imbalances. The tool can also be used to infer information about more complex multiprocessor scheduling behavior, such as how heterogeneous workloads are distributed amongst processors. We demonstrate Harmony's ability to extract multiprocessor scheduling policy by performing an analysis of three Linux schedulers: O(1), CFS, and BFS. Using Harmony, we discover some interesting facets of multiprocessor scheduling policy under Linux. Most intriguing, perhaps, is BFS's policy of providing increased processor affinity to low-priority processes.
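The core idea of combining synthetic workloads with instrumentation can be illustrated with a small sketch. This is not Harmony's actual code; it is a hypothetical example of the analysis step, assuming a trace of which CPU each synthetic process occupied at each sampling interval (the function names and the sample trace are illustrative only).

```python
# Hypothetical sketch of Harmony-style trace analysis (not the actual tool).
# "samples" is a list of per-tick lists; each inner list gives the CPU id of
# every synthetic test process at that sampling tick.

def imbalance_latency(samples, interval_ms):
    """Estimate load-imbalance detection latency: the workload starts with
    all CPU-bound processes deliberately packed onto CPU 0, and we measure
    the time until the scheduler first spreads them across more than one CPU."""
    for tick, cpus in enumerate(samples):
        if len(set(cpus)) > 1:           # first tick where a migration occurred
            return tick * interval_ms
    return None                          # imbalance was never corrected

def migration_count(samples):
    """Total number of times any process changed CPUs between ticks,
    a rough proxy for how strongly the scheduler values processor affinity."""
    moves = 0
    for prev, cur in zip(samples, samples[1:]):
        moves += sum(1 for a, b in zip(prev, cur) if a != b)
    return moves

# Example trace: 3 CPU-bound processes sampled every 10 ms. All start on
# CPU 0; by the third sample one has been migrated to CPU 1.
trace = [[0, 0, 0], [0, 0, 0], [0, 1, 0], [0, 1, 2]]
print(imbalance_latency(trace, interval_ms=10))  # -> 20
print(migration_count(trace))                    # -> 2
```

In the real system the trace would come from low-level instrumentation of where processes actually run; the point of the sketch is only that simple statistics over such traces (detection latency, migration frequency) are enough to fingerprint policy differences between schedulers.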


Towards Transparent CPU Scheduling [PDF]
Doctoral Dissertation, August 2011

In this thesis we propose using the scientific method to develop a deeper understanding of CPU schedulers; we use this approach to explain and understand the sometimes erratic behavior of CPU schedulers. This approach begins with introducing controlled workloads into commodity operating systems and observing the CPU scheduler's behavior. From these observations we are able to infer the underlying CPU scheduling policy and create models that predict scheduling behavior. We have made two advances in the area of applying scientific analysis to CPU schedulers. The first, CPU Futures, is a combination of predictive scheduling models embedded into the CPU scheduler and a user-space controller that steers applications using feedback from these models. We have developed these predictive models for two different Linux schedulers (CFS and O(1)), based on two different scheduling paradigms (timesharing and proportional-share). Using three different case studies, we demonstrate that applications can use our predictive models to reduce interference from low-importance applications by over 70%, reduce web server starvation by an order of magnitude, and enforce scheduling policies that contradict the CPU scheduler's. Harmony, our second contribution, is a framework and set of experiments for extracting multiprocessor scheduling policy from commodity operating systems. We used this tool to extract and analyze the policies of three Linux schedulers: O(1), CFS, and BFS. These schedulers often implement strikingly different policies. At a high level, the O(1) scheduler carefully selects processes for migration and strongly values processor affinity. In contrast, CFS continuously searches for a better balance and, as a result, selects processes for migration at random. BFS strongly values fairness and often disregards processor affinity.


CPU Futures [TechReport]

CPU Futures is a system designed to enable application control of scheduling for server workloads, even during system overload. CPU Futures contains two components: a novel in-kernel herald that anticipates application CPU performance degradation and a user-level feedback controller that responds to these predictions on behalf of the application. In combination, these two subsystems enable fine-grained application control of scheduling; with this control, applications can define their own policies for avoiding or mitigating performance degradation under overload. We implement CPU Futures within two different Linux schedulers, and show its utility by building two case studies on top of the system: Empathy, which limits the CPU interference caused by low-importance batch programs, and SheepDog, which prevents web requests from starving on a heavily loaded web server. Through experimentation, we find that CPU Futures is not only useful, but also has low overhead.
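The herald/controller split can be sketched with a small feedback loop. This is a hypothetical illustration modeled loosely on the Empathy case study described above; the function name, thresholds, and control policy are assumptions for the sketch, not the actual implementation (which embeds its predictive models in the kernel scheduler).

```python
# Hypothetical sketch of an Empathy-style user-space feedback controller.
# Each iteration consumes a slowdown prediction (as would come from the
# in-kernel herald) and adjusts the nice value of a low-importance batch
# process to limit its interference with important work.

def empathy_step(predicted_slowdown, nice, max_slowdown=1.5, max_nice=19):
    """One control iteration. predicted_slowdown is the forecast slowdown
    factor for important work; nice is the batch process's current nice
    value (0 = normal priority, 19 = lowest priority on Linux)."""
    if predicted_slowdown > max_slowdown and nice < max_nice:
        return min(nice + 5, max_nice)   # back off aggressively under pressure
    if predicted_slowdown < 1.1 and nice > 0:
        return nice - 1                  # cautiously restore priority when idle
    return nice

def run_controller(predictions, nice=0):
    """Drive the controller over a sequence of herald predictions."""
    for p in predictions:
        nice = empathy_step(p, nice)
    return nice

# Predicted slowdown spikes mid-run, then subsides; the controller raises
# the batch process's nice value and then slowly walks it back down.
print(run_controller([1.0, 2.3, 2.0, 1.05, 1.0]))  # -> 8
```

A real controller would apply the new value with setpriority(2) (or cgroup CPU shares) rather than returning it; the asymmetric policy, backing off quickly but restoring slowly, is a common feedback-controller design choice to avoid oscillation.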