David Durst

Final Year CS PhD Candidate at Stanford
I'm on the job market.
Advised by Kayvon Fatahalian and Pat Hanrahan
durst@stanford.edu
GitHub
LinkedIn
CV
Blog

Photo of David Durst

I am in the final year of my Ph.D. in Computer Science at Stanford University. I analyze human behavior traces and train models to imitate humans.

My research focuses on data systems, machine learning, and computer architecture. I developed a system for imitating human behavior within the AI performance constraints of a commercial video game. To create this system, I trained an efficient transformer-based learned movement controller, curated a dataset, built an in-memory data pipeline, reverse-engineered an integration interface to the game, and created quantitative metrics for evaluating similarity to the human behavior distribution. I deployed the controller in a bot and collaborated with industry partners to assess its commercial impact. This video shows the bots moving like humans as they play against themselves. My other research includes Aetherling, a dependently typed programming language that produces FPGA image processing accelerators with a scheduler that automatically trades off resource utilization and throughput.

I am advised by Professors Kayvon Fatahalian and Pat Hanrahan and was supported by an NSF Graduate Research Fellowship and a Stanford Graduate Fellowship in Science and Engineering.

Presentations

GDC 2023 - March 2023
The talk for the Hallucinations paper. In this talk, we explain that anti-cheat and cheat developers are locked in a cat-and-mouse cycle of generation and detection: anti-cheat developers struggle to improve behavior detectors while cheaters easily change behavior generators. We then propose hallucinations with configurable behavior. Hallucinations can reverse the cycle by enabling anti-cheat developers to generate behaviors that cheaters must detect. Finally, we demonstrate the effectiveness of our hallucinations by showing that three popular cheat programs fail to detect them in Call of Duty.

Stanford AHA Monthly Meeting - October 2020
Reconfigurable accelerators promise an exciting set of benefits compared to other processors in the cloud and on mobile devices. They can enable application implementations that are more parallel, more energy efficient, and lower latency. However, it can be challenging to predict the real-world situations where reconfigurability delivers these benefits. In this talk, I examine five benchmarks that represent workloads of interest to Adobe. These benchmarks show that there is a precise niche of applications that benefit from reconfigurability: applications that can be implemented in a manner that takes advantage of custom cache hierarchies and specialized functional units. For other applications, there are better ways to improve performance, including programming languages and compilers that efficiently use existing, non-reconfigurable accelerators with greater peak compute performance and memory bandwidth.

PLDI 2020 - June 2020
The conference talk for the Aetherling paper. In this talk, I focus on Aetherling's data-parallel IR with space- and time-types, a higher-level input language whose types are unaware of space and time, and a simple set of rewrite rules for converting programs from the higher-level language to the space-time IR.

Spark Summit East 2016 - February 2016
TopNotch is a framework for quality controlling big data through data quality metrics that scale up to large data sets, across schemas, and throughout large teams. TopNotch's SQL-based interface enables users across the technical spectrum to quality control data sets in their areas of expertise and to understand data sets from other areas. I was the project lead and main developer for TopNotch while I worked at BlackRock.

Spark-NYC Meetup - September 2015
This presentation addresses the disparity between the current and desired big data user experiences. In this presentation, I demonstrate a web application with a scatterplot matrix visualization that allows non-technical users to utilize Spark to analyze large data sets.

Publications and Preprint

This preprint is under submission and is available upon request.
David Durst et al.

ACM TECS 2023
Kalhan Koul, Jackson Melchert, Kavya Sreedhar, Leonard Truong, Gedeon Nyengele, Keyi Zhang, Qiaoyi Liu, Jeff Setter, Po-Han Chen, Yuchen Mei, Maxwell Strange, Ross Daly, Caleb Donovick, Alex Carsello, Taeyoung Kong, Kathleen Feng, Dillon Huff, Ankita Nayak, Rajsekhar Setaluri, James Thomas, Nikhil Bhagdikar, David Durst, Zachary Myers, Nestan Tsiskaridze, Stephen Richardson, Rick Bahr, Kayvon Fatahalian, Pat Hanrahan, Clark Barrett, Mark Horowitz, Christopher Torng, Fredrik Kjolstad, Priyanka Raina

PLDI 2020
David Durst, Matthew Feldman, Dillon Huff, David Akeley, Ross Daly, Gilbert Louis Bernstein, Marco Patrignani, Kayvon Fatahalian, Pat Hanrahan

DAC 2020
Rick Bahr, Clark Barrett, Nikhil Bhagdikar, Alex Carsello, Ross G. Daly, Caleb Donovick, David Durst, Kayvon Fatahalian, Kathleen Feng, Pat Hanrahan, Teguh Hofstee, Mark Horowitz, Dillon Huff, Fredrik Kjolstad, Taeyoung Kong, Qiaoyi Liu, Makai Mann, Jackson Melchert, Ankita Nayak, Aina Niemetz, Gedeon Nyengele, Priyanka Raina, Stephen Richardson, Raj Setaluri, Jeff Setter, Kavya Sreedhar, Maxwell Strange, James Thomas, Christopher Torng, Leonard Truong, Nestan Tsiskaridze, Keyi Zhang