Will Crichton
Computer Science at Brown University
My goal is to empower people to use the full potential of computers through programming. To reach that potential, we need to combine a science of programming with a science of people. For me, these sciences are programming language theory and cognitive psychology, respectively. My research draws on these fields to make programming more usable, learnable, and powerful.
In Fall 2025, I am starting as an assistant professor at Brown University, where I am founding the Cognitive Engineering Lab. I am recruiting PhD students! If you are considering my group, talk to me in person at OOPSLA 2024 or contact me over email. Previously, I completed my PhD at Stanford, advised by Pat Hanrahan and Maneesh Agrawala. I am currently finishing up my postdoc at Brown with Shriram Krishnamurthi.
I do research primarily at the intersection of PL and HCI. I build systems like program slicers, document languages, and type system visualizers. I develop theories like working memory for programmers, psychometrics for programming languages, and type-safe templates for System F. I work on systems languages like Rust, proof assistants like Lean, and UI tools like the browser. My Rust research has been used by over 100,000 developers to date. My research garden (below) explains my current interests in greater detail.
Research Garden
This is where I grow my ideas. Click on the sections that interest you.
Concepts
Research interests focused on a broad vision for the future.
A cognitive theory of computational abstractions
Motivation
Programmers and CS researchers often claim that a language or system is "intuitive" or "for humans". Our field rests on assumptions like "structured programming is preferable to GOTO" and "static typing is better for large teams than dynamic typing."
And fundamentally we all believe that our choice of tools—our choice of computational abstractions—has a significant impact on our productivity, and on the correctness and performance of our programs. Would you choose to program a large system in assembly instead of C? I expect not!
This raises the question: how can we justify these beliefs? How can I argue that one set of abstractions is better than another? What are the key variables (task, programmer, environment, etc.) that affect this judgment? Programmers will often use metrics like lines-of-code, but we need better metrics. And while it's important to answer these questions for our existing beliefs, the longer term goal is to help programmers make predictions about the human factors of future systems before the systems are deployed. I am advocating for a predictive cognitive theory of abstraction, as opposed to the iterative design methods predominant in HCI.
My angle
I approach this problem from the lens of cognitive psychology: using cognitive theories (e.g., working memory, perception, planning) and cognitive experimental methods (e.g., free recall, eye tracking, think-alouds) to build an understanding of the cognitive aspects of programming. I focus on two angles:
- How does an abstraction influence a person's ability to decompose programming problems? A central task in programming is to take a problem ("build a website to do X", "analyze Y feature of a dataset") and break it down into pieces that correspond to features of the language or library at hand (e.g., for loops or List.map). I think a key issue is understanding what it means to think with an abstraction, so one could argue why an abstraction would be better or worse for decomposition.
  - An interesting domain to study is tabular data analytics, because there are dozens of meaningfully different ways to express the same concept. I wrote the Expressiveness Benchmark (2020) as a side project to experiment with different metrics for comparing the expressiveness of analytics languages and APIs.
- How do abstractions affect the load on cognitive resources during different programming tasks? People have low-level cognitive resources like perception, working memory, and long-term memory that contribute to the higher-level process of conscious thought. Abstractions can influence code's perceptual structure (e.g., reading order or visual salience) and mnemonic structure (e.g., how many facts are needed to understand a piece of code).
  - In The Role of Working Memory in Program Tracing (CHI 2021), I looked at how the most primitive abstractions of programming (variables, arithmetic) relate to working memory. I would love to continue this work with higher-level abstractions!
Project ideas
These ideas are suitable for a strong programmer with a deep interest in psychology, cognition, and human-centered design of programs. You should (a) have enough programming experience to have a good intuition for what makes programming hard, (b) have enough experience with UI tools to productively implement experimental interfaces, and (c) be prepared to read a lot of psychology books and papers.
- What is a better metric than lines-of-code for comparing the expressiveness / brevity / quality of languages or libraries? Here's a hypothesis: a proxy for the complexity of a program is the complexity of the argument for why the program is correct. You could operationalize this idea in a proof assistant: is the size-of-proof-of-correctness a better correlate for code complexity than lines-of-code?
- What are the principles of good examples for illustrating an abstraction? It is broadly true that abstract concepts are best explained through concrete examples, at least at first. But there is an infinite space of examples: how should a teacher (could be a person or an ML model) sample from this space? How should sequences of examples be used to progressively build a mental model of the abstraction in the learner's mind?
- Is the Sapir-Whorf hypothesis true for programming languages? That is, to what extent does learning a programming language affect how a person thinks about programming problems, even if they're using a different language? For example, if I learn about list processing primitives (map, filter, fold) and then go write for-loops in C, will I approach the same tasks differently? A toy version of the contrast is sketched below.
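Below is that sketch: a hypothetical toy task (my own example, written in Rust rather than C) expressed in the two vocabularies.

```rust
// A hypothetical toy task: sum the squares of the even numbers in a list,
// written in the two vocabularies. The open question is whether learning one
// vocabulary changes how you decompose the problem when forced to use the other.

fn sum_even_squares_functional(xs: &[i32]) -> i32 {
    xs.iter()
        .copied()
        .filter(|x| x % 2 == 0) // "filter" as a named concept
        .map(|x| x * x)         // "map" as a named concept
        .sum()                  // a "fold" specialized to addition
}

fn sum_even_squares_imperative(xs: &[i32]) -> i32 {
    // The same decomposition, but expressed as control flow rather than named operators.
    let mut total = 0;
    for x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_even_squares_functional(&xs), sum_even_squares_imperative(&xs));
}
```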
Technical foundations of technical communication
Motivation
Technical communication today (textbooks, academic papers, documentation, etc.) largely looks the same as it did a century ago: plain words wrapping static figures. This is a tragedy. Computers have the capacity to significantly improve technical communication in two ways:
- Multimodal representations of information. Prose is just one of many representations: charts, diagrams, audio, and video can be more cognitively effective for communicating information (depending on the task and reader). Computers have the capacity to both present and generate multimodal representations in ways that traditional documents cannot.
- Dynamic interfaces into information. Unlike a book, a computer can talk back. A person can interactively explore complex topics and receive explanations contextualized to their background. This offers a way out of the one-size-fits-all style of communication that is standard today.
In fact, computers have been able to do both of these things for decades. That raises the question: why don't we do these things more often? My thesis is that constructing multimodal dynamic documents ("rich documents") is simply too much effort. It is the domain of radically talented individuals like Grant Sanderson (3Blue1Brown), Bartosz Ciechanowski, and Bret Victor. Case in point: when machine learning researchers tried to produce such articles at Distill, they stopped because it was too much damn work:
We don't believe [anymore] that having a venue is the primary bottleneck to authors producing more Distill-style articles. Instead, we believe the primary bottleneck is the amount of effort it takes to produce these articles and the unusual combination of scientific and design expertise required.
My angle
I think we need to simultaneously make progress at each layer of the stack of document technologies:
- What are the most important capabilities of rich documents to design for? That is, if you could make one change to every document in the world, what would have the biggest impact in improving our ability to communicate? First, this means distinguishing flashy features from real cognitive impact. For example, animations are not always better than static pictures. Second, this means weighing the different trade-offs in document design. How much training do readers need in a given feature? How hard is it for authors to work with? Can it be rendered in the browser, on a phone, or on a Kindle?
  - For example, I first got thinking about documents because I was deeply dissatisfied with my ability to use LaTeX for basic kinds of readability improvements. I rewrote one of my PL papers as a website to try out different concepts for improving the academic paper format: A New Medium for Communicating Research on Programming Languages (HATRA @ SPLASH 2022).
- What is a stable format for reading and sharing rich documents? PDF has served us well for 30+ years, but it's time for something better. However, while the web technology stack (HTML + CSS + JS) can effectively implement most rich documents, it does not have a stable format for reading and sharing. I cannot easily download a rich document, read it offline, send it to a friend, upload it for peer review, and so on.
  - I implemented a small prototype of a web-based document format based on the EPUB file format. See: Portable EPUBs.
- How can we combine the brevity of document languages with the power of programming languages? Or put another way, what is the LaTeX / Markdown of the future? Raw HTML is not a pleasant format in which to hand-write documents, rich or otherwise. I want a document programming language that both (a) makes prose and common document elements easy to write, and (b) provides powerful and modern mechanisms for computation (i.e., not unhygienic macros à la TeX).
  - In A Core Calculus for Documents (POPL 2024), I laid out a basic formalism for describing what computation means within a document. A toy sketch of the underlying idea follows this list.
  - Nota is a document programming language that I have been tinkering on for some time.
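To give a flavor of the formalism's central idea, here is a toy sketch (my own illustration in Rust, not the actual calculus from the paper): a document is a tree that interleaves literal text with computation, and rendering evaluates the computed nodes down to plain text.

```rust
// An illustrative sketch of a document with embedded computation.

enum Doc {
    // Literal prose.
    Text(String),
    // A sequence of children (e.g., a paragraph or section).
    Seq(Vec<Doc>),
    // A computed node: evaluated at render time.
    Computed(fn() -> Doc),
}

fn render(doc: &Doc) -> String {
    match doc {
        Doc::Text(s) => s.clone(),
        Doc::Seq(children) => children.iter().map(render).collect(),
        Doc::Computed(f) => render(&f()),
    }
}

fn today() -> Doc {
    // A stand-in for any computation a template might run.
    Doc::Text("2024".to_string())
}

fn main() {
    let doc = Doc::Seq(vec![
        Doc::Text("Last updated in ".into()),
        Doc::Computed(today),
        Doc::Text(".".into()),
    ]);
    println!("{}", render(&doc));
}
```

The research question is what happens when the "Computed" half of this tree gets the full power of a modern language while the "Text" half stays as easy to write as Markdown.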
Project ideas
These ideas are suitable for someone with deep experience in browser technologies and an interest in document design, reading comprehension, or knowledge systems.
- What are technical mechanisms that can make academic papers easier to understand? For example, take an impactful paper in your field that's considered difficult to understand. What is the optimal presentation of the paper to most effectively communicate its information to the reader? See: Scientific Communication as Sequential Art.
- How can we compare the different notations and mechanisms of document languages? For example, what would a 7GUIs or Rosetta Code or TodoMVC look like for documents? Can we more rigorously compare new document language designs on what documents they make possible or easy?
- Make EPUB a viable alternative to PDF. Take this prototype and get it to work for real. How can you address all the needs of a rich document format: layout, accessibility, security, and so on? (Alternatively: design your own format!)
- Make Nota a viable successor to LaTeX / Markdown. Take this prototype and get it to work for real. What would a good editing experience (e.g., Overleaf) look like for Nota? (Alternatively: design your own language!)
Radical teaching tools
Motivation
The field of "ed-tech" has largely failed to deliver technologies that meaningfully improve education beyond logistics. Technology may even have done net harm to education through student surveillance, AI grading, and attention-grabbing money holes. This is a tragedy.
Yet, I remain optimistic that computers have a significant untapped capacity to improve the quality of education. I am not interested in making education easy. Learning new things will always require struggle. I am interested in making that struggle as productive as possible, by combining what we know about cognition and computing to create radical teaching tools.
My thesis is that the vast majority of education is essentially ad hoc. Whether it's a teacher designing an assignment for their class, or a developer writing a code comment to explain a tricky concept, or a UI designer deciding what help text to put on a button — all these decisions rest on a folk psychology of pedagogy.
My angle
I want to build computational tools that systematize education by embodying theories of what makes education effective, especially for computer science education. I have three broad interests:
- What are representations of concepts that better facilitate comprehension? The form of a concept (prose, diagram, video, etc.) significantly affects a person's ability to understand that concept. See: The Humane Representation of Thought (2014), Things That Make Us Smart (1994), Why a Diagram is (Sometimes) Worth Ten Thousand Words (1987). I am interested in all kinds of representational design, especially related to programming, such as type systems and runtime state.
  - The Aquascope visualizer for Rust is a good example of a new form of visual representation for flow-sensitive types, as described in A Grounded Conceptual Model for Ownership Types in Rust (OOPSLA 2023).
- How can focused skill training achieve better learning outcomes than general practice? Simply practicing a skill can be less effective than doing training exercises targeted at specific learning outcomes. See: Perceptual Learning Modules in Mathematics (2010), Sexing Day-Old Chicks (1987), The Role of Deliberate Practice in the Acquisition of Expert Performance (1993). I am interested in creative ways that computers can provide training environments that are not feasible with just standard physical materials (the Perceptual Learning Modules are a good example).
- How can we reliably create assessments that are valid measures of skill? Performance on a test should ideally correlate with performance on a real-world skill that we care about, but such validated instruments are difficult and time-consuming to create. I am interested in methods for computationally generating assessments grounded in data that improves the validity of the questions. See: Quizius (2019).
  - In Profiling Programming Language Learning (OOPSLA 2024), we used psychometrics to analyze quiz data in the Rust Book Experiment. Specifically, we used question discrimination to measure the quality of questions based on their correlation to overall quiz scores. One standard formulation of discrimination is sketched below.
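For concreteness, one standard classical test theory formulation of item discrimination is the point-biserial correlation between an item and the total score (the paper's exact statistic may differ):

```latex
% Point-biserial item discrimination (classical test theory):
%   M_1 = mean total quiz score of respondents who answered the item correctly
%   M_0 = mean total quiz score of those who answered it incorrectly
%   s_X = standard deviation of total scores, p = proportion correct, q = 1 - p
r_{pb} = \frac{M_1 - M_0}{s_X} \sqrt{p\,q}
```

A high value means the question separates stronger readers from weaker ones; a value near zero (or negative) flags a question that measures something other than the intended skill.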
Project ideas
These ideas are suitable for someone with both software engineering and data analysis skills, as well as a significant interest in education.
- What is a visual representation for explaining the behavior of a program over time? Program state visualizations like Python Tutor can show a program state at a given point in time, but effective comprehension can sometimes rely on seeing behaviors unfold in time. What is a visual notation for compactly representing changes in program state over time? See: CrossCode (UIST 2023) and Log-It (CHI 2023).
- How can you teach someone debugging more effectively than just having them debug? Debugging is an essential computer science skill. Yet, CS programs usually expect students to learn debugging by simple practice: encounter a bug, then fix it, rinse and repeat. Are there training regimens that could help students acquire the elements of debugging expertise more quickly? For instance, could you design something like a perceptual learning module for reading error messages or StackOverflow answers?
- Can we generate assessments of programmer expertise based on a computational cognitive model? An assessment is trying to figure out what you know about a subject. Usually the relationship between the questions and the subject is implicit in the mind of the question writer. But if we have an explicit cognitive model of the subject, then in theory we could generate questions designed to learn information about the model. See: WatChat: Explaining Perplexing Programs by Debugging Mental Models (2024).
Technologies
Research interests focused on a particular system.
A new era of systems programming with Rust
Motivation
Rust is one of the most exciting developments in programming languages in the last 15 years. This is for two reasons:
- Rust solves the issue of memory safety without garbage collection. This makes both PL theorists and security engineers happy.
- Way more importantly, Rust makes both systems programming and functional programming accessible to a broad audience. This is true because (a) Rust is an extremely practical language with great tooling, (b) Rust avoids much of the accumulated complexity of C++, and (c) Rust was designed by people who care about PL theory.
Rust is taking great ideas in PLT (e.g., type classes, algebraic data types, expression-oriented programming) that have been stewing for decades in languages like OCaml and Haskell, and Rust is finally bringing these concepts into the limelight. As a result, people are using Rust to totally rethink the design of systems: game engines, web servers, 2D graphics, cryptography, linear algebra, serialization, parsing, even string APIs (just to name a few!).
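To make that concrete, here is a small self-contained illustration (my own toy example, not drawn from any of the systems above): an enum as an algebraic data type, a trait playing the role of a type class, and match used as an expression.

```rust
// Algebraic data type: a shape is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// Type class: any type can declare how to compute its area.
trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape {
    fn area(&self) -> f64 {
        // Expression-oriented: `match` evaluates to a value.
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }
}

fn main() {
    let shapes = [
        Shape::Circle { radius: 1.0 },
        Shape::Rect { width: 2.0, height: 3.0 },
    ];
    let total: f64 = shapes.iter().map(Area::area).sum();
    println!("total area = {total}");
}
```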
My angle
I have been working with Rust since 2014. I am deeply embedded in the Rust community: I contribute to the toolchain, give talks about Rust, and tens of thousands of people have used my experimental Rust textbook. I have two main research angles on Rust:
- How can people learn and use Rust more effectively? Rust is a complex language. People need help to both ramp up into that complexity when learning, and to manage that complexity when programming.
  - I have been running a long-term experiment on a fork of the Rust Book to evaluate methods for teaching Rust. A Grounded Conceptual Model for Ownership Types in Rust (OOPSLA 2023) describes a new way to think about Rust's ownership system as embodied in the Aquascope visualizer. Profiling Programming Language Learning (OOPSLA 2024) describes how we embedded quizzes into the book to both figure out which concepts were hardest for learners, and to evaluate interventions in the book.
  - I created the Flowistry program slicer for Rust. Flowistry helps Rust programmers read complex code by visualizing which code depends on a given expression. Modular Information Flow through Ownership (PLDI 2022) describes how Flowistry uses Rust's ownership types in a clever way to understand how functions induce dependencies just by their type signature. A toy illustration of this idea follows this list.
- How can we use Rust's features to design systems that are safer, faster, more correct, and more usable? Because Rust's feature set is so different from traditional OOP languages, it's a big open question how systems should be architected in Rust.
  - I want to systematize the process of mapping from conceptual objects to concrete APIs. What kinds of concepts should you encode as traits? As enums? And so on. I wrote Typed Design Patterns for the Functional Era (FUNARCH @ ICFP 2023) as a first step in this direction.
  - Not really for research, but I've built a lot of Rust APIs like indexed collections, browser-based logging, and lock-free data structures.
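Here is the toy illustration promised above (my own example, not code from Flowistry or the PLDI 2022 paper). The point is that a Rust signature alone bounds what a function can read or mutate, so a modular analysis can compute dependencies at a call site without seeing the function body.

```rust
// From this signature alone, an analysis can conclude that `dst` may be mutated,
// that `src` is only read, and that the return value may depend on both.
fn append_doubled(dst: &mut Vec<i32>, src: &[i32]) -> usize {
    dst.extend(src.iter().map(|x| x * 2));
    dst.len()
}

// From this signature, an analysis knows `xs` cannot be mutated at all,
// so no caller data flows *into* `xs` through this call.
fn max_value(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().max()
}

fn main() {
    let mut buf = vec![1, 2, 3];
    let n = append_doubled(&mut buf, &[4, 5]);
    println!("{n} elements, max = {:?}", max_value(&buf));
}
```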
Project ideas
These ideas are suitable for students who are strong systems programmers, especially those (a) interested in PL or HCI, and (b) with Rust experience. For instance, you should be able to understand and contribute to a complex system (100K+ LOC).
- Could you design a higher-level language on top of Rust (imagine OCaml or Javascript), but automatically lift the entire Rust library ecosystem into the higher level? How much of Rust could you get for free rather than reimplementing everything from scratch?
- How can you teach people to design Rustic systems from first principles? Once people understand the mechanisms of Rust, how can you build on that knowledge to think architecturally?
- If you understood the full dependency graph within a Rust codebase (e.g., using Flowistry), how could a developer use the dependency graph as a devtool? For instance, could you automatically propose refactorings of messy code by analyzing the dependency graph?
- What would a better debugger than println or gdb look like for Rust? How could you make use of a more powerful UI toolkit than the terminal?
- How can we help the Rust developers make more informed design decisions about the future of the language? How should we be designing surveys or user studies or codebase analyses to test claims about prospective language features?
Reactive interfaces in the browser
Motivation
Building user interfaces is not easy. Human perception and cognition do not fit neatly into computer-friendly abstractions. It is a fundamental challenge in computer science to design UI programming models that scale, i.e., that help people perform complex tasks. Today, the most powerful UI infrastructure is the browser (i.e., HTML + CSS + JS). The last 20 years have seen an unending parade of browser-based UI programming models: jQuery, Backbone, React, Solid, and Svelte, just to name a few.
Despite this progress, it is still way too difficult to express even simple interfaces! And it's high time to bring UI programming back into academia. Besides Flapjax (OOPSLA 2009), D3 (VIS 2011) and Elm (PLDI 2013), most recent progress on UI programming has been made within industry. Yet, UI programming is a fascinating topic at the intersection of PL and HCI, because it forces us to evaluate the correctness of programs in fundamentally human-centered terms. We cannot rely on the traditional PL vocabulary of type safety and undefined behavior to tell us if an interface is unusable. But I also don't think we must throw up our hands and say "welp, humans are too complicated, UI programming just has to be ad hoc."
My angle
I have been doing frontend web development since 2010, when I was first employed by a web design firm in my hometown of Des Moines, Iowa. I view the browser as a magnificent (if messy) achievement, and I view Javascript as an increasingly unnecessary evil. (Nonetheless, I maintain a Javascript build tool orchestrator.)
I have only begun approaching UI programming as a research topic recently, so I'm keeping an open mind on which direction to take. But these are my preliminary angles:
- How can PL techniques improve UI programming models? That is, how can we use compilation, program analysis, and type systems to design better UI programming models compared to just working at the library level? Elm and Svelte are good preliminary examples of this idea, but I want to go further with ideas like dependency analysis and formal verification.
  - I formalized the computational aspects of templating languages in A Core Calculus for Documents (POPL 2024). I hope to use the language described there as the basis for future research on UI programming.
- How can layout algorithms be more extensible and composable? Layout algorithms largely exist in silos that don't work together and can't be easily modified. For instance, I can't easily compose a graph layout algorithm like Graphviz with a reactive UI framework like React. If I don't like how my browser justifies text, I can't easily reach into the internals and tweak its parameters. I am interested in systems like Penrose (SIGGRAPH 2020) and Bluefish (2023) that provide more extensible and compositional layout primitives.
Project ideas
These ideas are suitable for students who are strong UI programmers, especially those with an interest in PL theory. For example, you should be able to implement a React-like library from scratch.
- How can a UI programming language generate reactive updates equivalent to hand-written code? The main issue addressed by newer UI frameworks is the performance of reactivity. That is, when the program's state changes, how efficiently can that change propagate to the UI? React provides an exceptionally clean and expressive model for reactivity, but it comes at a cost: its reconciliation algorithm can be tricky to optimize. Later frameworks like Solid and Svelte limit expressiveness in exchange for performance, but they still use dynamic dependency analysis (i.e., signals or runes) to track which components depend on which state.
  What if you built a UI language specifically around static dependency analysis? I think you could generate UIs that react more efficiently than any framework today. The key question is whether such a language would be expressive enough to permit a wide range of UIs. A minimal sketch of the idea follows this list.
- How can dynamic visual representations be formally verified? Say you are generating an interface, a document, or a diagram. Assume this visual object is dynamic in some way: perhaps it changes based on the reader's screen size, or permits customization (different fonts or font sizes), or allows for interactivity. The author cannot physically look at the artifact's appearance under all possible reading conditions. This is a perfect use case for formal verification — can we check whether the visual artifact satisfies important properties under all possible conditions?
  If this interests you, ask me about my Lean prototype of a formally verified document layout engine.
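And here is the minimal sketch promised above (entirely hypothetical, not any existing framework): what the "compiled output" of a UI language built on static dependency analysis might look like, where each setter re-renders exactly the views that statically depend on the changed field, with no runtime dependency tracking or diffing. Strings stand in for DOM nodes.

```rust
// A toy "compiled" UI: the compiler knows statically that the greeting depends
// only on `name` and the counter only on `count`.
struct App {
    name: String,
    count: u32,
    greeting_node: String,
    counter_node: String,
}

impl App {
    fn new() -> Self {
        let mut app = App {
            name: "world".into(),
            count: 0,
            greeting_node: String::new(),
            counter_node: String::new(),
        };
        app.render_greeting();
        app.render_counter();
        app
    }

    fn render_greeting(&mut self) {
        self.greeting_node = format!("Hello, {}!", self.name);
    }

    fn render_counter(&mut self) {
        self.counter_node = format!("Clicked {} times", self.count);
    }

    // Statically generated setters: each one updates only its dependents.
    fn set_name(&mut self, name: String) {
        self.name = name;
        self.render_greeting(); // the counter view is untouched
    }

    fn set_count(&mut self, count: u32) {
        self.count = count;
        self.render_counter(); // the greeting view is untouched
    }
}

fn main() {
    let mut app = App::new();
    app.set_count(3);
    app.set_name("Brown".into());
    println!("{} | {}", app.greeting_node, app.counter_node);
}
```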
Proof-oriented programming with Lean and F*
Motivation
The F* tutorial describes proof-oriented programming as:
a paradigm in which one co-designs programs and proofs to provide mathematical guarantees about various aspects of a program's behavior, including properties like functional correctness (precisely characterizing the input/output behavior of a program), security properties (e.g., ensuring that a program never leaks certain secrets), and bounds on resource usage.
I think proof-oriented programming is crucial to advancing computer science as a field. Proofs provide guarantees that techniques like unit testing and fuzzing cannot, improving the reliability of software. More broadly, proofs require us to actually understand every piece of our software — a proof does not permit hand-waving.
Proof-oriented programming is just now becoming possible. Older proof assistants like Coq paved the way as shown by systems like CompCert (2008). Newer proof assistants like Lean and F* are providing better devtools and compilation strategies, supporting ambitious research like Project Everest.
However, these languages are extremely difficult to use. Implementing a large system in them is still generally considered the realm of open research, not mundane engineering. The most interesting work here is often done by people with PhDs in programming language theory. As with Rust, I do not believe these techniques will truly advance the field until they can be adopted en masse.
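As a minimal flavor of co-designing a program with its proof, here is a toy Lean 4 sketch (my own example; it assumes a recent Lean 4 toolchain where the omega tactic is available in core):

```lean
-- The program...
def double (n : Nat) : Nat := n + n

-- ...and a machine-checked claim about its input/output behavior, in the same file.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The function and the claim about its behavior are checked together by the same toolchain; scaling this experience up to real systems is where the usability questions begin.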
My angle
I have been experimenting mostly with Lean in the last year or two. I am not an expert on proof assistants (although luckily we do have an expert at Brown!), so I am mostly bringing my HCI lens to this area. These are two big questions on my mind:
- How can we best scaffold the complexity of a proof assistant during the learning process? Making effective use of a proof assistant relies on a broad base of knowledge. A user should ideally know functional programming, monadic effects, the Curry-Howard correspondence, constructive logic, computability theory, and abstract algebra. The proof assistant itself comes with a giant standard library consisting of thousands of theorems and dozens of tactics. I'm interested in how we can slowly ramp up users into this complexity, while still ensuring that users can accomplish useful tasks along the way.
- How can users have a rich conversation with the proof assistant? When authoring a proof, users need to understand the context of the proof and the actions of the assistant. The goal pane interface is one good step in this direction. But users should be able to better understand tactics (why did they succeed or fail? what was tried?), theorems (which ones could be relevant to a given goal?), and type classes (why can a given instance be found or not found?).
Project ideas
These ideas are suitable for students who have a strong background in PL theory and at least moderate experience with a proof assistant, along with an interest (but not necessarily significant experience) in human factors research.
- Given a proof goal, how can a tool automatically deduce the set of tools and learning resources relevant to writing the proof? How should this information be presented to the user?
- What would a query language for theorems look like? What is the Hoogle equivalent for Lean or F*?
Publications
This is a representative set of my publications. Click the button above to see an exhaustive list.
Conference Papers
Profiling Programming Language Learning
This paper documents a year-long experiment to "profile" the process of learning a programming language: gathering data to understand what makes a language hard to learn, and using that data to improve the learning process. We added interactive quizzes to The Rust Programming Language, the official textbook for learning Rust. Over 13 months, 62,526 readers answered questions 1,140,202 times. First, we analyze the trajectories of readers. We find that many readers drop out of the book early when faced with difficult language concepts like Rust's ownership types. Second, we use classical test theory and item response theory to analyze the characteristics of quiz questions. We find that better questions are more conceptual in nature, such as asking why a program does not compile vs. whether a program compiles. Third, we performed 12 interventions into the book to help readers with difficult questions. We find that on average, interventions improved quiz scores on the targeted questions by +20%. Fourth, we show that our technique can likely generalize to languages with smaller user bases by simulating our statistical inferences on small N. These results demonstrate that quizzes are a simple and useful technique for understanding language learning at all scales.
A Core Calculus for Documents
Passive documents and active programs now widely comingle. Document languages include Turing-complete programming elements, and programming languages include sophisticated document notations. However, there are no formal foundations that model these languages. This matters because the interaction between document and program can be subtle and error-prone. In this paper we describe several such problems, then taxonomize and formalize document languages as levels of a document calculus. We employ the calculus as a foundation for implementing complex features such as reactivity, as well as for proving theorems about the boundary of content and computation. We intend for the document calculus to provide a theoretical basis for new document languages, and to assist designers in cleaning up the unsavory corners of existing languages.
A Grounded Conceptual Model for Ownership Types in Rust
Programmers learning Rust struggle to understand ownership types, Rust’s core mechanism for ensuring memory safety without garbage collection. This paper describes our attempt to systematically design a pedagogy for ownership types. First, we studied Rust developers’ misconceptions of ownership to create the Ownership Inventory, a new instrument for measuring a person’s knowledge of ownership. We found that Rust learners could not connect Rust’s static and dynamic semantics, such as determining why an ill-typed program would (or would not) exhibit undefined behavior. Second, we created a conceptual model of Rust’s semantics that explains borrow checking in terms of flow-sensitive permissions on paths into memory. Third, we implemented a Rust compiler plugin that visualizes programs under the model. Fourth, we integrated the permissions model and visualizations into a broader pedagogy of ownership by writing a new ownership chapter for The Rust Programming Language, a popular Rust textbook. Fifth, we evaluated an initial deployment of our pedagogy against the original version, using reader responses to the Ownership Inventory as a point of comparison. Thus far, the new pedagogy has improved learner scores on the Ownership Inventory by an average of 9% (N = 342, d = 0.56).
Modular Information Flow through Ownership
Statically analyzing information flow, or how data influences other data within a program, is a challenging task in imperative languages. Analyzing pointers and mutations requires access to a program's complete source. However, programs often use pre-compiled dependencies where only type signatures are available. We demonstrate that ownership types can be used to soundly and precisely analyze information flow through function calls given only their type signature. From this insight, we built Flowistry, a system for analyzing information flow in Rust, an ownership-based language. We prove the system's soundness as a form of noninterference using the Oxide formal model of Rust. Then we empirically evaluate the precision of Flowistry, showing that modular flows are identical to whole-program flows in 94% of cases drawn from large Rust codebases. We illustrate the applicability of Flowistry by using it to implement prototypes of a program slicer and an information flow control system.
The Role of Working Memory in Program Tracing
Program tracing, or mentally simulating a program on concrete inputs, is an important part of general program comprehension. Programs involve many kinds of virtual state that must be held in memory, such as variable/value pairs and a call stack. In this work, we examine the influence of short-term working memory (WM) on a person's ability to remember program state during tracing. We first confirm that previous findings in cognitive psychology transfer to the programming domain: people can keep about 7 variable/value pairs in WM, and people will accidentally swap associations between variables due to WM load. We use a restricted focus viewing interface to further analyze the strategies people use to trace through programs, and the relationship of tracing strategy to WM. Given a straight-line program, we find half of our participants traced a program from the top-down line-by-line (linearly), and the other half start at the bottom and trace upward based on data dependencies (on-demand). Participants with an on-demand strategy made more WM errors while tracing straight-line code than with a linear strategy, but the two strategies contained an equal number of WM errors when tracing code with functions. We conclude with the implications of these findings for the design of programming tools: first, programs should be analyzed to identify and refactor human-memory-intensive sections of code. Second, programming environments should interactively visualize variable metadata to reduce WM load in accordance with a person's tracing strategy. Third, tools for program comprehension should enable externalizing program state while tracing.
Workshop Papers
Typed Design Patterns for the Functional Era
This paper explores how design patterns could be revisited in the era of mainstream functional programming languages. I discuss the kinds of knowledge that ought to be represented as functional design patterns: architectural concepts that are relatively self-contained, but whose entirety cannot be represented as a language-level abstraction. I present four concrete examples embodying this idea: the Witness, the State Machine, the Parallel Lists, and the Registry. Each pattern is implemented in Rust to demonstrate how careful use of a sophisticated type system can better model each domain construct and thereby catch user mistakes at compile-time.
A New Medium for Communicating Research on Programming Languages
Papers about programming languages involve complex notations, systems, and proofs. Static PDFs offer little support in understanding such concepts. I describe Nota, a framework for academic papers that uses the browser's interactive capabilities to support comprehension in context. Nota uses hover effects, tooltips, expandable sections, toggleable explanations, and other interactions to help readers understand a language's syntax and semantics. I demonstrate the use of Nota by rewriting a PL paper using its primitives, and also by writing this paper in Nota.
Documentation Generation as Information Visualization
Automatic documentation generation tools, or auto docs, are widely used to visualize information about APIs. However, each auto doc tool comes with its own unique representation of API information. In this paper, I use an information visualization analysis of auto docs to generate potential design principles for improving their usability. Developers use auto docs as a reference by looking up relevant API primitives given partial information, or leads, about its name, type, or behavior. I discuss how auto docs can better support searching and scanning on these leads, e.g. by providing more information-dense visualizations of method signatures.
From Theory to Systems: A Grounded Approach to Programming Language Education
I present a new approach to teaching a graduate-level programming languages course focused on using systems programming ideas and languages like WebAssembly and Rust to motivate PL theory. Drawing on students' prior experience with low-level languages, the course shows how type systems and PL theory are used to avoid tricky real-world errors that students encounter in practice. I reflect on the curricular design and lessons learned from two years of teaching at Stanford, showing that integrating systems ideas can provide students a more grounded and enjoyable education in programming languages. The curriculum, course notes, and assignments are freely available: https://stanford-cs242.github.io/f19/