- Dates: August 28 - September 1, 2017
- Venue: Park City, Utah, USA
- Host: Matthew Flatt
Venue
The meeting will take place at the
Treasure Mountain Inn in Park City, Utah.
Park City is best known as a ski destination and as the home of the
Sundance Film Festival. Tourists also visit
in the summer, but late August will be uncrowded.
See
Accommodations below for information on booking a room at the group rate in the meeting hotel.
Getting There
Fly to Salt Lake International Airport (SLC). Park City is about 40 minutes from the airport by car or shuttle.
All Resort is a popular shuttle service, or see
visitparkcity.com.
The hotel recommends
Express for Less (435-649-8999).
A shuttle typically costs $40 one way; a taxi or private car will easily be twice that. Renting a car at the airport is
convenient and likely cheaper if two or more people carpool. The hotel provides parking, and you won't need a car in
Park City itself, since the hotel is right on Main Street and surrounded by restaurants, shops, and mountains;
that said, a short drive into the mountains opens up even better outdoor activities.
Coordination and Registration
Please
fill out this form to let the organizers know that you will attend and to get help coordinating transportation and/or accommodations.
The registration fee will cover the meeting room, coffee/tea during the meeting, all lunches, and all dinners.
The fee will be collected at the end of the meeting based on actual costs, expected to be $325 per person.
Accommodations
Contact
Treasure Mountain Inn
through the web site or by phone to make a reservation.
We originally had a group rate of $126/night single occupancy, $146/night double occupancy,
but the hotel is now under new management, and the normal advertised rates have been cheaper than that.
Note that the room options include a two-bedroom suite, so consider rooming with another participant to reduce costs.
Schedule
The meeting will start at 8:45am on Monday, August 28 and continue through lunchtime on Friday, September 1.
We plan a hiking excursion into the surrounding mountains on Wednesday after lunch.
Participants
Organizers
- Matthew Flatt (local organizer)
- Eelco Visser (chair)
- Jonathan Edwards (secretary)
Members
- Manuel Serrano
- Jan-Willem Maessen
- Adam Chlipala
- Kim Bruce
- Ross Tate
- Tom Van Cutsem
- Yannis Smaragdakis
- Daan Leijen
- Robby Findler
Visitors
- John Regehr
- Eric Eide
- Luke Church
- Matthias Hauswirth
- Nada Amin
- Heather Miller
- Emerson Murphy-Hill
- Éric Tanter
- Michael Greenberg
Monday
- Adam Chlipala
- Kim Bruce (Slides)
- Jonathan Edwards
- Robby Findler
- Yannis Smaragdakis (Slides)
- Tom van Cutsem
Tuesday
- Nada Amin
- Emerson Murphy-Hill
- Eric Tanter
- Luke Church
- Matthias Hauswirth
- Michael Greenberg
Wednesday
- Eric Eide
- John Regehr
- Tom van Cutsem
Excursion in the afternoon
Thursday
- Heather Miller (Slides)
- Manuel Serrano
- Eelco Visser (Slides)
- Daan Leijen
- Ross Tate
- Business Meeting
Friday
- Matthew Flatt
- Jan-Willem Maessen
- Manuel Serrano, Robby Findler: Esterel
Abstracts
Robby Findler: Racket and Language-oriented Programming
...
Jonathan Edwards: Reifying Programming
The need to think abstractly makes programming difficult, especially for normal people. To make programming more concrete I propose to represent programs as data structures reifying their execution. Programming shifts from editing textual abstractions to direct inspection and manipulation of concrete executions. Reified execution also helps ground abstraction through a new testing mechanism: interventions. An intervention tests and overrides behavior at a specific point within a reified execution, replacing hand-coded tests and mocks. Intervention points are stable because programs are edited structurally, not textually. Interventions also provide scaffolding that assists program development through refactoring-like steps of gradual abstraction.
Matthias Hauswirth: Abstraction-Free Programming
Abstraction is probably the most important idea in programming. However, many people have difficulties with abstract thinking. If we want to teach programming to all, could we do so without requiring abstract thinking? In this talk we discuss this question and propose three abstraction-free programming "languages" that might help to provide an initial programming experience for people without abstraction skills. We hope to get feedback on our three language ideas and spark a discussion about abstraction and programming in general.
Michael Greenberg: Kleene algebra modulo theories
Are you working on something that can be modeled in terms of simple While programs over some decidable domain? Kleene algebra modulo theories (KMT) is a pay-as-you-go approach for sound, complete, and decidable models of simple programs. The KMT metatheory corresponds directly to an automata-theoretic implementation strategy: for just a few lines of code describing your theory’s syntax and decision procedure, we can decide equivalences in While programs over your theory.
Yannis Smaragdakis: Languages for Large Productivity Gains: What Will they Look Like?
In the past decade, I've worked a lot with (and on) declarative languages. I'll use this experience as a basis to discuss what a high-productivity language of the future (or the present!) might look like. This is a continuation of my Lausanne meeting talk, with more emphasis on the lessons learned. I will argue that a high-productivity language a) will have implementations capable of large performance gains (i.e., the difference between optimized and unoptimized performance will be asymptotically large); b) will need strict limits (such as termination or logical monotonicity) to preserve a reasonable programming model; c) will change development patterns often taken for granted.
Tom van Cutsem: Machine learning and software development: what does the future hold?
I would love to allocate some time for a discussion about how recent advances in machine learning and AI will (or will not) affect programming languages and software development in general. I can prepare a 10-minute introduction to the topic, highlighting a few interesting ideas I've seen recently where people have started to apply AI techniques to software development and programming. For example, Microsoft research recently published a paper where they've trained a neural network to learn to write code. After the intro, I can then act as a moderator for the rest of the discussion. The idea is to discuss long-term trends, not technical details of specific ideas.
Tom van Cutsem: XStream: declarative authoring of distributed, elastic stream processing pipelines
In this talk I will present XStream, a DSL for writing stream processing queries that are to be executed across distributed network nodes. XStream is used in a stream processing system developed at Nokia Bell Labs to do real-time processing of data and media streams. I will briefly discuss the DSL and how queries get compiled. XStream started off as an external DSL but was later revised to become an embedded DSL. I will recount some lessons learned which are pertinent to language design in an industrial research context in general.
Kim Bruce: Exploring the inheritance design space with Grace
Inheritance and delegation are the two most popular ways of modifying classes and objects to create new objects. Delegation is often used with object-based languages, while inheritance is generally the choice with class-based languages. However, while investigating how to include inheritance in Grace, we discovered that there are many subtle differences in how these notions can be defined, and many subtleties that can reach out and bite you. In this talk we explore the design space and discuss how we chose to support inheritance in Grace.
Emerson Murphy-Hill: What Do Blind Developers Need in a Programming Language?
People who are blind use mainstream programming languages and environments to develop software, but a variety of usability challenges impact them disproportionately and require awkward workarounds. In this talk, I'll discuss what those challenges are, describe some preliminary design criteria for a programming language that overcomes those challenges, and ask for audience suggestions about how we can make progress on this problem.
Daan Leijen: Structured Asynchrony with Algebraic Effects
Algebraic effect handlers generalize many control-flow abstractions that are implemented specially in most languages, like exception handling, iterators, or backtracking. In this article, we show how we can implement full support for asynchronous programming as a library using just algebraic effect handlers. The consistent type-driven approach also leads naturally to powerful abstractions like block-scoped interleaving, cancellation, and timeouts that are lacking in other major asynchronous frameworks. We also introduce the concept of ambient state to reason about state that is local to the current strand of asynchronous execution.
https://www.microsoft.com/en-us/research/publication/structured-asynchrony-algebraic-effects
Manuel Serrano: Optimistic JavaScript AOT Compilation
JavaScript has escaped web pages. It is now also used for
programming web servers, compilers, and other general purpose
tasks. There is even a growing trend for using it for programming
embedded devices. In this context, JIT compilation is ineffective
because it is too memory demanding, and interpretation is too slow for
anything but simplistic tasks. Static compilation, a.k.a.
ahead-of-time (AOT) compilation, is an alternative approach that can
combine the good speed of JIT compilers and the lightweight memory
footprint of interpreters.
We have designed an AOT compiler for full-fledged
JavaScript. It relies
on a genuine type analysis called "hint typing". Contrary to most
approaches, hint typing does not infer types according to the data
structures the program manipulates but according to the best code the
compiler is able to generate. In this presentation, we will present this
analysis and the overall architecture of the compiler.
Adam Chlipala: Correct-by-Construction Code Generation for Elliptic-Curve Cryptography
Cryptography libraries make up some of the most important high-performance code. Every SSL-based Web connection involves a suite of algorithms carefully designed for performance. These crucial functions are even commonly handcoded in assembly by the experts -- of which there are only a handful on the planet with the required skills. Many cryptographic algorithms manipulate large integers with modular arithmetic. A change as seemingly trivial as picking a different large prime modulus for an existing algorithm leads, in practice, to one of these experts spending months rewriting and retesting the assembly code from scratch.
I will present our Fiat Cryptography project, which generates new implementations automatically, specifically for the domain of elliptic-curve cryptography. Even better, our framework is a Coq library, and we generate correctness proofs, too. The results are performance-competitive, coming within about 3X (and shrinking, as we add optimizations) of the latency of world-champion implementations written by hand. We use a combination of traditional verification, partial evaluation, and certified compilation. At the center of the framework is a library for big-integer arithmetic, parametrized by a vocabulary of available data structures. Different prime moduli get the best performance with different data structures, and we prove correctness of the core arithmetic operations for all data structures in our parameter space.
This is joint work with Andres Erbsen, Jason Gross, Jade Philipoom, and Robert Sloan.
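To make the representation choices concrete, here is a minimal, purely illustrative Python sketch of the kind of modular big-integer arithmetic described above. It is not the project's verified code: Python's unbounded integers stand in for the verified limb arithmetic, and the radix-2^51 split is just one example of the data-structure parameter space.

```python
# Illustrative sketch only (not Fiat Cryptography's code): field
# arithmetic with a commonly used elliptic-curve prime modulus.
P = 2**255 - 19  # the Curve25519 prime

def mul_mod(a, b):
    """Multiplication in the field of integers modulo P."""
    return (a * b) % P

def to_limbs(x, radix_bits=51, n=5):
    """Split x into n limbs of radix 2^radix_bits -- one point in the
    space of data-structure choices such a framework ranges over."""
    return [(x >> (radix_bits * i)) & ((1 << radix_bits) - 1)
            for i in range(n)]

def from_limbs(limbs, radix_bits=51):
    """Reassemble a number from its limbs."""
    return sum(l << (radix_bits * i) for i, l in enumerate(limbs))
```

Different prime moduli favor different limb radices, which is exactly why hand-tuned implementations must be redone per modulus and why proving the core operations correct over the whole parameter space is valuable.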
Nada Amin: Collapsing Towers of Interpreters
Abstract: Given a tower of interpreters, i.e. a sequence of interpreters interpreting each other, we aim to collapse this tower into a one-pass compiler that removes all interpretive overhead. We present a multi-level lambda calculus that features staging constructs and stage polymorphism: based on runtime parameters, the interpreter can either act as a normal interpreter or generate code, which turns it into a compiler, according to the Futamura projections. We identify stage polymorphism as the key mechanism to make such interpreters compose in a collapsible way.
We present a meta-circular Lisp interpreter on top of this calculus and demonstrate that we can collapse arbitrarily many levels of self-interpretation, including levels with semantic modifications. We discuss several examples: compiling regular expressions through a Lisp interpreter to base code, building program transformers from modified interpreters, and others. We develop these ideas further into an implementation of the reflective language Black, which implements a conceptually infinite tower, where every aspect of the semantics can change dynamically, and as a novel feature, we demonstrate how user programs can be compiled and recompiled under user-modified semantics.
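The stage-polymorphism idea can be illustrated with a toy sketch (not the paper's multi-level calculus): a single arithmetic definition that either computes a value or emits residual code, depending on whether its inputs are concrete or symbolic.

```python
# Hypothetical illustration of stage polymorphism: the same `power`
# definition acts as an evaluator on concrete inputs and as a
# code generator on symbolic ones.

class Code:
    """A deferred value, represented as a Python expression string."""
    def __init__(self, expr):
        self.expr = expr

def lift(x):
    """Render a value (concrete or symbolic) as an expression."""
    return x.expr if isinstance(x, Code) else repr(x)

def mul(a, b):
    # If either operand is still symbolic, generate code instead of
    # computing -- this is the essence of stage polymorphism.
    if isinstance(a, Code) or isinstance(b, Code):
        return Code(f"({lift(a)} * {lift(b)})")
    return a * b

def power(base, n):
    """One definition, two stages: interpreter or compiler."""
    acc = 1
    for _ in range(n):
        acc = mul(acc, base)
    return acc

power(2, 3)          # evaluation: 8
power(Code("x"), 3)  # staging: residual code "(((1 * x) * x) * x)"
```

Running the interpreter on symbolic input turns it into a (naive) compiler, which is the flavor of the first Futamura projection the abstract refers to.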
Éric Tanter: Designing Gradual Languages
Gradualizing type systems often involves a lot of guesswork. Based on an understanding of gradual types as abstractions of static types, we have developed systematic foundations for gradual typing based on abstract interpretation. This talk reports on recent work in applying this approach to a number of typing disciplines. In particular, we highlight the value of Galois connections in gradual language design, and identify open research questions.
Eelco Visser: The Semantics of Name Resolution in Grace
Grace is a dynamic object-oriented programming language designed to aid programming education. We present a formal model of, and give an operational semantics for, its object model and name resolution algorithm. Our main contributions are a systematic model of Grace's name resolution using scope graphs, relating linguistic features to other languages, and an operationalization of this model in the form of an operational semantics which is readable and executable. The semantics are extensively tested against a reference Grace implementation.
This is joint work with Vlad Vergu and Michiel Haisma.
Luke Church: A metaphysical turn for designing programming experiences?
There are a variety of techniques for analysing the design of programming experiences; how these techniques relate to each other has become a contentious issue. To help illuminate the problem, we report the findings of in-depth interviews with 20+ staff involved in the evolution of Dart, covering how they reason about changes to the language and what they would like to know.
From this pragmatic grounding, we introduce the notion of 'analytical distance' as a way of talking about the various techniques available. This highlights a major gap between what designers want to know and what our existing techniques can tell them. We speculate that adopting a metaphysical stance on programming language usability might help address this gap.
Eric Eide: A DSL for Secure Cloud-based Collaboration
Despite advances in cloud security, cloud tenants remain vulnerable to
network-based attacks. IP networks allow devices to communicate by default,
and restrictions imposed by firewalls tend to be coarse-grained, largely
static, and centrally administered. What if the network instead followed the
principle of least privilege and allowed tenants to dynamically craft just the
communication policies they need? In this talk I will describe a DSL that aims
to strengthen the network security of cloud tenants and multi-tenant
collaborations. A variant of this DSL has been implemented in a system called
CapNet, but it remains a work in progress. For example, the DSL does not yet
address DPI,
QoS, or routing concerns. It also does not yet account for
collaborations between tenants in separately administered clouds. These and
other extensions of the DSL are unresolved language-design challenges.
Matthew Flatt: Designing a Language for Cognitive-Science Experiments
In a typical experiment to test a cognitive-science hypothesis, a human
subject receives a sequence of stimuli, where each stimulus's
properties are determined by a number of experimenter-selected factors
(e.g., color, shape, word). In simple experiments, each subject can see
all possible combinations of factors, and a statistical analysis of the
results is relatively straightforward. When the number of possible
combinations becomes too large for a simple crossing of all
possibilities, or when the relative order of different combinations of
factors is relevant to the hypothesis under test, then choosing a set
of stimuli to obtain statistically useful results can be more
difficult.
In those more difficult cases, experimenters mostly write ad hoc
scripts to generate the stimuli for an experiment. Bugs in the
scripts --- either in the code or the ad-hoc approach used to generate
combinations --- can mean that effort is wasted on experiments that
cover a statistically inconclusive range. We are designing a language for
experiment design that generalizes a basic model of factors and
crossings and adds new constructs to impose constraints on the
generated stimuli. A key problem in the implementation is choosing
uniformly from the distribution of stimuli that satisfy the
constraints; recent work on uniform sampling of SAT formulae seems to
be exactly the tool we need, and we target that tool in the design of
our constraint language.
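The basic model of factors and crossings can be sketched in a few lines of Python; the factor names and the constraint below are invented for illustration, and uniform filtering stands in for the SAT-based sampling mentioned above.

```python
# A hedged sketch of factors, a full crossing, and constrained
# sampling of stimuli. All names here are hypothetical.
from itertools import product
import random

factors = {
    "color": ["red", "green", "blue"],
    "shape": ["circle", "square"],
    "word":  ["left", "right"],
}

# A full crossing enumerates every combination of factor levels.
names = list(factors)
stimuli = [dict(zip(names, combo)) for combo in product(*factors.values())]
assert len(stimuli) == 3 * 2 * 2  # 12 combinations

# When a full crossing is too large, filter by a constraint and sample
# from the survivors. (True uniform sampling over a large constrained
# space is the hard part that SAT-based sampling addresses.)
constrained = [s for s in stimuli
               if not (s["color"] == "red" and s["word"] == "left")]
chosen = random.sample(constrained, 4)
```

Enumerate-then-filter only works while the crossing is small enough to materialize, which is precisely the limitation that motivates sampling directly from the constraint solver.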
Heather Miller: A Programming Model and Foundation for Lineage-Based Distributed Computation
The most successful systems for “big data” processing have all adopted functional APIs. We present a new programming model we call "function passing" designed to provide a more principled substrate, or middleware, upon which to build data-centric distributed systems like Spark. A key idea is to build up a persistent functional data structure representing transformations on distributed immutable data by passing well-typed serializable functions over the wire and applying them to this distributed data. Thus, the function passing model can be thought of as a persistent functional data structure that is distributed, where transformations performed on distributed data are stored in its nodes rather than the distributed data itself. One advantage of this model is that failure recovery is simplified by design: data can be recovered by replaying function applications atop immutable data loaded from stable storage. Deferred evaluation is also central to our model; by incorporating deferred evaluation into our design only at the point of initiating network communication, the function passing model stays easy to reason about while remaining efficient in time and memory. Moreover, we provide a complete formalization of the programming model in order to study the foundations of lineage-based distributed computation. In particular, we develop a theory of safe, mobile lineages based on a subject reduction theorem for a typed core language. Furthermore, we formalize a progress theorem which guarantees the finite materialization of remote, lineage-based data. Thus, the formal model may serve as a basis for further developments of the theory of data-centric distributed programming, including aspects such as fault tolerance.
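The lineage idea above can be illustrated with a toy, single-machine sketch (class and variable names are invented, and serialization and distribution are elided): each node stores a transformation rather than the data, and materialization replays the chain of functions.

```python
# Illustrative sketch only: a persistent lineage node storing the
# transformation, not the data. Recovery "by design" falls out of
# replaying function applications from stable storage.

class Lineage:
    def __init__(self, parent, fn):
        self.parent, self.fn = parent, fn  # immutable chain of functions

    def materialize(self):
        # Replay the chain; even if a cached result were lost, the
        # data can be recomputed from the source.
        upstream = self.parent.materialize() if self.parent else None
        return self.fn(upstream)

source = Lineage(None, lambda _: [3, 1, 2])               # load from storage
mapped = Lineage(source, lambda xs: [x * 10 for x in xs]) # a transformation
result = Lineage(mapped, sorted)                          # another one

result.materialize()  # replays: load, map, sort
```

A real system would additionally defer materialization until network communication and check that the passed functions are serializable and well-typed, as the abstract describes.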
John Regehr: Superoptimizing LLVM
LLVM serves as an intermediate representation between several programming languages and a number of target architectures. Although LLVM already supports many IR-level optimizations, plenty of optimizations have not yet been implemented. I'll talk about Souper, a superoptimizer that automatically synthesizes new optimizations for LLVM. Souper can also interact with some of LLVM's dataflow analyses, both learning from them and finding situations where their precision can be improved.
Ross Tate: Discussion on Method Overloading
Method overloading is a feature of many major programming languages, one which people have strong and varied opinions about. Rather than give a talk on overloading, I will conduct a discussion to collect together our experiences with this topic. The goal is not to decide whether method overloading is good or bad, although hopefully from this discussion we will all gain a broader understanding of the topic. Rather, the goal beyond this collection is to identify potential avenues for variations of method overloading that may achieve much of what a designer might value in overloading while avoiding the many pitfalls of overloading that a designer may or may not be aware of.