%% 12pt font size, PhD thesis, LFCS, print twosided, new chapters on right page
\documentclass[12pt,phd,lfcs,twoside,openright,logo,leftchapter,normalheadings]{infthesis}
\shieldtype{0}
%% Packages
\usepackage[utf8]{inputenc} % Enable UTF-8 typing
\usepackage[british]{babel} % British English
\usepackage[pdfusetitle,breaklinks]{hyperref} % Interactive PDF
\usepackage{url}
\usepackage[sort&compress,square,numbers]{natbib} % Bibliography
\usepackage{bibentry} % Print bibliography entries inline.
\makeatletter % Redefine bibentry to omit hyperrefs
\renewcommand\bibentry[1]{\nocite{#1}{\frenchspacing
\@nameuse{BR@r@#1\@extra@b@citeb}}}
\makeatother
\nobibliography* % use the bibliographic data from the standard BibTeX setup.
\usepackage{breakurl}
\usepackage{amsmath} % Mathematics library
\usepackage{amssymb} % Provides math fonts
\usepackage{amsthm} % Provides \newtheorem, \theoremstyle, etc.
\usepackage{mathtools}
\usepackage{bm}
\usepackage{pkgs/mathpartir} % Inference rules
\usepackage{pkgs/mathwidth}
\usepackage{stmaryrd} % semantic brackets
\usepackage{array}
\usepackage{float} % Float control
\usepackage{caption,subcaption} % Sub figures support
\DeclareCaptionFormat{underlinedfigure}{#1#2#3\hrulefill}
\DeclareCaptionFormat{subfig}{#1#2#3}
\captionsetup[figure]{format=underlinedfigure}
\captionsetup[subfigure]{format=subfig}
\usepackage[T1]{fontenc} % Fixes issues with accented characters
%\usepackage{libertine}
%\usepackage{lmodern}
%\usepackage{palatino}
% \usepackage{newpxtext,newpxmath}
\usepackage[scaled=0.80]{beramono}
\usepackage[activate=true,
final,
tracking=true,
kerning=true,
spacing=true,
factor=1100,
stretch=10,
shrink=10]{microtype}
\SetProtrusion{encoding={*},family={bch},series={*},size={6,7}}
{1={ ,750},2={ ,500},3={ ,500},4={ ,500},5={ ,500},
6={ ,500},7={ ,600},8={ ,500},9={ ,500},0={ ,500}}
\SetExtraKerning[unit=space]
{encoding={*}, family={bch}, series={*}, size={footnotesize,small,normalsize}}
{\textendash={400,400}, % en-dash, add more space around it
"28={ ,150}, % left bracket, add space from right
"29={150, }, % right bracket, add space from left
\textquotedblleft={ ,150}, % left quotation mark, space from right
\textquotedblright={150, }} % right quotation mark, space from left
\usepackage{enumerate} % Customise enumerate-environments
\usepackage{xcolor} % Colours
\usepackage{xspace} % Smart spacing in commands.
\usepackage{tikz}
\usetikzlibrary{fit,calc,trees,positioning,arrows,chains,shapes.geometric,%
decorations.pathreplacing,decorations.pathmorphing,shapes,%
matrix,shapes.symbols,intersections,tikzmark}
\usepackage[customcolors,shade]{hf-tikz} % Shaded backgrounds.
\hfsetfillcolor{gray!40}
\hfsetbordercolor{gray!40}
% Multi-row configuration
\usepackage{makecell,multirow}
\newcommand{\tablistcommand}{ % eliminates vertical space before and after itemize
\leavevmode\par\vspace{-\baselineskip}}
\newcolumntype{P}[1]{p{#1-2\tabcolsep-\arrayrulewidth}}
% Cancelling
\newif\ifCancelX
\tikzset{X/.code={\CancelXtrue}}
\newcommand{\Cancel}[2][]{\relax
\ifmmode%
\tikz[baseline=(X.base),inner sep=0pt] {\node (X) {$#2$};
\tikzset{#1}
\draw[#1,overlay,shorten >=-2pt,shorten <=-2pt] (X.south west) -- (X.north east);
\ifCancelX
\draw[#1,overlay,shorten >=-2pt,shorten <=-2pt] (X.north west) -- (X.south east);
\fi}
\else
\tikz[baseline=(X.base),inner sep=0pt] {\node (X) {#2};
\tikzset{#1}
\draw[#1,overlay,shorten >=-2pt,shorten <=-2pt] (X.south west) -- (X.north east);
\ifCancelX
\draw[#1,overlay,shorten >=-2pt,shorten <=-2pt] (X.north west) -- (X.south east);
\fi}%
\fi}
\newcommand{\XCancel}[1]{\Cancel[red,X,line width=1pt]{#1}}
% Structures
\tikzset{
port/.style = {treenode, font=\Huge, draw=white, minimum width=0.5em, minimum height=0.5em},
blackbox/.style = {rectangle, fill=black, draw=black, minimum width=2cm, minimum height=2cm},
treenode/.style = {align=center, inner sep=3pt, text centered},
opnode/.style = {treenode, rectangle, draw=black},
leaf/.style = {treenode, draw=black, ellipse, thin},
comptree/.style = {treenode, draw=black, regular polygon, regular polygon sides=3},
highlight/.style = {draw=red,very thick},
pencildraw/.style={
black!75,
decorate,
decoration={random steps,segment length=0.8pt,amplitude=0.1pt}
},
hbox/.style = {rectangle,draw=none, minimum width=6cm, minimum height=1cm},
gbox/.style = {rectangle,draw=none,minimum width=2cm,minimum height=1cm}
}
%%
%% Theorem environments
%%
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{claim}[theorem]{Claim}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
% Example environment.
\makeatletter
\def\@endtheorem{\hfill$\blacksquare$\endtrivlist\@endpefalse} % inserts a black square at the end.
\makeatother
\newtheorem{example}{Example}[chapter]
%%
%% Load macros.
%%
\input{macros}
%% Information about the title, etc.
% \title{Higher-Order Theories of Handlers for Algebraic Effects}
% \title{Handlers for Algebraic Effects: Applications, Compilation, and Expressiveness}
% \title{Applications, Compilation, and Expressiveness for Effect Handlers}
% \title{Handling Computational Effects}
% \title{Programming Computable Effectful Functions}
% \title{Handling Effectful Computations}
\ifdefined\DRAFT
\title{Foundations for Programming and Implementing Effect Handlers\\
(DRAFT \href{https://github.com/dhil/phd-dissertation/commit/\DRAFT}{\DRAFT})}
\else
\title{Foundations for Programming and Implementing Effect Handlers}
\fi
%\title{Foundations for Programming with Control via Effect Handlers}
\author{Daniel Hillerström}
%% If the year of submission is not the current year, uncomment this line and
%% specify it here:
\submityear{2021}
%% Specify the abstract here.
\abstract{%
First-class control operators provide programmers with an expressive
and efficient means for manipulating control through reification of
the current control state as a first-class object, enabling
them to implement their own computational effects and control
idioms as shareable libraries.
%
Effect handlers provide a particularly structured approach to
programming with first-class control by naming control-reifying
operations and separating them from their handling.
This thesis is composed of three strands of work in which I develop
operational foundations for programming and implementing effect
handlers, and explore the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus
of a statically typed programming language with a \emph{structural}
notion of effect types, as opposed to the \emph{nominal} notion of
effect types that dominates the literature.
%
With the structural approach, effects need not be declared before
use. The usual safety properties of statically typed programming are
retained by making crucial use of \emph{row polymorphism} to build
and track effect signatures.
%
The calculus features three forms of handlers: deep, shallow, and
parameterised. Each offers a different approach to manipulating the
control state of programs. Traditional deep handlers are defined
by folds over computation trees, and are the original construct
proposed by Plotkin and Pretnar. Shallow handlers are defined by
case splits (rather than folds) over computation
trees. Parameterised handlers are deep handlers extended with a
state value that is threaded through the folds over computation
trees.
%
To demonstrate the usefulness of effects and handlers as a practical
programming abstraction, I implement the essence of a small
\UNIX{}-style operating system complete with multi-user environment,
time-sharing, and file I/O.
The second strand studies \emph{continuation passing style} (CPS)
and \emph{abstract machine semantics}, which are foundational
techniques that admit a unified basis for implementing deep,
shallow, and parameterised effect handlers in the same environment.
%
The CPS translation is obtained through a series of refinements of a
basic first-order CPS translation for a fine-grain call-by-value
language into an untyped language.
%
Each refinement moves toward a more intensional representation of
continuations, eventually arriving at the notion of \emph{generalised
continuation}, which admits simultaneous support for deep, shallow,
and parameterised handlers.
%
The initial refinement adds support for deep handlers by
representing stacks of continuations and handlers as a curried
sequence of arguments.
%
The image of the resulting translation is not \emph{properly
tail-recursive}, meaning some function application terms do not
appear in tail position. To rectify this, the CPS translation is
refined once more to obtain an uncurried representation of stacks of
continuations and handlers. Finally, the translation is made
higher-order in order to contract administrative redexes at
translation time.
%
The generalised continuation representation is used to construct an
abstract machine that provides simultaneous support for deep,
shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect
handlers. First, I show that the deep, shallow, and parameterised
notions of handlers are interdefinable by way of \emph{typed
macro-expressiveness}, which provides a syntactic notion of
expressiveness that affirms the existence of encodings between
handlers, but provides no information about the computational
content of the encodings. Second, using the semantic notion of
expressiveness, I show that for a class of programs a programming
language with first-class control (e.g. effect handlers) admits
asymptotically faster implementations than is possible in a language
without first-class control.
%
}
%% Now we start with the actual document.
\begin{document}
\raggedbottom
%% First, the preliminary pages
\begin{preliminary}
%% This creates the title page
\maketitle
%% Lay summary
\begin{laysummary}
% This dissertation is about \emph{taking back control} from the
% operating system and putting it back into the hands of
% programmers. For too long programmers have been governed by
% unelected primitives
Computer programs interact with the real world, e.g. to send and
retrieve e-mails, stream videos, transfer data to and from
pluggable storage media, and so forth. This interaction
is governed by the operating system, which is responsible for
running programs and providing them with the vocabulary to interact
with the world.
%
Programs use words from this vocabulary with a preconceived idea of
their meaning; importantly, however, the words themselves are mere
syntax. The semantics of each word is determined by the operating
system (typically such that it aligns with the intent of the
program).
This separation of syntax and semantics makes it possible for
programs and operating systems to evolve independently, because any
program can be run by any operating system whose vocabulary conforms
to the expectations of the program. It has proven to be a remarkably
successful model for building and maintaining computer programs.
Conventionally, an operating system has been a complex and
monolithic single global entity in a computer system.
%
However, \emph{effect handlers} are a novel programming abstraction
which enables programs to be decomposed into syntax and semantics
internally, by localising the notion of an operating system. In
essence, an effect handler is a tiny programmable operating system
that a program may use internally to determine the meaning of its
subprograms. The key property of effect handlers is that they
compose seamlessly, and as a result the semantics of a program can
be compartmentalised into several fine-grained and comprehensible
components. The ability to seamlessly swap out one component for
another component provides a promising basis for modular
construction and reconfiguration of computer programs.
In this dissertation I develop the foundations for programming with
effect handlers. Specifically, I present a practical design for
programming with effect handlers as well as applications, I develop
two universal implementation strategies for effect handlers, and I
give a precise mathematical characterisation of the inherent
computational efficiency of effect handlers.
\end{laysummary}
%% Acknowledgements
\begin{acknowledgements}
Firstly, I want to thank Sam Lindley for his guidance, advice, and
encouragement throughout my studies. He has been an enthusiastic
supervisor, and he has always been generous with his time. I am
fortunate to have been supervised by him.
%
Secondly, I want to extend my gratitude to John Longley, who has
been an excellent second supervisor and has always shown enthusiasm
about my work.
%
Thirdly, I want to thank my academic brother Simon Fowler, who has
always been a good friend and a pillar of inspiration. Regardless of
academic triumphs and failures, we have always had fun.
I am extremely grateful to KC Sivaramakrishnan, who took a genuine
interest in my research early on and invited me to come spend some
time at OCaml Labs in Cambridge. My initial visit to Cambridge
sparked the beginning of a long-standing and productive
collaboration. Also, thanks to Gemma Gordon, who I had the pleasure
of sharing an office with during one of my stints at OCaml Labs.
I have been fortunate to work with Robert Atkey, who has been a
continuous source of inspiration and interesting research ideas. Our
work is clearly reflected in this dissertation.
%
I also want to thank my other collaborators: Andreas Rossberg, Anil
Madhavapeddy, Leo White, Stephen Dolan, and Jeremy Yallop.
I have had the pleasure of working in LFCS at the same time as James
McKinna. James has always taken a genuine interest in my work and
challenged me with intellectually stimulating questions. I
appreciate our many conversations, even though I spent days, weeks,
sometimes months, and in some instances years coming up with
adequate answers. I also want to thank other former and present
members of Informatics: Brian Campbell, Christophe Dubach, James
Cheney, J. Garrett Morris, Gordon Plotkin, Michel Steuwer, Philip
Wadler, and Stephen Gilmore.
My time as a student in Informatics Forum has been enjoyable in
large part thanks to my friends: Amna Shahab, Chris Vasiladiotis,
Craig McLaughlin, Danel Ahman, Daniel Mills, Frank Emrich, Emanuel
Martinov, Floyd Chitalu, Jack Williams, Jakub Zalewski, Larisa
Stoltzfus, Maria Gorinova, Marcin Szymczak, Paul Piho, Philip
Ginsbach, Radu Ciobanu, Rajkarn Singh, Rosinda Fuentes Pineda, Rudi
Horn, Shayan Najd, Stan Manilov, and Vanya Yaneva-Cormack.
Thanks to Ohad Kammar for agreeing to be the internal examiner for
my dissertation. As for external examiners, I am truly humbled by
and thankful to Andrew Kennedy and Edwin Brady for agreeing to
examine my dissertation.
Throughout my studies I have received funding from the
\href{https://www.ed.ac.uk/informatics}{School of Informatics} at
The University of Edinburgh, as well as an
\href{https://www.epsrc.ac.uk/}{EPSRC} grant
\href{http://pervasiveparallelism.inf.ed.ac.uk}{EP/L01503X/1} (EPSRC
Centre for Doctoral Training in Pervasive Parallelism), and by ERC
Consolidator Grant Skye (grant number 682315). I finished this
dissertation whilst employed on the UKRI Future Leaders
Fellowship ``Effect Handler Oriented Programming'' (reference number
MR/T043830/1).
\end{acknowledgements}
%% Next we need to have the declaration.
% \standarddeclaration
\begin{declaration}[Daniel Hillerström, Edinburgh, Scotland, 2021]
I declare that this thesis was composed by myself, that the work
contained herein is my own except where explicitly stated otherwise
in the text, and that this work has not been submitted for any other
degree or professional qualification except as specified.
The following previously published work of mine features prominently
within this dissertation. Each chapter details its relation to this
previous work.
%
\begin{itemize}
\item \bibentry{HillerstromL16}
\item \bibentry{HillerstromLAS17}
\item \bibentry{HillerstromL18}
\item \bibentry{HillerstromLA20}
\item \bibentry{HillerstromLL20}
\end{itemize}
%
\end{declaration}
%% Finally, a dedication (this is optional -- uncomment the following line if
%% you want one).
% \dedication{To my mummy.}
% \dedication{\emph{To be or to do}}
\dedication{\emph{Only you set the limits}}
% \begin{preface}
% A preface will possibly appear here\dots
% \end{preface}
%% Create the table of contents
\setcounter{secnumdepth}{2} % Numbering on sections and subsections
\setcounter{tocdepth}{1} % Show chapters, sections and subsections in TOC
%\singlespace
\tableofcontents
%\doublespace
%% If you want a list of figures or tables, uncomment the appropriate line(s)
% \listoffigures
% \listoftables
\end{preliminary}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Main content %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Introduction
%%
\chapter{Introduction}
\label{ch:introduction}
%
% Programmers tend to view programs as impenetrable opaque boxes, whose
% outputs are determined entirely by their
% inputs~\cite{Hughes89,Howard80}. This is a compelling view which
% admits a canonical mathematical model of
% computation~\cite{Church32,Church41}.
% %
% Alas, this view does not capture the reality of practical programs,
% which perform operations to interact with their ambient environment to
% for example signal graceful or erroneous termination, manipulate the
% file system, fork a new thread, and so forth, all of which may have an
% observable effect on the program state. Interactions with the
% environment are mediated by some local authority (e.g. operating
% system), which confers the meaning of operations~\cite{CartwrightF94}.
% %
% This suggests a view of programs as translucent boxes, which convey
% their internal use of operations used to compute their outputs.
% This view underpins the \emph{effectful programming paradigm} in which
% computational effects constitute an integral part of programs. In
% effectful programming a computational effect is understood as a
% collection of operations, e.g. exceptions are an effect with a single
% operation \emph{raise}, mutable state is an effect with two operations
% \emph{get} and \emph{put}, concurrency is an effect with two
% operations \emph{fork} and \emph{yield}, etc~\cite{Moggi91,PlotkinP01}.
\citeauthor{PlotkinP09}'s \emph{effect handlers} provide a promising
modular basis for effectful
programming~\cite{PlotkinP09,PlotkinP13,KammarLO13}. The basic tenet
of programming with effect handlers is that programs are written with
respect to an interface of effectful operations they expect to be
offered by their environment.
%
An effect handler is an environment that implements an effect
interface (also known as a computational effect).
%
Programs can run under any effect handler whose implementation
conforms to the expected effect interface.
%
In this regard, the \emph{doing} and \emph{being} of effects are kept
separate~\cite{JonesW93,LindleyMM17}, which is a necessary condition
for modular abstraction~\cite{Parnas72}.
%
A key property of effect handlers is that they provide modular
instantiation of effect interfaces through seamless composition,
meaning the programmer can compose any number of complementary
handlers to obtain a full implementation of some
interface~\cite{HillerstromL16}.
%
The ability to seamlessly compose handlers gives rise to a new
programming paradigm, which we shall call \emph{effect handler
oriented programming}, in which the meaning of effectful programs may
be decomposed into a collection of fine-grained effect handlers.
The key enabler of seamless composition is \emph{first-class
control}, which provides a mechanism for reifying the program
control state as a first-class data object known as a
continuation~\cite{FriedmanHK84}.
%
Through structured manipulation of continuations, control is
transferred between programs and their handlers.
In this dissertation I present a practical design for programming
languages with support for effect handler oriented programming, I
develop two foundational implementation techniques for effect
handlers, and I study their inherent computational expressiveness and
efficiency.
% Alas, this view does not capture the reality of practical programs, which
% may use a variety of observable computational effects such as
% exceptions, state, concurrency, interactive input/output, and so
% forth.
% %
% Instead a view of function as
% Alas, this view does not capture
% the reality of practical programs. In practice a function may perform
% effectful operations such as throwing an exception, referencing
% memory, forking a thread, whose interactions with the function's
% ambient environment are observable~\cite{CartwrightF92}.
%
% Practical programs are inherently effectful as they interact with
% their environment (i.e. operating system) during the extent of their
% evaluation.
%
% Practical programming is inherently effectfulPractical programming involves programming with \emph{computational
% effects}, or simply effects.
% Functional programming offers two distinct, but related, approaches to
% effectful programming, which \citet{Filinski96} succinctly
% characterises as \emph{effects as data} and \emph{effects as
% behaviour}. The former uses monads to encapsulate
% effects~\cite{Moggi91,Wadler92} which is compelling because it
% recovers some of benefits of the opaque box view for effectful
% programs, though, at the expense of a change of programming
% style~\cite{JonesW93}. The latter retains the usual direct style of
% programming by way of \emph{first-class control}, which is a powerful
% facility that can simulate any computational
% effect~\cite{Filinski94,Filinski96}.
% Programmers with continuations at their disposal have the ability to
% pry open function boundaries, which shatters the opaque box view. This
% ability can significantly improve the computational expressiveness and
% efficiency of programming languages~\cite{LongleyN15,HillerstromLL20}.
%\citet{Sabry98}
% Virtually every useful program performs some computational effects
% such as exceptions, state, concurrency, nondeterminism, interactive
% input and output during its execution.
%
%\citet{Filinski96} \emph{effects as data} and \emph{effects as behaviour}
% Control is a pervasive phenomenon in virtually every programming
% language. A programming language typically features a variety of
% control constructs, which let the programmer manipulate the control
% flow of programs in interesting ways. The most well-known control
% construct may well be $\If\;V\;\Then\;M\;\Else\;N$, which
% conditionally selects between two possible \emph{continuations} $M$
% and $N$ depending on whether the condition $V$ is $\True$ or $\False$.
% %
% The $\If$ construct offers no means for programmatic manipulation of
% either continuation.
% %
% More intriguing forms of control exist, which enable the programmer to
% manipulate and reify continuations as first-class data objects. This
% kind of control is known as \emph{first-class control}.
% The idea of first-class control is old. It was conceived already
% during the design of the programming language
% Algol~\cite{BackusBGKMPRSVWWW60} (one of the early high-level
% programming languages along with Fortran~\cite{BackusBBGHHNSSS57} and
% Lisp~\cite{McCarthy60}) when \citet{Landin98} sought to model
% unrestricted goto-style jumps using an extended $\lambda$-calculus.
% %
% Since then a wide variety of first-class control operators have
% appeared. We can coarsely categorise them into two groups:
% \emph{undelimited} and \emph{delimited} (in
% Chapter~\ref{ch:continuations} we will perform a finer analysis of
% first-class control). Undelimited control operators are global
% phenomena that let programmers capture the entire control state of
% their programs, whereas delimited control operators are local
% phenomena that provide programmers with fine-grain control over which
% parts of the control state to capture.
% %
% Thus there are good reasons for preferring delimited control over
% undelimited control for practical programming.
% %
% % The most (in)famous control operator
% % \emph{call-with-current-continuation} appeared later during a revision
% % of the programming language Scheme~\cite{AbelsonHAKBOBPCRFRHSHW85}.
% %
% Nevertheless, the ability to manipulate continuations programmatically
% is incredibly powerful as it enables programmers to perform non-local
% transfers of control on the demand. This sort of power makes it
% possible to implement a wealth of control idioms such as
% coroutines~\cite{MouraI09}, generators/iterators~\cite{ShawWL77},
% async/await~\cite{SymePL11} as user-definable
% libraries~\cite{FriedmanHK84,FriedmanH85,Leijen17a,Leijen17,Pretnar15}. The
% phenomenon of non-local transfer of control is known as a
% \emph{control effect}. It turns out to be `the universal effect' in
% the sense that it can simulate every other computational effect
% (consult \citet{Filinski96} for a precise characterisation of what it
% means to simulate an effect). More concretely, this means a
% programming language equipped with first-class control is capable of
% implementing effects such as exceptions, mutable state, transactional
% memory, nondeterminism, concurrency, interactive input/output, stream
% redirection, internally.
% %
% A whole programming paradigm known as \emph{effectful programming} is
% built around the idea of simulating computational effects using
% control effects.
% In this dissertation I also advocate a new programming paradigm, which
% I dub \emph{effect handler oriented programming}.
% %
% \dhil{This dissertation is about the operational foundations for
% programming and implementing effect handlers, a particularly modular
% and extensible programming abstraction for effectful programming}
% Control is an ample ingredient of virtually every programming
% language. A programming language typically feature a variety of
% control constructs, which let the programmer manipulate the control
% flow of programs in interesting ways. The most well-known control
% construct may well be $\If\;V\;\Then\;M\;\Else\;N$, which
% conditionally selects between two possible \emph{continuations} $M$
% and $N$ depending on whether the condition $V$ is $\True$ or
% $\False$. Another familiar control construct is function application
% $\EC[(\lambda x.M)\,W]$, which evaluates some parameterised
% continuation $M$ at value argument $W$ to normal form and subsequently
% continues the current continuation induced by the invocation context
% $\EC$.
% %
% At the time of writing the trendiest control construct happen to be
% async/await, which is designed for direct-style asynchronous
% programming~\cite{SymePL11}. It takes the form
% $\async.\,\EC[\await\;M]$, where $\async$ delimits an asynchronous
% context $\EC$ in which computations may be interleaved. The $\await$
% primitive may be used to defer execution of the current continuation
% until the result of the asynchronous computation $M$ is ready. Prior
% to async/await the most fashionable control construct was coroutines,
% which provide the programmer with a construct for performing non-local
% transfers of control by suspending the current continuation on
% demand~\cite{MouraI09}, e.g. in
% $\keyw{co_0}.\,\EC_0[\keyw{suspend}];\keyw{co_1}.\,\EC_1[\Unit]$ the
% two coroutines $\keyw{co_0}$ and $\keyw{co_1}$ work in tandem by
% invoking suspend in order to hand over control to the other coroutine;
% $\keyw{co_0}$ suspends the current continuation $\EC_0$ and transfers
% control to $\keyw{co_1}$, which resume its continuation $\EC_1$ with
% the unit value $\Unit$. The continuation $\EC_1$ may later suspend in
% order to transfer control back to $\keyw{co_0}$ such that it can
% resume execution of the continuation
% $\EC_0$~\cite{AbadiP10}. Coroutines are amongst the oldest ideas of
% the literature as they have been around since the dawn of
% programming~\cite{DahlMN68,DahlDH72,Knuth97,MouraI09}. Nevertheless
% coroutines frequently reappear in the literature in various guises.
% The common denominator for the aforementioned control constructs is
% that they are all second-class.
% % Virtually every programming language is equipped with one or more
% % control flow operators, which enable the programmer to manipulate the
% % flow of control of programs in interesting ways. The most well-known
% % control operator may well be $\If\;V\;\Then\;M\;\Else\;N$, which
% % conditionally selects between two possible \emph{continuations} $M$
% % and $N$ depending on whether the condition $V$ is $\True$ or $\False$.
% % %
% % Another familiar operator is function application\dots
% Evidently, control is a pervasive phenomenon in programming. However,
% not every control phenomenon is equal in terms of programmability and
% expressiveness.
\section{Why first-class control matters}
First things first, let us settle on the meaning of the qualifier
`first-class'. A programming language entity (or citizen) is regarded
as being first-class if it can be used on an equal footing with other
entities.
%
A familiar example is functions as first-class values. A first-class
function may be treated like any other primitive value, i.e. passed as
an argument to other functions, returned from functions, stored in
data structures, or let-bound.
First-class control makes the control state of the program available
as a first-class value known as a continuation object at any point
during evaluation~\cite{FriedmanHK84}. This object comes equipped with
at least one operation for restoring the control state. As such the
control flow of the program becomes a first-class entity that the
programmer may manipulate to implement interesting control phenomena.
From the perspective of programmers first-class control is a valuable
programming feature because it enables them to implement their own
control idioms, such as async/await~\cite{SymePL11}, as if they were
native to the programming language. More importantly, with
first-class control, programmer-defined control idioms are local
phenomena which can be encapsulated in a library such that the rest
of the program does not need to be made aware of their existence. Conversely, without
first-class control some control idioms can only be implemented using
global program restructuring techniques such as continuation passing
style.
From the perspective of compiler engineers first-class control is
valuable because it unifies several control-related constructs under
one single construct. First-class control can even be beneficial for
implementing programming languages which have no notion of first-class
control in source language. A runtime with support for first-class
control can considerably simplify and ease maintainability of an
implementation of a programming language with various distinct
second-class control idioms such as async/await~\cite{SymePL11},
coroutines~\cite{MouraI09}, etc, because compiler engineers need only
implement and maintain a single control mechanism rather than having
to implement and maintain individual runtime support for each control
idiom of the source language.
The idea of first-class control is old. It was conceived already
during the design of the programming language
Algol~\cite{BackusBGKMPRSVWWW60} (one of the early high-level
programming languages along with Fortran~\cite{BackusBBGHHNSSS57} and
Lisp~\cite{McCarthy60}) when \citet{Landin98} sought to model
unrestricted goto-style jumps using an extended $\lambda$-calculus.
%
Since then a wide variety of first-class control operators have
appeared. We can coarsely categorise them into two groups:
\emph{undelimited} and \emph{delimited} (in
Chapter~\ref{ch:continuations} we will perform a finer analysis of
first-class control). Undelimited control operators are global
phenomena that let programmers capture the entire control state of
their programs, whereas delimited control operators are local
phenomena that provide programmers with fine-grain control over which
parts of the control state to capture.
%
Thus there are good reasons for preferring delimited control over
undelimited control for practical programming.
\subsection{Why effect handlers matter}
%
The problem with traditional delimited control operators such as
\citeauthor{DanvyF90}'s shift/reset~\cite{DanvyF90} or
\citeauthor{Felleisen88}'s control/prompt~\cite{Felleisen88} is that
they hard-wire an implementation for the \emph{control effect}
interface, which provides only a single operation for reifying the
control state. In itself this interface does not limit what effects
are expressible as the control effect is in a particular sense `the
universal effect' because it can simulate any other computational
effect~\cite{Filinski96}.
The problem, meanwhile, is that the universality of the control effect
hinders modular programming as the control effect is inherently
unstructured. In essence, programming with traditional delimited
control to simulate effects is analogous to programming with the
universal type~\cite{Longley03} in statically typed programming
languages, and having to program with the universal type is usually a
telltale that the programming abstraction is inadequate for the
intended purpose.
In contrast, effect handlers provide a structured form of delimited
control, where programmers can give distinct names to control-reifying
operations and separate them from their handling. Throughout this
dissertation we will see numerous examples of how effect handlers
make programming with delimited control structured (cf. the following
section, Chapter~\ref{ch:continuations}, and
Chapter~\ref{ch:unary-handlers}).
%
\section{State of effectful programming}
\label{sec:state-of-effprog}
Functional programmers tend to view programs as impenetrable black
boxes, whose outputs are determined entirely by their
inputs~\cite{Hughes89,Howard80}. This is a compelling view which
admits a canonical mathematical model of
computation~\cite{Church32,Church41}. Alas, this view does not capture
the reality of practical programs, which interact with their
environment.
%
Functional programming prominently features two distinct, but related,
approaches to effectful programming, which \citet{Filinski96}
succinctly characterises as \emph{effects as data} and \emph{effects
as behaviour}.
%
The former uses data abstraction to encapsulate
effects~\cite{Moggi91,Wadler92}, which is compelling because it
recovers some of the benefits of the black box view for effectful
programs, though at the expense of a change of programming
style~\cite{JonesW93}. The latter retains the usual direct style of
programming, either by hard-wiring the semantics of the effects into
the language or, by more flexible means, via first-class control.
In this section I will provide a brief perspective on different
approaches to programming with effects along with an informal
introduction to the related concepts. We will look at each approach
through the lens of global mutable state --- the ``hello world'' of
effectful programming.
% how
% effectful programming has evolved as well as providing an informal
% introduction to the involved core concepts. We will look at the
% evolution of effectful programming through the lens of a singular
% effect, namely, global mutable state.
% The evolution of effectful programming has gone through several
% characteristic time periods. In this section I will provide a brief
% programming perspective on how effectful programming has evolved as
% well as providing an informal introduction to the core concepts
% concerned with each time period. We will look at the evolution of
% effectful programming through the lens of a singular effect, namely,
% global mutable state.
\subsection{Direct-style state}
\label{sec:direct-style-state}
%
We can realise stateful behaviour either by using language-supported
state primitives, by globally structuring our program to follow a
certain style, or by using first-class control in the form of
delimited control to simulate state. We do not consider undelimited
control, because it is insufficient to express mutable
state~\cite{FriedmanS00}.
% Programming in its infancy was effectful as the idea of first-class
% control was conceived already during the design of the programming
% language Algol~\cite{BackusBGKMPRSVWWW60} -- one of the early
% high-level programming languages along with
% Fortran~\cite{BackusBBGHHNSSS57} and Lisp~\cite{McCarthy60} -- when
% \citet{Landin98} sought to model unrestricted goto-style jumps using
% an extended $\lambda$-calculus. The power of \citeauthor{Landin98}'s
% control facility was recognised early by \citet{Burstall69}, who used
% it to implement a control abstraction for tree-based search.
% %
% % \citeauthor{Landin98}'s control facility did not gain popularity as a
% % practical programming abstraction~\cite{Felleisen87b}.
% \citeauthor{Landin98}'s control facility is a precursor to the
% undelimited control operator $\Callcc$ (short for call with current
% continuation), which first appeared in the programming language
% Scheme~\cite{AbelsonHAKBOBPCRFRHSHW85}.
%
% The power of \citeauthor{Landin98}'s control facility was recognised early The nature of the first-class control introduced by
% \citeauthor{Landin98} was undelimited. However,
% The early high-level programming languages
% Fortran~\cite{BackusBBGHHNSSS57}, Algol~\cite{BackusBGKMPRSVWWW60},
% and Lisp~\cite{McCarthy60} all hard-wire a particular set of effects
% into their semantics. The usage of effects in these languages is
% completely untracked, although, the languages belonging to the Lisp
% family have adopted a naming convention to suffix names of
% side-effecting operations with exclamation points, e.g. the state
% modification operation is named $\keyw{set!}$~\cite{Dybvig03}.
% The idea of undelimited first-class control was conceived during the
% development of Algol~\cite{Landin65,Landin65a,Landin98}. The probably
% most famous form of undelimited control, $\Callcc$, appeared
% later~\cite{AbelsonHAKBOBPCRFRHSHW85}.
% It is well-known that $\Callcc$ exhibits both time and space
% performance problems for various implementing various
% effects~\cite{Kiselyov12}.
%
\subsubsection{Builtin mutable state}
It is common to find mutable state built into the semantics of
mainstream programming languages. However, different languages vary in
their approach to mutable state. For instance, state mutation
underpins the foundations of imperative programming languages
belonging to the C family of languages. They typically do not
distinguish between mutable and immutable values at the level of
types. On the contrary, programming languages belonging to the ML
family of languages use types to differentiate between mutable and
immutable values. They reflect mutable values in types by using a
special unary type constructor $\PCFRef^{\Type \to
\Type}$. Furthermore, ML languages equip the $\PCFRef$ constructor
with three operations.
%
\[
\refv : S \to \PCFRef~S \qquad\quad
! : \PCFRef~S \to S \qquad\quad
\defnas~ : \PCFRef~S \to S \to \UnitType
\]
%
The first operation \emph{initialises} a new mutable state cell of
type $S$; the second operation \emph{gets} the value of a given state
cell; and the third operation \emph{puts} a new value into a given
state cell. It is important to note that getting the value of a state
cell does not alter its contents, whilst putting a value into a state
cell overwrites the previous contents.
The following function illustrates a use of the get and put primitives
to manipulate the contents of some global state cell $st$.
%
\[
\bl
\incrEven : \UnitType \to \Bool\\
\incrEven\,\Unit \defas \Let\;v \revto !st\;\In\;st \defnas 1 + v;\,\even~v
\el
\]
%
The type signature is oblivious to the fact that the function
internally makes use of the state effect to compute its return value.
%
The body of the function first retrieves the current value of the
state cell and binds it to $st$. Subsequently, it destructively
increments the value of the state cell. Finally, it applies the
predicate $\even : \Int \to \Bool$ to the original state value to test
whether its parity is even (this example function is a slight
variation of an example by \citet{Gibbons12}).
%
We can run this computation as a subcomputation in the context of
global state cell $st$.
%
\[
\Let\;st \revto \refv~4\;\In\;\Record{\incrEven\,\Unit;!st} \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
%
Operationally, the whole computation initialises the state cell $st$
to contain the integer value $4$. Subsequently it runs the $\incrEven$
computation, which returns the boolean value $\True$ and as a
side-effect increments the value of $st$ to be $5$. The whole
computation returns the boolean value paired with the final value of
the state cell.
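For concreteness, the following OCaml rendering of this example may be
helpful; OCaml spells the three primitives \texttt{ref}, \texttt{(!)},
and \texttt{(:=)}. This is a minimal sketch of mine: the names
\texttt{st}, \texttt{even}, and \texttt{incr\_even} are not part of
any library.
%
\begin{verbatim}
(* A global state cell initialised to 4. *)
let st : int ref = ref 4

let even (n : int) : bool = n mod 2 = 0

(* Reads the cell, destructively increments its contents, and
   tests the parity of the original value. *)
let incr_even () : bool =
  let v = !st in
  st := 1 + v;
  even v

(* Runs the computation and pairs its result with the final
   contents of the cell: evaluates to (true, 5). *)
let result : bool * int =
  let b = incr_even () in
  (b, !st)
\end{verbatim}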
\subsubsection{Transparent state-passing purely functionally}
It is possible to implement stateful behaviour in a language without
any computational effects, e.g. simply typed $\lambda$-calculus, by
following a particular design pattern known as
\emph{state-passing}. The principal idea is to parameterise stateful
functions by the current state and make them return whatever result
they compute along with the updated state value. More precisely, in
order to endow some $n$-ary function with argument types $A_i$ and
return type $R$ with state of type $S$, we transform the function
signature as follows.
%
\[
\sembr{A_1 \to \cdots \to A_n \to R}_S
\defas A_1 \to \cdots \to A_n \to S \to R \times S
\]
%
By convention we always insert the state parameter at the tail end of
the parameter list. We may read the suffix $S \to R \times S$ as a
sort of effect annotation indicating that a particular function
utilises state. The downside of state-passing is that it is a global
technique which requires us to rewrite the signatures (and their
implementations) of all functions that makes use of state.
We can reimplement the $\incrEven$ in state-passing style as follows.
%
\[
\bl
\incrEven : \UnitType \to \Int \to \Bool \times \Int\\
\incrEven\,\Unit \defas \lambda st. \Record{\even~st; 1 + st}
\el
\]
%
State initialisation is simply function application.
%
\[
\incrEven~\Unit~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
%
Programming in state-passing style is laborious and no fun as it is
anti-modular, because for effect-free higher-order functions to work
with stateful functions they too must be transformed or at the very
least be duplicated to be compatible with stateful function arguments.
%
Nevertheless, state-passing is an important technique as it is the
secret sauce that enables us to simulate mutable state with other
programming techniques.
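In OCaml, the state-passing version is an ordinary pure function; the
sketch below (names mine, as before) mirrors the definition above.
%
\begin{verbatim}
let even (n : int) : bool = n mod 2 = 0

(* State-passing style: the state is an extra parameter, and the
   updated state is an extra component of the result. *)
let incr_even () : int -> bool * int =
  fun st -> (even st, 1 + st)

(* State initialisation is simply function application:
   evaluates to (true, 5). *)
let result : bool * int = incr_even () 4
\end{verbatim}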
\subsubsection{Opaque state-passing with delimited control}
%
Delimited control appeared during the late 80s in several different
forms~\cite{SitaramF90,DanvyF89}. The particular form that I will use
here is due to \citet{DanvyF89}.
%
Nevertheless, the secret sauce of all forms of delimited control is
that a delimited control operator makes it possible to pry open
function boundaries, as control may transfer out of an arbitrary
evaluation context, leaving behind a hole that can later be filled by
some value supplied externally.
\citeauthor{DanvyF89}'s formulation of delimited control introduces
two primitives.
%
\[
\reset{-} : (\UnitType \to R) \to R \qquad\quad
\shift : ((A \to R) \to R) \to A
\]
%
The first primitive $\reset{-}$ (pronounced `reset') is a control
delimiter. Operationally, reset evaluates a given thunk in an empty
evaluation context and returns the final result of that evaluation.
%
The second primitive $\shift$ is a control reifier. An application of
$\shift$ reifies and erases the control state up to (but not
including) the nearest enclosing reset. The reified control state
represents the continuation of the invocation of $\shift$ (up to the
innermost reset); it gets passed as a function to the argument of
$\shift$.
We define both primitives over some fixed return type $R$ (an actual
practical implementation would use polymorphism to make them more
flexible).
%
By instantiating $R = S \to A \times S$, where $S$ is the type of
state and $A$ is the type of return values, we can use shift and
reset to simulate mutable state using state-passing in a way that is
opaque to the rest of the program.
%
Let us first define operations for accessing and modifying the state
cell.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\getF : \UnitType \to S\\
\getF~\Unit \defas \shift\,(\lambda k.\lambda st. k~st~st)
\el} &
\multirow{2}{*}{
\bl
\putF : S \to \UnitType\\
\putF~st \defas \shift\,(\lambda k.\lambda st'.k~\Unit~st)
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
The body of $\getF$ applies $\shift$ to capture the current
continuation, which gets supplied to the anonymous function
$(\lambda k.\lambda st. k~st~st)$. The continuation parameter $k$ has
type $S \to S \to A \times S$. The continuation is applied to two
instances of the current state value $st$. The first instance is the
value returned to the caller of $\getF$, whilst the second instance is
the state value available during the next invocation of either $\getF$
or $\putF$. This aligns with the intuition that accessing a state cell
does not modify its contents. The implementation of $\putF$ is
similar, except that the first argument to $k$ is the unit value,
because the caller of $\putF$ expects a unit in return. Also, it
ignores the current state value $st'$ and instead passes the state
argument $st$ onto the activation of the next state operation. Again,
this aligns with the intuition that modifying a state cell destroys
its previous contents.
Using these two operations we can implement a version of $\incrEven$
that takes advantage of delimited control to simulate global state.
%
\[
\bl
\incrEven : \UnitType \to \Bool\\
\incrEven\,\Unit \defas \Let\;st \revto \getF\,\Unit\;\In\;\putF\,(1 + st);\,\even~st
\el
\]
%
Modulo naming of operations, this version is similar to the version
that uses builtin state. The type signature of the function is even
the same.
%
Before we can apply this function we must first implement a state
initialiser.
%
\[
\bl
\runState : (\UnitType \to A) \to S \to A \times S\\
\runState~m~st_0 \defas \reset{\lambda\Unit.\Let\;x \revto m\,\Unit\;\In\;\lambda st. \Record{x;st}}\,st_0
\el
\]
%
The function $\runState$ acts as both the state cell initialiser and
runner of the stateful computation. The first parameter $m$ is a thunk
that may perform stateful operations and the second parameter $st_0$
is the initial value of the state cell. The implementation wraps an
instance of reset around the application of $m$ in order to delimit
the extent of applications of $\shift$ within $m$. It is important to
note that each invocation of $\getF$ and $\putF$ gives rise to a
state-accepting function, thus when $m$ is applied a chain of
state-accepting functions gets constructed lazily. The chain ends in
the state-accepting function returned by the reset instance. The
application of the reset instance to $st_0$ effectively causes
evaluation of each function in this chain to start.
After instantiating $A = \Bool$ and $S = \Int$ we can use the
$\runState$ function to apply the $\incrEven$ function.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
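For readers who wish to experiment, the sketch below renders this
simulation in OCaml using Oleg Kiselyov's delimcc library. I assume
here that delimcc's \texttt{new\_prompt}, \texttt{push\_prompt}, and
\texttt{shift} operations are available; the answer type is fixed to
$R = \Int \to \Bool \times \Int$ as in the text, and all names are
mine.
%
\begin{verbatim}
open Delimcc

(* One prompt whose answer type is the state-accepting function
   int -> bool * int. *)
let p : (int -> bool * int) Delimcc.prompt = new_prompt ()

(* get and put reify the continuation up to the prompt and return
   a state-accepting function, exactly as in the text. *)
let get () : int = shift p (fun k -> fun st -> k st st)
let put (st : int) : unit = shift p (fun k -> fun _st' -> k () st)

let incr_even () : bool =
  let st = get () in
  put (1 + st);
  st mod 2 = 0

(* push_prompt delimits the stateful computation; applying the
   delimited result to st0 starts the chain of state-accepting
   functions. *)
let run_state (m : unit -> bool) (st0 : int) : bool * int =
  push_prompt p (fun () -> let x = m () in fun st -> (x, st)) st0

let result = run_state incr_even 4   (* evaluates to (true, 5) *)
\end{verbatim}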
% \subsection{Monadic epoch}
\subsection{Monadic state}
\label{sec:monadic-state}
During the late 80s and early 90s monads rose to prominence as a
practical idiom for structuring effectful
programming~\cite{Moggi89,Moggi91,Wadler92,Wadler92b,JonesW93,Wadler95,JonesABBBFHHHHJJLMPRRW99}.
%
The concept of monad has its origins in category theory and its
mathematical nature is well-understood~\cite{MacLane71,Borceux94}. The
emergence of monads as a programming abstraction began when
\citet{Moggi89,Moggi91} proposed to use monads as the mathematical
foundation for modelling computational effects in denotational
semantics. \citeauthor{Moggi91}'s view was that \emph{monads determine
computational effects}. The key property of this view is that pure
values of type $A$ are distinguished from effectful computations of
type $T~A$, where $T$ is the monad representing the effect(s) of the
computation. This view was put into practice by
\citet{Wadler92,Wadler95}, who popularised monadic programming in
functional programming by demonstrating how monads increase the ease
with which programs may be retrofitted with computational effects.
%
In practical programming terms, monads may be thought of as
constituting a family of design patterns, where each pattern gives
rise to a distinct effect with its own collection of operations.
%
Part of the appeal of monads is that they provide a structured
interface for programming with effects such as state, exceptions,
nondeterminism, interactive input and output, and so forth, whilst
preserving the equational style of reasoning about pure functional
programs~\cite{GibbonsH11,Gibbons12}.
%
% Notably, they form the foundations for effectful programming in
% Haskell, which adds special language-level support for programming
% with monads~\cite{JonesABBBFHHHHJJLMPRRW99}.
%
The presentation of monads here is inspired by \citeauthor{Wadler92}'s
presentation of monads for functional programming~\cite{Wadler92}, and
it ought to be familiar to users of
Haskell~\cite{JonesABBBFHHHHJJLMPRRW99}.
\begin{definition}
A monad is a triple $(T^{\TypeCat \to \TypeCat}, \Return, \bind)$
where $T$ is some unary type constructor, $\Return$ is an operation
that lifts an arbitrary value into the monad (sometimes this
operation is called `the unit operation'), and $\bind$ is the
application operator of the monad (this operator is pronounced
`bind'). Adequate implementations of $\Return$ and $\bind$ must
conform to the following interface.
%
\[
\bl
\Return : A \to T~A \qquad\quad \bind ~: T~A \to (A \to T~B) \to T~B
\el
\]
%
Interactions between $\Return$ and $\bind$ are governed by the monad
laws.
\begin{reductions}
\slab{Left\textrm{ }identity} & \Return\;x \bind k &=& k~x\\
\slab{Right\textrm{ }identity} & m \bind \Return &=& m\\
\slab{Associativity} & (m \bind k) \bind f &=& m \bind (\lambda x. k~x \bind f)
\end{reductions}
\end{definition}
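In OCaml, which unlike Haskell has no built-in monad support, the
interface may be expressed as a module signature. The sketch below is
mine; \texttt{>>=} is the conventional OCaml spelling of bind.
%
\begin{verbatim}
module type MONAD = sig
  type 'a t                                   (* the constructor T *)
  val return : 'a -> 'a t                     (* lift a value      *)
  val ( >>= ) : 'a t -> ('a -> 'b t) -> 'b t  (* bind              *)
end
(* Implementations must satisfy the monad laws:
     return x >>= k   =  k x                        (left identity)
     m >>= return     =  m                          (right identity)
     (m >>= k) >>= f  =  m >>= fun x -> k x >>= f   (associativity) *)
\end{verbatim}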
%
We may understand the type $T~A$ as being inhabited by computations
that compute a \emph{tainted} value of type $A$. In this regard, we may
understand $T$ as denoting the taint involved in computing $A$,
i.e. we can think of $T$ as a sort of effect annotation which informs
us about which effectful operations the computation may perform to
produce $A$.
%
The monad interface may be instantiated in different ways to realise
different computational effects. In the following subsections we will
see three different instantiations with which we will implement global
mutable state.
Monadic programming is a top-down approach to effectful programming,
where the concrete monad structure is taken as a primitive which
controls interactions between effectful operations.
% Monadic programming is a top-down approach to effectful programming,
% where we start with a concrete framework in which we can realise
% effectful operations.
%
The monad laws ensure that monads have some algebraic structure, which
programmers can use when reasoning about their monadic
programs. Similarly, optimising compilers may take advantage of the
structure to emit more efficient code.
The success of monads as a programming idiom is difficult to
overstate, as monads have given rise to several popular
control-oriented programming abstractions including the asynchronous
programming idiom async/await~\cite{Claessen99,LiZ07,SymePL11}.
\subsubsection{State monad}
%
The state monad is an instantiation of the monad interface that
encapsulates mutable state by using the state-passing technique
internally. In addition it equips the monad with two operations for
manipulating the state cell.
%
\begin{definition}\label{def:state-monad}
The state monad is defined over some fixed state type $S$.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multicolumn{2}{l}{T~A \defas S \to A \times S} \smallskip\\
\multirow{2}{*}{
\bl
\Return : A \to T~A\\
\Return~x\defas \lambda st.\Record{x;st}
\el} &
\multirow{2}{*}{
\bl
\bind ~: T~A \to (A \to T~B) \to T~B\\
\bind ~\defas \lambda m.\lambda k.\lambda st. \Let\;\Record{x;st'} = m~st\;\In\; (k~x)~st'
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
The $\Return$ of the monad is a state-accepting function of type
$S \to A \times S$ that returns its first argument paired with the
current state. The bind operator also produces a state-accepting
function of type $S \to A \times S$. The bind operator first
supplies the current state $st$ to the monad argument $m$. This
application yields a value result of type $A$ and an updated state
$st'$. The result is supplied to the continuation $k$, which
produces another state accepting function that gets applied to the
previously computed state value $st'$.
The state monad is equipped with two dual operations for accessing
and modifying the state encapsulated within the monad.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\getF : \UnitType \to T~S\\
\getF~\Unit \defas \lambda st. \Record{st;st}
\el} &
\multirow{2}{*}{
\bl
\putF : S \to T~\UnitType\\
\putF~st \defas \lambda st'.\Record{\Unit;st}
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
Interactions between the two operations satisfy the following
equations~\cite{Gibbons12}.
%
\begin{reductions}
\slab{Get\textrm{-}get} & \getF\,\Unit \bind (\lambda st. \getF\,\Unit \bind (\lambda st'.k~st~st')) &=& \getF\,\Unit \bind (\lambda st.k~st~st)\\
\slab{Get\textrm{-}put} & \getF\,\Unit \bind (\lambda st.\putF~st) &=& \Return\;\Unit\\
\slab{Put\textrm{-}get} & \putF~st \bind (\lambda\Unit.\getF\,\Unit \bind (\lambda st'.k~st')) &=& \putF~st \bind (\lambda \Unit.k~st)\\
\slab{Put\textrm{-}put} & \putF~st \bind (\lambda\Unit.\putF~st') &=& \putF~st'
\end{reductions}
%
The first equation states that performing one get after another get
is redundant. The second equation captures the intuition that
getting a value and then putting has no observable effect on the
state cell. The third equation states that performing a get
immediately after putting a value is equivalent to returning that
value. The fourth equation states that only the latter of two
consecutive puts is observable.
\end{definition}
The literature often uses this presentation (or a similar one) with
the four equations above, even though there exists a smaller
presentation in which the first equation is redundant, as it is
derivable from the second and third equations
(cf. Appendix~\ref{ch:get-get}).
We can implement a monadic variation of the $\incrEven$ function that
uses the state monad to emulate manipulations of the state cell as
follows.
%
\[
\bl
T~A \defas \Int \to A \times \Int\smallskip\\
\incrEven : \UnitType \to T~\Bool\\
\incrEven~\Unit \defas \getF\,\Unit
\bind (\lambda st.
\putF\,(1+st)
\bind (\lambda\Unit. \Return\;(\even~st)))
\el
\]
%
We fix the state type of our monad to be the integer type. The type
signature of the function $\incrEven$ may be read as describing a
thunk that returns a boolean value, and whilst computing this boolean
value the function may perform any effectful operations given by the
monad $T$~\cite{Moggi91,Wadler92}, i.e. $\getF$ and
$\putF$. Operationally, the function retrieves the current value of
the state cell via the invocation of $\getF$. The bind operator passes
this value onto the continuation, which increments the value and
invokes $\putF$. The continuation then applies the $\even$ predicate
to the original state value. The structure of the monad means that the
result of running this computation gives us a pair consisting of a
boolean value, indicating whether the initial state was even, and the
final state value.
The state initialiser and monad runner is simply thunk forcing and
function application combined.
%
\[
\bl
\runState : (\UnitType \to T~A) \to S \to A \times S\\
\runState~m~st_0 \defas m~\Unit~st_0
\el
\]
%
By instantiating $S = \Int$ and $A = \Bool$ we can obtain the same
result as before.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
%
We can instantiate the monad structure in a similar way to simulate
other computational effects such as exceptions, nondeterminism,
concurrency, and so forth~\cite{Moggi91,Wadler92}.
\subsubsection{Continuation monad}
As in J.R.R. Tolkien's fictitious Middle-earth~\cite{Tolkien54} there
exists one monad to rule them all, one monad to realise them, one
monad to subsume them all, and in the term language bind them. This
powerful monad is the \emph{continuation monad}.
The continuation monad may be regarded as `the universal monad' as it
can embed any other monad, and thereby simulate any computational
effect~\cite{Filinski99}. It derives its name from its connection to
continuation passing style~\cite{Wadler92}, which is a particular
style of programming where each function is parameterised by the
current continuation (we will discuss continuation passing style in
detail in Chapter~\ref{ch:cps}). The continuation monad is powerful
exactly because each of its operations has access to the current
continuation.
\begin{definition}\label{def:cont-monad}
The continuation monad is defined over some fixed return type
$R$~\cite{Wadler92}.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multicolumn{2}{l}{T~A \defas (A \to R) \to R} \smallskip\\
\multirow{2}{*}{
\bl
\Return : A \to T~A\\
\Return~x\defas \lambda k.k~x
\el} &
\multirow{2}{*}{
\bl
\bind ~: T~A \to (A \to T~B) \to T~B\\
\bind ~\defas \lambda m.\lambda k.\lambda c.m\,(\lambda x.k~x~c)
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
\end{definition}
%
The $\Return$ operation lifts a value into the monad by using it as an
argument to the continuation $k$. The bind operator binds the current
continuation to $c$. In the body it applies the monad $m$ to an
anonymous continuation function of type $A \to T~B$. Internally, the
monad $m$ will apply this continuation when it is of the form
$\Return$. Thus the parameter $x$ gets bound to the $\Return$ value of
the monad. This parameter gets supplied as an argument to the next
monadic action $k$ alongside the current continuation $c$.
If we instantiate $R = S \to A \times S$ for some type $S$ then we
can implement the state monad inside the continuation monad.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\getF : \UnitType \to T~S\\
\getF~\Unit \defas \lambda k.\lambda st.k~st~st
\el} &
\multirow{2}{*}{
\bl
\putF : S \to T~\UnitType\\
\putF~st \defas \lambda k.\lambda st'.k~\Unit~st
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
The $\getF$ operation takes as input a (binary) continuation $k$ of
type $S \to S \to A \times S$ and produces a state-accepting function
that applies the continuation to the given state $st$. The first
occurrence of $st$ is accessible to the caller of $\getF$, whilst the
second occurrence passes the value $st$ onto the next operation
invocation on the monad. The operation $\putF$ works in the same
way. The primary difference is that $\putF$ does not return the value
of the state cell; instead it returns simply the unit value $\Unit$.
%
One can show that this implementation of $\getF$ and $\putF$ abides by
the same equations as the implementation given in
Definition~\ref{def:state-monad}.
The state initialiser and runner for the monad supplies the initial
continuation.
%
\[
\bl
\runState : (\UnitType \to T~A) \to S \to A \times S\\
\runState~m~st_0 \defas m~\Unit~(\lambda x.\lambda st. \Record{x;st})~st_0
\el
\]
%
The initial continuation $(\lambda x.\lambda st. \Record{x;st})$
corresponds to the $\Return$ of the state monad.
%
By fixing $S = \Int$ and $A = \Bool$, we can use the continuation
monad to interpret $\incrEven$.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
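The continuation monad and its state instantiation admit a similar
Haskell sketch; as before, the \texttt{Cont} wrapper is a Haskell
artefact, and the answer type parameter \texttt{r} plays the role of
$R$.
%
\begin{verbatim}
import Control.Monad (ap, liftM)

-- T A = (A -> R) -> R, for a fixed answer type R.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r)     where fmap = liftM
instance Applicative (Cont r) where
  pure x = Cont (\k -> k x)
  (<*>)  = ap
instance Monad (Cont r) where
  Cont m >>= k = Cont (\c -> m (\x -> runCont (k x) c))

-- State inside the continuation monad: fix R = S -> (A, S).
type S = Int

get :: Cont (S -> (a, S)) S
get = Cont (\k st -> k st st)

put :: S -> Cont (S -> (a, S)) ()
put st = Cont (\k _ -> k () st)

incrEven :: Cont (S -> (a, S)) Bool
incrEven = get >>= \st -> put (1 + st) >> return (even st)

-- Supply the initial continuation, which corresponds to the
-- Return of the state monad.
runState :: Cont (S -> (a, S)) a -> S -> (a, S)
runState m = runCont m (\x st -> (x, st))

-- runState incrEven 4 == (True, 5)
\end{verbatim}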
%
The continuation monad gives us a succinct framework for implementing
and programming with computational effects; however, this succinctness
comes at the expense of extensibility and modularity. Adding a new
operation to the monad may require modifying its internal structure,
which entails a complete reimplementation of any existing operations.
% Scheme's undelimited control operator $\Callcc$ is definable as a
% monadic operation on the continuation monad~\cite{Wadler92}.
% %
% \[
% \bl
% \Callcc : ((A \to T~B) \to T~A) \to T~A\\
% \Callcc \defas \lambda f.\lambda k. f\,(\lambda x.\lambda.k'.k~x)~k
% \el
% \]
\subsubsection{Free monad}
%
\begin{figure}
\centering
\compTreeEx
\caption{Computation tree for $\incrEven$.}\label{fig:comptree}
\end{figure}
%
The state monad and the continuation monad offer little flexibility
with regards to the concrete interpretation of state as in both cases
the respective monad hard-wires a particular interpretation. An
alternative is the \emph{free monad} which decouples the structure of
the monad from its interpretation.
%
Just like other monads the free monad satisfies the monad laws;
however, unlike other monads, the free monad does not perform any
computation \emph{per se}. Instead the free monad builds an abstract
representation of the computation in the form of a computation tree,
whose interior nodes correspond to invocations of operations on the
monad, and whose outgoing edges each correspond to a possible
continuation of the operation; the leaves correspond to return
values. Figure~\ref{fig:comptree} depicts the computation tree for the
$\incrEven$ function. This particular computation tree has infinite
width, because the operation $\getF$ has infinitely many possible
continuations (we take the denotation of $\Int$ to be
$\mathbb{Z}$). Conversely, each $\putF$ node has only one outgoing
edge, because $\putF$ has only a single possible continuation, namely,
the trivial continuation $\Unit$.
The meaning of a free monadic computation is ascribed by a separate
function, or interpreter, that traverses the computation tree.
%
The shape of computation trees is captured by the following generic
type definition.
%
\[
\Free~F~A \defas [\return:A|\OpF:F\,(\Free~F~A)]
\]
%
The type constructor $\Free$ takes two type arguments. The first
parameter $F$ is itself a type constructor of kind
$\TypeCat \to \TypeCat$. The second parameter is the usual type of
values computed by the monad. The $\return$ tag creates a leaf of the
computation tree, whilst the $\OpF$ tag creates an interior node. In
the type signature for $\OpF$ the type variable $F$ is applied to the
$\Free$ type. The idea is that $F~K$ computes an enumeration of the
signatures of the possible operations on the monad, where $K$ is the
type of continuation for each operation. Thus the continuation of an
operation is another computation tree node.
%
\begin{definition} The free monad over a type constructor
  $F^{\TypeCat \to \TypeCat}$ is the triple
  $(\Free~F, \Return, \bind)$. In addition, an adequate instance of $F$ must
supply a map, $\dec{fmap} : (A \to B) \to F~A \to F~B$, over its
structure (in more precise technical terms: $F$ must be a
\emph{functor}~\cite{Borceux94}).
%
\[
\bl
\ba{@{~}l@{\,\quad}@{~}r}
\multicolumn{2}{l}{T~A \defas \Free~F~A} \smallskip\\
\multirow{2}{*}{
\bl
\Return : A \to T~A\\
\Return~x \defas \return~x
\el} &
\multirow{2}{*}{
\bl
\bind ~: T~A \to (A \to T~B) \to T~B\\
\bind ~\defas \lambda m.\lambda k.\Case\;m\;\{
\bl
\return~x \mapsto k~x;\\
\OpF~y \mapsto \OpF\,(\fmap\,(\lambda m'. m' \bind k)\,y)\}
\el
\el}\\ & \\ &
\ea
\el
\]
%
The $\Return$ operation simply injects the
  value $x$ into the computation tree as a leaf node. The bind
  operator threads the continuation $k$ through the computation
  tree. Upon encountering a leaf node the continuation gets applied to
  the value of the node. Note how this is reminiscent of the
$\Return$ of the continuation monad. The bind operator works in
tandem with the $\fmap$ of $F$ to advance past $\OpF$ nodes. The
$\fmap$ function is responsible for applying its functional
argument to the next computation tree node which is embedded inside
$y$.
%
We define an auxiliary function to alleviate some of the
boilerplate involved with performing operations on the monad.
%
\[
\bl
\DoF : F~A \to \Free~F~A\\
\DoF~op \defas \OpF\,(\fmap\,(\lambda x.\return~x)\,op)
\el
\]
%
This function injects some operation $op$ into the computation tree
as an operation node.
\end{definition}
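In Haskell the free monad may be sketched as follows; the constructor
\texttt{Op} plays the role of the $\OpF$ tag, and \texttt{doF} stands
in for $\DoF$, since \texttt{do} is a reserved word in Haskell.
%
\begin{verbatim}
import Control.Monad (ap)

-- Free F A = [return : A | Op : F (Free F A)]
data Free f a = Return a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Return x) = Return (g x)
  fmap g (Op y)     = Op (fmap (fmap g) y)

instance Functor f => Applicative (Free f) where
  pure  = Return
  (<*>) = ap

instance Functor f => Monad (Free f) where
  Return x >>= k = k x
  Op y     >>= k = Op (fmap (>>= k) y)

-- Inject a single operation as an interior node whose
-- children are all leaves.
doF :: Functor f => f a -> Free f a
doF op = Op (fmap Return op)
\end{verbatim}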
%
In order to implement state with the free monad we must first declare
a signature of its operations and implement the required $\fmap$ for
the signature.
%
\[
\bl
\FreeState~S~R \defas [\Get:S \to R|\Put:S \times (\UnitType \to R)] \smallskip\\
\fmap : (A \to B) \to \FreeState~S~A \to \FreeState~S~B\\
\fmap~f~op \defas \Case\;op\;\{
\ba[t]{@{}l@{~}c@{~}l}
\Get~k &\mapsto& \Get\,(\lambda st.f\,(k~st));\\
\Put\,\Record{st';k} &\mapsto& \Put\,\Record{st';\lambda\Unit.f\,(k\,\Unit)}\}
\ea
\el
\]
%
The signature $\FreeState$ declares the two stateful operations $\Get$
and $\Put$ over state type $S$ and continuation type $R$. The $\Get$
tag is parameterised by a continuation function of type $S \to R$. The
idea is that an application of this function provides access to the
current state, whilst computing the next node of the computation
tree. The $\Put$ operation is parameterised by the new state value and
a thunk, which computes the next computation tree node. The $\fmap$
instance applies the function $f$ to the continuation $k$ of each
operation.
%
By instantiating $F = \FreeState\,S$ and using the $\DoF$ function we
can give the get and put operations a familiar look and feel.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\getF : \UnitType \to T~S\\
\getF~\Unit \defas \DoF\,(\Get\,(\lambda st.st))
\el} &
\multirow{2}{*}{
\bl
\putF : S \to T~\UnitType\\
\putF~st \defas \DoF\,(\Put\,\Record{st;\lambda\Unit.\Unit})
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
Both operations are performed with the identity function as their
respective continuation. We do not have much choice in this regard:
in the case of $\getF$, for instance, we must ultimately return a
computation of type $T~S$, and the only value of type $S$ available
in this context is the one supplied externally to the continuation
function.
The state initialiser and runner for the monad is an interpreter. As
programmers, we are free to choose whatever interpretation of
state we desire. For example, the following interprets the stateful
operations using the state-passing technique.
%
\[
\bl
\runState : (\UnitType \to \Free\,(\FreeState~S)\,R) \to S \to R \times S\\
\runState~m~st \defas
\Case\;m\,\Unit\;\{
\ba[t]{@{}l@{~}c@{~}l}
\return~x &\mapsto& \Record{x;st};\\
\OpF\,(\Get~k) &\mapsto& \runState\,(\lambda\Unit.k~st)~st;\\
\OpF\,(\Put\,\Record{st';k}) &\mapsto& \runState~k~st'
\}\ea
\el
\]
%
The interpreter implements a \emph{fold} over the computation tree by
pattern matching on the shape of the tree (or equivalently
monad)~\cite{MeijerFP91}. In the case of a $\return$ node the
interpreter returns the payload $x$ along with the final state value
$st$. If the current node is a $\Get$ operation, then the interpreter
recursively calls itself with the same state value $st$ and a thunked
application of the continuation $k$ to the current state $st$. The
recursive activation of $\runState$ will force the thunk in order to
compute the next computation tree node. In the case of a $\Put$
operation the interpreter calls itself recursively with the new state
value $st'$ and the continuation $k$ (which is a thunk). One may prove
that this interpretation of get and put satisfies the equations of
Definition~\ref{def:state-monad}.
%
By instantiating $S = \Int$ and $R = \Bool$ we can use this
interpreter to run $\incrEven$.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
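Continuing the Haskell sketch of the free monad, the state signature
and its state-passing interpreter may be rendered as follows, reusing
\texttt{Free} and \texttt{doF} from the previous sketch. The
\texttt{Put} payload is curried rather than paired, and the
interpreter consumes the tree directly rather than a thunk.
%
\begin{verbatim}
-- FreeState S R as a Haskell functor.
data StateF s r = Get (s -> r) | Put s (() -> r)

instance Functor (StateF s) where
  fmap f (Get k)    = Get (f . k)
  fmap f (Put s' k) = Put s' (f . k)

get :: Free (StateF s) s
get = doF (Get id)

put :: s -> Free (StateF s) ()
put s = doF (Put s id)

incrEven :: Free (StateF Int) Bool
incrEven = get >>= \st -> put (1 + st) >> return (even st)

-- The state-passing interpreter: a recursive traversal of
-- the computation tree.
runState :: Free (StateF s) r -> s -> (r, s)
runState (Return x)       st = (x, st)
runState (Op (Get k))     st = runState (k st) st
runState (Op (Put st' k)) st = runState (k ()) st'

-- runState incrEven 4 == (True, 5)
\end{verbatim}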
%
The free monad brings us close to the essence of programming with
effect handlers.
\subsection{Back to direct-style}
\label{sec:back-to-directstyle}
Monads do not freely compose: in order to combine two monads one must
exhibit a distributive law between them~\cite{KingW92}. Alas, not
every pair of monads admits such a law.
%
The lack of composition is to an extent remedied by monad
transformers, which provide a programmatic abstraction for stacking
one monad on top of another~\cite{Espinosa95}. The problem with monad
transformers is that they enforce an ordering on effects that affects
the program semantics (c.f. my MSc dissertation for a concrete example
of this~\cite{Hillerstrom15}).
However, a more fundamental problem with monads is that they break the
basic doctrine of modular abstraction, which says we should program
against an abstract interface, not an implementation. Effectful
programming using monads fixates on the concrete structure first, and
adds effect operations second. As a result monadic effect operations
are intimately tied to the concrete structure of their monad.
Before moving onto direct-style alternatives, it is worth mentioning
\citeauthor{McBrideP08}'s idioms (known as applicative functors in
Haskell) as an alternative to monadic
programming~\cite{McBrideP08}. Idioms provide an applicative style for
programming with effects. Even though idioms are computationally
weaker than monads, they are still capable of encapsulating a wide
range of computational effects whose realisation does not require the
full monad structure (consult \citet{Yallop10} for a technical
analysis of idioms and monads). Another thing worth pointing out is
that it is possible to have a direct-style interface for effectful
programming in the source language, which the compiler can translate
into monadic binds and returns automatically. For a concrete example
of this see the work of \citet{VazouL16}.
Let us wrap up this crash course in effectful programming by looking
at two approaches for programming in direct-style with effects that
make structured use of delimited control, before finishing with a
brief discussion of effect tracking.
\subsubsection{Monadic reflection on state}
%
Monadic reflection is a technique due to
\citet{Filinski94,Filinski96,Filinski99,Filinski10} which makes use of
delimited control to perform a local switch from monadic style into
direct-style and vice versa. The key insight is that a control reifier
provides an escape hatch that makes it possible for computation to
locally jump out of the monad, as it were. The scope of this escape
hatch is restricted by the control delimiter, which forces computation
back into the monad. Monadic reflection introduces two operators,
which are defined over some monad $T$ and some fixed result type $R$.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\reify : (\UnitType \to R) \to T~R\\
\reify~m \defas \reset{\lambda\Unit. \Return\,(m\,\Unit)}
\el} &
\multirow{2}{*}{
\bl
\reflect : T~A \to A\\
\reflect~m \defas \shift\,(\lambda k.m \bind k)
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
The first operator $\reify$ (pronounced `reify') performs
\emph{monadic reification}. Semantically it makes the effect
corresponding to $T$ transparent. The implementation installs a reset
instance to delimit control effects of $m$. The result of forcing $m$
gets lifted into the monad $T$.
%
The second operator $\reflect$ (pronounced `reflect') performs
\emph{monadic reflection}. It makes the effect corresponding to $T$
opaque. The implementation applies $\shift$ to capture the current
continuation (up to the nearest instance of reset). Subsequently, it
evaluates the monadic computation $m$ and passes the result of this
evaluation to the continuation $k$, which effectively performs the
jump out of the monad.
Suppose we instantiate $T = \State~S$ for some type $S$, then we can
realise direct-style versions of the state operations $\getF$ and
$\putF$, whose internal implementations make use of the monadic state
operations.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multirow{2}{*}{
\bl
\getF : \UnitType \to S\\
\getF\,\Unit \defas \reflect\,(T.\getF\,\Unit)
\el} &
\multirow{2}{*}{
\bl
\putF : S \to \UnitType\\
\putF~st \defas \reflect\,(T.\putF\,st)
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
I am slightly abusing notation here as I use component selection
notation on the constructor type $T$ in order to disambiguate the
reflected operation names and monadic operation names. Nevertheless,
the implementations of $\getF$ and $\putF$ simply reflect their
monadic counterparts. Note that the type signatures are the same as
the signatures for operations that we implemented using shift/reset in
Section~\ref{sec:direct-style-state}.
The initialiser and runner for some reflected stateful computation is
defined in terms of the state monad runner.
%
\[
\bl
\runState : (\UnitType \to R) \to S \to R \times S\\
\runState~m~st_0 \defas T.\runState~(\lambda\Unit.\reify~m)~st_0
\el
\]
%
The runner reifies the computation $m$ to obtain an instance of the
state monad, which it then runs using the state monad implementation
of $\runState$.
Since this state interface is the same as the shift/reset-based interface,
we can simply take a carbon copy of the shift/reset-based
implementation of $\incrEven$ and run it after instantiating
$R = \Bool$ and $S = \Int$.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
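In Haskell, monadic reflection may be sketched by fixing the answer
type of the continuation monad to the reflected monad itself, a
standard simplification of \citeauthor{Filinski94}'s construction
(which instead uses first-class control and mutable state). The sketch
below reuses the \texttt{Cont} monad from the continuation-monad
sketch and the \texttt{St} monad from the state-monad sketch, with the
latter's operations renamed \texttt{getSt} and \texttt{putSt} to avoid
clashing with their direct-style counterparts.
%
\begin{verbatim}
-- reflect m = shift (\k -> m >>= k): the answer type is the
-- reflected monad itself.
reflect :: Monad t => t a -> Cont (t r) a
reflect m = Cont (\k -> m >>= k)

-- reify m = reset (return (m ())): run m with return as the
-- initial delimited continuation.
reify :: Monad t => Cont (t a) a -> t a
reify m = runCont m return

-- Direct-style state operations over the St monad.
getD :: Cont (St r) S
getD = reflect getSt

putD :: S -> Cont (St r) ()
putD st = reflect (putSt st)

incrEven :: Cont (St r) Bool
incrEven = getD >>= \st -> putD (1 + st) >> return (even st)

-- Reify, then run with the state monad's runner.
runState :: Cont (St a) a -> S -> (a, S)
runState m st0 = runSt (reify m) st0

-- runState incrEven 4 == (True, 5)
\end{verbatim}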
\subsubsection{Handling state}
%
At the start of the 00s decade
\citet{PlotkinP01,PlotkinP02,PlotkinP03} introduced algebraic theories
of computational effects, or simply \emph{algebraic effects}, which
inverts \citeauthor{Moggi91}'s view of effects such that
\emph{computational effects determine monads}. In their view a
computational effect is described by an algebraic effect, which
consists of a signature of abstract operations and a collection of
equations that govern their behaviour; together these generate a free
monad, rather than the other way around.
%
Algebraic effects provide a bottom-up approach to effectful
programming in which abstract effect operations are taken as
primitive. Using these operations we may build up concrete structures.
%
In practical programming terms, we may understand an algebraic effect
as an abstract interface, whose operations build the underlying free
monad.
%
\begin{definition}
% A \emph{signature} $\Sigma$ is a collection of operation symbols
% $\ell : A \opto B \in \Sigma$. An operation symbol is a syntactic entity.
An algebraic effect is given by a pair
  $\AlgTheory = (\Sigma,\mathsf{E})$ consisting of an effect signature
  $\Sigma = \{\ell_i : A_i \opto B_i\}_i$ of typed operation symbols
  $\ell_i$, whose interactions are governed by a set of equations
  $\mathsf{E}$.
%
We will not concern ourselves with the mathematical definition of
equation, as in this dissertation we will always fix
  $\mathsf{E} = \emptyset$, meaning that the interaction patterns of
operations are unrestricted. As a consequence we will regard an
operation symbol as a syntactic entity subject only to a static
semantics. The type $A \opto B$ denotes the space of operations
whose payload has type $A$ and whose interpretation yields a value
of type $B$.
\end{definition}
%
As with the free monad, the meaning of an algebraic effect operation
is conferred by some separate interpreter. In the algebraic theory of
computational effects such interpreters are known as handlers for
algebraic effects, or simply \emph{effect handlers}. They were
introduced by \citet{PlotkinP09,PlotkinP13} by the end of the decade.
%
A crucial difference between effect handlers and interpreters of free
monads is that effect handlers use delimited control to realise the
behaviour of computational effects.
% The meaning of an algebraic effect is conferred by a suitable effect
% handler, or in analogy with the free monad a suitable interpreter. Effect handlers were By
% the end of the decade \citet{PlotkinP09,PlotkinP13} introduced
% \emph{handlers for algebraic effects}, which interpret computation
% trees induced by effectful operations in a similar way to runners of
% free monad interpret computation trees. A crucial difference between
% handlers and runners is that the handlers are based on first-class
% delimited control.
%
Practical programming with effect handlers was popularised by
\citet{KammarLO13}, who advocated algebraic effects and their handlers
as a modular basis for effectful programming.
Effect handlers introduce two dual control constructs.
%
\[
\ba{@{~}l@{~}r}
\Do\;\ell^{A \opto B}~V^A : B & \Handle\;M^C\;\With\;H^{C \Harrow D} : D \smallskip\\
\multicolumn{2}{c}{H ::= \{\Return\;x^C \mapsto N^D\} \mid \{\OpCase{\ell^{A \opto B}}{p^A}{k^{B \to D}} \mapsto N^D\} \uplus H^{C \Harrow D}}
\ea
\]
%
The $\Do$ construct reifies the control state up to a suitable handler
and packages it up with the operation symbol $\ell$ and its payload
$V$ before transferring control to that handler. As control is
transferred a hole is left in the evaluation context that must be
filled before evaluation can continue. The $\Handle$ construct
delimits $\Do$ invocations within the computation $M$ according to the
handler definition $H$. Handler definitions consist of the union of a
single $\Return$-clause and the disjoint union of zero or more
operation clauses. The $\Return$-clause specifies what to do with the
return value of a computation. An operation clause
$\OpCase{\ell}{p}{k}$ matches on an operation symbol and binds its
payload to $p$ and its continuation $k$. Note that the domain type of
the continuation agrees with the codomain type of the operation
symbol, and the codomain type of the continuation agrees with the
codomain type of the handler definition. Continuation application
fills the hole left by the $\Do$ construct, thus providing a value
interpretation of the invocation. The continuation returns inside the
handler once the $\Return$-clause computation has finished.
%
Operationally, effect handlers may be regarded as an extension of
\citeauthor{BentonK01}-style exception handlers~\cite{BentonK01}.
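To make the operational reading concrete, the following sketch states
the two characteristic reduction rules for (deep) handlers, glossing
over the precise term syntax of later chapters; here $\mathcal{E}$
ranges over evaluation contexts that contain no handler for $\ell$.
%
\[
\bl
\Handle\;(\Return\;V)\;\With\;H \reducesto N[V/x],\\
\qquad \text{where}~(\Return\;x \mapsto N) \in H\smallskip\\
\Handle\;\mathcal{E}[\Do\;\ell~V]\;\With\;H \reducesto N[V/p,\, (\lambda y.\Handle\;\mathcal{E}[y]\;\With\;H)/k],\\
\qquad \text{where}~(\OpCase{\ell}{p}{k} \mapsto N) \in H
\el
\]
%
In the second rule the captured continuation reinstalls the handler
around the context $\mathcal{E}$, which is precisely what makes these
handlers deep.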
We can implement mutable state with effect handlers as follows.
%
\[
\ba{@{~}l@{\qquad\quad}@{~}r}
\multicolumn{2}{l}{\Sigma \defas \{\Get : \UnitType \opto S;\Put : S \opto \UnitType\}} \smallskip\\
\multirow{2}{*}{
\bl
\getF : \UnitType \to S\\
\getF~\Unit \defas \Do\;\Get\,\Unit
\el} &
\multirow{2}{*}{
\bl
\putF : S \to \UnitType\\
\putF~st \defas \Do\;\Put~st
\el} \\ & % space hack to avoid the next paragraph from
% floating into the math environment.
\ea
\]
%
As with the free monad, we are completely free to pick whatever
interpretation of state we desire. If we want an interpretation that
is compatible with the usual equations for state, then we can simply
use the state-passing technique again.
%
\[
\bl
\runState : (\UnitType \to A) \to S \to A \times S\\
\runState~m~st_0 \defas
\bl
\Let\;f \revto \Handle\;m\,\Unit\;\With\\~\{
\ba[t]{@{}l@{~}c@{~}l}
\Return\;x &\mapsto& \lambda st.\Record{x;st};\\
\OpCase{\Get}{\Unit}{k} &\mapsto& \lambda st.k~st~st;\\
\OpCase{\Put}{st'}{k} &\mapsto& \lambda st.k~\Unit~st'\}
\ea\\
\In\;f~st_0
\el
\el
\]
%
Note the similarity with the implementation of the interpreter for the
free state monad. Save for the syntactic differences, the main
difference between this implementation and the free state monad
interpreter is that here the continuation $k$ implicitly reinstalls
the handler, whereas in the free state monad interpreter we explicitly
reinstalled the handler via a recursive application.
%
By fixing $S = \Int$ and $A = \Bool$ we can use the above effect
handler to run the delimited control variant of $\incrEven$.
%
\[
\runState~\incrEven~4 \reducesto^+ \Record{\True;5} : \Bool \times \Int
\]
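The remark about handler reinstallation can be made precise in the
Haskell sketch of the free monad: a deep handler is exactly a fold
over the computation tree, in which the continuation passed to each
operation clause has already been handled. The following sketch reuses
\texttt{Free}, \texttt{StateF}, and \texttt{incrEven} from the free
monad sketches; \texttt{handleState} stands in for the $\Handle$
construct specialised to the state handler above.
%
\begin{verbatim}
-- A deep handler as a fold: 'ret' interprets leaves and 'ops'
-- interprets operation nodes, whose children are themselves
-- handled recursively.
fold :: Functor f => (a -> b) -> (f b -> b) -> Free f a -> b
fold ret _   (Return x) = ret x
fold ret ops (Op y)     = ops (fmap (fold ret ops) y)

-- The state handler: each continuation k below is already
-- fully handled, i.e. the handler is implicitly reinstalled.
handleState :: Free (StateF s) a -> s -> (a, s)
handleState = fold (\x st -> (x, st))
                   (\op st -> case op of
                      Get k     -> k st st
                      Put st' k -> k () st')

-- handleState incrEven 4 == (True, 5)
\end{verbatim}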
%
Effect handlers come into their own when multiple effects are
combined. Throughout the dissertation we will see multiple examples of
handlers in action (e.g. Chapter~\ref{ch:unary-handlers}).
\subsubsection{Effect tracking}
% \dhil{Cite \citet{GiffordL86}, \citet{LucassenG88}, \citet{TalpinJ92},
% \citet{TofteT97}, \citet{WadlerT03}.}
A benefit of using monads for effectful programming is that we get
effect tracking `for free' (some might object to this statement and
claim we paid for it by having to program in monadic style). Effect
tracking is a useful tool for making programming with effects less
prone to error in much the same way a static type system is useful for
detecting a wide range of potential runtime errors at compile time.
Effect systems provide a suitable typing discipline for statically
tracking the observable effects of programs~\cite{NielsonN99}. The
notion of effect system was developed around the same time as monads
rose to prominence, though its development was independent of
monads. Nevertheless, \citet{WadlerT03} have shown that effect systems
and monads are formally related, providing effect systems with some
formal validity. Subsequently, \citet{Kammar14} has contributed to the
formal understanding of effect systems through development of a
general algebraic theory of effect systems. \citet{LucassenG88}
developed the original effect system as a means for lightweight static
analyses of functional programs with imperative features. For
instance, \citet{Lucassen87} made crucial use of an effect system to
statically distinguish between safe and unsafe terms for parallel
execution.
The principal idea of a \citeauthor{LucassenG88}-style effect system is to
annotate computation types with the collection of effects that their
inhabitants are allowed to perform, e.g. the type $A \to B \eff E$ is
inhabited by functions that accept a value of type $A$ as input and
ultimately return a value of type $B$. As an inhabitant computes the
$B$ value it is allowed to perform the effect operations mentioned by
the effect signature $E$.
This typing discipline fits nicely with the effect handlers-style of
programming. The $\Do$ construct provides a mechanism for injecting an
operation into the effect signature, whilst the $\Handle$ construct
provides a way to eliminate an effect operation from the
signature~\cite{BauerP13,HillerstromL16}.
%
If we instantiate $A = \UnitType$, $B = \Bool$, and $E = \Sigma$, then
we obtain a type-and-effect signature for the handler version of
$\incrEven$.
%
\[
\incrEven : \UnitType \to \Bool \eff \{\Get:\UnitType \opto \Int;\Put:\Int \opto \UnitType\}
\]
%
Now, the signature of $\incrEven$ communicates precisely what it
expects from the ambient context. It is clear that we must run this
function under a handler that interprets at least $\Get$ and $\Put$.
Some form of polymorphism is necessary to make an effect system
extensible and useful in practice. Otherwise effect annotations end up
pervading the entire program, much as monads do. In
Chapter~\ref{ch:base-language} we will develop an extensible effect
system based on row polymorphism.
\section{Scope}
Summarised in one sentence, this dissertation is about practical
programming language designs for programming with effect handlers,
their foundational implementation techniques, and implications for the
expressive power of their host language.
Numerous variations and extensions of effect handlers have been
proposed since their inception. In this dissertation I restrict my
attention to \citeauthor{PlotkinP09}'s deep handlers, their shallow
variation, and parameterised handlers, which are a slight variation of
deep handlers. In particular I work with free algebraic theories,
which is to say my designs do not incorporate equational theories for
effects. Furthermore, I frame my study in terms of simply-typed and
polymorphic $\lambda$-calculi, for which I give computational
interpretations in terms of contextual operational semantics, and
which I realise using two foundational operational techniques: continuation
passing style and abstract machine semantics. When it comes to
expressiveness there are multiple possible dimensions to investigate
and multiple different notions of expressivity available. I focus on
two questions: `are deep, shallow, and parameterised handlers
interdefinable?', which I investigate via a syntactic notion of
expressiveness due to \citet{Felleisen91}; and `do effect handlers
admit any essential computational efficiency?', which I investigate
using a semantic notion of expressiveness due to \citet{LongleyN15}.
\subsection{Scope extrusion}
The literature on effect handlers is rich, and my dissertation is but
one of many on topics related to effect handlers. In this section I
provide a few pointers to related work involving effect handlers that
I will not otherwise discuss in this dissertation.
Readers interested in the mathematical foundations and original
development of effect handlers should consult \citeauthor{Pretnar10}'s
PhD dissertation~\cite{Pretnar10}.
Most programming language treatments of algebraic effects and their
handlers sideline equational theories, despite equational theories
being an important part of the original treatment of effect
handlers. \citeauthor{Ziga20}'s PhD dissertation brings equations back
onto the pitch as \citet{Ziga20} develops a core calculus with a novel
local notion of equational theories for algebraic effects.
To get a grasp of the reasoning principles for effect handlers,
interested readers should consult \citeauthor{McLaughlin20}'s PhD
dissertation, which contains a development of relational reasoning
techniques for shallow
multi-handlers~\cite{McLaughlin20}. \citeauthor{McLaughlin20}'s
techniques draw inspiration from the logical relation reasoning
techniques for deep handlers due to \citet{BiernackiPPS18}.
\citeauthor{Ahman17}'s PhD dissertation is relevant for readers
interested in the integration of computational effects into dependent
type theories~\cite{Ahman17}. \citeauthor{Ahman17} develops an
intensional \citet{MartinLof84} style dependent type theory equipped
with a novel computational dependent type, which makes it possible to
treat type-dependency in the sequential composition of effectful
computations uniformly.
Lexical effect handlers are a variation on \citeauthor{PlotkinP09}'s
deep handlers that provides a form of lexical scoping for effect
operations, thus statically binding operations to their handlers.
%
\citeauthor{Geron19}'s PhD dissertation develops the mathematical
theory of scoped effect operations, whilst \citet{BiernackiPPS20}
study them in conjunction with ordinary handlers from a programming
perspective.
% \citet{WuSH14} study scoped effects, which are effects whose
% payloads are effectful computations. Scoped effects are
% non-algebraic, Thinking in terms of computation trees, a scoped
% effect is not an internal node of some computation tree, rather, it
% is itself a whole computation tree.
% Effect handlers were conceived in the realm of category theory to give
% an algebraic treatment of exception handling~\cite{PlotkinP09}. They
% were adopted early by functional programmers, who either added
% language-level support for effect handlers~
Functional programmers were early adopters of effect handlers. They
either added language-level support for handlers~
\cite{Hillerstrom15,DolanWSYM15,BiernackiPPS18,Leijen17,BauerP15,BrachthauserSO20a,LindleyMM17,Chiusano20}
or embedded them in
libraries~\cite{KiselyovSS13,KiselyovI15,KiselyovS16,KammarLO13,BrachthauserS17,Brady13,XieL20}. Thus
functional perspectives on effect handlers are plentiful in the
literature. Some notable examples of perspectives on effect handlers
outside functional programming are: \citeauthor{Brachthauser20}'s PhD
dissertation, which contains an object-oriented perspective on effect
handlers in Java~\cite{Brachthauser20}; \citeauthor{Saleh19}'s PhD
dissertation offers a logic programming perspective via an effect
handlers extension to Prolog~\cite{Saleh19}; and \citet{Leijen17b} has an imperative
take on effect handlers in C.
\section{Contributions}
The key contributions of this dissertation are spread across the three
main parts. The following listing summarises the contributions of each
part.
\paragraph{Programming}
\begin{itemize}
\item A practical design for a programming language equipped with a
structural effect system and deep, shallow, and parameterised effect
handlers.
\item A case study in effect handler oriented programming
  demonstrating how to compose the essence of a \UNIX{}-style
operating system with user session management, task parallelism,
and file I/O using standard effects and handlers.
\end{itemize}
\paragraph{Implementation}
\begin{itemize}
\item A novel generalisation of the notion of continuation known as
\emph{generalised continuation}, which provides a succinct
foundation for implementing deep, shallow, and parameterised
handlers.
\item A higher-order continuation passing style translation based on
generalised continuations, which yields a universal implementation
strategy for effect handlers.
\item An abstract machine semantics based on generalised
continuations, which characterises the low-level stack
manipulations admitted by effect handlers at runtime.
\end{itemize}
\paragraph{Expressiveness}
\begin{itemize}
\item A formal proof that deep, shallow, and parameterised handlers
are equi-expressible in the sense of typed macro-expressiveness.
\item A robust mathematical characterisation of the computational
efficiency of effect handlers, which shows that effect handlers
can improve the asymptotic runtime of certain classes of programs.
\end{itemize}
Another contribution worth noting is the continuation literature
review in Appendix~\ref{ch:continuation}, which provides a
comprehensive operational characterisation of various notions of
continuations and first-class control phenomena.
\section{Structure of this dissertation}
The following is a summary of the chapters belonging to each part of
this dissertation.
\paragraph{Programming}
\begin{itemize}
\item Chapter~\ref{ch:base-language} introduces a polymorphic fine-grain
call-by-value core calculus, $\BCalc$, which makes key use of
\citeauthor{Remy93}-style row polymorphism to implement polymorphic
variants, structural records, and a structural effect system. The
calculus distils the essence of the core of the Links programming
language.
\item Chapter~\ref{ch:unary-handlers} presents three extensions of $\BCalc$,
which are $\HCalc$ that adds deep handlers, $\SCalc$ that adds shallow
handlers, and $\HPCalc$ that adds parameterised handlers. The chapter
also contains a running case study that demonstrates effect handler
oriented programming in practice by implementing a small operating
system dubbed \OSname{} based on \citeauthor{RitchieT74}'s original
\UNIX{}.
\end{itemize}
\paragraph{Implementation}
\begin{itemize}
\item Chapter~\ref{ch:cps} develops a higher-order continuation passing
style translation for effect handlers through a series of step-wise
refinements of an initial standard continuation passing style
translation for $\BCalc$. Each refinement slightly modifies the notion
of continuation employed by the translation. The development
ultimately leads to the key invention of generalised continuation,
which is used to give a continuation passing style semantics to deep,
shallow, and parameterised handlers.
\item Chapter~\ref{ch:abstract-machine} demonstrates an application of
generalised continuations to abstract machines as we plug generalised
continuations into \citeauthor{FelleisenF86}'s CEK machine to obtain
an adequate abstract runtime with simultaneous support for deep,
shallow, and parameterised handlers.
\end{itemize}
\paragraph{Expressiveness}
\begin{itemize}
\item Chapter~\ref{ch:deep-vs-shallow} shows that deep, shallow, and
parameterised notions of handlers can simulate one another up to
specific notions of administrative reduction.
\item Chapter~\ref{ch:handlers-efficiency} studies the fundamental efficiency of effect
handlers. In this chapter, we show that effect handlers enable an
asymptotic improvement in runtime complexity for a certain class of
functions. Specifically, we consider the \emph{generic count} problem
using a pure PCF-like base language $\BPCF$ (a simply typed variation
of $\BCalc$) and its extension with effect handlers $\HPCF$.
%
We show that $\HPCF$ admits an asymptotically more efficient
implementation of generic count than any $\BPCF$ implementation.
%
\end{itemize}
\paragraph{Conclusions}
\begin{itemize}
\item Chapter~\ref{ch:conclusions} concludes and discusses future work.
\end{itemize}
\paragraph{Appendices}
\begin{itemize}
% \item Chapter~\ref{ch:maths-prep} defines some basic mathematical
% notation and constructions that are they pervasively throughout this
% dissertation.
\item Appendix~\ref{ch:continuations} presents a literature survey of
continuations and first-class control. I classify continuations
according to their operational behaviour and provide an overview of
the various first-class sequential control operators that appear in
the literature. The application spectrum of continuations is discussed
as well as implementation strategies for first-class control.
\item Appendix~\ref{ch:get-get} presents a small proof for the claim
made in Section~\ref{sec:monadic-state}, that the state equation
``Get after get'' is redundant.
\item Appendix~\ref{sec:proofs-cps-gen-cont} contains the proof
details for the proof of correctness of the higher-order
continuation-passing style translation developed in Chapter~\ref{ch:cps}.
\item \dhil{TODO inline Appendix~\ref{sec:berger-count} into Chapter~\ref{ch:handlers-efficiency}?}\end{itemize}
\dhil{TODO introduce relation notation}
% \part{Background}
% \label{p:background}
% \chapter{Mathematical preliminaries}
% \label{ch:maths-prep}
% Only a modest amount of mathematical proficiency should be necessary
% to be able to wholly digest this dissertation.
% %
% This chapter introduces some key mathematical concepts that will
% either be used directly or indirectly throughout this dissertation.
% %
% I assume familiarity with basic programming language theory including
% structural operational semantics~\cite{Plotkin04a} and System F type
% theory~\cite{Girard72}. For a practical introduction to programming
% language theory I recommend consulting \citeauthor{Pierce02}'s
% excellent book \emph{Types and Programming
% Languages}~\cite{Pierce02}. For the more theoretical inclined I
% recommend \citeauthor{Harper16}'s book \emph{Practical Foundations for
% Programming Languages}~\cite{Harper16} (do not let the ``practical''
% qualifier deceive you) --- the two books complement each other nicely.
% \section{Relations}
% \label{sec:relations}
% Relations feature prominently in the design and understanding of the
% static and dynamic properties of programming languages. The interested
% reader is likely to already be familiar with the basic concepts of
% relations, although this section briefly introduces the concepts, its
% real purpose is to introduce the notation that I am using pervasively
% throughout this dissertation.
% %
% I assume familiarity with basic set theory.
% \begin{definition}
% The Cartesian product of two sets $A$ and $B$, written $A \times B$,
% is the set of all ordered pairs $(a, b)$, where $a$ is drawn from
% $A$ and $b$ is drawn from $B$, i.e.
% %
% \[
% A \times B \defas \{ (a, b) \mid a \in A, b \in B \}
% \]
% %
% \end{definition}
% %
% Since the Cartesian product is itself a set, we can take the Cartesian
% product of it with another set, e.g. $A \times B \times C$. However,
% this raises the question in which order the product operator
% ($\times$) is applied. In this dissertation the product operator is
% taken to be right associative, meaning
% $A \times B \times C = A \times (B \times C)$.
% %
% %\dhil{Define tuples (and ordered pairs)?}
% %
% % To make the notation more compact for the special case of $n$-fold
% % product of some set $A$ with itself we write
% % $A^n \defas A \underbrace{\times \cdots \times}_{n \text{ times}} A$.
% %
% \begin{definition}
% A relation $R$ is a subset of the Cartesian product of two sets $A$
% and $B$, i.e. $R \subseteq A \times B$.
% %
% An element $a \in A$ is related to an element $b \in B$ if
% $(a, b) \in R$, sometimes written using infix notation $a\,R\,b$.
% %
% If $A = B$ then $R$ is said to be a \emph{homogeneous} relation.
% \end{definition}
% %
% \begin{definition}
% For any two relations $R \subseteq A \times B$ and
% $S \subseteq B \times C$ their composition is defined as follows.
% %
% \[
% S \circ R \defas \{ (a,c) \mid (a,b) \in R, (b, c) \in S \}
% \]
% \end{definition}
% The composition operator ($\circ$) is associative, meaning
% $(T \circ S) \circ R = T \circ (S \circ R)$.
% %
% For $n \in \N$ the $n$th relational power of a relation $R$, written
% $R^n$, is defined inductively.
% \[
% R^0 \defas \emptyset, \quad\qquad R^1 \defas R, \quad\qquad R^{1 + n} \defas R \circ R^n.
% \]
% %
% Homogeneous relations play a prominent role in the operational
% understanding of programming languages as they are used to give
% meaning to program reductions. There are two particular properties and
% associated closure operations of homogeneous relations that reoccur
% throughout this dissertation.
% %
% \begin{definition}
% A homogeneous relation $R \subseteq A \times A$ is said to be
% reflexive and transitive if its satisfies the following criteria,
% respectively.
% \begin{itemize}
% \item Reflexive: $\forall a \in A$ it holds that $a\,R\,a$.
% \item Transitive: $\forall a,b,c \in A$ if $a\,R\,b$ and $b\,R\,c$
% then $a\,R\,c$.
% \end{itemize}
% \end{definition}
% \begin{definition}[Closure operations]
% Let $R \subseteq A \times A$ denote a homogeneous relation. The
% reflexive closure $R^{=}$ of $R$ is the smallest reflexive relation
% over $A$ containing $R$
% %
% \[
% R^{=} \defas \{ (a, a) \mid a \in A \} \cup R.
% \]
% %
% The transitive closure $R^+$ of $R$ is the smallest transitive
% relation over $A$ containing $R$
% %
% \[
% R^+ \defas \displaystyle\bigcup_{n \in \N} R^n.
% \]
% %
% The reflexive and transitive closure $R^\ast$ of $R$ is the smallest
% reflexive and transitive relation over $A$ containing $R$
% %
% \[
% R^\ast \defas (R^+)^{=}.
% \]
% \end{definition}
% %
% \begin{definition}
% A relation $R \subseteq A \times B$ is functional and serial if it
% satisfies the following criteria, respectively.
% %
% \begin{itemize}
% \item Functional: $\forall a \in A, b,b' \in B$ if $a\,R\,b$ and $a\,R\,b'$ then $b = b'$.
% \item Serial: $\forall a \in A,\exists b \in B$ such that
% $a\,R\,b$.
% \end{itemize}
% \end{definition}
% %
% The functional property guarantees that every $a \in A$ is at most
% related to one $b \in B$. Note this does not mean that every $a$
% \emph{is} related to some $b$. The serial property guarantees that
% every $a \in A$ is related to one or more elements in $B$.
% %
% \section{Functions}
% \label{sec:functions}
% We define partial and total functions in terms of relations.
% %
% \begin{definition}
% A partial function $f : A \pto B$ is a functional relation
% $f \subseteq A \times B$.
% %
% A total function $f : A \to B$ is a functional and serial relation
% $f \subseteq A \times B$.
% \end{definition}
% %
% A total function is also simply called a `function'. Throughout this
% dissertation the terms (partial) mapping and (partial) function are
% synonymous.
% %
% For a function $f : A \to B$ (or partial function $f : A \pto B$) we
% write $f(a) = b$ to mean $(a, b) \in f$, and say that $f$ applied to
% $a$ returns $b$. We write $f(P) \defas M$ for the definition of a
% function with pattern $P$ and expression $M$, and sometimes we will
% use the anonymous notation $P \mapsto M$ to mean $f(P) \defas M$ for
% some fresh $f$. The notation $f(a)$ means the application of $f$ to
% $a$, and we say that $f(a)$ is defined whenever $f(a) = b$ for some
% $b$.
% %
% The domain of a function is a set, $\dom(-)$, consisting of all the
% elements for which it is defined. Thus the domain of a total function
% is its domain of definition, e.g. $\dom(f : A \to B) = A$.
% %
% For a partial function $f$ its domain is a proper subset of the domain
% of definition.
% %
% \[
% \dom(f : A \pto B) \defas \{ a \mid a \in A,\, f(a) \text{ is defined} \} \subset A.
% \]
% %
% The codomain of a total function $f : A \to B$ (or partial function
% $f : A \pto B$) is $B$, written $\dec{cod}(f) = B$. A related notion
% is that of \emph{image}. The image of a total or partial function $f$,
% written $\dec{Im}(f)$, is the set of values that it can return, i.e.
% %
% \[
% \dec{Im}(f) \defas \{\, f(a) \mid a \in \dom(f) \}.
% \]
% \begin{definition}
% A function $f : A \to B$ is injective and surjective if it satisfies
% the following criteria, respectively.
% \begin{itemize}
% \item Injective: $\forall a,a' \in A$ if $f(a) = f(a')$ then $a = a'$.
% \item Surjective: $\forall b \in B,\exists a \in A$ such that $f(a) = b$.
% \end{itemize}
% If a function is both injective and surjective, then it is said to
% be a bijective.
% \end{definition}
% %
% An injective function guarantees that each element in its image is
% uniquely determined by some element of its domain.
% %
% A surjective function guarantees that its domain covers the codomain,
% meaning that the codomain and image coincide.
% %
% A partial function $f$ is injective, surjective, and bijective
% whenever the function $f' : \dom(f) \to \dec{cod}(f)$, obtained by
% restricting $f$ to its domain, is injective, surjective, and bijective
% respectively.
% \section{Asymptotic notation}
% \label{sec:asymp-not}
% Asymptotic notation is a compact notational framework for comparing
% the order of growth of functions, which abstracts away any constants
% involved~\cite{Bachmann94}. We will use the asymptotic notation for
% both runtime and space analyses of functions.
% \begin{definition}[Upper bound]
% We say that a function $f : \N \to \R$ is of order \emph{at most}
% $g : \N \to \R$, and write $f = \BigO(g)$, if there is a positive
% constant $c \in \R$ and a positive natural number $n_0 \in \N$ such that
% %
% \[
% f(n) \leq c \cdot g(n),\quad \text{for all}~ n \geq n_0.
% \]
% %
% \end{definition}
% %
% We will extend the notation to permit $f(n) = \BigO(g(n))$.
% %
% We will often abuse notation and use the body of an anonymous function
% in place of $g$, e.g. $\BigO(\log~n)$ means $\BigO(n \mapsto \log~n)$
% and stands for a function whose values are bounded above by a
% logarithmic factor (in this dissertation logarithms are always in base
% $2$). If $f = \BigO(\log~n)$ then we say that the order of growth of
% $f$ is logarithmic; if $f = \BigO(n)$ we say that its order of growth
% is linear; if $f = \BigO(n^k)$ for some $k \in \N$ we say that the
% order of growth is polynomial; and if $f = \BigO(2^n)$ then the order
% of growth of $f$ is exponential.
% %
% It is important to note, though, that we write $f = \BigO(1)$ to mean
% that the values of $f$ are bounded above by some constant, meaning
% $1 \not\in \N$, but rather $1$ denotes a family of constant functions
% of type $\N \to \R$. So if $f = \BigO(1)$ then we say that the order
% of growth of $f$ is constant.
% \begin{definition}[Lower bound]
% We say that a function $f : \N \to \R$ is of order \emph{at least}
% $g : \N \to \R$, and write $f = \Omega(g)$, if there is a positive
% constant $c \in \R$ and a positive natural number $n_0 \in \N$ such
% that
% %
% \[
% f(n) \geq c \cdot g(n),\quad \text{for all}~n \geq n_0.
% \]
% \end{definition}
% \section{Typed programming languages}
% \label{sec:pls}
% We will be working mostly with statically typed programming
% languages. The following definition informally describes the core
% components used to construct a statically typed programming language.
% %
% The objective here is not to be mathematical
% rigorous.% but rather to give
% % an idea of what constitutes a programming language.
% %
% \begin{definition}
% A statically typed programming language $\LLL$ consists of a syntax
% $S$, static semantics $T$, and dynamic semantics $E$ where
% \begin{itemize}
% \item $S$ is a collection of, possibly mutually, inductively defined
% syntactic categories (e.g. terms, types, and kinds). Each
% syntactic category contains a collection of syntax constructors
% with nonnegative arities $\{(\SC_i,\ar_i)\}$ which construct
% abstract syntax trees. We insist that $S$ contains at least two
% syntactic categories for constructing terms and types of
% $\LLL$. We write $\Tm(S)$ for the terms category and $\Ty(S)$ for
% the types category;
% \item $T : \Tm(S) \times \Ty(S) \to \B$ is \emph{typechecking}
% function, which decides whether a given term is well-typed; and
% \item $E : P \pto R$ is an \emph{evaluation} function, which maps
% well-typed programs
% \[
% P \defas \{ M \in \Tm(S) \mid A \in \Ty(S), T(M, A)~\text{is true} \},
% \]
% to some unspecified set of answers $R$.
% \end{itemize}
% \end{definition}
% %
% We will always present the syntax of a programming language on
% Backus-Naur form.
% %
% The static semantics will always be given in form a typing judgement
% (and a kinding judgement for languages with polymorphism), and the
% dynamic semantics will be given as either a reduction relation or an
% abstract machine.
% We will take the view that an untyped programming language is a
% special instance of a statically typed programming language, where the
% typechecking function $T$ is the constant function that always returns
% true.
% Often we will build programming languages incrementally by starting
% from a base language and extend it with new facilities. A
% \emph{conservative extension} is a particularly well-behaved extension
% in the sense that it preserves the all of the behaviour of the
% original language.
% %
% \begin{definition}
% A programming language $\LLL = (S, T, E)$ is said to be a
% \emph{conservative extension} of a language $\LLL' = (S', T', E')$
% if the following conditions are met.
% \begin{itemize}
% \item $\LLL$ syntactically extends $\LLL'$, i.e. $S'$ is a proper
% subset of $S$.
% \item $\LLL$ preserves the static semantics and dynamic semantics
% of $\LLL'$, that is $T(M, A) = T'(M, A)$ and $E(M)$ holds if and
% only if $E'(M)$ holds for all types $A \in \Ty(S)$ and programs
% $M \in \Tm(S)$.
% \end{itemize}
% Conversely, $\LLL'$ is a \emph{conservative restriction} of $\LLL$.
% \end{definition}
% We will often work with translations on syntax between languages. It
% is often the case that a syntactic translation is \emph{homomorphic}
% on most syntax constructors, which is a technical way of saying it
% does not perform any interesting transformation on those
% constructors. Therefore we will omit homomorphic cases in definitions
% of translations.
% %
% \begin{definition}
% Let $\LLL = (S, T, E)$ and $\LLL' = (S', T', E')$ be programming
% languages. A translation $\sembr{-} : S \to S'$ on the syntax
% between $\LLL$ and $\LLL'$ is a homomorphism if it distributes over
% the syntax constructors, i.e. for every $\{(\SC_i,\ar_i)\}_i \in S$
% \[
% \sembr{\SC_i(t_1,\dots,t_{\ar_i})} = \sembr{\SC_i}(\sembr{t_1},\cdots,\sembr{t_{\ar_i}}),
% \]
% %
% where $t_1,\dots,t_{\ar_i}$ are abstract syntax trees. We say that
% $\sembr{-}$ is homomorphic on $S_i$.
% \end{definition}
% We will be using a typed variation of \citeauthor{Felleisen91}'s
% macro-expressivity to compare the relative expressiveness of different
% languages~\cite{Felleisen90,Felleisen91}. Macro-expressivity is a
% syntactic notion based on the idea of macro rewrites as found in the
% programming language Scheme~\cite{SperberDFSFM10}.
% %
% Informally, if $\LLL$ admits a translation into a sublanguage $\LLL'$
% in a way which respects not only the behaviour of programs but also
% their local syntactic structure, then $\LLL'$ macro-expresses
% $\LLL$. % If the translation of some $\LLL$-program into $\LLL'$
% % requires a complete global restructuring, then we may say that $\LLL'$
% % is in some way less expressive than $\LLL$.
% %
% \begin{definition}[Typeability-preserving macro-expressiveness]
% Let both $\LLL = (S, T, E)$ and $\LLL' = (S', T', E')$ be
% programming languages such that the syntax constructors
% $\SC_1,\dots,\SC_k$ are unique to $\LLL$. If there exists a
% translation $\sembr{-} : S \to S'$ on the syntax of $\LLL$ that is
% homomorphic on all syntax constructors but $\SC_1,\dots,\SC_k$, such
% that for all $A \in \Ty(S)$ and $M \in \Tm(S)$
% %
% \[
% T(M,A) = T'(\sembr{M},\sembr{A})~\text{and}~
% E(M)~\text{holds if and only if}~E'(\sembr{M})~\text{holds},
% \]
% %
% then we say that $\LLL'$ can \emph{macro-express} the
% $\SC_1,\dots,\SC_k$ facilities of $\LLL$.
% \end{definition}
% %
% In general, it is not the case that $\sembr{-}$ preserves types, it is
% only required to ensure that the translated term is typeable in the
% target language $\LLL'$.
% \begin{definition}[Type-respecting expressiveness]
% Let $\LLL = (S,T,E)$ and $\LLL' = (S',T',E')$ be programming
% languages and $A$ be a type such that $A \in \Ty(S)$ and
% $A \in \Ty(S')$.
% \end{definition}
% % \dhil{Definition of (typed) programming language, conservative extension, macro-expressiveness~\cite{Felleisen90,Felleisen91}}
% % \begin{definition}
% % A signature $\Sigma$ is a collection of abstract syntax constructors
% % with arities $\{(\SC_i, \ar_i)\}_i$, where $\ST_i$ are syntactic
% % entities and $\ar_i$ are natural numbers. Syntax constructors with
% % arities $0$, $1$, $2$, and $3$ are referred to as nullary, unary,
% % binary, and ternary, respectively.
% % \end{definition}
% % \begin{definition}[Terms in contexts]
% % A context $\mathcal{X}$ is set of variables $\{x_1,\dots,x_n\}$. For
% % a signature $\Sigma = \{(\SC_i, \ar_i)\}_i$ the $\Sigma$-terms in
% % some context $\mathcal{X}$ are generated according to the following
% % inductive rules:
% % \begin{itemize}
% % \item each variable $x_i \in \mathcal{X}$ is a $\Sigma$-term,
% % \item if $t_1,\dots,t_{\ar_i}$ are $\Sigma$-terms in context
% % $\mathcal{X}$, then $\SC_i(t_1,\dots,t_{\ar_i})$ is
% % $\Sigma$-term in context $\mathcal{X}$.
% % \end{itemize}
% % We write $\mathcal{X} \vdash t$ to indicate that $t$ is a
% % $\Sigma$-term in context $\mathcal{X}$. A $\Sigma$-term $t$ is
% % said to be closed if $\emptyset \vdash t$, i.e. no variables occur
% % in $t$.
% % \end{definition}
% % \begin{definition}
% % A $\Sigma$-equation is a pair of terms $t_1,t_2 \in \Sigma$ in
% % context $\mathcal{X}$, which we write as
% % $\mathcal{X} \vdash t_1 = t_2$.
% % \end{definition}
% % The notation $\mathcal{X} \vdash t_1 = t_2$ begs to be read as a
% % logical statement, though, in universal algebra the notation is simply
% % a syntax. In order to give it a meaning we must first construct a
% % \emph{model} which interprets the syntax. We shall not delve deeper
% % here; for our purposes we may think of a $\Sigma$-equation as a
% % logical statement (the attentive reader may already have realised that .
% % The following definition is an adaptation of
% % \citeauthor{Felleisen90}'s definition of programming language to
% % define statically typed programming languages~\cite{Felleisen90}.
% % %
% % \begin{definition}
% % %
% % A statically typed programming language $\LLL$ consists of the
% % following components.
% % %
% % \begin{itemize}
% % \item A signature $\Sigma_T = \{(\ST_i,\ar_i)\}_i$ of \emph{type syntax constructors}.
% % \item A signature $\Sigma_t = \{(\SC_i,\ar_i)\}_i$ of \emph{term syntax constructors}.
% % \item A signature $\Sigma_p = \{(\SC_i,
% % \item A static semantics, which is a function
% % $typecheck : \Sigma_t \to \B$, which decides whether a
% % given closed term is well-typed.
% % \item An operational semantics, which is a partial function
% % $eval : P \pto R$, where $P$ is the set of well-typed terms, i.e.
% % %
% % \[
% % P = \{ t \in \Sigma_t \mid typecheck~t~\text{is true} \},
% % \]
% % %
% % and $R$ is some unspecified set of answers.
% % \end{itemize}
% % \end{definition}
% % %
% % An untyped programming language is just a special instance of a
% % statically typed programming language, where the signature of type
% % syntax constructors is a singleton containing a nullary type syntax
% % constructor for the \emph{universal type}, and the function
% % $typecheck$ is a constant function that always returns true.
% % A statically polymorphic typed programming language $\LLL$ also
% % includes a set of $\LLL$-kinds of kind syntax constructors as well as
% % a function $kindcheck_\LLL : \LLL\text{-types} \to \B$ which checks
% % whether a given type is well-kinded.
% % \begin{definition}
% % A context free grammar is a quadruple $(N, \Sigma, R, S)$, where
% % \begin{enumerate}
% % \item $N$ is a finite set called the nonterminals.
% % \item $\Sigma$ is a finite set, such that
% % $\Sigma \cap V = \emptyset$, called the alphabet (or terminals).
% % \item $R$ is a finite set of rules, each rule is on the form
% % \[
% % P ::= (N \cup \Sigma)^\ast, \quad\text{where}~P \in N.
% % \]
% % \item $S$ is the initial nonterminal.
% % \end{enumerate}
% % \end{definition}
\part{Programming}
\label{p:design}
\chapter{Composing \UNIX{} with effect handlers}
\label{ch:ehop}
There are several analogies for understanding effect handlers as a
programming abstraction, e.g. as interpreters for effects, folds over
computation trees (as in Section~\ref{sec:sec:state-of-effprog}),
resumable exceptions. A particularly compelling programmatic analogy
is \emph{effect handlers as composable operating systems}. Effect
handlers and operating systems share operational characteristics: an
operating system interprets a set of system commands performed via
system calls, in a similar way to how an effect handler interprets a
set of abstract operations performed via operation invocations (this
analogy was suggested to me by James McKinna; personal communication,
2017).
%
The compelling aspect of this analogy is that we can understand a
monolithic and complex operating system like \UNIX{}~\cite{RitchieT74}
as a collection of effect handlers, or alternatively, a collection of
tiny operating systems, that when composed yield a semantics for
\UNIX{}.
In this section we will take this reading of effect handlers
literally, and demonstrate how we can harness the power of (deep)
effect handlers to implement a \UNIX{}-style operating system with
multiple user sessions, time-sharing, and file i/o. We dub the system
\OSname{}.
%
It is a case study that demonstrates the versatility of effect
handlers, and shows how standard computational effects such as
\emph{exceptions}, \emph{dynamic binding}, \emph{nondeterminism}, and
\emph{state} make up the essence of an operating system. These effects
are standard in the sense that they appear frequently in 101 tutorials
on effects.
For the sake of clarity, we will occasionally make some blatant
simplifications, nevertheless the resulting implementation will
capture the essence of a \UNIX{}-like operating system.
%
The implementation will be composed of several small modular effect
handlers, that each handles a particular set of system commands. In
this respect, we will truly realise \OSname{} in the spirit of the
\UNIX{} philosophy~\cite[Section~1.6]{Raymond03}.
\section{Basic i/o}
\label{sec:tiny-unix-bio}
The file system is a cornerstone of \UNIX{} as the notion of \emph{file}
in \UNIX{} provides a unified abstraction for storing text, interprocess
communication, and access to devices such as terminals, printers,
network, etc.
%
Initially, we shall take a rather basic view of the file system. In
fact, our initial system will only contain a single file, and
moreover, the system will only support writing operations. This system
hardly qualifies as a \UNIX{} file system. Nevertheless, it serves a
crucial role for development of \OSname{}, because it provides the
only means for us to be able to observe the effects of processes.
%
We defer development of a more advanced file system to
Section~\ref{sec:tiny-unix-io}.
Much like \UNIX{} we shall model a file as a list of characters, that is
$\UFile \defas \List~\Char$. For convenience we will use the same
model for strings, $\String \defas \List~\Char$, such that we can use
string literal notation to denote the $\strlit{contents of a file}$.
%
The signature of the basic file system will consist of a single
operation $\Write$ for writing a list of characters to the file.
%
\[
\BIO \defas \{\Write : \Record{\UFD;\String} \opto \UnitType\}
\]
%
The operation is parameterised by a $\UFD$ and a character
sequence. We will leave the $\UFD$ type abstract until
Section~\ref{sec:tiny-unix-io}, however, we shall assume the existence
of a term $\stdout : \UFD$ such that we can perform invocations of
$\Write$.
%
Let us define a suitable handler for this operation.
%
\[
\bl
\basicIO : (\UnitType \to \alpha \eff \BIO) \to \Record{\alpha; \UFile}\\
\basicIO~m \defas
\ba[t]{@{~}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& \Record{res;\nil}\\
\OpCase{\Write}{\Record{\_;cs}}{resume} &\mapsto&
\ba[t]{@{}l}
\Let\; \Record{res;file} = resume\,\Unit\;\In\\
\Record{res; cs \concat file}
\ea
\ea
\ea
\el
\]
%
The handler takes as input a computation that produces some value
$\alpha$, and in doing so may perform the $\BIO$ effect.
%
The handler ultimately returns a pair consisting of the return value
$\alpha$ and the final state of the file.
%
The $\Return$-case pairs the result $res$ with the empty file $\nil$
which models the scenario where the computation $m$ performed no
$\Write$-operations, e.g.
$\basicIO\,(\lambda\Unit.\Unit) \reducesto^+
\Record{\Unit;\strlit{}}$.
%
The $\Write$-case extends the file by first invoking the resumption,
whose return type is the same as the handler's return type, thus it
returns a pair containing the result of $m$ and the file state. The
file gets extended with the character sequence $cs$ before it is
returned along with the original result of $m$.
%
Intuitively, we may think of this implementation of $\Write$ as a
peculiar instance of buffered writing, where the contents of the
operation are committed to the file when the computation $m$ finishes.
Let us define an auxiliary function that writes a string to the
$\stdout$ file.
%
\[
\bl
\echo : \String \to \UnitType \eff\, \BIO\\%\{\Write : \Record{\UFD;\String} \opto \UnitType\}\\
\echo~cs \defas \Do\;\Write\,\Record{\stdout;cs}
\el
\]
%
The function $\echo$ is a simple wrapper around an invocation of
$\Write$.
%
We can now write some contents to the file and observe the effects.
%
\[
\ba{@{~}l@{~}l}
&\basicIO\,(\lambda\Unit. \echo~\strlit{Hello}; \echo~\strlit{World})\\
\reducesto^+& \Record{\Unit;\strlit{HelloWorld}} : \Record{\UnitType;\UFile}
\ea
\]
\section{Exceptions: non-local exits}
\label{sec:tiny-unix-exit}
A process may terminate successfully by running to completion, or it
may terminate with success or failure in the middle of some
computation by performing an \emph{exit} system call. The exit system
call is typically parameterised by an integer value intended to
indicate whether the exit was due to success or failure. By
convention, \UNIX{} interprets the integer zero as success and any
nonzero integer as failure, where the specific value is supposed to
correspond to some known error code.
%
We can model the exit system call by way of a single operation
$\Exit$.
%
\[
\Status \defas \{\Exit : \Int \opto \ZeroType\}
\]
%
The operation is parameterised by an integer value, however, an
invocation of $\Exit$ can never return, because the type $\ZeroType$ is
uninhabited. Thus $\Exit$ acts like an exception.
%
It is convenient to abstract invocations of $\Exit$ to make it
possible to invoke the operation in any context.
%
\[
\bl
\exit : \Int \to \alpha \eff \Status\\
\exit~n \defas \Absurd\;(\Do\;\Exit~n)
\el
\]
%
The $\Absurd$ computation term is used to coerce the return type
$\ZeroType$ of $\Fail$ into $\alpha$. This coercion is safe, because
$\ZeroType$ is an uninhabited type.
%
An interpretation of $\Exit$ amounts to implementing an exception
handler.
%
\[
\bl
\status : (\UnitType \to \alpha \eff \Status) \to \Int\\
\status~m \defas
\ba[t]{@{~}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;\_ &\mapsto& 0\\
\ExnCase{\Exit}{n} &\mapsto& n
\ea
\ea
\el
\]
%
Following the \UNIX{} convention, the $\Return$-case interprets a
successful completion of $m$ as the integer $0$. The operation case
returns whatever payload the $\Exit$ operation was carrying. As a
consequence, outside of $\status$, an invocation of $\Exit~0$ in $m$
is indistinguishable from $m$ returning normally, e.g.
$\status\,(\lambda\Unit.\exit~0) = \status\,(\lambda\Unit.\Unit)$.
To illustrate $\status$ and $\exit$ in action consider the following
example, where the computation gets terminated mid-way.
%
\[
\ba{@{~}l@{~}l}
&\bl
\basicIO\,(\lambda\Unit.
\status\,(\lambda\Unit.
\echo~\strlit{dead};\exit~1;\echo~\strlit{code}))
\el\\
\reducesto^+& \Record{1;\strlit{dead}} : \Record{\Int;\UFile}
\ea
\]
%
The (delimited) continuation of $\exit~1$ is effectively dead code.
%
Here, we have a choice as to how we compose the handlers. Swapping the
order of handlers would cause the whole computation to return just
$1 : \Int$, because the $\status$ handler discards the return value of
its computation. Thus with the alternative layering of handlers the
system would throw away the file state after the computation
finishes. However, in this particular instance the semantics the
(local) behaviour of the operations $\Write$ and $\Exit$ would be
unaffected if the handlers were swapped. In general the behaviour of
operations may be affected by the order of handlers. The canonical
example of this phenomenon is the composition of nondeterminism and
state, which we will discuss in Section~\ref{sec:tiny-unix-io}.
\section{Dynamic binding: user-specific environments}
\label{sec:tiny-unix-env}
When a process is run in \UNIX{}, the operating system makes available
to the process a collection of name-value pairs called the
\emph{environment}.
%
The name of a name-value pair is known as an \emph{environment
variable}.
%
During execution the process may perform a system call to ask the
operating system for the value of some environment variable.
%
The value of environment variables may change throughout process
execution, moreover, the value of some environment variables may vary
according to which user asks the environment.
%
For example, an environment may contain the environment variable
\texttt{USER} that is bound to the name of the enquiring user.
An environment variable can be viewed as an instance of dynamic
binding. The idea of dynamic binding as a binding form in programming
dates back as far as the original implementation of
Lisp~\cite{McCarthy60}, and still remains an integral feature in
successors such as Emacs Lisp~\cite{LewisLSG20}. It is well-known that
dynamic binding can be encoded as a computational effect by using
delimited control~\cite{KiselyovSS06}.
%
Unsurprisingly, we will use this insight to simulate user-specific
environments using effect handlers.
For simplicity we fix the users of the operating system to be root,
Alice, and Bob.
%
\[
\User \defas [\Alice;\Bob;\Root]
\]
Our environment will only support a single environment variable
intended to store the name of the current user. The value of this
variable can be accessed via an operation $\Ask : \UnitType \opto \String$.
%
% \[
% \EnvE \defas \{\Ask : \UnitType \opto \String\}
% \]
%
Using this operation we can readily implement the \emph{whoami}
utility from the GNU coreutils~\cite[Section~20.3]{MacKenzieMPPBYS20},
which returns the name of the current user.
%
\[
\bl
\whoami : \UnitType \to \String \eff \{\Ask : \UnitType \opto \String\}\\
\whoami~\Unit \defas \Do\;\Ask~\Unit
\el
\]
%
The following handler implements the environment.
%
\[
\bl
\environment : \Record{\User;\UnitType \to \alpha \eff \{\Ask : \UnitType \opto \String\}} \to \alpha\\
\environment~\Record{user;m} \defas
\ba[t]{@{~}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\Ask}{\Unit}{resume} &\mapsto&
\bl
\Case\;user\,\{
\ba[t]{@{}l@{~}c@{~}l}
\Alice &\mapsto& resume~\strlit{alice}\\
\Bob &\mapsto& resume~\strlit{bob}\\
\Root &\mapsto& resume~\strlit{root}\}
\ea
\el
\ea
\ea
\el
\]
%
The handler takes as input the current $user$ and a computation that
may perform the $\Ask$ operation. When an invocation of $\Ask$ occurs
the handler pattern matches on the $user$ parameter and resumes with a
string representation of the user. With this implementation we can
interpret an application of $\whoami$.
%
\[
\environment~\Record{\Root;\whoami} \reducesto^+ \strlit{root} : \String
\]
%
It is not difficult to extend this basic environment model to support
an arbitrary number of variables. This can be done by parameterising
the $\Ask$ operation by some name representation (e.g. a string),
which the environment handler can use to index into a list of string
values. In case the name is unbound the environment, the handler can
embrace the laissez-faire attitude of \UNIX{} and resume with the
empty string.
\paragraph{User session management}
%
It is somewhat pointless to have multiple user-specific environments,
if the system does not support some mechanism for user session
handling, such as signing in as a different user.
%
In \UNIX{} the command \emph{substitute user} (su) enables the invoker
to impersonate another user account, provided the invoker has
sufficient privileges.
%
We will implement su as an operation $\Su : \User \opto \UnitType$
which is parameterised by the user to be impersonated.
%
To model the security aspects of su, we will use the weakest possible
security model: unconditional trust. Put differently, we will not
bother with security at all to keep things relatively simple.
%
Consequently, anyone can impersonate anyone else.
The session signature consists of two operations, $\Ask$, which we
used above, and $\Su$, for switching user.
%
\[
\EnvE \defas \{\Ask : \UnitType \opto \String;\Su : \User \opto \UnitType\}
\]
%
As usual, we define a small wrapper around invocations of $\Su$.
%
\[
\bl
\su : \User \to \UnitType \eff \{\Su : \User \opto \UnitType\}\\
\su~user \defas \Do\;\Su~user
\el
\]
%
The intended operational behaviour of an invocation of $\Su~user$ is
to load the environment belonging to $user$ and continue the
continuation under this environment.
%
We can achieve this behaviour by defining a handler for $\Su$ that
invokes the provided resumption under a fresh instance of the
$\environment$ handler.
%
\[
\bl
\sessionmgr : \Record{\User; \UnitType \to \alpha \eff \EnvE} \to \alpha\\
\sessionmgr\,\Record{user;m} \defas
\environment\langle{}user;(\lambda\Unit.
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\Su}{user'}{resume} &\mapsto& \environment\Record{user';resume})\rangle
\ea
\ea
\el
\]
%
The function $\sessionmgr$ manages a user session. It takes two
arguments: the initial user ($user$) and the computation ($m$) to run
in the current session. An initial instance of $\environment$ is
installed with $user$ as argument. The computation argument is a
handler for $\Su$ enclosing the computation $m$. The $\Su$-case
installs a new instance of $\environment$, which is the environment
belonging to $user'$, and runs the resumption $resume$ under this
instance.
%
The new instance of $\environment$ shadows the initial instance, and
therefore it will intercept and handle any subsequent invocations of
$\Ask$ arising from running the resumption. A subsequent invocation of
$\Su$ will install another environment instance, which will shadow
both the previously installed instance and the initial instance.
%
To make this concrete, let us plug together the all components of our
system we have defined thus far.
%
\[
\ba{@{~}l@{~}l}
&\bl
\basicIO\,(\lambda\Unit.\\
\qquad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\status\,(\lambda\Unit.
\ba[t]{@{}l@{~}l}
\su~\Alice;&\echo\,(\whoami\,\Unit);~\echo~\strlit{ };\\
\su~\Bob; &\echo\,(\whoami\,\Unit);~\echo~\strlit{ };\\
\su~\Root; &\echo\,(\whoami\,\Unit))})
\ea
\el \smallskip\\
\reducesto^+& \Record{0;\strlit{alice bob root}} : \Record{\Int;\UFile}
\ea
\]
%
The session manager ($\sessionmgr$) is installed in between the basic
IO handler ($\basicIO$) and the process status handler
($\status$). The initial user is $\Root$, and thus the initial
environment is the environment that belongs to the root user. Main
computation signs in as $\Alice$ and writes the result of the system
call $\whoami$ to the global file, and then repeats these steps for
$\Bob$ and $\Root$.
%
Ultimately, the computation terminates successfully (as indicated by
$0$ in the first component of the result) with global file containing
the three user names.
%
The above example demonstrates that we now have the basic building
blocks to build a multi-user system.
%
%\dhil{Remark on the concrete layering of handlers.}
\section{Nondeterminism: time sharing}
\label{sec:tiny-unix-time}
Time sharing is a mechanism that enables multiple processes to run
concurrently, and hence, multiple users to work concurrently.
%
Thus far in our system there is exactly one process.
%
In \UNIX{} there exists only a single process whilst the system is
bootstrapping itself into operation. After bootstrapping is complete
the system duplicates the initial process to start running user
managed processes, which may duplicate themselves to create further
processes.
%
The process duplication primitive in \UNIX{} is called
\emph{fork}~\cite{RitchieT74}.
%
The fork-invoking process is typically referred to as the parent
process, whilst its clone is referred to as the child process.
%
Following an invocation of fork, the parent process is provided with a
nonzero identifier for the child process and the child process is
provided with the zero identifier. This enables processes to determine
their respective role in the parent-child relationship, e.g.
%
\[
\bl
\Let\;i\revto fork~\Unit\;\In\\
\If\;i = 0\;\Then\;
~\textit{child's code}\\
\Else\;~\textit{parent's code}
\el
\]
%
In our system, we can model fork as an effectful operation, that
returns a boolean to indicate the process role; by convention we will
interpret the return value $\True$ to mean that the process assumes
the role of parent.
%
\[
\bl
\fork : \UnitType \to \Bool \eff \{\Fork : \UnitType \opto \Bool\}\\
\fork~\Unit \defas \Do\;\Fork~\Unit
\el
\]
%
In \UNIX{} the parent process \emph{continues} execution after the
fork point, and the child process \emph{begins} its execution after
the fork point.
%
Thus, operationally, we may understand fork as returning twice to its
invocation site. We can implement this behaviour by invoking the
resumption arising from an invocation of $\Fork$ twice: first with
$\True$ to continue the parent process, and subsequently with $\False$
to start the child process (or the other way around if we feel
inclined).
%
The following handler implements this behaviour.
%
\[
\bl
\nondet : (\UnitType \to \alpha \eff \{\Fork:\UnitType \opto \Bool\}) \to \List~\alpha\\
\nondet~m \defas
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& [res]\\
\OpCase{\Fork}{\Unit}{resume} &\mapsto& resume~\True \concat resume~\False
\ea
\ea
\el
\]
%
The $\Return$-case returns a singleton list containing a result of
running $m$.
%
The $\Fork$-case invokes the provided resumption $resume$ twice. Each
invocation of $resume$ effectively copies $m$ and runs each copy to
completion. Each copy returns through the $\Return$-case, hence each
invocation of $resume$ returns a list of the possible results obtained
by interpreting $\Fork$ first as $\True$ and subsequently as
$\False$. The results are joined by list concatenation ($\concat$).
%
Thus the handler returns a list of all the possible results of $m$.
%
In fact, this handler is exactly the standard handler for
nondeterministic choice, which satisfies the standard semi-lattice
equations~\cite{PlotkinP09,PlotkinP13}.
% \dhil{This is an instance of non-blind backtracking~\cite{FriedmanHK84}}
Let us consider $\nondet$ together with the previously defined
handlers. But first, let us define two computations.
%
\[
\bl
\quoteRitchie,\;\quoteHamlet : \UnitType \to \UnitType \eff \{\Write: \Record{\UFD;\String} \opto \UnitType\} \smallskip\\
\quoteRitchie\,\Unit \defas
\ba[t]{@{~}l}
\echo~\strlit{UNIX is basically };\\
\echo~\strlit{a simple operating system, };\\
\echo~\strlit{but };\\
\echo~\texttt{"}
\ba[t]{@{}l}
\texttt{you have to be a genius }\\
\texttt{to understand the simplicity.\nl{}"}
\ea
\ea \smallskip\\
\quoteHamlet\,\Unit \defas
\ba[t]{@{}l}
\echo~\strlit{To be, or not to be, };\\
\echo~\strlit{that is the question:\nl};\\
\echo~\strlit{Whether 'tis nobler in the mind to suffer\nl}
\ea
\el
\]
%
The computation $\quoteRitchie$ writes a quote by Dennis Ritchie to
the file, whilst the computation $\quoteHamlet$ writes a few lines of
William Shakespeare's \emph{The Tragedy of Hamlet, Prince of Denmark},
Act III, Scene I~\cite{Shakespeare6416} to the file.
%
Using $\nondet$ and $\fork$ together with the previously defined
infrastructure, we can fork the initial process such that both of the
above computations are run concurrently.
%
\[
\ba{@{~}l@{~}l}
&\bl
\basicIO\,(\lambda\Unit.\\
\qquad\nondet\,(\lambda\Unit.\\
\qquad\qquad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\qquad\status\,(\lambda\Unit.
\ba[t]{@{}l}
\If\;\fork\,\Unit\;\Then\;
\su~\Alice;\,
\quoteRitchie~\Unit\\
\Else\;
\su~\Bob;\,
\quoteHamlet~\Unit)}))
\ea
\el \smallskip\\
\reducesto^+&
\Record{
\ba[t]{@{}l}
[0, 0];
\texttt{"}\ba[t]{@{}l}
\texttt{UNIX is basically a simple operating system, but }\\
\texttt{you have to be a genius to understand the simplicity.\nl}\\
\texttt{To be, or not to be, that is the question:\nl}\\
\texttt{Whether 'tis nobler in the mind to suffer\nl"}} : \Record{\List~\Int; \UFile}
\ea
\ea
\ea
\]
%
The computation running under the $\status$ handler immediately
performs an invocation of fork, causing $\nondet$ to explore both the
$\Then$-branch and the $\Else$-branch. In the former, $\Alice$ signs
in and quotes Ritchie, whilst in the latter Bob signs in and quotes a
Hamlet.
%
Looking at the output there is supposedly no interleaving of
computation, since the individual writes have not been
interleaved. From the stack of handlers, we \emph{know} that there has
been no interleaving of computation, because no handler in the stack
handles interleaving. Thus, our system only supports time sharing in
the extreme sense: we know from the $\nondet$ handler that every
effect of the parent process will be performed and handled before the
child process gets to run. In order to be able to share time properly
amongst processes, we must be able to interrupt them.
\paragraph{Interleaving computation}
%
We need an operation for interruptions and corresponding handler to
handle interrupts in order for the system to support interleaving of
processes.
%
\[
\bl
\interrupt : \UnitType \to \UnitType \eff \{\Interrupt : \UnitType \opto \UnitType\}\\
\interrupt~\Unit \defas \Do\;\Interrupt~\Unit
\el
\]
%
The intended behaviour of an invocation of $\Interrupt$ is to suspend
the invoking computation in order to yield time for another
computation to run.
%
We can achieve this behaviour by reifying the process state. For the
purpose of interleaving processes via interruptions it suffices to
view a process as being in either of two states: 1) it is done, that
is it has run to completion, or 2) it is paused, meaning it has
yielded to provide room for another process to run.
%
We can model the state using a recursive variant type parameterised by
some return value $\alpha$ and a set of effects $\varepsilon$ that the
process may perform.
%
\[
\Pstate~\alpha~\varepsilon~\theta \defas
\ba[t]{@{}l@{}l}
[&\Done:\alpha;\\
&\Suspended:\UnitType \to \Pstate~\alpha~\varepsilon~\theta \eff \{\Interrupt:\theta;\varepsilon\} ]
\ea
\]
%
This data type definition is an instance of the \emph{resumption
monad}~\cite{Papaspyrou01}. The $\Done$-tag simply carries the
return value of type $\alpha$. The $\Suspended$-tag carries a
suspended computation, which returns another instance of $\Pstate$,
and may or may not perform any further invocations of
$\Interrupt$. Payload type of $\Suspended$ is precisely the type of a
resumption originating from a handler that handles only the operation
$\Interrupt$ such as the following handler.
%
\[
\bl
\reifyP : (\UnitType \to \alpha \eff \{\Interrupt: \UnitType \opto \UnitType;\varepsilon\}) \to \Pstate~\alpha~\varepsilon\\
\reifyP~m \defas
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& \Done~res\\
\OpCase{\Interrupt}{\Unit}{resume} &\mapsto& \Suspended~resume
\ea
\ea
\el
\]
%
This handler tags and returns values with $\Done$. It also tags and
returns the resumption provided by the $\Interrupt$-case with
$\Suspended$.
%
This particular implementation is amounts to a handler-based variation
of \citeauthor{Harrison06}'s non-reactive resumption
monad~\cite{Harrison06}.
%
If we compose this handler with the nondeterminism
handler, then we obtain a term with the following type.
%
\[
\nondet\,(\lambda\Unit.\reifyP~m) : \List~(\Pstate~\alpha~\{\Fork: \UnitType \opto \Bool;\varepsilon\})
\]
%
for some $m : \UnitType \to \{\Proc;\varepsilon\}$ where
$\Proc \defas \{\Fork: \UnitType \opto \Bool;\Interrupt: \UnitType
\opto \UnitType\}$.
%
The composition yields a list of process states, some of which may be
in suspended state. In particular, the suspended computations may have
unhandled instances of $\Fork$ as signified by it being present in the
effect row. The reason for this is that in the above composition when
$\reifyP$ produces a $\Suspended$-tagged resumption, it immediately
returns through the $\Return$-case of $\nondet$, meaning that the
resumption escapes the $\nondet$. Recall that a resumption is a
delimited continuation that captures the extent from the operation
invocation up to and including the nearest enclosing suitable
handler. In this particular instance, it means that the $\nondet$
handler is part of the extent.
%
We ultimately want to return just a list of $\alpha$s to ensure every
process has run to completion. To achieve this, we need a function
that keeps track of the state of every process, and in particular it
must run each $\Suspended$-tagged computation under the $\nondet$
handler to produce another list of process state, which must be
handled recursively.
%
\[
\bl
\schedule : \List~(\Pstate~\alpha~\{\Fork:\Bool;\varepsilon\}~\theta) \to \List~\alpha \eff \varepsilon\\
\schedule~ps \defas
\ba[t]{@{}l}
\Let\;run \revto
\Rec\;sched\,\Record{ps;done}.\\
\qquad\Case\;ps\;\{
\ba[t]{@{}r@{~}c@{~}l}
\nil &\mapsto& done\\
(\Done~res) \cons ps' &\mapsto& sched\,\Record{ps';res \cons done}\\
(\Suspended~m) \cons ps' &\mapsto& sched\,\Record{ps' \concat (\nondet~m);\, done} \}
\ea\\
\In\;run\,\Record{ps;\nil}
\ea
\el
\]
%
The function $\schedule$ implements a process scheduler. It takes as
input a list of process states, where $\Suspended$-tagged computations
may perform the $\Fork$ operation. Locally it defines a recursive
function $sched$ which carries a list of active processes $ps$ and the
results of completed processes $done$. The function inspects the
process list $ps$ to test whether it is empty or nonempty. If it is
empty it returns the list of results $done$. Otherwise, if the head is
$\Done$-tagged value, then the function is recursively invoked with
tail of processes $ps'$ and the list $done$ augmented with the value
$res$. If the head is a $\Suspended$-tagged computation $m$, then
$sched$ is recursively invoked with the process list $ps'$
concatenated with the result of running $m$ under the $\nondet$
handler.
%
Using the above machinery, we can define a function which adds
time-sharing capabilities to the system.
%
\[
\bl
\timeshare : (\UnitType \to \alpha \eff \Proc) \to \List~\alpha\\
\timeshare~m \defas \schedule\,[\Suspended\,(\lambda\Unit.\reifyP~m)]
\el
\]
%
The function $\timeshare$ handles the invocations of $\Fork$ and
$\Interrupt$ in some computation $m$ by starting it in suspended state
under the $\reifyP$ handler. The $\schedule$ actually starts the
computation, when it runs the computation under the $\nondet$ handler.
%
The question remains how to inject invocations of $\Interrupt$ such
that computation gets interleaved.
\paragraph{Interruption via interception}
%
To implement process preemption operating systems typically to rely on
the underlying hardware to asynchronously generate some kind of
interruption signals. These signals can be caught by the operating
system's process scheduler, which can then decide to which processes
to suspend and continue.
%
If our core calculi had an integrated notion of asynchrony and effects
along the lines of \citeauthor{AhmanP21}'s core calculus
$\lambda_{\text{\ae}}$~\cite{AhmanP21}, then we could potentially
treat interruption signals as asynchronous effectful operations, which
can occur spontaneously and, as suggested by \citet{DolanEHMSW17} and
realised by \citet{Poulson20}, be handled by a user-definable handler.
%
In the absence of asynchronous effects we have to inject synchronous
interruptions ourselves.
%
One extreme approach is to trust the user to perform invocations of
$\Interrupt$ periodically.
%
Another approach is based on the fact that every effect (except for
divergence) occurs via some operation invocation, and every-so-often
the user is likely to perform computational effect, thus the basic
idea is to bundle $\Interrupt$ with invocations of other
operations. For example, we can insert an instance of $\Interrupt$ in
some of the wrapper functions for operation invocations that we have
defined so conscientiously thus far. The problem with this approach is
that it requires a change of type signatures. To exemplify this
problem consider type of the $\echo$ function if we were to bundle an
invocation of $\Interrupt$ along side $\Write$.
%
\[
\bl
\echo' : \String \to \UnitType \eff \{\Interrupt : \UnitType \opto \UnitType;\Write : \Record{\UFD;\String} \opto \UnitType\}\\
\echo'~cs \defas \Do\;\Interrupt\,\Unit;\,\Do\;\Write\,\Record{\stdout;cs}
\el
\]
%
In addition to $\Write$ the effect row must now necessarily mention
the $\Interrupt$ operation. As a consequence this approach is not
backwards compatible, since the original definition of $\echo$ can be
used in a context that prohibits occurrences of $\Interrupt$. Clearly,
this alternative definition cannot be applied in such a context.
There is backwards-compatible way to bundle the two operations
together. We can implement a handler that \emph{intercepts}
invocations of $\Write$ and handles them by performing an interrupt
and, crucially, reperforming the intercepted write operation.
%
\[
\bl
\dec{interruptWrite} :
\ba[t]{@{~}l@{~}l}
&(\UnitType \to \alpha \eff \{\Interrupt : \UnitType \opto \UnitType;\Write : \Record{\UFD;\String} \opto \UnitType\})\\
\to& \alpha \eff \{\Interrupt : \UnitType \opto \UnitType;\Write : \Record{\UFD;\String} \opto \UnitType\}
\ea\\
\dec{interruptWrite}~m \defas
\ba[t]{@{~}l}
\Handle\;m~\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\Write}{\Record{fd;cs}}{resume} &\mapsto&
\ba[t]{@{}l}
\interrupt\,\Unit;\\
resume\,(\Do\;\Write~\Record{fd;cs})
\ea
\ea
\ea
\el
\]
%
This handler is not `self-contained' as the other handlers we have
defined previously. It gives in some sense a `partial' interpretation
of $\Write$ as it leaves open the semantics of $\Interrupt$ and
$\Write$, i.e. this handler must be run in a suitable context of other
handlers.
Let us plug this handler into the previous example to see what
happens.
%
\[
\ba{@{~}l@{~}l}
&\bl
\basicIO\,(\lambda\Unit.\\
\qquad\timeshare\,(\lambda\Unit.\\
\qquad\qquad\dec{interruptWrite}\,(\lambda\Unit.\\
\qquad\qquad\qquad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\qquad\qquad\status\,(\lambda\Unit.
\ba[t]{@{}l}
\If\;\fork\,\Unit\;\Then\;
\su~\Alice;\,
\quoteRitchie~\Unit\\
\Else\;
\su~\Bob;\,
\quoteHamlet~\Unit)})))
\ea
\el \smallskip\\
\reducesto^+&
\bl
\Record{
\ba[t]{@{}l}
[0, 0];
\texttt{"}\ba[t]{@{}l}
\texttt{UNIX is basically To be, or not to be,\nl{}}\\
\texttt{a simple operating system, that is the question:\nl{}}\\
\texttt{but Whether 'tis nobler in the mind to suffer\nl{}}\\
\texttt{you have to be a genius to understand the simplicity.\nl{}"}}
\ea
\ea\\
: \Record{\List~\Int; \UFile}
\el
\ea
\]
%
Evidently, each write operation has been interleaved, resulting in a
mishmash poetry of Shakespeare and \UNIX{}.
%
I will leave it to the reader to be the judge of whether this new
poetry belongs under the category of either classic arts vandalism or
novel contemporary reinterpretations. As the saying goes: \emph{art
is in the eye of the beholder}.
\section{State: file i/o}
\label{sec:tiny-unix-io}
Thus far the system supports limited I/O, abnormal process
termination, multiple user sessions, and multi-tasking via concurrent
processes. At this stage we have most of core features in place. We
still have to complete the I/O model. The current I/O model provides
an incomplete file system consisting of a single write-only file.
%
In this section we will implement a \UNIX{}-like file system that
supports file creation, opening, truncation, read and write
operations, and file linking.
%
To implement a file system we will need to use state. State can
readily be implemented with an effect handler~\cite{KammarLO13}.
%
It is a deliberate choice to leave state for last, because once you
have state it is tempting to use it excessively --- to the extent it
becomes a cliche.
%
As demonstrated in the previous sections, it is possible to achieve
many things that have a stateful flavour without explicit state by
harnessing the implicit state provided by the program stack.
In the following subsection, I will provide an interface for stateful
operations and their implementation in terms of a handler. The
stateful operations will be put to use in the subsequent subsection to
implement a basic sequential file system.
\subsubsection{Handling state}
The interface for accessing and updating a state cell consists of two
operations.
%
\[
\State~\beta \defas \{\Get:\UnitType \opto \beta;\Put:\beta \opto \UnitType\}
\]
%
The intended operational behaviour of $\Get$ operation is to read the
value of type $\beta$ of the state cell, whilst the $\Put$ operation
is intended to replace the current value held by the state cell with
another value of type $\beta$. As per usual business, the following
functions abstract the invocation of the operations.
%
\[
\ba{@{~}l@{\quad\qquad\quad}c@{~}l}
\Uget : \UnitType \to \beta \eff \{\Get:\UnitType \opto \beta\}
& &
\Uput : \UnitType \to \beta \eff \{\Put:\beta \opto \UnitType\}\\
\Uget~\Unit \defas \Do\;\Get~\Unit
& &
\Uput~st \defas \Do\;\Put~st
\el
\]
%
The following handler interprets the operations.
%
\[
\bl
\runState : \Record{\beta;\UnitType \to \alpha \eff \State~\beta} \to \Record{\alpha;\beta}\\
\runState~\Record{st_0;m} \defas
\ba[t]{@{}l}
\Let\;run \revto
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& \lambda st.\Record{res;st}\\
\OpCase{\Get}{\Unit}{resume} &\mapsto& \lambda st.resume~st~st\\
\OpCase{\Put}{st'}{resume} &\mapsto& \lambda st.resume~\Unit~st'
\ea
\ea\\
\In\;run~st_0
\ea
\el
\]
%
The $\runState$ handler provides a generic way to interpret any
stateful computation. It takes as its first parameter the initial
value of the state cell. The second parameter is a potentially
stateful computation. Ultimately, the handler returns the value of the
input computation along with the current value of the state cell.
This formulation of state handling is analogous to the standard
monadic implementation of state handling~\citep{Wadler95}. In the
context of handlers, the implementation uses a technique known as
\emph{parameter-passing}~\citep{PlotkinP09,Pretnar15}.
%
Each case returns a state-accepting function.
%
The $\Return$-case returns a function that produces a pair consisting
of return value of $m$ and the final state $st$.
%
The $\Get$-case returns a function that applies the resumption
$resume$ to the current state $st$. Recall that return type of a
resumption is the same as its handler's return type, so since the
handler returns a function, it follows that
$resume : \beta \to \beta \to \Record{\alpha, \beta}$. In other words,
the invocation of $resume$ produces another state-accepting
function. This function arises from the next activation of the handler
either by way of a subsequent operation invocation in $m$ or the
completion of $m$ to invoke the $\Return$-case. Since $\Get$ does not
modify the value of the state cell it passes $st$ unmodified to the
next handler activation.
%
In the $\Put$-case the resumption must also produce a state-accepting
function of the same type, however, the type of the resumption is
slightly different
$resume : \UnitType \to \beta \to \Record{\alpha, \beta}$. The unit
type is the expected return type of $\Put$. The state-accepting
function arising from $resume~\Unit$ is supplied with the new state
value $st'$. This application effectively discards the current state
value $st$.
The first operation invocation in $m$, or if it completes without
invoking $\Get$ or $\Put$, the handler returns a function that accepts
the initial state. The function gets bound to $run$ which is
subsequently applied to the provided initial state $st_0$ which causes
evaluation of the stateful fragment of $m$ to continue.
\paragraph{Local state vs global state} The meaning of stateful
operations may depend on whether the ambient environment is
nondeterministic. Post-composing nondeterminism with state gives rise
to the so-called \emph{local state} phenomenon, where state
modifications are local to each strand of nondeterminism, that is each
strand maintains its own copy of the state. Local state is also known
as `backtrackable state' in the literature~\cite{GibbonsH11}, because
returning back to a branch point restores the state as it were prior
to the branch. In contrast, post-composing state with nondeterminism
results in a \emph{global state} interpretation, where the state is
shared across every strand of nondeterminism. In terms of backtracking
this means the original state does not get restored upon a return to
some branch point.
For modelling the file system we opt for the global state
interpretation such that changes made to file system are visible to
all processes. The local state interpretation could prove useful if we
were to model a virtual file system per process such that each process
would have its own unique standard out file.
The two state phenomena are inter-encodable. \citet{PauwelsSM19} give
a systematic behaviour-preserving transformation for nondeterminism
with local state into nondeterminism with global state and vice versa.
\subsubsection{Basic serial file system}
%
\begin{figure}[t]
\centering
\begin{tabular}[t]{| l |}
\hline
\multicolumn{1}{| c |}{\textbf{Directory}} \\
\hline
\strlit{hamlet}\tikzmark{hamlet}\\
\hline
\strlit{ritchie.txt}\tikzmark{ritchie}\\
\hline
\multicolumn{1}{| c |}{$\vdots$}\\
\hline
\strlit{stdout}\tikzmark{stdout}\\
\hline
\multicolumn{1}{| c |}{$\vdots$}\\
\hline
\strlit{act3}\tikzmark{act3}\\
\hline
\end{tabular}
\hspace{1.5cm}
\begin{tabular}[t]{| c |}
\hline
\multicolumn{1}{| c |}{\textbf{I-List}} \\
\hline
1\tikzmark{ritchieino}\\
\hline
2\tikzmark{hamletino}\\
\hline
\multicolumn{1}{| c |}{$\vdots$}\\
\hline
1\tikzmark{stdoutino}\\
\hline
\end{tabular}
\hspace{1.5cm}
\begin{tabular}[t]{| l |}
\hline
\multicolumn{1}{| c |}{\textbf{Data region}} \\
\hline
\tikzmark{stdoutdr}\strlit{}\\
\hline
\tikzmark{hamletdr}\strlit{To be, or not to be...}\\
\hline
\multicolumn{1}{| c |}{$\vdots$}\\
\hline
\tikzmark{ritchiedr}\strlit{UNIX is basically...}\\
\hline
\end{tabular}
%% Hamlet arrows.
\tikz[remember picture,overlay]\draw[->,thick,out=30,in=160] ([xshift=1.23cm,yshift=0.1cm]pic cs:hamlet) to ([xshift=-0.85cm,yshift=0.1cm]pic cs:hamletino) node[] {};
\tikz[remember picture,overlay]\draw[->,thick,out=30,in=180] ([xshift=0.62cm,yshift=0.1cm]pic cs:hamletino) to ([xshift=-0.23cm,yshift=0.1cm]pic cs:hamletdr) node[] {};
%% Ritchie arrows.
\tikz[remember picture,overlay]\draw[->,thick,out=-30,in=180] ([xshift=0.22cm,yshift=0.1cm]pic cs:ritchie) to ([xshift=-0.85cm,yshift=0.1cm]pic cs:ritchieino) node[] {};
\tikz[remember picture,overlay]\draw[->,thick,out=30,in=180] ([xshift=0.62cm,yshift=0.1cm]pic cs:ritchieino) to ([xshift=-0.23cm,yshift=0.1cm]pic cs:ritchiedr) node[] {};
%% Act3 arrow.
\tikz[remember picture,overlay]\draw[->,thick,out=10,in=210] ([xshift=1.64cm,yshift=0.1cm]pic cs:act3) to ([xshift=-0.85cm,yshift=-0.5mm]pic cs:hamletino) node[] {};
%% Stdout arrows.
\tikz[remember picture,overlay]\draw[->,thick,out=30,in=180] ([xshift=1.23cm,yshift=0.1cm]pic cs:stdout) to ([xshift=-0.85cm,yshift=0.1cm]pic cs:stdoutino) node[] {};
\tikz[remember picture,overlay]\draw[->,thick,out=30,in=180] ([xshift=0.62cm,yshift=0.1cm]pic cs:stdoutino) to ([xshift=-0.23cm,yshift=0.1cm]pic cs:stdoutdr) node[] {};
\caption{\UNIX{} directory, i-list, and data region mappings.}\label{fig:unix-mappings}
\end{figure}
%
A file system provide an abstraction over storage media in a computer
system by organising the storage space into a collection of files.
This abstraction facilities typical file operations: allocation,
deletion, reading, and writing.
%
\UNIX{} dogmatises the notion of file to the point where
\emph{everything is a file}. A typical \UNIX{}-style file system
differentiates between ordinary files, directory files, and special
files~\cite{RitchieT74}. An ordinary file is a sequence of
characters. A directory file is a container for all kinds of files. A
special file is an interface for interacting with an i/o device.
We will implement a \emph{basic serial file system}, which we dub
\fsname{}.
%
It will be basic in the sense that it models the bare minimum to pass
as a file system, that is we will implement support for the four basic
operations: file allocation, file deletion, file reading, and file
writing.
%
The read and write operations will be serial, meaning every file is
read in order from its first character to its last character, and
every file is written to by appending the new content.
%
\fsname{} will only contain ordinary files, and as a result
the file hierarchy will be entirely flat. Although, the system can
readily be extended to be hierarchical, it comes at the expense of
extra complexity, that blurs rather than illuminates the model.
\paragraph{Directory, i-list, and data region}
%
A storage medium is an array of bytes. An \UNIX{} file system is
implemented on top of this array by interpreting certain intervals of
the array differently. These intervals provide the space for the
essential administrative structures for file organisation.
%
\begin{enumerate}
\item The \emph{directory} is a collection of human-readable names for
files. In general, a file may have multiple names. Each name is
stored along with a pointer into the i-list.
\item The \emph{i-list} is a collection of i-nodes. Each i-node
contains the meta data for a file along with a pointer into the data
region.
\item The \emph{data region} contains the actual file contents.
\end{enumerate}
%
These structures make up the \fsname{}.
%
Figure~\ref{fig:unix-mappings} depicts an example with the three
structures and a mapping between them.
%
The only file meta data tracked by \fsname{} is the number of names for
a file.
%
The three structures and their mappings can be implemented using
association lists. Although, a better practical choice may be a
functional map or functional array~\cite{Okasaki99}, association lists
have the advantage of having a simple, straightforward implementation.
%
\[
\ba{@{~}l@{\qquad}c@{~}l}
\Directory \defas \List\,\Record{\String;\Int} &&%
\DataRegion \defas \List\,\Record{\Int;\UFile} \smallskip\\
\INode \defas \Record{lno:\Int;loc:\Int} &&%
\IList \defas \List\,\Record{\Int;\INode}
\ea
\]
%
Mathematically, we may think the type $\dec{Directory}$ as denoting a
partial function $\C^\ast \pto \Z$, where $\C$ is a suitable
alphabet. The function produces an index into the i-list.
%
Similarly, the type $\dec{IList}$ denotes a partial function
$\Z \pto \Z \times \Z$, where the codomain is the denotation of
$\dec{INode}$. The first component of the pair is the number of names
linked to the i-node, and as such $\Z$ is really an overapproximation
as an i-node cannot have a negative number of names. The second
component is an index into the data region.
%
The denotation of the type $\dec{DataRegion}$ is another partial
function $\Z \pto \C^\ast$.
We define the type of the file system to be a record of the three
association lists along with two counters for the next available index
into the data region and i-list, respectively.
%
\[
\FileSystem \defas \Record{
\ba[t]{@{}l}
dir:\Directory;ilist:\IList;dreg:\DataRegion;\\
dnext:\Int;inext:\Int}
\ea
\]
%
We can then give an implementation of the initial state of the file
system.
%
\[
\dec{fs}_0 \defas \Record{
\ba[t]{@{}l}
dir=[\Record{\strlit{stdout};0}];ilist=[\Record{0;\Record{lno=1;loc=0}}];dreg=[\Record{0;\strlit{}}];\\
dnext=1;inext=1}
\ea
\]
%
Initially the file system contains a single, empty file with the name
$\texttt{stdout}$. Next we will implement the basic operations on the
file system separately.
We have made a gross simplification here, as a typical file system
would provide some \emph{file descriptor} abstraction for managing
access open files. In \fsname{} we will operate directly on i-nodes,
meaning we define $\UFD \defas \Int$, meaning the file open operation
will return an i-node identifier. As consequence it does not matter
whether a file is closed after use as file closing would be a no-op
(closing a file does not change the state of its i-node). Therefore
\fsname{} will not provide a close operation. As a further consequence
the file system will have no resource leakage.
\paragraph{File reading and writing}
%
Let us begin by giving a semantics to file reading and writing. We
need an abstract operation for each file operation.
%
\[
\dec{FileRW} \defas \{\URead : \Int \opto \Option~\String;\UWrite : \Record{\Int;\String} \opto \UnitType\}
\]
%
The operation $\URead$ is parameterised by an i-node number
(i.e. index into the i-list) and possibly returns the contents of the
file pointed to by the i-node. The operation may fail if it is
provided with a stale i-node number. Thus the option type is used to
signal failure or success to the caller.
%
The $\UWrite$ operation is parameterised by an i-node number and some
strings to be appended onto the file pointed to by the i-node. The
operation returns unit, and thus the operation does not signal to its
caller whether it failed or succeed.
%
Before we implement a handler for the operations, we will implement
primitive read and write operations that operate directly on the file
system. We will use the primitive operations to implement the
semantics for $\URead$ and $\UWrite$. To implement the primitive the
operations we will need two basic functions on association lists. I
will only their signatures here.
%
\[
\bl
\lookup : \Record{\alpha;\List\,\Record{\alpha;\beta}} \to \beta \eff \{\Fail : \UnitType \opto \ZeroType\} \smallskip\\
\modify : \Record{\alpha;\beta;\List\,\Record{\alpha;\beta}} \to \Record{\alpha;\beta}
\el
\]
%
Given a key of type $\alpha$ the $\lookup$ function returns the
corresponding value of type $\beta$ in the given association list. If
the key does not exists, then the function invokes the $\Fail$
operation to signal failure.
%
The $\modify$ function takes a key and a value. If the key exists in
the provided association list, then it replaces the value bound by the
key with the provided value.
%
Using these functions we can implement the primitive read and write
operations.
%
\[
\bl
\fread : \Record{\Int;\FileSystem} \to \String \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\fread\,\Record{ino;fs} \defas
\ba[t]{@{}l}
\Let\;inode \revto \lookup\,\Record{ino; fs.ilist}\;\In\\
\lookup\,\Record{inode.loc; fs.dreg}
\el
\el
\]
%
The function $\fread$ takes as input the i-node number for the file to
be read and a file system. First it looks up the i-node structure in
the i-list, and then it uses the location in the i-node to look up the
file contents in the data region. Since $\fread$ performs no exception
handling it will fail if either look up fails. The implementation of
the primitive write operation is similar.
%
\[
\bl
\fwrite : \Record{\Int;\String;\FileSystem} \to \FileSystem \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\fwrite\,\Record{ino;cs;fs} \defas
\ba[t]{@{}l}
\Let\;inode \revto \lookup\,\Record{ino; fs.ilist}\;\In\\
\Let\;file \revto \lookup\,\Record{inode.loc; fs.dreg}\;\In\\
\Record{\,fs\;\keyw{with}\;dreg = \modify\,\Record{inode.loc;file \concat cs;fs}}
\el
\el
\]
%
The first two lines grab hold of the file, whilst the last line
updates the data region in file system by appending the string $cs$
onto the file.
%
Before we can implement the handler, we need an exception handling
mechanism. The following exception handler interprets $\Fail$ as some
default value.
%
\[
\bl
\faild : \Record{\alpha;\UnitType \to \alpha \eff \{\Fail : \UnitType \opto \ZeroType\}} \to \alpha\\
\faild\,\Record{default;m} \defas
\ba[t]{@{~}l}
\Handle\;m~\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& x\\
\OpCase{\Fail}{\Unit}{\_} &\mapsto& default
\ea
\ea
\el
\]
%
The $\Fail$-case is simply the default value, whilst the
$\Return$-case is the identity.
%
Now we can use all the above pieces to implement a handler for the
$\URead$ and $\UWrite$ operations.
%
\[
\bl
\fileRW : (\UnitType \to \alpha \eff \dec{FileRW}) \to \alpha \eff \State~\FileSystem\\
\fileRW~m \defas
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\URead}{ino}{resume} &\mapsto&
\bl
\Let\;cs\revto \faild\,\Record{\None;\lambda\Unit.\\
\quad\Some\,(\fread\,\Record{ino;\Uget\,\Unit})}\\
\In\;resume~cs
\el\\
\OpCase{\UWrite}{\Record{ino;cs}}{resume} &\mapsto&
\ba[t]{@{}l}
\faild~\Record{\Unit; \lambda \Unit.\\
\quad\bl
\Let\;fs \revto \fwrite\,\Record{ino;cs;\Uget\,\Unit}\\
\In\;\Uput~fs};\,resume\,\Unit
\el
\ea
\ea
\ea
\el
\]
%
The $\URead$-case uses the $\fread$ function to implement reading a
file. The file system state is retrieved using the state operation
$\Uget$. The possible failure of $\fread$ is dealt with by the
$\faild$ handler by interpreting failure as $\None$.
%
The $\UWrite$-case makes use of the $\fwrite$ function to implement
writing to a file. Again the file system state is retrieved using
$\Uget$. The $\Uput$ operation is used to update the file system state
with the state produced by the successful invocation of
$\fwrite$. Failure is interpreted as unit, meaning that from the
caller's perspective the operation fails silently.
\paragraph{File creation and opening}
The signature of file creation and opening is unsurprisingly comprised
of two operations.
%
\[
\dec{FileCO} \defas \{\UCreate : \String \opto \Option~\Int; \UOpen : \String \opto \Option~\Int\}
\]
%
The implementation of file creation and opening follows the same
pattern as the implementation of reading and writing. As before, we
implement a primitive routine for each operation that interacts
directly with the file system structure. We first implement the
primitive file opening function as the file creation function depends
on this function.
%
\[
\bl
\fopen : \Record{\String;\FileSystem} \to \Int \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\fopen\,\Record{fname;fs} \defas \lookup\,\Record{fname; fs.dir}
\el
\]
%
Opening a file in the file system simply corresponds to returning the
i-node index associated with the filename in the directory table.
The \UNIX{} file create command does one of two things depending on
the state of the file system. If the create command is provided with
the name of a file that is already present in the directory, then the
system truncates the file, and returns the file descriptor for the
file. Otherwise the system allocates a new empty file and returns its
file descriptor~\cite{RitchieT74}. To check whether a file already
exists in the directory we need a function $\dec{has}$ that given a
filename and the file system state returns whether there exists a file
with the given name. This function can be built completely generically
from the functions we already have at our disposal.
%
\[
\bl
\dec{has} : \Record{\alpha;\List\,\Record{\alpha;\beta}} \to \Bool\\
\dec{has}\,\Record{k;xs} \defas \faild\,\Record{\False;(\lambda\Unit.\lookup\,\Record{k;xs};\True)}
\el
\]
%
The function $\dec{has}$ applies $\lookup$ under the failure handler
with default value $\False$. If $\lookup$ returns successfully then
its result is ignored, and the computation returns $\True$, otherwise
the computation returns the default value $\False$.
%
With this function we can implement the semantics of create.
%
\[
\bl
\fcreate : \Record{\String;\FileSystem} \to \Record{\Int;\FileSystem} \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\fcreate\,\Record{fname;fs} \defas
\ba[t]{@{}l}
\If\;\dec{has}\,\Record{fname;fs.dir}\;\Then\\
\quad\bl
\Let\;ino \revto \fopen\,\Record{fname;fs}\;\In\\
\Let\;inode \revto \lookup\,\Record{ino;fs}\;\In\\
\Let\;dreg' \revto \modify\,\Record{inode.loc; \strlit{}; fs.dreg}\;\In\\
\Record{ino;\Record{fs\;\With\;dreg = dreg'}}
\el\\
\Else\\
\quad\bl
\Let\;loc \revto fs.lnext \;\In\\
\Let\;dreg \revto \Record{loc; \strlit{}} \cons fs.dreg\;\In\\
\Let\;ino \revto fs.inext \;\In\\
\Let\;inode \revto \Record{loc=loc;lno=1}\;\In\\
\Let\;ilist \revto \Record{ino;inode} \cons fs.ilist \;\In\\
\Let\;dir \revto \Record{fname; ino} \cons fs.dir \;\In\\
\Record{ino;\Record{
\bl
dir=dir;ilist=ilist;dreg=dreg;\\
lnext=loc+1;inext=ino+1}}
\el
\el
\el
\el
\]
%
The $\Then$-branch accounts for the case where the filename $fname$
already exists in the directory. First we retrieve the i-node for the
file to obtain its location in the data region such that we can
truncate the file contents.
%
The branch returns the i-node index along with the modified file
system. The $\Else$-branch allocates a new empty file. First we
allocate a location in the data region by copying the value of
$fs.lnext$ and consing the location and empty string onto
$fs.dreg$. The next three lines allocates the i-node for the file in a
similar fashion. The second to last line associates the filename with
the new i-node. The last line returns the identifier for the i-node
along with the modified file system, where the next location ($lnext$)
and next i-node identifier ($inext$) have been incremented.
%
It is worth noting that the effect signature of $\fcreate$ mentions
$\Fail$ even though it will never fail. It is present in the effect
row due to the use of $\fopen$ and $\lookup$ in the
$\Then$-branch. Either application can only fail if the file system is
in an inconsistent state, where the index $ino$ has become stale. The
$\dec{f}$-family of functions have been carefully engineered to always
leave the file system in a consistent state.
%
Now we can implement the semantics for the $\UCreate$ and $\UOpen$
effectful operations. The implementation is similar to the
implementation of $\fileRW$.
%
\[
\bl
\fileAlloc : (\UnitType \to \alpha \eff \dec{FileCO}) \to \alpha \eff \State~\FileSystem\\
\fileAlloc~m \defas
\ba[t]{@{}l}
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\UCreate}{fname}{resume} &\mapsto&
\bl
\Let\;ino \revto \faild\,\Record{\None; \lambda\Unit.\\
\quad\bl
\Let\;\Record{ino;fs} = \fcreate\,\Record{\,fname;\Uget\,\Unit}\\
\In\;\Uput~fs;\,\Some~ino}
\el\\
\In\; resume~ino
\el\\
\OpCase{\UOpen}{fname}{resume} &\mapsto&
\ba[t]{@{}l}
\Let\; ino \revto \faild~\Record{\None; \lambda \Unit.\\
\quad\Some\,(\fopen\,\Record{fname;\Uget\,\Unit})}\\
\In\;resume~ino
\ea
\ea
\ea
\el
\]
%
\paragraph{Stream redirection}
%
The processes we have defined so far use the $\echo$ utility to write
to the $\stdout$ file. The target file $\stdout$ is hardwired into the
definition of $\echo$ (Section~\ref{sec:tiny-unix-bio}). To take
advantage of the capabilities of the new file system we could choose
to modify the definition of $\echo$ such that it is parameterised by
the target file. However, such a modification is a breaking
change. Instead we can define a \emph{stream redirection} operator
that allow us to redefine the target of $\Write$ operations locally.
%
\[
\bl
\redirect :
\bl
\Record{\UnitType \to \alpha \eff \{\Write : \Record{\Int;\String} \opto \UnitType\}; \String}\\
\to \alpha \eff \{\UCreate : \String \opto \Option~\Int;\Exit : \Int \opto \ZeroType;\Write : \Record{\Int;\String} \opto \UnitType\}
\el\\
m~\redirect~fname \defas
\ba[t]{@{}l}
\Let\;ino \revto \Case\;\Do\;\UCreate~fname\;\{
\ba[t]{@{~}l@{~}c@{~}l}
\None &\mapsto& \exit~1\\
\Some~ino &\mapsto& ino\}
\ea\\
\In\;\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\Write}{\Record{\_;cs}}{resume} &\mapsto& resume\,(\Do\;\Write\,\Record{ino;cs})
\ea
\ea
\el
\]
%
The operator $\redirect$ first attempts to create a new target file
with name $fname$. If it fails it simply exits with code
$1$. Otherwise it continues with the i-node reference $ino$. The
handler overloads the definition of $\Write$ inside the provided
computation $m$. The new definition drops the i-node reference of the
initial target file and replaces it by the reference to new target
file.
This stream redirection operator is slightly more general than the
original redirection operator in the original \UNIX{} environment. As
the \UNIX{} redirection operator only redirects writes targeted at the
\emph{stdout} file~\cite{RitchieT74}, whereas the above operator
redirects writes regardless of their initial target.
%
It is straightforward to implement this original \UNIX{} behaviour by
inspecting the first argument of $\Write$ in the operation clause
before committing to performing the redirecting $\Write$ operation.
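%
For instance, assuming the convention from our examples that
$\stdout$ is associated with i-node index $0$, the operation clause of
$\redirect$ could be refined as in the following sketch.
%
\[
\OpCase{\Write}{\Record{fd;cs}}{resume} \mapsto
\bl
\If\;fd = 0\;\Then\;resume\,(\Do\;\Write\,\Record{ino;cs})\\
\Else\;resume\,(\Do\;\Write\,\Record{fd;cs})
\el
\]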
%
Modern \UNIX{} environments typically provide more fine-grained
control over redirects, for example by allowing the user to specify on
a per-file basis which writes should be redirected. Again, we can
implement this behaviour by comparing the provided file descriptor
with the descriptor in the payload of $\Write$.
\medskip We can plug everything together to observe the new file
system in action.
%
\[
\ba{@{~}l@{~}l}
&\bl
\runState\,\Record{\dec{fs}_0;\fileRW\,(\lambda\Unit.\\
\quad\fileAlloc\,(\lambda\Unit.\\
\qquad\timeshare\,(\lambda\Unit.\\
\qquad\quad\dec{interruptWrite}\,(\lambda\Unit.\\
\qquad\qquad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\quad\status\,(\lambda\Unit.
\ba[t]{@{}l}
\If\;\fork\,\Unit\;\Then\;
\su~\Alice;\,
\quoteRitchie~\redirect~\strlit{ritchie.txt}\\
\Else\;
\su~\Bob;\,
\quoteHamlet~\redirect~\strlit{hamlet})}))))}
\ea
\el \smallskip\\
\reducesto^+&
\bl
\Record{
\ba[t]{@{}l}
[0, 0];\\
\Record{
\ba[t]{@{}l}
dir=[\Record{\strlit{hamlet};2},
\Record{\strlit{ritchie.txt};1},
\Record{\strlit{stdout};0}];\\
ilist=[\Record{2;\Record{lno=1;loc=2}},
\Record{1;\Record{lno=1;loc=1}},
\Record{0;\Record{lno=1;loc=0}}];\\
dreg=[
\ba[t]{@{}l}
\Record{2;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{To be, or not to be,\nl{}that is the question:\nl{}}\\
&\texttt{Whether 'tis nobler in the mind to suffer\nl{}"}},
\ea\\
\Record{1;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{UNIX is basically a simple operating system, }\\
&\texttt{but you have to be a genius to understand the simplicity.\nl{"}}},
\ea\\
\Record{0; \strlit{}}]; lnext=3; inext=3}}\\
\ea
\ea
\ea\\
: \Record{\List~\Int; \FileSystem}
\el
\ea
\]
%
The writes of the processes $\quoteRitchie$ and $\quoteHamlet$ are now
redirected to the designated files \texttt{ritchie.txt} and
\texttt{hamlet}, respectively. The operating system returns the
completion status of all the processes along with the current state of
the file system such that it can be used as the initial file system
state on the next start of the operating system.
\subsubsection{File linking and unlinking}
%
At this point the implementation of \fsname{} is almost feature
complete. However, we have yet to implement two dual file operations:
linking and unlinking. The former enables us to associate a new
filename with an existing i-node, thus providing a mechanism for
making soft copies of files (i.e. the file contents are
shared). The latter lets us dissociate a filename from an i-node, thus
providing a means for removing files. The interface of linking and
unlinking is given below.
%
\[
\dec{FileLU} \defas \{\ULink : \Record{\String;\String} \opto \UnitType; \UUnlink : \String \opto \UnitType\}
\]
%
The $\ULink$ operation is parameterised by two strings. The first
string is the name of the \emph{source} file and the second string is
the \emph{destination} name (i.e. the new name). The $\UUnlink$
operation takes a single string argument, which is the name of the
file to be removed.
As before, we bundle the low level operations on the file system state
into their own functions. We start with file linking.
%
\[
\bl
\flink : \Record{\String;\String;\FileSystem} \to \FileSystem \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\flink\,\Record{src;dest;fs} \defas
\bl
\If\;\dec{has}\,\Record{dest;fs.dir}\;\Then\;\Absurd~\Do\;\Fail\,\Unit\\
\Else\;
\bl
\Let\;ino \revto \lookup~\Record{src;fs.dir}\;\In\\
\Let\;dir' \revto \Record{dest;ino} \cons fs.dir\;\In\\
\Let\;inode \revto \lookup~\Record{ino;fs.ilist}\;\In\\
\Let\;inode' \revto \Record{inode\;\With\;lno = inode.lno + 1}\;\In\\
\Let\;ilist' \revto \modify\,\Record{ino;inode';fs.ilist}\;\In\\
\Record{fs\;\With\;dir = dir';ilist = ilist'}
\el
\el
\el
\]
%
The function $\flink$ checks whether the destination filename, $dest$,
already exists in the directory. If it exists then the function raises
the $\Fail$ exception. Otherwise it looks up the index of the i-node,
$ino$, associated with the source file, $src$. Next, the directory is
extended with the destination filename, which gets associated with
this index, meaning $src$ and $dest$ both share the same
i-node. Finally, the link count of the i-node at index $ino$ gets
incremented, and the function returns the updated file system state.
%
The semantics of file unlinking is slightly more complicated, as an
i-node may lose its last link, meaning that it needs to be garbage
collected along with its file contents in the data region. To
implement file removal we make use of another standard operation on
association lists.
%
\[
\remove : \Record{\alpha;\List\,\Record{\alpha;\beta}} \to \List\,\Record{\alpha;\beta}
\]
%
The first parameter to $\remove$ is the key associated with the entry
to be removed from the association list, which is given as the second
parameter. If the association list does not have an entry for the
given key, then the function behaves as the identity. The behaviour of
the function in case of multiple entries for a single key does not
matter, as our system is carefully set up to ensure that each key has
a unique entry.
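%
To make the intended behaviour concrete, here is a minimal Haskell
transcription of $\remove$ over the list-based representation of
association lists used throughout; the transcription itself is an
assumption of the sketch and not part of the formal development.
\begin{verbatim}
-- A sketch of 'remove' over association lists. Each key is assumed
-- to have at most one entry, as the surrounding text ensures.
remove :: Eq k => k -> [(k, v)] -> [(k, v)]
remove key = filter (\(k, _) -> k /= key)

-- remove 1 [(1, "a"), (2, "b")] == [(2, "b")]
-- remove 3 [(1, "a"), (2, "b")] == [(1, "a"), (2, "b")]  -- identity
\end{verbatim}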
%
\[
\bl
\funlink : \Record{\String;\FileSystem} \to \FileSystem \eff \{\Fail : \UnitType \opto \ZeroType\}\\
\funlink\,\Record{fname;fs} \defas
\bl
\If\;\dec{has}\,\Record{fname;fs.dir}\;\Then\\
\quad
\bl
\Let\;ino \revto \lookup\,\Record{fname;fs.dir}\;\In\\
\Let\;dir' \revto \remove\,\Record{fname;fs.dir}\;\In\\
\Let\;inode \revto \lookup\,\Record{ino;fs.ilist}\;\In\\
\Let\;\Record{ilist';dreg'} \revto
\bl
\If\;inode.lno > 1\;\Then\\
\quad\bl
\Let\;inode' \revto \Record{\bl inode\;\With\\lno = inode.lno - 1}\el\\
\In\;\Record{\modify\,\Record{ino;inode';fs.ilist};fs.dreg}
\el\\
\Else\;
\Record{\bl\remove\,\Record{ino;fs.ilist};\\
\remove\,\Record{inode.loc;fs.dreg}}
\el
\el\\
\In\;\Record{fs\;\With\;dir = dir'; ilist = ilist'; dreg = dreg'}
\el\\
\Else\; \Absurd~\Do\;\Fail\,\Unit
\el
\el
\]
%
The $\funlink$ function checks whether the given filename $fname$
exists in the directory. If it does not, then it raises the $\Fail$
exception. However, if it does exist then the function proceeds to
look up the index of the i-node for the file, which gets bound to
$ino$, and subsequently removes the filename from the
directory. Afterwards it looks up the i-node with index $ino$. Now one
of two things happens depending on the current link count of the
i-node. If the count is greater than one, then we need only decrement
the link count by one, thus we modify the i-node structure. If the
link count is 1, then the i-node is about to become stale, thus we
must garbage collect it by removing both the i-node from the i-list
and the contents from the data region. Either branch returns the new
states of the i-list and data region. Finally, the function returns
the new file system state.
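%
As a cross-check of the branching logic, the following Haskell sketch
isolates the treatment of the link count in $\funlink$; the record
fields and the helpers mirror $\remove$ and $\modify$ from the text,
but their Haskell renderings here are assumptions of the sketch.
\begin{verbatim}
data INode = INode { lno :: Int, loc :: Int }

remove' :: Eq k => k -> [(k, v)] -> [(k, v)]
remove' key = filter (\(k, _) -> k /= key)

modify' :: Eq k => k -> v -> [(k, v)] -> [(k, v)]
modify' key v = map (\(k, w) -> if k == key then (k, v) else (k, w))

-- Either decrement the link count, or, when this was the last link,
-- garbage collect the i-node and its contents in the data region.
unlinkINode :: Int -> INode -> [(Int, INode)] -> [(Int, String)]
            -> ([(Int, INode)], [(Int, String)])
unlinkINode ino inode ilist dreg
  | lno inode > 1 = (modify' ino inode { lno = lno inode - 1 } ilist, dreg)
  | otherwise     = (remove' ino ilist, remove' (loc inode) dreg)
\end{verbatim}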
With the $\flink$ and $\funlink$ functions, we can implement the
semantics for $\ULink$ and $\UUnlink$ operations following the same
patterns as for the other file system operations.
%
\[
\bl
\fileLU : (\UnitType \to \alpha \eff \FileLU) \to \alpha \eff \State~\FileSystem\\
\fileLU~m \defas
\bl
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;res &\mapsto& res\\
\OpCase{\ULink}{\Record{src;dest}}{resume} &\mapsto&
\bl
\faild\,\Record{\Unit; \lambda\Unit.\\
\quad\bl
\Let\;fs = \flink\,\Record{src;dest;\Uget\,\Unit}\\
\In\;\Uput~fs}; resume\,\Unit
\el
\el\\
\OpCase{\UUnlink}{fname}{resume} &\mapsto&
\bl
\faild\,\Record{\Unit; \lambda\Unit.\\
\quad\bl
\Let\;fs = \funlink\,\Record{fname;\Uget\,\Unit}\\
\In\;\Uput~fs}; resume\,\Unit
\el
\el
\ea
\el
\el
\]
%
The composition of $\fileRW$, $\fileAlloc$, and $\fileLU$ completes the
implementation of \fsname{}.
%
\[
\bl
\FileIO \defas \{\FileRW;\FileCO;\FileLU\} \medskip\\
\fileIO : (\UnitType \to \alpha \eff \FileIO) \to \alpha \eff \State~\FileSystem \\
\fileIO~m \defas \fileRW\,(\lambda\Unit. \fileAlloc\,(\lambda\Unit.\fileLU~m))
\el
\]
%
The three handlers may as well be implemented as a single monolithic
handler, since they handle disjoint sets of operations, share the same
return clause, and make use of the same state cell. In practice a
monolithic handler may have better performance. However, a
sufficiently clever compiler would be able to take advantage of the
fusion laws of deep handlers to fuse the three handlers into one
(e.g. using the technique of \citet{WuS15}), and thus allow modular
composition without the runtime cost of composing handlers.
We now have the building blocks to implement a file copying
utility. We will implement the utility such that it takes an argument
deciding whether it should make a soft copy, such that the source file
and destination file are linked, or a hard copy, such that a new
i-node is allocated and the bytes in the data region get
duplicated.
%
\[
\bl
\dec{cp} : \Record{\Bool;\String;\String} \to \UnitType \eff \{\FileIO;\Exit : \Int \opto \ZeroType\}\\
\dec{cp}~\Record{link;src;dest} \defas
\bl
\If\;link\;\Then\;\Do\;\ULink\,\Record{src;dest}\;\\
\Else\; \bl
\Case\;\Do\;\UOpen~src\\
\{ \ba[t]{@{~}l@{~}c@{~}l}
\None &\mapsto& \exit~1\\
\Some~ino &\mapsto& \\
\multicolumn{3}{l}{\quad\Case\;\Do\;\URead~ino\;\{
\ba[t]{@{~}l@{~}c@{~}l}
\None &\mapsto& \exit~1\\
\Some~cs &\mapsto& \echo~cs~\redirect~dest \} \}
\ea}
\ea
\el
\el
\el
\]
%
If the $link$ parameter is $\True$, then the utility makes a soft copy
by performing the operation $\ULink$ to link the source file and
destination file. Otherwise the utility makes a hard copy by first
opening the source file. If $\UOpen$ returns $\None$ (i.e. the
open failed) then the utility exits with code $1$. If the open
succeeds then the entire file contents are read. If the read operation
fails then we again just exit; however, in the event that it succeeds
we apply $\echo$ to the file contents and redirect the output to
the file $dest$.
The logic for file removal is part of the semantics for
$\UUnlink$. Therefore the implementation of a file removal utility is
simply an application of the operation $\UUnlink$.
%
\[
\bl
\dec{rm} : \String \to \UnitType \eff \{\UUnlink : \String \opto \UnitType\}\\
\dec{rm}~fname \defas \Do\;\UUnlink~fname
\el
\]
%
We can now plug it all together.
%
\[
\ba{@{~}l@{~}l}
&\bl
\runState\,\Record{\dec{fs}_0;\fileIO\,(\lambda\Unit.\\
\quad\timeshare\,(\lambda\Unit.\\
\qquad\dec{interruptWrite}\,(\lambda\Unit.\\
\qquad\quad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\status\,(\lambda\Unit.
\ba[t]{@{}l}
\If\;\fork\,\Unit\;\\
\Then\;
\bl
\su~\Alice;\,
\quoteRitchie~\redirect~\strlit{ritchie.txt};\\
\dec{cp}\,\Record{\False;\strlit{ritchie.txt};\strlit{ritchie}};\\
\dec{rm}\,\strlit{ritchie.txt}
\el\\
\Else\;
\bl
\su~\Bob;\,
\quoteHamlet~\redirect~\strlit{hamlet};\\
\dec{cp}\,\Record{\True;\strlit{hamlet};\strlit{act3}}
)}))))}
\el
\ea
\el \smallskip\\
\reducesto^+&
\bl
\Record{
\ba[t]{@{}l}
[0, 0];\\
\Record{
\ba[t]{@{}l}
dir=[\Record{\strlit{ritchie};3},\Record{\strlit{act3};2},\Record{\strlit{hamlet};2},
\Record{\strlit{stdout};0}];\\
ilist=[\Record{3;\Record{lno=1;loc=3}},
\Record{2;\Record{lno=2;loc=2}},
\Record{0;\Record{lno=1;loc=0}}];\\
dreg=[
\ba[t]{@{}l}
\Record{3;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{UNIX is basically a simple operating system, }\\
&\texttt{but you have to be a genius }\\
&\texttt{to understand the simplicity.\nl{"}}},
\ea\\
\Record{2;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{To be, or not to be,\nl{}that is the question:\nl{}}\\
&\texttt{Whether 'tis nobler in the mind to suffer\nl{}"}},
\ea\\
\Record{0; \strlit{}}]; lnext=4; inext=4}}\\
\ea
\ea
\ea\\
: \Record{\List~\Int; \FileSystem}
\el
\ea
\]
%
Alice copies the file \texttt{ritchie.txt} as \texttt{ritchie}, and
subsequently removes the original file, which effectively amounts to a
roundabout way of renaming a file. It is evident from the file system
state that the file is a hard copy as the contents of
\texttt{ritchie.txt} now reside in location $3$ rather than location
$1$ in the data region. Bob makes a soft copy of the file
\texttt{hamlet} as \texttt{act3}, which is evident by looking at the
directory where the two filenames point to the same i-node (with index
$2$), whose link counter has value $2$.
\paragraph{Summary} Throughout this section we have used effect
handlers to give a semantics to a \UNIX{}-style operating system by
treating system calls as effectful operations, whose semantics are
given by handlers, acting as composable micro-kernels. Starting from a
simple bare-minimum file I/O model we have seen how the modularity of
effect handlers enables us to develop a feature-rich operating system
in an incremental way by composing several handlers to implement a
basic file system, multi-user environments, and multi-tasking
support. Each incremental change to the system has been backwards
compatible with previous changes in the sense that we have not
modified any previously defined interfaces in order to support a new
feature. This demonstrates the versatility of
effect handlers, and it suggests that handlers can be a viable option
for retrofitting functionality onto legacy code bases. The operating
system makes use of fourteen operations, which are handled by
twelve handlers, some of which are used multiple times, e.g. the
$\environment$ and $\redirect$ handlers.
\section{\UNIX{}-style pipes}
\label{sec:pipes}
A \UNIX{} pipe is an abstraction for streaming communication between
two processes. Technically, a pipe works by connecting the standard
out file descriptor of the first process to the standard in file
descriptor of the second process. The second process can then process
the output of the first process by reading its own standard in
file~\cite{RitchieT74}.
We could implement pipes using the file system; however, it would
require us to implement a substantial amount of bookkeeping as we
would have to generate and garbage collect a standard out file and a
standard in file per process. Instead we can represent the files as
effectful operations and connect them via handlers.
%
With shallow handlers we can implement a demand-driven \UNIX{} pipeline
operator as two mutually recursive handlers.
%
\[
\bl
\Pipe : \Record{\UnitType \to \alpha \eff \{ \Yield : \beta \opto \UnitType \}; \UnitType \to \alpha\eff\{ \Await : \UnitType \opto \beta \}} \to \alpha \\
\Pipe\, \Record{p; c} \defas
\bl
\ShallowHandle\; c\,\Unit \;\With\; \\
~\ba[m]{@{}l@{~}c@{~}l@{}}
\Return~x &\mapsto& x \\
\OpCase{\Await}{\Unit}{resume} &\mapsto& \Copipe\,\Record{resume; p} \\
\ea
\el\medskip\\
\Copipe : \Record{\beta \to \alpha\eff\{ \Await : \UnitType \opto \beta\}; \UnitType \to \alpha\eff\{ \Yield : \beta \opto \UnitType\}} \to \alpha \\
\Copipe\, \Record{c; p} \defas
\bl
\ShallowHandle\; p\,\Unit \;\With\; \\
~\ba[m]{@{}l@{~}c@{~}l@{}}
\Return~x &\mapsto& x \\
\OpCase{\Yield}{y}{resume} &\mapsto& \Pipe\,\Record{resume; \lambda \Unit. c\, y} \\
\ea \\
\el \\
\el
\]
%
A $\Pipe$ takes two suspended computations, a producer $p$ and a
consumer $c$.
%
Each of the computations returns a value of type $\alpha$.
%
The producer can perform the $\Yield$ operation, which yields a value
of type $\beta$ and the consumer can perform the $\Await$ operation,
which correspondingly awaits a value of type $\beta$. The $\Yield$
operation corresponds to writing to standard out, whilst $\Await$
corresponds to reading from standard in.
%
The shallow handler $\Pipe$ runs the consumer first. If the consumer
terminates with a value, then the $\Return$ clause is executed and
returns that value as is. If the consumer performs the $\Await$
operation, then the $\Copipe$ handler is invoked with the resumption
of the consumer ($resume$) and the producer ($p$) as arguments. This
models the effect of blocking the consumer process until the producer
process provides some data.
The $\Copipe$ function runs the producer to get a value to feed to the
waiting consumer.
% The arguments are swapped and the consumer component
% now expects a value.
If the producer performs the $\Yield$ operation, then $\Pipe$ is
invoked with the resumption of the producer along with a thunk that
applies the consumer's resumption to the yielded value.
%
For aesthetics, we define a right-associative infix alias for pipe:
$p \mid c \defas \lambda\Unit.\Pipe\,\Record{p;c}$.
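%
The same mutual recursion can be replayed without shallow handlers:
the following Haskell sketch reifies suspended producers and consumers
as explicit data types, with constructor names mirroring $\Yield$,
$\Await$, and the $\Return$ clauses; the rendering is an assumption of
the sketch.
\begin{verbatim}
data Producer b a = Yield b (Producer b a) | ProducerDone a
data Consumer b a = Await (b -> Consumer b a) | ConsumerDone a

-- Run the consumer first; block on Await until the producer yields.
pipe :: Producer b a -> Consumer b a -> a
pipe _ (ConsumerDone x) = x                 -- Return clause
pipe p (Await resume)   = copipe resume p   -- consumer blocks

-- Run the producer to obtain a value for the waiting consumer.
copipe :: (b -> Consumer b a) -> Producer b a -> a
copipe _ (ProducerDone x) = x               -- Return clause
copipe c (Yield y rest)   = pipe rest (c y) -- hand over the value
\end{verbatim}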
Let us put the pipe operator to use by performing a simple string
frequency analysis on a file. We will implement the analysis as a
collection of small single-purpose utilities which we connect by way
of pipes. We will make use of two standard list iteration functions.
%
\[
\ba{@{~}l@{~}c@{~}l}
\map &:& \Record{\alpha \to \beta;\List~\alpha} \to \List~\beta\\
\iter &:& \Record{\alpha \to \beta; \List~\alpha} \to \UnitType
\ea
\]
%
The function $\map$ applies its function argument to each element of
the provided list in left-to-right order and returns the resulting
list. The function $\iter$ is simply $\map$ where the resulting list
is ignored. Our first utility is a simplified version of the GNU
coreutil utility \emph{cat}, which copies the contents of files to
standard out~\cite[Section~3.1]{MacKenzieMPPBYS20}. Our version will
open a single file and stream its contents one character at a time.
%
\[
\bl
\cat : \String \to \UnitType \eff \{\FileIO;\Yield : \Char \opto \UnitType;\Exit : \Int \opto \ZeroType\}\\
\cat~fname \defas
\bl
\Case\;\Do\;\UOpen~fname~\{\\
~\ba[t]{@{~}l@{~}c@{~}l}
\None &\mapsto& \exit\,1\\
\Some~ino &\mapsto& \bl \Case\;\Do\;\URead~ino~\{\\
~\ba[t]{@{~}l@{~}c@{~}l}
\None &\mapsto& \exit\,1\\
\Some~cs &\mapsto& \iter\,\Record{\lambda c.\Do\;\Yield~c;cs}; \Do\;\Yield~\charlit{\textnil} \}\}
\ea
\el
\ea
\el
\el
\]
%
The last line is the interesting line of code. The contents of the
file get bound to $cs$, which is supplied as an argument to the list
iteration function $\iter$. The function argument yields each
character. Each invocation of $\Yield$ effectively suspends the
iteration until the next character is awaited.
%
This is an example of inversion of control as the iterator $\iter$ has
been turned into a generator.
%
We use the character $\textnil$ to identify the end of a stream. It is
essentially a character interpretation of the empty list (file)
$\nil$.
The $\cat$ utility processes the entire contents of a given
file. However, we may only be interested in some parts. The GNU
coreutil \emph{head} provides a way to process only a fixed number of
lines and ignore subsequent
lines~\cite[Section~5.1]{MacKenzieMPPBYS20}.
%
We will implement a simplified version of this utility which lets us
keep the first $n$ lines of a stream and discard the remainder. This
process will act as a \emph{filter}, which is an intermediary process
in a pipeline that both awaits and yields data.
%
\[
\bl
\head : \Int \to \UnitType \eff \{\Await : \UnitType \opto \Char;\Yield : \Char \opto \UnitType\}\\
\head~n \defas
\bl
\If\;n = 0\;\Then\;\Do\;\Yield~\charlit{\textnil}\\
\Else\;
\bl
\Let\;c \revto \Do\;\Await~\Unit\;\In\\
\Do\;\Yield~c;\\
\If\;c = \charlit{\textnil}\;\Then\;\Unit\\
\Else\;\If\;c = \charlit{\nl}\;\Then\;\head~(n-1)\\
\Else\;\head~n
\el
\el
\el
\]
%
The function first checks whether more lines need to be processed. If
$n$ is zero, then it yields the nil character to signify the end of
stream. This has the effect of ignoring any future instances of
$\Yield$ in the input stream. Otherwise it awaits a character. Once a
character has been received the function yields the character in order
to include it in the output stream. After the yield, it checks whether
the character was nil in which case the process
terminates. Alternatively, if the character was a newline the function
applies itself recursively with $n$ decremented by one. Otherwise it
applies itself recursively with the original $n$.
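%
Reading the $\Await$/$\Yield$ stream as a lazy list of characters
gives the following Haskell rendering of the same control flow, with
the NUL character standing in for nil; this transcription is an
assumption of the sketch.
\begin{verbatim}
-- Keep the first n lines of the stream; '\0' marks end of stream.
headN :: Int -> [Char] -> [Char]
headN 0 _  = ['\0']                   -- ignore the rest of the input
headN _ [] = []
headN n (c:cs)
  | c == '\0' = [c]                   -- input ended early
  | c == '\n' = c : headN (n - 1) cs  -- one line consumed
  | otherwise = c : headN n cs
\end{verbatim}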
The $\head$ filter does not transform the shape of its data stream. It
both awaits and yields a character. However, the awaits and yields
need not operate on the same type within the same filter, meaning we
can implement a filter that transforms the shape of the data. Let us
implement a variation of the GNU coreutil \emph{paste} which merges
lines of files~\cite[Section~8.2]{MacKenzieMPPBYS20}. Our
implementation will join characters in its input stream into strings
separated by spaces and newlines such that the string frequency
analysis utility need not operate on the low level of characters.
%
\[
\bl
\paste : \UnitType \to \UnitType \eff \{\Await : \UnitType \opto \Char;\Yield : \String \opto \UnitType\}\\
\paste\,\Unit \defas
\bl
pst\,\Record{\Do\;\Await~\Unit;\strlit{}}\\
\where
\ba[t]{@{~}l@{~}c@{~}l}
pst\,\Record{\charlit{\textnil};str} &\defas& \Do\;\Yield~str;\Do\;\Yield~\strlit{\textnil}\\
pst\,\Record{\charlit{\nl};str} &\defas& \Do\;\Yield~str;\Do\;\Yield~\strlit{\nl};pst\,\Record{\Do\;\Await~\Unit;\strlit{}}\\
pst\,\Record{\charlit{~};str} &\defas& \Do\;\Yield~str;pst\,\Record{\Do\;\Await~\Unit;\strlit{}}\\
pst\,\Record{c;str} &\defas& pst\,\Record{\Do\;\Await~\Unit;str \concat [c]}
\ea
\el
\el
\]
%
The heavy lifting is delegated to the recursive function $pst$,
which accepts two parameters: 1) the next character in the input
stream, and 2) a string buffer for building the output string. The
function is initially applied to the first character from the stream
(returned by the invocation of $\Await$) and the empty string
buffer. The function $pst$ is defined by pattern matching on the
character parameter. The first three definitions handle the special
cases when the received character is nil, newline, and space,
respectively. If the character is nil, then the function yields the
contents of the string buffer followed by a string containing
only the nil character. If the character is a newline, then the
function yields the string buffer followed by a string containing the
newline character. Afterwards the function applies itself recursively
with the next character from the input stream and an empty string
buffer. The case when the character is a space is similar to the
previous case except that it does not yield a newline string. The
final definition simply concatenates the character onto the string
buffer and recurses.
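%
Under the same lazy-list reading of streams, $pst$ becomes the
accumulator loop below; again the transcription is an assumption of
the sketch, with the NUL character standing in for nil.
\begin{verbatim}
-- Join characters into strings delimited by spaces and newlines.
paste :: [Char] -> [String]
paste = pst ""
  where
    pst buf ('\0':_)  = [buf, "\0"]            -- flush, then emit nil
    pst buf ('\n':cs) = buf : "\n" : pst "" cs -- flush, emit newline
    pst buf (' ':cs)  = buf : pst "" cs        -- flush on a space
    pst buf (c:cs)    = pst (buf ++ [c]) cs    -- extend the buffer
    pst buf []        = [buf]
\end{verbatim}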
Another useful filter is the GNU stream editor abbreviated
\emph{sed}~\cite{PizziniBMG20}. It is an advanced text processing
editor, whose complete functionality we will not attempt to replicate
here. We will just implement the ability to replace a string by
another. This will be useful for normalising the input stream to the
frequency analysis utility, e.g. decapitalise words, remove unwanted
characters, etc.
%
\[
\bl
\sed : \Record{\String;\String} \to \UnitType \eff \{\Await : \UnitType \opto \String;\Yield : \String \opto \UnitType\}\\
\sed\,\Record{target;str'} \defas
\bl
\Let\;str \revto \Do\;\Await~\Unit\;\In\\
\If\;str = target\;\Then\;\Do\;\Yield~str';\sed\,\Record{target;str'}\\
\Else\;\Do\;\Yield~str;\sed\,\Record{target;str'}
\el
\el
\]
%
The function $\sed$ takes two string arguments. The first argument is
the string to be replaced in the input stream, and the second argument
is the replacement. The function first awaits the next string from the
input stream, then it checks whether the received string is the same
as $target$ in which case it yields the replacement $str'$ and
recurses. Otherwise it yields the received string and recurses.
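%
In the same lazy-list style, $\sed$ amounts to a pointwise
substitution over the string stream; the rendering is, as before, an
assumption of the sketch.
\begin{verbatim}
-- Replace every occurrence of 'target' in the stream by 'repl'.
sed :: String -> String -> [String] -> [String]
sed target repl = map (\s -> if s == target then repl else s)

-- sed "To" "to" ["To", "be,", "or"] == ["to", "be,", "or"]
\end{verbatim}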
Now let us implement the string frequency analysis utility. It works
on strings and counts the occurrences of each string in the input
stream.
%
\[
\bl
\freq : \UnitType \to \UnitType \eff \{\Await : \UnitType \opto \String;\Yield : \List\,\Record{\String;\Int} \opto \UnitType\}\\
\freq\,\Unit \defas
\bl
freq'\,\Record{\Do\;\Await~\Unit;\nil}\\
\where
\ba[t]{@{~}l@{~}c@{~}l}
freq'\,\Record{\strlit{\textnil};tbl} &\defas& \Do\;\Yield~tbl\\
freq'\,\Record{str;tbl} &\defas&
\bl
\Let\;tbl' \revto \faild\,\Record{
\bl
\Record{str;1} \cons tbl; \lambda\Unit.\\
\Let\; sum \revto \lookup\,\Record{str;tbl}\;\In\\
\modify\,\Record{str;sum+1;tbl}}
\el\\
\In\;freq'\,\Record{\Do\;\Await~\Unit;tbl'}
\el
\ea
\el
\el
\]
%
The auxiliary recursive function $freq'$ implements the analysis. It
takes two arguments: 1) the next string from the input stream, and 2)
a table to keep track of how many times each string has occurred. The
table is implemented as an association list indexed by strings. The
function is initially applied to the first string from the input
stream and the empty list. The function is defined by pattern matching
on the string argument. The first definition handles the case when the
input stream has been exhausted in which case the function yields the
table. The other case is responsible for updating the entry associated
with the string $str$ in the table $tbl$. There are two subcases to
consider: 1) the string has not been seen before, thus a new entry
will have to be created; or 2) the string already has an entry in the
table, thus the entry will have to be updated. We handle both cases
simultaneously by making use of the handler $\faild$, where the
default value accounts for the first subcase, and the computation
accounts for the second. The computation attempts to look up the entry
associated with $str$ in $tbl$; if the lookup fails then $\faild$
returns the default value, which is the original table augmented with
an entry for $str$. If an entry already exists it gets incremented by
one. The resulting table $tbl'$ is supplied to a recursive application
of $freq'$.
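%
The table update at the heart of $freq'$ can be checked against the
following Haskell sketch, where a failed lookup plays the role of the
$\Fail$ exception handled by $\faild$; the function name is an
assumption of the sketch.
\begin{verbatim}
-- Insert a fresh entry on a failed lookup; otherwise increment the
-- existing count for the string.
bump :: String -> [(String, Int)] -> [(String, Int)]
bump s tbl = case lookup s tbl of
  Nothing -> (s, 1) : tbl
  Just n  -> map (\(k, v) -> if k == s then (k, n + 1) else (k, v)) tbl
\end{verbatim}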
We need one more building block to complete the pipeline. The utility
$\freq$ returns a value of type $\List~\Record{\String;\Int}$, so we need
a utility to render the value as a string in order to write it to a
file.
%
\[
\bl
\printTable : \UnitType \to \UnitType \eff \{\Await : \UnitType \opto \List\,\Record{\String;\Int}\}\\
\printTable\,\Unit \defas
\map\,\Record{\lambda\Record{s;i}.s \concat \strlit{:} \concat \intToString~i \concat \strlit{;};\Do\;\Await~\Unit}
\el
\]
%
The function performs one invocation of $\Await$ to receive the table,
and then performs a $\map$ over the table. The function argument to
$\map$ builds a string from the provided string-integer pair.
%
Here we make use of an auxiliary function,
$\intToString : \Int \to \String$, that turns an integer into a
string. The definition of this function is omitted here for brevity.
%
%
% \[
% \bl
% \wc : \UnitType \to \UnitType \eff \{\Await : \UnitType \opto \Char;\Yield : \Int \opto \UnitType\}\\
% \wc\,\Unit \defas
% \bl
% \Do\;\Yield~(wc'\,\Unit)\\
% \where~
% \bl
% wc' \Unit \defas
% \bl
% \Let\;c \revto \Do\;\Await~\Unit\;\In\\
% \If\;c = \charlit{\textnil}\;\Then\;0\\
% \Else\; 1 + wc'~\Unit
% \el
% \el
% \el
% \el
% \]
%
We now have all the building blocks to construct a pipeline for
performing string frequency analysis on a file. The following performs
the analysis on the first two lines of the Hamlet quote.
%
\[
\ba{@{~}l@{~}l}
&\bl
\runState\,\Record{\dec{fs}_0;\fileIO\,(\lambda\Unit.\\
\quad\timeshare\,(\lambda\Unit.\\
\qquad\dec{interruptWrite}\,(\lambda\Unit.\\
\qquad\quad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\status\,(\lambda\Unit.
\ba[t]{@{}l}
\quoteHamlet~\redirect~\strlit{hamlet};\\
\Let\;p \revto
\bl
~~(\lambda\Unit.\cat~\strlit{hamlet}) \mid (\lambda\Unit.\head~2) \mid \paste\\
\mid (\lambda\Unit.\sed\,\Record{\strlit{be,};\strlit{be}}) \mid (\lambda\Unit.\sed\,\Record{\strlit{To};\strlit{to}})\\
\mid (\lambda\Unit.\sed\,\Record{\strlit{question:};\strlit{question}})\\
\mid \freq \mid \printTable
\el\\
\In\;(\lambda\Unit.\echo~(p\,\Unit))~\redirect~\strlit{analysis})})))}
\ea
\el \smallskip\\
\reducesto^+&
\bl
\Record{
\ba[t]{@{}l}
[0];\\
\Record{
\ba[t]{@{}l}
dir=[\Record{\strlit{analysis};2},\Record{\strlit{hamlet};1},
\Record{\strlit{stdout};0}];\\
ilist=[\Record{2;\Record{lno=1;loc=2}},
\Record{1;\Record{lno=1;loc=1}},
\Record{0;\Record{lno=1;loc=0}}];\\
dreg=[
\ba[t]{@{}l}
\Record{2;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{to:2;be:2;or:1;not:1;\nl:2;that:1;is:1}\\
&\texttt{the:1;question:1;"}},
\ea\\
\Record{1;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{To be, or not to be,\nl{}that is the question:\nl{}}\\
&\texttt{Whether 'tis nobler in the mind to suffer\nl{}"}},
\ea\\
\Record{0; \strlit{}}]; lnext=3; inext=3}}\\
\ea
\ea
\ea\\
: \Record{\List~\Int; \FileSystem}
\el
\ea
\]
%
The pipeline gets bound to the variable $p$. The pipeline starts with
a call to $\cat$, which streams the contents of the file
$\strlit{hamlet}$ to the process $\head$ applied to $2$, meaning it
will only forward the first two lines of the file to its
successor. The third process $\paste$ receives the first two lines one
character at a time and joins the characters into strings delimited by
whitespace. The next three instances of $\sed$ perform some string
normalisation. The first instance removes the trailing comma from the
string $\strlit{be,}$; the second normalises the capitalisation of the
word ``to''; and the third removes the trailing colon from the string
$\strlit{question:}$. The seventh process performs the frequency
analysis and outputs a table, which is rendered as a string by
the eighth process. The output of the pipeline is supplied to the
$\echo$ utility, whose output is redirected to a file named
$\strlit{analysis}$. The contents of the file reside in location $2$ in
the data region. Here we can see that the analysis has found that the
words ``to'', ``be'', and the newline character ``$\nl$'' appear twice
each, whilst the other words appear once each.
\section{Process synchronisation}
%
In Section~\ref{sec:tiny-unix-time} we implemented a time-sharing
system on top of a simple process model. However, the model lacks a
process synchronisation facility. It is somewhat difficult to cleanly
add support for synchronisation to the implementation as it is in
Section~\ref{sec:tiny-unix-time}. Firstly, because the interface of
$\Fork : \UnitType \opto \Bool$ only gives us two possible process
identifiers: $\True$ and $\False$, meaning at any point we can only
identify two processes. Secondly, and more importantly, some state is
necessary to implement synchronisation, but the current implementation
of process scheduling is split amongst two handlers and one auxiliary
function, all of which need to coordinate their access and
manipulation of the state cell. One option is to use some global state
via the interface from Section~\ref{sec:tiny-unix-io}, which has the
advantage of making the state manipulation within the scheduler
modular, but it also has the disadvantage of exposing the state as an
implementation detail --- and it comes with all the caveats of
programming with global state. A parameterised handler provides an
elegant solution, which lets us internalise the state within the
scheduler.
We will see how a parameterised handler enables us to implement a
richer process model supporting synchronisation with ease. The effect
signature of process concurrency is as follows.
%
\[
\Co \defas \{\UFork : \UnitType \opto \Int; \Wait : \Int \opto \UnitType; \Interrupt : \UnitType \opto \UnitType\}
\]
%
The operation $\UFork$ models \UNIX{}
\emph{fork}~\cite{RitchieT74}. It is a generalisation of the $\Fork$
operation from Section~\ref{sec:tiny-unix-time}. The operation is
intended to return twice: once to the parent process with a unique
process identifier for the child process, and a second time to the
child process with the zero identifier. The $\Wait$ operation takes a
process identifier as argument and then blocks the invoking process
until the process associated with the provided identifier has
completed. The $\Interrupt$ operation is the same as in
Section~\ref{sec:tiny-unix-time}; it temporarily suspends the invoking
process in order to let another process run.
The main idea is to use the state cell of a parameterised handler to
manage the process queue and to keep track of the return values of
completed processes. The scheduler will return an association list of
process identifiers mapped to the return value of their respective
process when there are no more processes to be run. The process queue
will consist of reified processes, which we will represent using
parameterised resumptions. To make the type signatures understandable
we will make use of three mutually recursive type aliases.
%
\[
\ba{@{~}l@{~}l@{~}c@{~}l}
\Proc &\alpha~\varepsilon &\defas& \Sstate~\alpha~\varepsilon \to \List\,\Record{\Int;\alpha} \eff \varepsilon\\
\Pstate &\alpha~\varepsilon &\defas& [\Ready:\Proc~\alpha~\varepsilon;\Blocked:\Record{\Int;\Proc~\alpha~\varepsilon}]\\
\Sstate &\alpha~\varepsilon &\defas& \Record{q:\List\,\Record{\Int;\Pstate~\alpha~\varepsilon};done:\List\,\Record{\Int;\alpha};pid:\Int;pnext:\Int}\\
\ea
\]
%
The $\Proc$ alias is the type of reified processes. It is defined as a
function that takes the current scheduler state and returns an
association list of $\alpha$s indexed by process identifiers. This is
almost the type of a parameterised resumption, as the only thing
missing is a component for the interpretation of an operation.
%
The second alias $\Pstate$ enumerates the possible process
states. Either a process is \emph{ready} to be run or it is
\emph{blocked} on some other process. The payload of the $\Ready$ tag
is the process to run. The $\Blocked$ tag is parameterised by a pair,
where the first component is the identifier of the process that is
being waited on and the second component is the process to be
continued when the other process has completed.
%
The third alias $\Sstate$ is the type of scheduler state. It is a
quadruple, where the first label $q$ is the process queue. It is
implemented as an association list indexed by process identifiers. The
second label $done$ is used to store the return values of completed
processes. The third label $pid$ is used to remember the identifier of
the currently executing process, and the fourth label $pnext$ is used to
compute a unique identifier for new processes.
We will abstract some of the scheduling logic into an auxiliary
function $\runNext$, which is responsible for dequeuing and running
the next process from the queue.
%
\[
\bl
\runNext : \Sstate~\alpha~\varepsilon \to \List\,\Record{\Int;\alpha} \eff \varepsilon\\
\runNext~st \defas
\bl
\Case\;st.q\;\{\\
~\bl
\nil \mapsto st.done\\
\Record{pid;\Blocked\,\Record{pid';resume}} \cons q' \mapsto\\
\quad\bl
\Let\;st' \revto \Record{st \;\With\; q = q' \concat [\Record{pid;\Blocked\,\Record{pid';resume}}]}\;\In\\
\runNext~st'
\el\\
\Record{pid;\Ready~resume} \cons q' \mapsto\\
\quad\bl
\Let\;st' \revto \Record{st \;\With\; q = q'; pid = pid}\;\In\\
resume~st'\,\}
\el
\el
\el
\el
\]
%
The function operates on the scheduler state. It first performs a case
split on the process queue. There are three cases to consider.
\begin{enumerate}
\item The queue is empty. Then the function returns the list $done$,
which maps process identifiers to their return values.
\item The next process is blocked. Then the process is appended on
to the end of the queue, and $\runNext$ is applied recursively to
the scheduler state $st'$ with the updated queue.
\item The next process is ready. Then the $q$ and $pid$ fields
within the scheduler state are updated accordingly. The reified
process $resume$ is applied to the updated scheduler state $st'$.
\end{enumerate}
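%
Transcribed into Haskell, the same queue discipline reads as follows;
the effect row is dropped and the scheduler state becomes an ordinary
record, so the types and field names here are assumptions of the
sketch rather than part of the formal development.
\begin{verbatim}
type Proc r = SState r -> [(Int, r)]
data PState r = Ready (Proc r) | Blocked Int (Proc r)
data SState r = SState { q     :: [(Int, PState r)]
                       , done  :: [(Int, r)]
                       , pid   :: Int
                       , pnext :: Int }

runNext :: SState r -> [(Int, r)]
runNext st = case q st of
  [] -> done st                                          -- all finished
  entry@(_, Blocked _ _) : q'
     -> runNext st { q = q' ++ [entry] }                 -- rotate blocked
  (p, Ready resume) : q'
     -> resume st { q = q', pid = p }                    -- run next ready
\end{verbatim}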
%
Evidently, this function may enter an infinite loop if every process
is in a blocked state. This may happen if we deadlock any two processes
by having them wait on one another. Using this function we can define
a handler that implements a process scheduler.
%
\[
\bl
\scheduler : \Record{\alpha \eff \{\Co;\varepsilon\};\Sstate~\alpha~\varepsilon} \Harrow^\param \List\,\Record{\Int;\alpha} \eff \varepsilon\\
\scheduler \defas
\bl
st.\\
\bl
\Return\;x \mapsto \\
\quad\bl
\Let\;done' \revto \Record{st.pid;x} \cons st.done\;\In\\
\runNext\,\Record{st\;\With\;done = done'}
\el\\
\OpCase{\UFork}{\Unit}{resume} \mapsto\\
\quad\bl
\Let\;resume' \revto \lambda st.resume\,\Record{0;st}\;\In\\
\Let\;pid \revto st.pnext \;\In\\
\Let\;q' \revto st.q \concat [\Record{pid;\Ready~resume'}]\;\In\\
\Let\;st' \revto \Record{st\;\With\;q = q'; pnext = pid + 1}\;\In\\
resume\,\Record{pid;st'}
\el\\
\OpCase{\Wait}{pid}{resume} \mapsto\\
\quad\bl
\Let\;resume' \revto \lambda st.resume~\Record{\Unit;st}\;\In\\
\Let\;q' \revto
\bl
\If\;\dec{has}\,\Record{pid;st.q}\\
\Then\;st.q \concat [\Record{st.pid;\Blocked\,\Record{pid;resume'}}]\\
\Else\;st.q \concat [\Record{st.pid;\Ready~resume'}]
\el\\
\In\;\runNext\,\Record{st\;\With\;q = q'}
\el\\
\OpCase{\Interrupt}{\Unit}{resume} \mapsto\\
\quad\bl
\Let\;resume' \revto \lambda st.resume\,\Record{\Unit;st}\;\In\\
\Let\;q' \revto st.q \concat [\Record{st.pid;\Ready~resume'}]\;\In\\
\runNext~\Record{st \;\With\; q = q'}
\el
\el
\el
\el
\]
%
The handler definition $\scheduler$ takes as input a computation that
computes a value of type $\alpha$ whilst making use of the concurrency
operations from the $\Co$ signature. In addition it takes the initial
scheduler state as input. Ultimately, the handler returns a
computation that computes a list of $\alpha$s, where all the
$\Co$-operations have been handled.
%
In the definition the scheduler state is bound by the name $st$.
The $\Return$ case is invoked when a process completes. The return
value $x$ is paired with the identifier of the currently executing
process and consed onto the list $done$. Subsequently, the function
$\runNext$ is invoked in order to run the next ready process.
The $\UFork$ case implements the semantics for process forking. First
the child process is constructed by abstracting the parameterised
resumption $resume$ such that it becomes a unary state-accepting
function, which can be ascribed type $\Proc~\alpha~\varepsilon$. The
parameterised resumption is applied to the process identifier $0$,
which lets the receiver know that it assumes the role of child in the
parent-child relationship amongst the processes. The next line
retrieves the unique process identifier for the child. Afterwards, the
child process is pushed onto the queue in ready state. The next line
updates the scheduler state with the new queue and a new unique
identifier for the next process. Finally, the parameterised resumption
is applied to the child process identifier and the updated scheduler
state.
The $\Wait$ case implements the synchronisation operation. The
parameter $pid$ is the identifier of the process that the invoking
process wants to wait on. First we construct a unary state-accepting
function. Then we check whether there exists a process with identifier
$pid$ in the queue. If there is one, then we enqueue the current
process in blocked state. If no such process exists (e.g. it may
already have finished), then we enqueue the current process in ready
state. Finally, we invoke $\runNext$ with the scheduler state updated
with the new process queue in order to run the next ready process.
The $\Interrupt$ case suspends the current process by enqueuing it in
ready state, and dequeuing the next ready process.
Using this handler we can implement version 2 of the time-sharing
system.
%
\[
\bl
\timesharee : (\UnitType \to \alpha \eff \Co) \to \List\,\Record{\Int;\alpha}\\
\timesharee~m \defas
\bl
\Let\;st_0 \revto \Record{q=\nil;done=\nil;pid=1;pnext=2}\;\In\\
\ParamHandle\;m\,\Unit\;\With\; \scheduler~st_0
\el
\el
\]
%
The computation $m$, which may perform any of the concurrency
operations, is handled by the parameterised handler $\scheduler$. The
parameterised handler definition is applied to the initial scheduler
state, which has an empty process queue and an empty done list,
assigns the first process the identifier $1$, and sets the
identifier for the next process to $2$.
With $\UFork$ and $\Wait$ we can implement the \emph{init} process,
which is the initial startup process in
\UNIX{}~\cite{RitchieT74}. This process remains alive until the
operating system is shut down. It is the ancestor of every process
created by the operating system.
%
\[
\bl
\init : (\UnitType \to \alpha \eff \varepsilon) \to \alpha \eff \{\Co;\varepsilon\}\\
\init~main \defas
\bl
\Let\;pid \revto \Do\;\UFork~\Unit\;\In\\
\If\;pid = 0\\
\Then\;main\,\Unit\\
\Else\;\Do\;\Wait~pid
\el
\el
\]
%
We implement $\init$ as a higher-order function. It takes a main
routine that will be applied when the system has been started. The
function first performs $\UFork$ to duplicate itself. The child branch
executes the $main$ routine, whilst the parent branch waits on the
child.
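%
For comparison, the same fork-then-wait shape can be expressed with
native GHC threads, using an MVar in place of $\Wait$; this is only an
analogy under those assumptions, not an implementation of the
scheduler above.
\begin{verbatim}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a child to run the main routine; the parent blocks until the
-- child signals completion, mirroring init's use of UFork and Wait.
initProc :: IO a -> IO a
initProc main = do
  result <- newEmptyMVar
  _ <- forkIO (main >>= putMVar result)
  takeMVar result
\end{verbatim}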
Now we can plug everything together.
%
\[
\ba{@{~}l@{~}l}
&\bl
\runState\,\Record{\dec{fs}_0;\fileIO\,(\lambda\Unit.\\
\quad\timesharee\,(\lambda\Unit.\\
\qquad\dec{interruptWrite}\,(\lambda\Unit.\\
\qquad\quad\sessionmgr\,\Record{\Root;\lambda\Unit.\\
\qquad\qquad\status\,(\lambda\Unit.
\init\,(\lambda\Unit.
\ba[t]{@{}l}
\Let\;pid \revto \Do\;\UFork\,\Unit\;\In\\
\If\;pid = 0\\
\Then\;\bl
\su~\Alice;
\quoteRitchie\,\Unit
\el\\
\Else\; \su~\Bob; \Do\;\Wait~pid; \quoteHamlet\,\Unit))})))}
\ea
\el \smallskip\\
\reducesto^+&
\bl
\Record{
\ba[t]{@{}l}
[\Record{1;0};\Record{2;0};\Record{3;0}];\\
\Record{
\ba[t]{@{}l}
dir=[\Record{\strlit{stdout};0}];\\
ilist=[\Record{0;\Record{lno=1;loc=0}}];\\
dreg=[
\ba[t]{@{}l}
\Record{0;
\ba[t]{@{}l@{}l}
\texttt{"}&\texttt{UNIX is basically a simple operating system, but }\\
&\texttt{you have to be a genius to understand the simplicity.\nl}\\
&\texttt{To be, or not to be,\nl{}that is the question:\nl{}}\\
&\texttt{Whether 'tis nobler in the mind to suffer\nl{}"}}]
\ea\\
lnext=1; inext=1}}\\
\ea
\ea
\ea\\
: \Record{\List\,\Record{\Int;\Int}; \FileSystem}
\el
\ea
\]
%
Process number $1$ is $\init$, which forks itself to run its
argument. The argument runs as process $2$, which also forks itself,
thus creating a process $3$. Process $3$ executes the child branch,
which switches user to $\Alice$ and invokes the $\quoteRitchie$
process which writes to standard out. Process $2$ executes the parent
branch, which switches user to $\Bob$ and waits for the child process
to complete before it invokes the routine $\quoteHamlet$ which also
writes to standard out.
%
It is evident from looking at the file system state that the writes to
standard out have not been interleaved, as the contents of
$\strlit{stdout}$ appear in order. We can also see from the process
completion list that Alice's process (pid $3$) is the first to
complete with status $0$, and the second to complete is Bob's process
(pid $2$) with status $0$, whilst the last process to complete is the
$\init$ process (pid $1$) with status $0$.
\paragraph{Retrofitting fork} In the previous program we replaced the
original implementation of $\timeshare$
(Section~\ref{sec:tiny-unix-time}), which handles invocations of
$\Fork : \UnitType \opto \Bool$, by $\timesharee$, which handles the
more general operation $\UFork : \UnitType \opto \Int$. In practice,
we may be unable to dispense with the old interface so easily, meaning
we have to retain support for it for, say, legacy reasons. As we have
seen previously, we can implement an operation in terms of another
operation. Thus to
retain support for $\Fork$ we simply have to insert a handler under
$\timesharee$ which interprets $\Fork$ in terms of $\UFork$. The
operation case of this handler would be akin to the following.
%
\[
\OpCase{\Fork}{\Unit}{resume} \mapsto
\bl
\Let\;pid \revto \Do\;\UFork~\Unit\;\In\\
resume\,(pid \neq 0)
\el
\]
%
The interpretation of $\Fork$ inspects the process identifier returned
by $\UFork$ to determine the role of the current process in the
parent-child relationship. If the identifier is nonzero, then the
process is a parent, hence $\Fork$ should return $\True$ to its
caller. Otherwise it should return $\False$. This preserves the
functionality of the legacy code.
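%
The translation is a one-liner in any monadic setting; the following
hedged Haskell sketch abstracts the pid-returning fork as an arbitrary
action.
\begin{verbatim}
-- Derive the legacy boolean fork from a pid-returning fork: the
-- parent (nonzero pid) sees True, the child sees False.
forkCompat :: Monad m => m Int -> m Bool
forkCompat ufork = do
  pid <- ufork
  return (pid /= 0)
\end{verbatim}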
\section{Related work}
\label{sec:unix-related-work}
\paragraph{Programming languages with handlers} The work presented in
this chapter has been retrofitted onto the programming language
Links~\cite{HillerstromL16,HillerstromL18}. A closely related
programming language with handlers is \citeauthor{Leijen17}'s Koka,
which has been retrofitted with ordinary deep and parameterised effect
handlers~\cite{Leijen17}. In Koka effects are nominal, meaning an
effect and its constructors must be declared before use, which is
unlike the structural approach taken in this chapter. Koka also tracks
effects via an effect system based on \citeauthor{Leijen05}-style row
polymorphism~\cite{Leijen05,Leijen14}, where rows are interpreted as
multisets, which means an effect can occur multiple times in an effect
row. The ability to repeat effects provides a form of effect scoping
in the sense that an effect instance can shadow another. A handler
handles only the first instance of a repeated effect, leaving the
remaining instances for another handler. Consequently, the order of
repeated effect instances matters and it can therefore be situationally
useful to manipulate the order of repeated instances by way of
so-called \emph{effect masking}.
%
The notion of effect masking was formalised by \citet{BiernackiPPS18}
and generalised by \citet{ConventLMM20}.
%
\citet{BiernackiPPS18} designed Helium, which is a programming
language that features a rich module system, deep handlers, and
\emph{lexical} handlers~\cite{BiernackiPPS20}. Lexical handlers
\emph{bind} effectful operations to specific handler
instances. Operations remain bound for the duration of the
computation. This makes the nature of lexical handlers more static
than ordinary deep handlers, as for example it is not possible to
dynamically overload the interpretation of residual effects of a
resumption invocation as in Section~\ref{sec:tiny-unix-env}.
%
The mathematical foundations for lexical handlers have been developed
by \citet{Geron19}.
The design of the Effekt language by \citet{BrachthauserSO20b}
revolves around the idea of lexical handlers for efficiency. Effekt
takes advantage of the static nature of lexical handlers to eliminate
the dynamic handler lookup at runtime by tying the correct handler
instance directly to an operation
invocation~\cite{BrachthauserS17,SchusterBO20}. The effect system of
Effekt is based on intersection types, which provides a limited form
of effect polymorphism~\cite{BrachthauserSO20b}. This design choice
means it does not feature first-class functions.
The Frank language by \citet{LindleyMM17} is born and bred on shallow
effect handlers. One of the key novelties of Frank is $n$-ary shallow
handlers, which generalise ordinary unary shallow handlers to be able
to handle multiple computations simultaneously. Another novelty is the
effect system, which is based on a variation of
\citeauthor{Leijen05}-style row polymorphism, where the programmer
rarely needs to mention effect variables. This is achieved by
insisting that the programmer annotates each input argument with the
particular effects handled at the particular argument position as well
as declaring what effects need to be handled by the ambient
context. Each annotation is essentially an incomplete row. They are
made complete by concatenating them and inserting a fresh effect
variable.
\citeauthor{BauerP15}'s Eff language was the first programming
language designed from the ground up with effect handlers in mind. It
features only deep handlers~\cite{BauerP15}. A previous iteration of
the language featured an explicit \emph{effect instance} system. An
effect instance is a sort of generative interface, where the
operations are unique to each instance. As a result it is possible to
handle two distinct instances of the same effect differently in a
single computation. Their system featured a type-and-effect system
with support for effect inference~\cite{Pretnar13,BauerP13}; however,
the effect instance system was later dropped in favour of a vanilla
nominal approach to effects and handlers.
Multicore OCaml is, at the time of writing, an experimental branch of
the OCaml programming language, which aims to extend OCaml with effect
handlers for multicore and concurrent
programming~\cite{DolanWM14,DolanWSYM15}. The current incarnation
features untracked nominal effects and deep handlers with single-use
resumptions.
%\dhil{Possibly move to the introduction or background}
\paragraph{Effect-driven concurrency}
In their tutorial of the Eff programming language \citet{BauerP15}
implement a simple lightweight thread scheduler. It is different from
the schedulers presented in this chapter, as their scheduler only uses
resumptions linearly. This is achieved by making the fork operation
\emph{higher-order} such that the operation is parameterised by a
computation. The computation is run under a fresh instance of the
handler. On one hand this approach has the benefit of making threads
cheap as it is no stack copying is necessary at runtime. On the other
hand it does not guarantee that every operation is handled uniformly
(when in the setting of deep handlers) as every handler in between the
fork operation invocation site and the scheduler handler needs to be
manually reinstalled when the computation argument is
run. Nevertheless, this is the approach to concurrency that
\citet{DolanWSYM15} have adopted for Multicore OCaml.
%
In my MSc(R) dissertation I used a similar approach to implement a
cooperative version of the actor concurrency model of Links as a
user-definable Links library~\cite{Hillerstrom16}. This library was
used by a prototype compiler for Links to make the runtime as lean as
possible~(this compiler hooked directly into the backend of the
Multicore OCaml compiler in order to produce native code for effect
handlers~\cite{HillerstromL16}).
%
This line of work was further explored by \citet{Convent17}, who
implemented various cooperative actor-based concurrency abstractions
using effect handlers in the Frank programming
language. \citet{Poulson20} expanded upon this work by investigating
ways to handle preemptive concurrency.
\citet{FowlerLMD19} use effect handlers in the setting of linearly
typed fault-tolerant distributed programming. They use effect handlers
to codify an exception handling mechanism, which automatically
consumes linear resources. Exceptions are implemented as operations
that are handled by \emph{cancelling} their resumptions. Cancellation
is a runtime primitive that gathers and closes active resources in the
computation represented by some resumption.
\citet{DolanEHMSW17} and \citet{Leijen17a} gave two widely different
implementations of the async/await idiom using effect
handlers. \citeauthor{DolanEHMSW17}'s implementation is based on
higher-order operations with linearly used resumptions, whereas
\citeauthor{Leijen17a}'s implementation is based on first-order
operations with multi-shot resumptions, and thus, it is close in
spirit to the schedulers we have considered in this chapter.
\paragraph{Continuations and operating systems}
% The very first implementation of `lightweight threads' using
% continuations can possibly be credited to
% \citet{Burstall69}. \citeauthor{Burstall69} used
% \citeauthor{Landin65}'s J operator to arrange tree-based search, where
% each branch would be reified as a continuation and put into a
% queue.
The idea of using continuations to implement various facets of
operating systems is not new. However, most work has focused on
implementing some form of multi-tasking mechanism.
%
\citet{Wand80} implements a small multi-tasking kernel with support
for mutual exclusion and data protection using undelimited
continuations in the style of the catch operator of Scheme.
% \citet{HaynesFW86} codify coroutines as library using call/cc.
\citet{DybvigH89} implements \emph{engines} using call/cc in Scheme
--- an engine is a kind of process abstraction which supports
preemption. An engine runs a computation on some time budget. If the
computation exceeds the allotted time budget, then it is
interrupted. They represent engines as reified continuations and use
the macro system of Scheme to insert clock ticks at appropriate places
in the code. % \citet{HiebD90} also design the \emph{spawn-controller}
% operator for programming tree-based concurrency abstractions.
\citet{KiselyovS07a} develop a small fault-tolerant operating system
with multi-tasking support and a file system using delimited
continuations. Their file system is considerably more sophisticated
than the one we implemented in this chapter as it supports
transactional storage, meaning user processes can roll back actions
such as file deletion and file update.
\paragraph{Resumption monad}
The resumption monad is both a semantic and programmatic abstraction
for interleaving computation. \citet{Papaspyrou01} applies a
resumption monad transformer to construct semantic models of
concurrent computation. A resumption monad transformer is a monad
transformer $T$ that maps an arbitrary monad $M$ to a new monad $T~M$
with commands for interrupting computation.
%
\citet{Harrison06} demonstrates the resumption monad as a practical
programming abstraction by implementing a small multi-tasking
kernel. \citeauthor{Harrison06} implements two variations of the
resumption monad: basic and reactive. The basic resumption monad is a
closed environment for interleaving different strands of
computations. It is closed in the sense that strands of computation
cannot interact with the ambient context of their environment. The
reactive resumption monad makes the environment open by essentially
registering a callback with an interruption action. This provides a
way to model system calls.
The origins of the (semantic) resumption monad can be traced back to
at least \citet{Moggi90}, who described a monad for modelling the
interleaving semantics of \citeauthor{Milner75}'s \emph{calculus of
communicating systems}~\cite{Milner75}.
The usage of \emph{resumption} in the name has a slightly different
meaning than the term `resumption' we have been using throughout this
chapter. We have used `resumption' to mean delimited continuation. In
the setting of the resumption monad it has a precise domain-theoretic
meaning. It is derived from \citeauthor{Plotkin76}'s domain of
resumptions, which in turn is derived from \citeauthor{Milner75}'s
domain of processes~\cite{Milner75,Plotkin76}.
% \dhil{Briefly mention \citet{AtkeyJ15}}
% \begin{figure}[t]
% \centering
% \begin{tikzpicture}[node distance=4cm,auto,>=stealth']
% \node[] (server) {\bfseries Bob (server)};
% \node[left = of server] (client) {\bfseries Alice (client)};
% \node[below of=server, node distance=5cm] (server_ground) {};
% \node[below of=client, node distance=5cm] (client_ground) {};
% %
% \draw (client) -- (client_ground);
% \draw (server) -- (server_ground);
% \draw[->,thick] ($(client)!0.25!(client_ground)$) -- node[rotate=-6,above,scale=0.7,midway]{SYN 42} ($(server)!0.40!(server_ground)$);
% \draw[<-,thick] ($(client)!0.56!(client_ground)$) -- node[rotate=6,above,scale=0.7,midway]{SYN 84;ACK 43} ($(server)!0.41!(server_ground)$);
% \draw[->,thick] ($(client)!0.57!(client_ground)$) -- node[rotate=-6,above,scale=0.7,midway]{ACK 85} ($(server)!0.72!(server_ground)$);
% \end{tikzpicture}
% \caption{Sequence diagram for the TCP handshake example.}\label{fig:tcp-handshake}
% \end{figure}
% \paragraph{TCP threeway handshake}
% \chapter{An ML-flavoured programming language based on rows}
\chapter{ML-flavoured calculi for effect handler oriented programming}
\label{ch:base-language}
\dhil{TODO merge this chapter with ``Effect handler calculi''}
In this chapter we introduce a core calculus, \BCalc{}, which we shall
later use as the basis for exploration of design considerations for
effect handlers. This calculus is based on \CoreLinks{} by
\citet{LindleyC12}, which distils the essence of the functional
multi-tier web-programming language
\Links{}~\cite{CooperLWY06}. \Links{} belongs to the
ML-family~\cite{MilnerTHM97} of programming languages as it features
typical characteristics of ML languages such as a static type system
supporting parametric polymorphism with type inference (in fact Links
supports first-class polymorphism), and its evaluation semantics is
strict. However, \Links{} differentiates itself from the rest of the
ML-family by making crucial use of \emph{row polymorphism} to support
extensible records, variants, and tracking of computational
effects. Thus \Links{} has a rather strong emphasis on structural
types rather than nominal types.
\CoreLinks{} captures all of these properties of \Links{}. Our
calculus \BCalc{} differs in several aspects from \CoreLinks{}. For
example, the underlying formalism of \CoreLinks{} is call-by-value,
whilst the formalism of \BCalc{} is \emph{fine-grain
call-by-value}~\cite{LevyPT03}, which shares similarities with
A-normal form (ANF)~\cite{FlanaganSDF93} as it syntactically
distinguishes between value and computation terms by mandating that
every intermediate computation be named. However, unlike ANF, fine-grain
call-by-value remains closed under $\beta$-reduction. The reason for
choosing fine-grain call-by-value as our formalism is entirely due to
convenience. As we shall see in Chapter~\ref{ch:unary-handlers}
fine-grain call-by-value is a convenient formalism for working with
continuations. Another point of difference between \CoreLinks{} and
\BCalc{} is that the former models the integrated database query
sublanguage of \Links{}. We do not consider the query sublanguage at
all, and instead our focus is entirely on modelling the interaction
and programming with computational effects.
\section{Syntax and static semantics}
\label{sec:syntax-base-language}
As \BCalc{} is intrinsically typed, we begin by presenting the syntax
of kinds and types in
Section~\ref{sec:base-language-types}. Subsequently in
Section~\ref{sec:base-language-terms} we present the term syntax,
before presenting the formation rules for types in
Section~\ref{sec:base-language-type-rules}. As a convention, we always
work up to $\alpha$-conversion~\cite{Church32} of types and
terms. Following \citet{Pierce02} we omit cases in definitions that
deal only with the bureaucracy of renaming. For any transformation
$\sembr{-}$ on a term $M$, or type, we write $\sembr{M} \adef M'$ to
mean that $M'$ is the result of transforming $M$ where implicit
renaming may have occurred.
% Typically the presentation of a programming language begins with its
% syntax. If the language is typed there are two possible starting
% points: Either one presents the term syntax first, or alternatively,
% the type syntax first. Although the choice may seem rather benign
% there is, however, a philosophical distinction to be drawn between
% them. Terms are, on their own, entirely meaningless, whilst types
% provide, on their own, an initial approximation of the semantics of
% terms. This is particularly true in an intrinsic typed system perhaps
% less so in an extrinsic typed system. In an intrinsic system types
% must necessarily be precursory to terms, as terms ultimately depend on
% the types. Following this argument leaves us with no choice but to
% first present the type syntax of \BCalc{} and subsequently its term
% syntax.
\subsection{Types and their kinds}
\label{sec:base-language-types}
%
\begin{figure}
\begin{syntax}
% \slab{Value types} &A,B &::= & A \to C
% \mid \alpha
% \mid \forall \alpha^K.C
% \mid \Record{R}
% \mid [R]\\
% \slab{Computation types}
% &C,D &::= & A \eff E \\
% \slab{Effect types} &E &::= & \{R\}\\
% \slab{Row types} &R &::= & \ell : P;R \mid \rho \mid \cdot \\
% \slab{Presence types} &P &::= & \Pre{A} \mid \Abs \mid \theta\\
% %\slab{Labels} &\ell & & \\
% % \slab{Types} &T &::= & A \mid C \mid E \mid R \mid P \\
% \slab{Kinds} &K &::= & \Type \mid \Row_\mathcal{L} \mid \Presence
% \mid \Comp \mid \Effect \\
% \slab{Label sets} &\mathcal{L} &::=& \emptyset \mid \{\ell\} \uplus \mathcal{L}\\
% %\slab{Type variables} &\alpha, \rho, \theta& \\
% \slab{Type environments} &\Gamma &::=& \cdot \mid \Gamma, x:A \\
% \slab{Kind environments} &\Delta &::=& \cdot \mid \Delta, \alpha:K
\slab{Value\mathrm{~}types} &A,B \in \ValTypeCat &::= & A \to C
\mid \forall \alpha^K.C
\mid \Record{R} \mid [R]
\mid \alpha \\
\slab{Computation\mathrm{~}types\!\!}
&C,D \in \CompTypeCat &::= & A \eff E \\
\slab{Effect\mathrm{~}types} &E \in \EffectCat &::= & \{R\}\\
\slab{Row\mathrm{~}types} &R \in \RowCat &::= & \ell : P;R \mid \rho \mid \cdot \\
\slab{Presence\mathrm{~}types\!\!\!\!\!} &P \in \PresenceCat &::= & \Pre{A} \mid \Abs \mid \theta\\
\\
\slab{Types} &T \in \TypeCat &::= & A \mid C \mid E \mid R \mid P \\
\slab{Kinds} &K \in \KindCat &::= & \Type \mid \Comp \mid \Effect \mid \Row_\mathcal{L} \mid \Presence \\
\slab{Label\mathrm{~}sets} &\mathcal{L} \in \LabelCat &::=& \emptyset \mid \{\ell\} \uplus \mathcal{L}\\\\
\slab{Type\mathrm{~}environments} &\Gamma \in \TyEnvCat &::=& \cdot \mid \Gamma, x:A \\
\slab{Kind\mathrm{~}environments} &\Delta \in \KindEnvCat &::=& \cdot \mid \Delta, \alpha:K \\
\end{syntax}
\caption{Syntax of types, kinds, and their environments.}
\label{fig:base-language-types}
\end{figure}
%
The types are divided into several distinct syntactic categories which
are given in Figure~\ref{fig:base-language-types} along with the
syntax of kinds and environments.
%
\paragraph{Value types}
We distinguish between values and computations at the level of
types. Value types comprise the function type $A \to C$, which maps
values of type $A$ to computations of type $C$; the polymorphic type
$\forall \alpha^K . C$ is parameterised by a type variable $\alpha$ of
kind $K$; and the record type $\Record{R}$ represents records with
fields constrained by row $R$. Dually, the variant type $[R]$
represents tagged sums constrained by row $R$.
\paragraph{Computation types and effect types}
The computation type $A \eff E$ is given by a value type $A$ and an
effect type $E$, which specifies the effectful operations a
computation inhabiting this type may perform. An effect type
$E = \{R\}$ is constrained by row $R$.
\paragraph{Row types}
Row types play a pivotal role in our type system as effect, record,
and variant types are uniformly given by row types. A \emph{row type}
describes a collection of distinct labels, each annotated by a
presence type. A presence type indicates whether a label is
\emph{present} with type $A$ ($\Pre{A}$), \emph{absent} ($\Abs$) or
\emph{polymorphic} in its presence ($\theta$).
%
For example, the effect row $\{\Read:\Pre{\Int};\Write:\Abs;\cdot\}$
denotes a read-only context in which the operation label $\Read$ may
occur to access some integer value, whilst the operation label
$\Write$ cannot appear.
%
Row types are either \emph{closed} or \emph{open}. A closed row type
ends in~$\cdot$, whilst an open row type ends with a \emph{row
variable} $\rho$ (in an effect row we usually use $\varepsilon$
rather than $\rho$ and refer to it as an \emph{effect variable}).
%
The example effect row above is closed, an open variation of it ends
in an effect variable $\varepsilon$,
i.e. $\{\Read:\Pre{\Int};\Write:\Abs;\varepsilon\}$.
%
The row variable in an open row type can be instantiated with
additional labels subject to the restriction that each label may only
occur at most once (we enforce this restriction at the level of
kinds). We identify rows up to the reordering of labels as follows.
%
\begin{mathpar}
\inferrule*[Lab=\rowlab{Closed}]
{~}
{\cdot \equiv_{\mathrm{row}} \cdot}
  \inferrule*[Lab=\rowlab{Open}]
  {~}
  {\rho \equiv_{\mathrm{row}} \rho}
\inferrule*[Lab=\rowlab{Head}]
{R \equiv_{\mathrm{row}} R'}
{\ell:P;R \equiv_{\mathrm{row}} \ell:P;R'}
\inferrule*[Lab=\rowlab{Swap}]
{R \equiv_{\mathrm{row}} R'}
{\ell:P;\ell':P';R \equiv_{\mathrm{row}} \ell':P';\ell:P;R'}
\end{mathpar}
%
% The last rule $\rowlab{Swap}$ let us identify rows up to the
% reordering of labels. For instance, the two rows
% $\ell_1 : P_1; \cdots; \ell_n : P_n; \cdot$ and
% $\ell_n : P_n; \cdots ; \ell_1 : P_1; \cdot$ are equivalent.
%
The \rowlab{Closed} rule states that the closed marker $\cdot$ is
equivalent to itself; similarly, the \rowlab{Open} rule states that any
two row variables are equivalent if and only if they have the same
syntactic name. The \rowlab{Head} rule compares the heads of two given
rows and inductively compares their tails. The \rowlab{Swap} rule
permits reordering of labels; for instance, the two rows
$\ell_1 : P_1; \ell_2 : P_2; \cdot$ and $\ell_2 : P_2; \ell_1 : P_1; \cdot$
are identified. We assume structural equality on labels.
%
The standard zero and unit types are definable using rows. We define
the zero type as the empty, closed variant $\ZeroType \defas
[\cdot]$. Dually, the unit type is defined as the empty, closed record
type, i.e. $\UnitType \defas \Record{\cdot}$.
% As absent labels in closed rows are redundant we will, for example,
% consider the following two rows equivalent
% $\Read:\Pre{\Int},\Write:\Abs,\cdot \equiv_{\mathrm{row}}
% \Read:\Pre{\Int},\cdot$.
For brevity, we shall often write $\ell : A$
to mean $\ell : \Pre{A}$. % and omit $\cdot$ for closed rows.
%
\begin{figure}
\begin{mathpar}
% alpha : K
\inferrule*[Lab=\klab{TyVar}]
{ }
{\Delta, \alpha : K \vdash \alpha : K}
% Computation
\inferrule*[Lab=\klab{Comp}]
{ \Delta \vdash A : \Type \\
\Delta \vdash E : \Effect \\
}
{\Delta \vdash A \eff E : \Comp}
% A -E-> B, A : Type, E : Row, B : Type
\inferrule*[Lab=\klab{Fun}]
{ \Delta \vdash A : \Type \\
\Delta \vdash C : \Comp \\
}
{\Delta \vdash A \to C : \Type}
% forall alpha : K . A : Type
\inferrule*[Lab=\klab{Forall}]
{ \Delta, \alpha : K \vdash C : \Comp}
{\Delta \vdash \forall \alpha^K . \, C : \Type}
% Record
\inferrule*[Lab=\klab{Record}]
{ \Delta \vdash R : \Row_\emptyset}
{\Delta \vdash \Record{R} : \Type}
% Variant
\inferrule*[Lab=\klab{Variant}]
{ \Delta \vdash R : \Row_\emptyset}
{\Delta \vdash [R] : \Type}
% Effect
\inferrule*[Lab=\klab{Effect}]
{ \Delta \vdash R : \Row_\emptyset}
{\Delta \vdash \{R\} : \Effect}
% Present
\inferrule*[Lab=\klab{Present}]
{\Delta \vdash A : \Type}
{\Delta \vdash \Pre{A} : \Presence}
% Absent
\inferrule*[Lab=\klab{Absent}]
{ }
{\Delta \vdash \Abs : \Presence}
% Empty row
\inferrule*[Lab=\klab{EmptyRow}]
{ }
{\Delta \vdash \cdot : \Row_\mathcal{L}}
% Extend row
\inferrule*[Lab=\klab{ExtendRow}]
{ \Delta \vdash P : \Presence \\
\Delta \vdash R : \Row_{\mathcal{L} \uplus \{\ell\}}
}
{\Delta \vdash \ell : P;R : \Row_\mathcal{L}}
\end{mathpar}
\caption{Kinding rules}
\label{fig:base-language-kinding}
\end{figure}
%
\paragraph{Kinds}
The kinds classify the different categories of types. The $\Type$ kind
classifies value types, $\Presence$ classifies presence annotations,
$\Comp$ classifies computation types, $\Effect$ classifies effect
types, and lastly $\Row_{\mathcal{L}}$ classifies rows.
%
The formation rules for kinds are given in
Figure~\ref{fig:base-language-kinding}. The kinding judgement
$\Delta \vdash T : K$ states that type $T$ has kind $K$ in kind
environment $\Delta$.
%
The row kind is annotated by a set of labels $\mathcal{L}$. We use
this set to track the labels of a given row type to ensure uniqueness
amongst labels in each row type. For example, the kinding rule
$\klab{ExtendRow}$ uses this set to constrain which labels may be
mentioned in the tail of $R$.% We shall elaborate on this in
% Section~\ref{sec:row-polymorphism}.
\paragraph{Environments}
Kind and type environments are right-extended sequences of bindings. A
kind environment binds type variables to their kinds, whilst a type
environment binds term variables to their types.
\paragraph{Type variables} The type structure has three syntactically
distinct type variables (the kinding system gives us five semantically
distinct notions of type variables). As we sometimes wish to refer
collectively to type variables, we define the set of type variables,
$\TyVarCat$, to be generated by:
%
\[
\TyVarCat \defas
\ba[t]{@{~}l@{~}l}
&\{ A \in \ValTypeCat \mid A \text{ has the form } \alpha \}\\
\cup &\{ R \in \RowCat \mid R \text{ has the form } \rho \}\\
\cup &\{ P \in \PresenceCat \mid P \text{ has the form } \theta \}
\ea
\]
% Value types comprise the function type $A \to C$, whose domain
% is a value type and its codomain is a computation type $B \eff E$,
% where $E$ is an effect type detailing which effects the implementing
% function may perform. Value types further comprise type variables
% $\alpha$ and quantification $\forall \alpha^K.C$, where the quantified
% type variable $\alpha$ is annotated with its kind $K$. Finally, the
% value types also contains record types $\Record{R}$ and variant types
% $[R]$, which are built up using row types $R$. An effect type $E$ is
% also built up using a row type. A row type is a sequence of fields of
% labels $\ell$ annotated with their presence information $P$. The
% presence information denotes whether a label is present $\Pre{A}$ with
% some type $A$, absent $\Abs$, or polymorphic in its presence
% $\theta$. A row type may be either \emph{open} or \emph{closed}. An
% open row ends in a row variable $\rho$ which can be instantiated with
% additional fields, effectively growing the row, whilst a closed row
% ends in $\cdot$, meaning the row cannot grow further.
% The kinds comprise $\Type$ for regular type variables, $\Presence$ for
% presence variables, $\Comp$ for computation type variables, $\Effect$
% for effect variables, and lastly $\Row_{\mathcal{L}}$ for row
% variables. The row kind is annotated by a set of labels
% $\mathcal{L}$. We use this set to track the labels of a given row type
% to ensure uniqueness amongst labels in each row type. We shall
% elaborate on this in Section~\ref{sec:row-polymorphism}.
\paragraph{Free type variables} Sometimes we need to compute the free
type variables ($\FTV$) of a given type. To this end we define a
metafunction $\FTV$ by induction on the type structure, $T$, and
point-wise on type environments, $\Gamma$.
%
\[
\ba[t]{@{~}l@{~~~~~~}c@{~}l}
\multicolumn{3}{c}{\begin{eqs}
\FTV &:& \TypeCat \to \TyVarCat
\end{eqs}}\\
\ba[t]{@{}l}
\begin{eqs}
% \FTV &:& \ValTypeCat \to \TyVarCat\\
\FTV(\alpha) &\defas& \{\alpha\}\\
\FTV(\forall \alpha^K.C) &\defas& \FTV(C) \setminus \{\alpha\}\\
\FTV(A \to C) &\defas& \FTV(A) \cup \FTV(C)\\
\FTV(A \eff E) &\defas& \FTV(A) \cup \FTV(E)\\
\FTV(\{R\}) &\defas& \FTV(R)\\
\FTV(\Record{R}) &\defas& \FTV(R)\\
\FTV([R]) &\defas& \FTV(R)\\
% \FTV(l:P;R) &\defas& \FTV(P) \cup \FTV(R)\\
% \FTV(\Pre{A}) &\defas& \FTV(A)\\
% \FTV(\Abs) &\defas& \emptyset\\
% \FTV(\theta) &\defas& \{\theta\}
\end{eqs}\ea & &
\begin{eqs}
% \FTV([R]) &\defas& \FTV(R)\\
% \FTV(\Record{R}) &\defas& \FTV(R)\\
% \FTV(\{R\}) &\defas& \FTV(R)\\
% \FTV &:& \RowCat \to \TyVarCat\\
\FTV(\cdot) &\defas& \emptyset\\
\FTV(\rho) &\defas& \{\rho\}\\
\FTV(l:P;R) &\defas& \FTV(P) \cup \FTV(R)\\
% \FTV &:& \PresenceCat \to \TyVarCat\\
\FTV(\theta) &\defas& \{\theta\}\\
\FTV(\Abs) &\defas& \emptyset\\
\FTV(\Pre{A}) &\defas& \FTV(A)\\
\end{eqs}\\\\
\multicolumn{3}{c}{\begin{eqs}
\FTV &:& \TyEnvCat \to \TyVarCat\\
\FTV(\cdot) &\defas& \emptyset\\
\FTV(\Gamma,x : A) &\defas& \FTV(\Gamma) \cup \FTV(A)
\end{eqs}}
% \begin{eqs}
% \FTV(\theta) &\defas& \{\theta\}\\
% \FTV(\Abs) &\defas& \emptyset\\
% \FTV(\Pre{A}) &\defas& \FTV(A)
% \end{eqs} & &
% \begin{eqs}
% \FTV(\cdot) &\defas& \emptyset\\
% \FTV(\Gamma,x : A) &\defas& \FTV(\Gamma) \cup \FTV(A)
% \end{eqs}
\ea
\]
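%
For example, for the type
$\forall \alpha^\Type.\, \alpha \to \alpha \eff \{\varepsilon\}$, whose
value type variable is bound but whose effect variable is not, we have
\[
  \FTV(\forall \alpha^\Type.\, \alpha \to \alpha \eff \{\varepsilon\})
  = (\{\alpha\} \cup \{\alpha\} \cup \{\varepsilon\}) \setminus \{\alpha\}
  = \{\varepsilon\}.
\]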
%
\paragraph{Type substitution}
We define a type substitution map,
$\sigma : (\TyVarCat \times \TypeCat)^\ast$, as a list of pairs mapping a
type variable to its replacement. We denote a single mapping as
$T/\alpha$ meaning substitute $T$ for the variable $\alpha$. We write
multiple mappings using list notation,
i.e. $[T_0/\alpha_0,\dots,T_n/\alpha_n]$. The domain of a substitution
map is the set generated by projecting the first component, i.e.
%
\[
\bl
\dom : (\TyVarCat \times \TypeCat)^\ast \to \TyVarCat\\
\dom(\sigma) \defas \{ \alpha \mid (\_/\alpha) \in \sigma \}
\el
\]
%
The application of a type substitution map on a type term, written
$T\sigma$ for some type $T$, is defined inductively on the type
structure as follows.
%
\[
\ba[t]{@{~}l@{~}c@{~}r}
\multicolumn{3}{c}{
\begin{eqs}
(A \eff E)\sigma &\defas& A\sigma \eff E\sigma\\
(A \to C)\sigma &\defas& A\sigma \to C\sigma\\
(\forall \alpha^K.C)\sigma &\adef& \forall \alpha^K.C\sigma\\
\alpha\sigma &\defas& \begin{cases}
A & \text{if } (\alpha,A) \in \sigma\\
\alpha & \text{otherwise}
\end{cases}
\end{eqs}}\\
\begin{eqs}
      \Record{R}\sigma &\defas& \Record{R\sigma}\\
{[R]}\sigma &\defas& [R\sigma]\\
\{R\}\sigma &\defas& \{R\sigma\}\\
\cdot\sigma &\defas& \cdot\\
\rho\sigma &\defas& \begin{cases}
R & \text{if } (\rho, R) \in \sigma\\
\rho & \text{otherwise}
\end{cases}\\
\end{eqs}
& ~~~~~~~~~~ &
\begin{eqs}
(\ell : P;R)\sigma &\defas& (\ell : P\sigma; R\sigma)\\
\theta\sigma &\defas& \begin{cases}
P & \text{if } (\theta,P) \in \sigma\\
\theta & \text{otherwise}
\end{cases}\\
\Abs\sigma &\defas& \Abs\\
\Pre{A}\sigma &\defas& \Pre{A\sigma}
\end{eqs}
\ea
\]
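%
For example, applying the substitution
$[\Int/\alpha, \cdot/\varepsilon]$ to the (unquantified) type of an
effect-polymorphic identity function yields a monomorphic type with a
closed, empty effect row:
\[
  (\alpha \to \alpha \eff \{\varepsilon\})[\Int/\alpha, \cdot/\varepsilon]
  = \Int \to \Int \eff \{\cdot\}.
\]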
%
\paragraph{Types and their inhabitants}
We now have the basic vocabulary to construct types in $\BCalc$. For
instance, the signature of the standard polymorphic identity function
is
%
\[
\forall \alpha^\Type. \alpha \to \alpha \eff \emptyset.
\]
%
Modulo the empty effect signature, this type is akin to the type one
would give for the identity function in System
F~\cite{Girard72,Reynolds74}, and thus we can use standard techniques
from parametricity~\cite{Wadler89} to reason about inhabitants of this
signature. However, in our system we can give an even more general
type to the identity function:
%
\[
\forall \alpha^\Type,\varepsilon^{\Row_\emptyset}. \alpha \to \alpha \eff \{\varepsilon\}.
\]
%
This type is polymorphic in its effect signature as signified by the
singleton open effect row $\{\varepsilon\}$, meaning it may be used in
an effectful context. By contrast, the former type may only be used in
a strictly pure context, i.e. the effect-free context.
%
%\dhil{Maybe say something about reasoning effect types}
%
We can use the effect system to give precise types to effectful
computations. For example, we can give the signature of some
polymorphic computation that may only be run in a read-only context
%
\[
\forall \alpha^\Type, \varepsilon^{\Row_{\{\Read,\Write\}}}. \alpha \eff \{\Read:\Int;\Write:\Abs;\varepsilon\}.
\]
%
The effect row comprises a nullary $\Read$ operation returning some
integer and an absent operation $\Write$. The absence of $\Write$
means that the computation cannot run in a context that admits a
present $\Write$. It can, however, run in a context that admits a
presence-polymorphic $\Write : \theta$, as the presence variable
$\theta$ may be instantiated to $\Abs$. An inhabitant of this type may be
run in larger effect contexts, i.e. contexts that admit more
operations, because the row ends in an effect variable.
%
The type and effect system is also precise about how a higher-order
function may use its function arguments. For example, consider the
signature of a map-operation over some datatype such as
$\Option~\alpha^\Type \defas [\None;\Some:\alpha;\cdot]$
%
\[
\forall \alpha^\Type,\beta^\Type,\varepsilon^{\Row_\emptyset}. \Record{\alpha \to \beta \eff \{\varepsilon\}; \Option~\alpha;\cdot} \to \Option~\beta \eff \{\varepsilon\}.
\]
%
% The $\dec{map}$ function for
% lists is a canonical example of a higher-order function which is
% parametric in its own effects and the effects of its function
% argument. Supposing $\BCalc$ have some polymorphic list datatype
% $\List$, then we would be able to ascribe the following signature to
% $\dec{map}$
% %
% \[
% \forall \alpha^\Type,\beta^\Type,\varepsilon^{\Row_\emptyset}. \Record{\alpha \to \beta \eff \{\varepsilon\},\List~\alpha} \to \List~\beta \eff \{\varepsilon\}.
% \]
%
The first argument is the function that will be applied to the data
carried by the second argument. Note that the two effect rows are
identical and share the same effect variable $\varepsilon$; it is thus
evident that an inhabitant of this type can only perform whatever
effects its first argument is allowed to perform.
Higher-order functions may also transform their function arguments,
e.g. modify their effect rows. The following is the signature of a
higher-order function which restricts its argument's effect context
%
\[
\forall \alpha^\Type, \varepsilon^{\Row_{\{\Read\}}},\varepsilon'^{\Row_\emptyset}. (\UnitType \to \alpha \eff \{\Read:\Int;\varepsilon\}) \to (\UnitType \to \alpha \eff \{\Read:\Abs;\varepsilon\}) \eff \{\varepsilon'\}.
\]
%
The function argument is allowed to perform a $\Read$ operation,
whilst the returned function cannot. Moreover, the two functions share
the same effect variable $\varepsilon$. Like the option-map signature
above, an inhabitant of this type performs no effects of its own as
the (right-most) effect row is a singleton row containing a distinct
effect variable $\varepsilon'$.
\paragraph{Syntactic sugar}
Explicitly writing down all of the kinding and type annotations is a
bit on the heavy side. In order to simplify the notation of our future
examples we are going to adopt a few conventions. First, we shall not
write kind annotations when the kinds can unambiguously be inferred
from context. Second, we do not write quantifiers in prenex
position. Type variables that appear unbound in a signature are
implicitly understood to be bound at the outermost level of the type
(this convention is commonly used by practical programming languages,
e.g. SML~\cite{MilnerTHM97} and
Haskell~\cite{JonesABBBFHHHHJJLMPRRW99}). Third, we shall adopt the
convention that the row types for closed records and variants are
implicitly understood to end in a $\cdot$, whereas for effect rows we
shall adopt the opposite convention that an effect row is implicitly
understood to be open and ending in a fresh $\varepsilon$ unless it
ends in an explicit $\cdot$. In Section~\ref{sec:effect-sugar} we will
elaborate more on the syntactic sugar for effects. The rationale for
these conventions is that they align with an ML programmer's intuition
for monomorphic record and variant types, and in this dissertation
records and variants will often be monomorphic. Conversely, effect
rows will most often be open.
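%
For instance, under these conventions the signature
$\alpha \to \alpha \eff \{\Read:\Int\}$ is shorthand for
\[
  \forall \alpha^\Type, \varepsilon^{\Row_{\{\Read\}}}.\,
  \alpha \to \alpha \eff \{\Read:\Pre{\Int};\varepsilon\},
\]
whereas the record type $\Record{\ell : \Int}$ is shorthand for the
closed record type $\Record{\ell : \Pre{\Int};\cdot}$.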
\subsection{Terms}
\label{sec:base-language-terms}
%
\begin{figure}
\begin{syntax}
\slab{Variables} &x \in \VarCat&&\\
\slab{Values} &V,W \in \ValCat &::= & x
\mid \lambda x^A .\, M \mid \Lambda \alpha^K .\, M
\mid \Record{} \mid \Record{\ell = V;W} \mid (\ell~V)^R \\
& & &\\
\slab{Computations} &M,N \in \CompCat &::= & V\,W \mid V\,T\\
& &\mid& \Let\; \Record{\ell=x;y} = V \; \In \; N\\
& &\mid& \Case\; V \{\ell~x \mapsto M; y \mapsto N\} \mid \Absurd^C~V\\
& &\mid& \Return~V \mid \Let \; x \revto M \; \In \; N\\
\slab{Terms} &t \in \TermCat &::= & x \mid V \mid M
\end{syntax}
\caption{Term syntax of \BCalc{}.}
\label{fig:base-language-term-syntax}
\end{figure}
%
The syntax for terms is given in
Figure~\ref{fig:base-language-term-syntax}. We assume a countably
infinite set of names $\VarCat$ from which we draw fresh variable
names. We shall typically denote term variables by $x$, $y$, or $z$.
%
The syntax partitions terms into values and computations.
%
Value terms comprise variables ($x$), lambda abstraction
($\lambda x^A . \, M$), type abstraction ($\Lambda \alpha^K . \, M$),
and the introduction forms for records and variants. Records are
introduced using the empty record $(\Record{})$ and record extension
$(\Record{\ell = V; W})$, whilst variants are introduced using
injection $((\ell~V)^R)$, which injects a field with label $\ell$ and
value $V$ into a row whose type is $R$. We include the row type
annotation to support bottom-up type reconstruction.
All elimination forms are computation terms. Abstraction and type
abstraction are eliminated using application ($V\,W$) and type
application ($V\,A$) respectively.
%
The record eliminator $(\Let \; \Record{\ell=x;y} = V \; \In \; N)$
splits a record $V$ into $x$, the value associated with $\ell$, and
$y$, the rest of the record. Non-empty variants are eliminated using
the case construct ($\Case\; V\; \{\ell~x \mapsto M; y \mapsto N\}$),
which evaluates the computation $M$ if the tag of $V$ matches
$\ell$. Otherwise it falls through to $y$ and evaluates $N$. The
elimination form for empty variants is ($\Absurd^C~V$).
%
There is one computation introduction form, namely, the trivial
computation $(\Return~V)$ which returns value $V$. Its elimination
form is the expression $(\Let \; x \revto M \; \In \; N)$ which evaluates
$M$ and binds the result value to $x$ in $N$.
%
%
As our calculus is intrinsically typed, we annotate terms with type or
kind information (term abstraction, type abstraction, injection,
operations, and empty cases). However, we shall omit these annotations
whenever they are clear from context.
\paragraph{Free variables} A given term is said to be \emph{closed} if
every applied occurrence of a variable is preceded by some
corresponding binding occurrence. Any applied occurrence of a variable
that is not preceded by a binding occurrence is said to be a \emph{free
variable}. We define the function $\FV : \TermCat \to \VarCat$
inductively on the term structure to compute the free variables of any
given term.
%
\[
\bl
\ba[t]{@{~}l@{~}c@{~}l}
\begin{eqs}
\FV(x) &\defas& \{x\}\\
\FV(\lambda x^A.M) &\defas& \FV(M) \setminus \{x\}\\
\FV(\Lambda \alpha^K.M) &\defas& \FV(M)\\[1.0ex]
\FV(V\,W) &\defas& \FV(V) \cup \FV(W)\\
\FV(\Return~V) &\defas& \FV(V)\\
\end{eqs}
& \qquad\qquad &
\begin{eqs}
\FV(\Record{}) &\defas& \emptyset\\
\FV(\Record{\ell = V; W}) &\defas& \FV(V) \cup \FV(W)\\
\FV((\ell~V)^R) &\defas& \FV(V)\\[1.0ex]
\FV(V\,T) &\defas& \FV(V)\\
\FV(\Absurd^C~V) &\defas& \FV(V)\\
\end{eqs}
\ea\\
\begin{eqs}
\FV(\Let\;x \revto M \;\In\;N) &\defas& \FV(M) \cup (\FV(N) \setminus \{x\})\\
\FV(\Let\;\Record{\ell=x;y} = V\;\In\;N) &\defas& \FV(V) \cup (\FV(N) \setminus \{x, y\})\\
      \FV(\Case~V~\{\ell\,x \mapsto M; y \mapsto N\}) &\defas& \FV(V) \cup (\FV(M) \setminus \{x\}) \cup (\FV(N) \setminus \{y\})
\end{eqs}
\el
\]
%
The function computes the set of free variables bottom-up. Most cases
are homomorphic on the syntax constructors. The interesting cases are
those constructs which feature term binders: lambda abstraction, let
bindings, pair deconstructing, and case splitting. In each of those
cases we subtract the relevant binder(s) from the set of free
variables.
\paragraph{Tail recursion}
In practice implementations of functional programming languages tend
to be tail-recursive in order to enable unbounded iteration. Otherwise
nested (repeated) function calls would quickly run out of stack space
on a conventional computer.
%
Intuitively, tail-recursion permits an already allocated stack frame
for some on-going function call to be reused by a nested function
call, provided that this nested call is the last thing to occur before
returning from the on-going function call.
%
A special case is when the nested function call is a fresh invocation
of the on-going function call, i.e. a self-reference. In this case the
nested function call is known as a \emph{tail recursive call},
otherwise it is simply known as a \emph{tail call}.
%
Thus the qualifier ``tail-recursive'' may be somewhat confusing as for
an implementation to be tail-recursive it must support recycling of
stack frames for tail calls; it is not sufficient to support tail
recursive calls.
%
Any decent implementation of Standard ML~\cite{MilnerTHM97},
OCaml~\cite{LeroyDFGRV20}, or Scheme~\cite{SperberDFSFM10} will be
tail-recursive. I deliberately say implementation rather than
specification, because it is often the case that the specification or
the user manual does not explicitly require a suitable implementation to
be tail-recursive; in fact of the three languages just mentioned only
Scheme explicitly mandates an implementation to be
tail-recursive~\cite{SperberDFSFM10}.
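%
To illustrate the distinction, here is a minimal OCaml sketch
contrasting the two situations; the function names are ours and the
example assumes non-negative inputs.
%
\begin{verbatim}
(* Not tail recursive: the multiplication n * ... happens after the
   recursive call returns, so each call must retain its own frame. *)
let rec fac n =
  if n = 0 then 1 else n * fac (n - 1)

(* Tail recursive: the recursive call is the last thing to occur, so
   a tail-recursive implementation may reuse the current frame. *)
let rec fac_acc acc n =
  if n = 0 then acc else fac_acc (acc * n) (n - 1)
\end{verbatim}
%
An implementation that recycles frames for tail calls runs
\texttt{fac\_acc} in constant stack space, whereas \texttt{fac}
consumes stack space linear in its argument.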
%
% The Scheme specification actually goes further and demands that an
% implementation is \emph{properly tail-recursive}, which provides
% strict guarantees on the asymptotic space consumption of tail
% calls~\cite{Clinger98}.
Tail calls will become important in Chapter~\ref{ch:cps} when we will
discuss continuation passing style as an implementation technique for
effect handlers, as tail calls happen to be ubiquitous in continuation
passing style.
%
Therefore let us formally characterise tail calls.
%
For our purposes, the most robust characterisation is a syntactic
characterisation, as opposed to a semantic characterisation, because
in the presence of control effects (which we will add in
Chapter~\ref{ch:unary-handlers}) it is surprisingly tricky to describe
tail calls in terms of control flow such as ``the last thing to occur
before returning from the enclosing function'' as a function may
return multiple times. In particular, the effects of a function may be
replayed several times.
%
For this reason we will adapt a syntactic characterisation of tail
calls due to \citet{Clinger98}. First, we define what it means for a
computation to syntactically \emph{appear in tail position}.
%
\begin{definition}[Tail position]\label{def:tail-comp}
Tail position is defined for computation terms as follows.
%
\begin{itemize}
\item The body $M$ of a $\lambda$-abstraction ($\lambda x. M$) appears in
tail position.
\item The body $M$ of a $\Lambda$-abstraction $(\Lambda \alpha.M)$
appears in tail position.
\item If $\Case\;V\;\{\ell~x \mapsto M; y \mapsto N\}$ appears in tail
  position, then both $M$ and $N$ appear in tail position.
\item If $\Let\;\Record{\ell = x; y} = V \;\In\;N$ appears in tail
position, then $N$ is in tail position.
\item If $\Let\;x \revto M\;\In\;N$ appears in tail position, then
    $N$ appears in tail position.
\item Nothing else appears in tail position.
\end{itemize}
\end{definition}
%
\begin{definition}[Tail call]\label{def:tail-call}
An application term $V\,W$ is said to be a tail call if it appears
in tail position.
\end{definition}
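%
For example, in the body
$\Let\;x \revto f\,V\;\In\;g\,x$ of some $\lambda$-abstraction, the
application $g\,x$ is a tail call, whereas $f\,V$ is not: by
Definition~\ref{def:tail-comp} only the continuation of a let binding
appears in tail position.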
%
% The syntactic position of a tail call is often referred to as the
% \emph{tail-position}.
%
\subsection{Typing rules}
\label{sec:base-language-type-rules}
%
\begin{figure}
Values
\begin{mathpar}
% Variable
\inferrule*[Lab=\tylab{Var}]
{x : A \in \Gamma}
{\typv{\Delta;\Gamma}{x : A}}
% Abstraction
\inferrule*[Lab=\tylab{Lam}]
{\typ{\Delta;\Gamma, x : A}{M : C}}
{\typv{\Delta;\Gamma}{\lambda x^A .\, M : A \to C}}
% Polymorphic abstraction
\inferrule*[Lab=\tylab{PolyLam}]
{\typv{\Delta,\alpha : K;\Gamma}{M : C} \\
\alpha \notin \FTV(\Gamma)
}
{\typv{\Delta;\Gamma}{\Lambda \alpha^K .\, M : \forall \alpha^K . \,C}}
\\
% unit : ()
\inferrule*[Lab=\tylab{Unit}]
{ }
{\typv{\Delta;\Gamma}{\Record{} : \UnitType}}
% Extension
\inferrule*[Lab=\tylab{Extend}]
{ \typv{\Delta;\Gamma}{V : A} \\
\typv{\Delta;\Gamma}{W : \Record{\ell:\Abs;R}}
}
{\typv{\Delta;\Gamma}{\Record{\ell=V;W} : \Record{\ell:\Pre{A};R}}}
% Inject
\inferrule*[Lab=\tylab{Inject}]
{\typv{\Delta;\Gamma}{V : A}}
{\typv{\Delta;\Gamma}{(\ell~V)^R : [\ell : \Pre{A}; R]}}
\end{mathpar}
Computations
\begin{mathpar}
% Application
\inferrule*[Lab=\tylab{App}]
{\typv{\Delta;\Gamma}{V : A \to C} \\
\typv{\Delta;\Gamma}{W : A}
}
{\typ{\Delta;\Gamma}{V\,W : C}}
% Polymorphic application
\inferrule*[Lab=\tylab{PolyApp}]
{\typv{\Delta;\Gamma}{V : \forall \alpha^K . \, C} \\
\Delta \vdash A : K
}
{\typ{\Delta;\Gamma}{V\,A : C[A/\alpha]}}
% Split
\inferrule*[Lab=\tylab{Split}]
{\typv{\Delta;\Gamma}{V : \Record{\ell : \Pre{A};R}} \\\\
\typ{\Delta;\Gamma, x : A, y : \Record{\ell : \Abs; R}}{N : C}
}
{\typ{\Delta;\Gamma}{\Let \; \Record{\ell =x;y} = V\; \In \; N : C}}
% Case
\inferrule*[Lab=\tylab{Case}]
{ \typv{\Delta;\Gamma}{V : [\ell : \Pre{A};R]} \\\\
\typ{\Delta;\Gamma,x:A}{M : C} \\\\
\typ{\Delta;\Gamma,y:[\ell : \Abs;R]}{N : C}
}
{\typ{\Delta;\Gamma}{\Case \; V \{\ell\; x \mapsto M;y \mapsto N \} : C}}
% Absurd
\inferrule*[Lab=\tylab{Absurd}]
{\typv{\Delta;\Gamma}{V : []}}
{\typ{\Delta;\Gamma}{\Absurd^C \; V : C}}
% Return
\inferrule*[Lab=\tylab{Return}]
{\typv{\Delta;\Gamma}{V : A}}
{\typc{\Delta;\Gamma}{\Return \; V : A}{E}}
\\
% Let
\inferrule*[Lab=\tylab{Let}]
{\typc{\Delta;\Gamma}{M : A}{E} \\
\typ{\Delta;\Gamma, x : A}{N : C}
}
{\typ{\Delta;\Gamma}{\Let \; x \revto M\; \In \; N : C}}
\end{mathpar}
\caption{Typing rules}
\label{fig:base-language-type-rules}
\end{figure}
%
\paragraph{Typing values}
The typing rules for values and computations are given in
Figure~\ref{fig:base-language-type-rules}. The \tylab{Var} rule looks
up the type of a variable in the type environment. The \tylab{Lam}
rule states that an abstraction $(\lambda x^A.M)$ has the function
type $A \to C$ if its body $M$ has computation type $C$ under the
additional assumption $x : A$. The \tylab{PolyLam} rule states that a
type abstraction $(\Lambda \alpha^K. M)$ has
type $\forall \alpha^K.C$ if the computation $M$ has type $C$ assuming
$\alpha : K$ and $\alpha$ does not appear in the free type variables
of the current type environment $\Gamma$. The \tylab{Unit} rule provides
the basis for all records as it simply states that the empty record
has type unit. The \tylab{Extend} rule handles record
extension. Suppose we wish to extend some record $W$ with
$\ell = V$, that is $\Record{\ell = V; W}$. This extension has type
$\Record{\ell : \Pre{A};R}$ if and only if $V$ is well-typed and we
can ascribe $W : \Record{\ell : \Abs; R}$. Since
$\Record{\ell : \Abs; R}$ must be well-kinded with respect to
$\Delta$, the label $\ell$ cannot be mentioned in $W$, thus $\ell$
cannot occur more than once in the record. Similarly, the dual rule
\tylab{Inject} states that the injection $(\ell~V)^R$ has type
$[\ell : \Pre{A}; R]$ if the payload $V$ is well-typed. The implicit
well-kindedness condition on $R$ ensures that $\ell$ cannot be
injected twice. To illustrate how the kinding system prevents
duplicated labels, consider the following ill-typed example
%
\[
(\dec{S}~\Unit)^{\dec{S}:\UnitType} : [\dec{S}:\UnitType;\dec{S}:\UnitType].
\]
%
Typing fails because the resulting row type is ill-kinded by the
\klab{ExtendRow}-rule:
\begin{mathpar}
\inferrule*[leftskip=6.5em,Right={\klab{Variant}}]
{\inferrule*[Right={\klab{ExtendRow}}]
{\vdash \Pre{\UnitType} : \Presence \\
\inferrule*[Right={\klab{ExtendRow}}]
{\vdash \Pre{\UnitType} : \Presence \\ \vdash \cdot : \Row_{\color{red}{\{\dec{S}\} \uplus \{\dec{S}\}}}}
{\vdash \dec{S}:\Pre{\UnitType};\cdot : \Row_{\emptyset \uplus \{\dec{S}\}}}}
{\vdash \dec{S}:\Pre{\UnitType};\dec{S}:\Pre{\UnitType};\cdot : \Row_{\emptyset}}
}
{\vdash [\dec{S}:\Pre{\UnitType};\dec{S}:\Pre{\UnitType};\cdot] : \Type}
\end{mathpar}
%
The two sets $\{\dec{S}\}$ and $\{\dec{S}\}$ are clearly not disjoint,
thus the second premise of the last application of \klab{ExtendRow}
cannot be satisfied.
\paragraph{Typing computations}
The \tylab{App} rule states that an application $V\,W$ has computation
type $C$ if the function-term $V$ has type $A \to C$ and the
argument term $W$ has type $A$; that is, the argument type and the
domain type of the abstraction agree.
%
The type application rule \tylab{PolyApp} tells us that a type
application $V\,A$ is well-typed whenever the abstractor term $V$ has
the polymorphic type $\forall \alpha^K.C$ and the type $A$ has kind
$K$. This rule makes use of type substitution.
%
The \tylab{Split} rule handles typing of record deconstruction: a
split decomposes a record term $V$ at some label $\ell$, binding the
payload to $x$ and the remainder to $y$. The label we wish to split on
must be present with some type $A$, hence we require that
$V : \Record{\ell : \Pre{A}; R}$. This restriction prohibits us from
splitting on an absent or presence-polymorphic label. The
continuation of the splitting, $N$, must then have some computation
type $C$ subject to the following restriction: $N : C$ must be
well-typed under the additional assumptions $x : A$ and
$y : \Record{\ell : \Abs; R}$, statically ensuring that it is not
possible to split on $\ell$ again in the continuation $N$.
%
The \tylab{Case} rule is similar, but has two possible continuations:
the success continuation, $M$, and the fall-through continuation, $N$.
The label being matched must be present with some type $A$ in the type
of the scrutinee, thus we require $V : [\ell : \Pre{A};R]$. The
success continuation has some computation type $C$ under the
assumption that the binder $x : A$, whilst the fall-through
continuation also has type $C$, subject to the restriction that the
binder $y : [\ell : \Abs;R]$, which statically enforces that no
subsequent case split in $N$ can match on $\ell$.
%
The \tylab{Absurd} rule states that we can ascribe any computation type to
the term $\Absurd~V$ if $V$ has the empty type $[]$.
%
The trivial computation term is typed by the \tylab{Return} rule,
which says that $\Return\;V$ has computation type $A \eff E$ if the
value $V$ has type $A$.
%
The \tylab{Let} rule types let bindings. The computation being bound,
$M$, must have computation type $A \eff E$, whilst the continuation,
$N$, must have computation type $C$ subject to the additional assumption
that the binder $x : A$.
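%
To see how the value and computation rules fit together, the following
sketch derives a type for a let binding of the trivial unit
computation, at an arbitrary effect type $E$, using only rules from
Figure~\ref{fig:base-language-type-rules}:
%
\begin{mathpar}
  \inferrule*[Right={\tylab{Let}}]
  {\inferrule*[Right={\tylab{Return}}]
    {\inferrule*[Right={\tylab{Unit}}]
      { }
      {\typv{\Delta;\Gamma}{\Record{} : \UnitType}}}
    {\typc{\Delta;\Gamma}{\Return\;\Record{} : \UnitType}{E}} \\
   \inferrule*[Right={\tylab{Return}}]
    {\inferrule*[Right={\tylab{Var}}]
      {x : \UnitType \in \Gamma, x : \UnitType}
      {\typv{\Delta;\Gamma, x : \UnitType}{x : \UnitType}}}
    {\typc{\Delta;\Gamma, x : \UnitType}{\Return\;x : \UnitType}{E}}
  }
  {\typc{\Delta;\Gamma}{\Let\;x \revto \Return\;\Record{}\;\In\;\Return\;x : \UnitType}{E}}
\end{mathpar}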
\section{Dynamic semantics}
\label{sec:base-language-dynamic-semantics}
%
\begin{figure}
\begin{reductions}
\semlab{App} & (\lambda x^A . \, M) V &\reducesto& M[V/x] \\
\semlab{TyApp} & (\Lambda \alpha^K . \, M) A &\reducesto& M[A/\alpha] \\
\semlab{Split} & \Let \; \Record{\ell = x;y} = \Record{\ell = V;W} \; \In \; N &\reducesto& N[V/x,W/y] \\
\semlab{Case_1} &
\Case \; (\ell~V)^R \{ \ell \; x \mapsto M; y \mapsto N\} &\reducesto& M[V/x] \\
\semlab{Case_2} &
\Case \; (\ell~V)^R \{ \ell' \; x \mapsto M; y \mapsto N\} &\reducesto& N[(\ell~V)^R/y], \hfill\quad \text{if } \ell \neq \ell' \\
\semlab{Let} &
\Let \; x \revto \Return \; V \; \In \; N &\reducesto& N[V/x] \\
\semlab{Lift} &
\EC[M] &\reducesto& \EC[N], \hfill\quad \text{if } M \reducesto N \\
\end{reductions}
\begin{syntax}
\slab{Evaluation contexts} & \mathcal{E} \in \EvalCat &::=& [~] \mid \Let \; x \revto \mathcal{E} \; \In \; N
\end{syntax}
%
%\dhil{Describe evaluation contexts as functions: decompose and plug.}
%
%%\[
% Evaluation context lift
%% \inferrule*[Lab=\semlab{Lift}]
%% { M \reducesto N }
%% { \mathcal{E}[M] \reducesto \mathcal{E}[N]}
%% \]
\caption{Contextual small-step semantics}
\label{fig:base-language-small-step}
\end{figure}
%
In this section I will present the dynamic semantics of \BCalc{}. I
have opted to use a \citet{Felleisen87}-style contextual
small-step semantics, since, in conjunction with fine-grain
call-by-value (FGCBV), it yields a considerably simpler semantics than
the traditional structural operational semantics
(SOS)~\cite{Plotkin04a}: only the rule for let bindings admits
a continuation, whereas in ordinary call-by-value SOS each congruence
rule admits a continuation.
%
The simpler semantics comes at the expense of a more verbose syntax,
which is not a concern as one can readily convert between fine-grain
call-by-value and ordinary call-by-value.
The reduction semantics are based on a substitution model of
computation. Thus, before presenting the reduction rules, we define an
adequate substitution function. As usual we work up to
$\alpha$-conversion~\cite{Church32} of terms in $\BCalc{}$.
%
\paragraph{Term substitution}
We define a term substitution map,
$\sigma : (\VarCat \times \ValCat)^\ast$, as a list of pairs mapping a
variable to its value replacement. We denote a single mapping as $V/x$
meaning substitute $V$ for the variable $x$. We write multiple
mappings using list notation, i.e. $[V_0/x_0,\dots,V_n/x_n]$. The
domain of a substitution map is the set generated by projecting the first
component, i.e.
%
\[
\bl
    \dom : (\VarCat \times \ValCat)^\ast \to \VarCat\\
\dom(\sigma) \defas \{ x \mid (\_/x) \in \sigma \}
\el
\]
%
The application of a term substitution map to a term $t \in \TermCat$,
written $t\sigma$, is defined inductively on the term structure as
follows.
%
\[
\ba[t]{@{~}l@{~}c@{~}r}
\begin{eqs}
x\sigma &\defas& \begin{cases}
V & \text{if } (x, V) \in \sigma\\
x & \text{otherwise}
\end{cases}\\
(\lambda x^A.M)\sigma &\adef& \lambda x^A.M\sigma\\
(\Lambda \alpha^K.M)\sigma &\defas& \Lambda \alpha^K.M\sigma\\
(V~T)\sigma &\defas& V\sigma~T
\end{eqs}
&~~&
\begin{eqs}
(V~W)\sigma &\defas& V\sigma~W\sigma\\
\Unit\sigma &\defas& \Unit\\
\Record{\ell = V; W}\sigma &\defas& \Record{\ell = V\sigma;W\sigma}\\
(\ell~V)^R\sigma &\defas& (\ell~V\sigma)^R\\
\end{eqs}\bigskip\\
\multicolumn{3}{c}{
\begin{eqs}
(\Let\;\Record{\ell = x; y} = V\;\In\;N)\sigma &\adef& \Let\;\Record{\ell = x; y} = V\sigma\;\In\;N\sigma\\
(\Case\;(\ell~V)^R\{
\ell~x \mapsto M
; y \mapsto N \})\sigma
&\adef&
\Case\;(\ell~V\sigma)^R\{
\ell~x \mapsto M\sigma
; y \mapsto N\sigma \}\\
      (\Let\;x \revto M \;\In\;N)\sigma &\adef& \Let\;x \revto M\sigma \;\In\;N\sigma
\end{eqs}}
\ea
\]
%
The attentive reader will notice that I am using the same notation for
type and term substitutions. In fact, we shall go further and unify
the two notions of substitution by combining them. As such we may
think of a combined substitution map as a pair of a term substitution
map and a type substitution map, i.e.
$\sigma : (\VarCat \times \ValCat)^\ast \times (\TyVarCat \times
\TypeCat)^\ast$. The application of a combined substitution is mostly
the same as the application of a term substitution map, save for a
couple of equations in which we must apply the type substitution
component (written $\sigma.2$) to type annotations, and for type
abstraction, which may now require renaming the bound type variable:
%
\[
\bl
(\lambda x^A.M)\sigma \defas \lambda x^{A\sigma.2}.M\sigma, \qquad
(V~T)\sigma \defas V\sigma~T\sigma.2, \qquad
(\ell~V)^R\sigma \defas (\ell~V\sigma)^{R\sigma.2}\medskip\\
\begin{eqs}
(\Lambda \alpha^K.M)\sigma &\adef& \Lambda \alpha^K.M\sigma\\
(\Case\;(\ell~V)^R\{
\ell~x \mapsto M
; y \mapsto N \})\sigma
&\adef&
\Case\;(\ell~V\sigma)^{R\sigma.2}\{
\ell~x \mapsto M\sigma
; y \mapsto N\sigma \}.
\end{eqs}
\el
\]
%
% We shall go further and use the
% notation to mean simultaneous substitution of types and terms, that is
% we
% %
% We justify this choice by the fact that we can lift type substitution
% pointwise on the term syntax constructors, enabling us to use one
% uniform notation for substitution.
% %
% Thus we shall generally allow a mix
% of pairs of variables and values and pairs of type variables and types
% to occur in the same substitution map.
\paragraph{Reduction semantics}
The reduction relation $\reducesto \subseteq \CompCat \times \CompCat$
relates a computation term to another if the former can reduce to the
latter in a single step. Figure~\ref{fig:base-language-small-step}
depicts the reduction rules. The application rules \semlab{App} and
\semlab{TyApp} eliminate a lambda abstraction and a type abstraction, respectively,
by substituting the argument for the parameter in their body
computation $M$.
%
Record splitting is handled by the \semlab{Split} rule: splitting on
some label $\ell$ binds the payload $V$ to $x$ and the remainder $W$
to $y$ in the continuation $N$.
%
Disjunctive case splitting is handled by the two rules
\semlab{Case_1} and \semlab{Case_2}. The former rule handles the
success case, when the scrutinee's tag $\ell$ matches the tag of the
success clause; it binds the payload $V$ to $x$ and proceeds to
evaluate the continuation $M$. The latter rule handles the
fall-through case: here the scrutinee gets bound to $y$ and
evaluation proceeds with the continuation $N$.
%
The \semlab{Let} rule eliminates a trivial computation term
$\Return\;V$ by substituting $V$ for $x$ in the continuation $N$.
%
\paragraph{Evaluation contexts}
Recall from Section~\ref{sec:base-language-terms},
Figure~\ref{fig:base-language-term-syntax} that the syntax of let
bindings allows a general computation term $M$ to occur on the right
hand side of the binding, i.e. $\Let\;x \revto M \;\In\;N$. Thus we
are seemingly stuck in the general case, as the \semlab{Let} rule only
applies if the right hand side is a trivial computation.
%
However, it is at this stage we make use of the notion of
\emph{evaluation contexts} due to \citet{Felleisen87}. An evaluation
context is a syntactic construction which decomposes the dynamic
semantics into a set of base rules (i.e. the rules presented thus far)
and an inductive rule, which enables us to focus on a particular
computation term, $M$, in some larger context, $\EC$, and reduce it in
said context to another computation $N$ if $M$ reduces outside of
the context to that particular $N$. In our formalism, we call this
rule \semlab{Lift}. Evaluation contexts are generated from the empty
context ($[~]$) and let expressions ($\Let\;x \revto \EC \;\In\;N$).
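%
For example, the following reduction sequence first applies
\semlab{Lift} with the context
$\EC = \Let\;x \revto [~]\;\In\;\Return\;x$ in order to $\beta$-reduce
the let-bound application, and subsequently applies \semlab{Let}:
\[
  \Let\;x \revto (\lambda y^A.\Return\;y)\,V\;\In\;\Return\;x
  \reducesto
  \Let\;x \revto \Return\;V\;\In\;\Return\;x
  \reducesto
  \Return\;V.
\]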
The choices of using fine-grain call-by-value and evaluation contexts
may seem odd, if not arbitrary at this point; the reader may wonder
with good reason why we elect to use fine-grain call-by-value over
ordinary call-by-value. In Chapter~\ref{ch:unary-handlers} we will
reap the benefits from our design choices, as we shall see that the
combination of fine-grain call-by-value and evaluation contexts
provides the basis for a convenient, simple semantic framework for
working with continuations.
\paragraph{Syntactic sugar}
We will adopt a few conventions to make the notation more convenient
for writing out examples. First, we elide type annotations when they
are clear from the context.
%
We will often write code in direct-style assuming the standard
left-to-right call-by-value elaboration into fine-grain
call-by-value~\citep{Moggi91, FlanaganSDF93}.
%
For example, the expression $f\,(h\,w) + g\,\Unit$ is syntactic sugar
for:
%
{
\[
\ba[t]{@{~}l}
\Let\; x \revto h\,w \;\In\;
\Let\; y \revto f\,x \;\In\;
\Let\; z \revto g\,\Unit \;\In\;
y + z
\ea
\]}%
%
We define sequencing of computations in the standard way.
%
{
\[
    M;N \defas \Let\;x \revto M \;\In\;N, \quad \text{where $x \notin \FV(N)$}
\]}%
%
We make use of standard syntactic sugar for pattern matching. For
instance, we write
%
{
\[
    \lambda\Unit.M \defas \lambda x^{\UnitType}.M, \quad \text{where $x \notin \FV(M)$}
\]}%
%
for suspended computations. We encode booleans using variants:
\begin{mathpar}
\Bool \defas [\dec{True}:\UnitType;\dec{False}:\UnitType]
\True \defas \dec{True}\,\Unit
\False \defas \dec{False}\,\Unit
\If\;V\;\Then\;M\;\Else\;N \defas \Case\;V\;\{\dec{True}~\Unit \mapsto M; \dec{False}~\Unit \mapsto N\}
\end{mathpar}%
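With this sugar, $\If\;\True\;\Then\;M\;\Else\;N$ unfolds to
$\Case\;(\dec{True}~\Unit)\,\{\dec{True}~\Unit \mapsto M;
\dec{False}~\Unit \mapsto N\}$, which reduces to $M$ by a single
application of \semlab{Case_1}.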
\section{Metatheoretic properties of \BCalc{}}
\label{sec:base-language-metatheory}
Thus far we have defined the syntax, static semantics, and dynamic
semantics of \BCalc{}. In this section, we state and prove some
customary metatheoretic properties about \BCalc{}.
%
We begin by showing that substitutions preserve typeability.
%
\begin{lemma}[Preservation of typing under substitution]\label{lem:base-language-subst}
  Let $\sigma$ be any substitution, $V \in \ValCat$ any
  value, and $M \in \CompCat$ any computation such that
$\typ{\Delta;\Gamma}{V : A}$ and $\typ{\Delta;\Gamma}{M : C}$, then
$\typ{\Delta;\Gamma\sigma}{V\sigma : A\sigma}$ and
$\typ{\Delta;\Gamma\sigma}{M\sigma : C\sigma}$.
\end{lemma}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
% \dhil{It is clear to me at this point, that I want to coalesce the
% substitution functions. Possibly define them as maps rather than ordinary functions.}
The reduction semantics satisfy a \emph{unique decomposition}
property, which guarantees the existence and uniqueness of a
decomposition of an arbitrary (non-stuck) computation term into an
evaluation context and a redex.
%
\begin{lemma}[Unique decomposition]\label{lem:base-language-uniq-decomp}
For any computation $M \in \CompCat$ it holds that $M$ is either
stuck or there exists a unique evaluation context $\EC \in \EvalCat$
and a redex $N \in \CompCat$ such that $M = \EC[N]$.
\end{lemma}
%
\begin{proof}
By structural induction on $M$.
\begin{description}
\item[Base step] $M = N$ where $N$ is either $\Return\;V$,
$\Absurd^C\;V$, $V\,W$, or $V\,T$. In each case take $\EC = [\,]$
such that $M = \EC[N]$.
\item[Inductive step]
%
There are several cases to consider. In each case we must find an
evaluation context $\EC$ and a computation term $M'$ such that
$M = \EC[M']$.
\begin{itemize}
\item[Case] $M = \Let\;\Record{\ell = x; y} = V\;\In\;N$: Take $\EC = [\,]$ such that $M = \EC[\Let\;\Record{\ell = x; y} = V\;\In\;N]$.
\item[Case] $M = \Case\;V\,\{\ell\,x \mapsto M'; y \mapsto N\}$:
Take $\EC = [\,]$ such that
$M = \EC[\Case\;V\,\{\ell\,x \mapsto M'; y \mapsto N\}]$.
\item[Case] $M = \Let\;x \revto M' \;\In\;N$: By the induction
hypothesis it follows that $M'$ is either stuck or it
decomposes (uniquely) into an evaluation context $\EC'$ and a
redex $N'$. If $M'$ is stuck, then so is $M$. Otherwise take
$\EC = \Let\;x \revto \EC'\;\In\;N$ such that $M = \EC[N']$.
\end{itemize}
\end{description}
\end{proof}
%
The calculus satisfies the standard \emph{progress} property, which
states that \emph{every} closed, well-typed computation term $M$
either is a trivial computation term $\Return\;V$ for some value $V$,
or there exists some $N$ such that $M \reducesto N$.
%
\begin{definition}[Computation normal form]\label{def:base-language-comp-normal}
A computation $M \in \CompCat$ is said to be \emph{normal} if it is
of the form $\Return\; V$ for some value $V \in \ValCat$.
\end{definition}
%
\begin{theorem}[Progress]\label{thm:base-language-progress}
  Suppose $\typ{}{M : C}$, then $M$ is normal or there exists
  $\typ{}{N : C}$ such that $M \reducesto N$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
% \begin{corollary}
% Every closed computation term in \BCalc{} is terminating.
% \end{corollary}
%
The calculus also satisfies the \emph{subject reduction} property,
which states that if some computation $M$ is well typed and reduces to
some other computation $M'$, then $M'$ is also well typed.
%
\begin{theorem}[Subject reduction]\label{thm:base-language-preservation}
Suppose $\typ{\Delta;\Gamma}{M : C}$ and $M \reducesto N$, then
$\typ{\Delta;\Gamma}{N : C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
Moreover, the calculus, as specified, is \emph{strongly normalising},
meaning that every closed computation term reduces to a trivial
computation term. In other words, any realisable function in $\BCalc$
is effect-free and total.
%
\begin{claim}[Type soundness]\label{clm:soundness}
If $\typ{}{M : C}$, then there exists $\typ{}{N : C}$ such that
$M \reducesto^\ast N \not\reducesto$ and $N$ is normal.
\end{claim}
%
We will not prove this claim here as the required proof gadgetry is
rather involved~\cite{LindleyS05}, and we will soon dispense with this
property.
\section{A primitive effect: recursion}
\label{sec:base-language-recursion}
%
As $\BCalc$ is (claimed to be) strongly normalising it provides a
solid and minimal basis for studying the expressiveness of any
extension, and in particular, which primitive effects any such
extension may sneak into the calculus.
However, we cannot write many (if any) interesting programs in
\BCalc{}. The calculus is simply not expressive enough. In order to
bring the calculus closer to the ML-family of languages we endow the
calculus with a fixpoint operator which introduces recursion as a
primitive effect. % We dub the resulting calculus \BCalcRec{}.
%
First we augment the syntactic category of values with a new
abstraction form for recursive functions.
%
\begin{syntax}
& V,W \in \ValCat &::=& \cdots \mid \Rec \; f^{A \to C} \, x.M
\end{syntax}
%
The $\Rec$ construct binds the function name $f$ and its argument $x$
in the body $M$. Typing of recursive functions is standard.
%
\begin{mathpar}
\inferrule*[Lab=\tylab{Rec}]
{\typ{\Delta;\Gamma,f : A \to C, x : A}{M : C}}
{\typ{\Delta;\Gamma}{(\Rec \; f^{A \to C} \, x . M) : A \to C}}
\end{mathpar}
%
The reduction semantics are also standard.
%
\begin{reductions}
\semlab{Rec} &
(\Rec \; f^{A \to C} \, x.M)\,V &\reducesto& M[(\Rec \; f^{A \to C}\,x.M)/f, V/x]
\end{reductions}
%
Every occurrence of $f$ in $M$ is replaced by the recursive abstractor
term, while every $x$ in $M$ is replaced by the value argument $V$.
The introduction of recursion means that the claimed type soundness
theorem (Claim~\ref{clm:soundness}) no longer holds as some programs
may now diverge.
\subsection{Tracking divergence via the effect system}
\label{sec:tracking-div}
%
With the $\Rec$-operator in place we can now implement the
factorial function.
%
\[
\bl
\dec{fac} : \Int \to \Int \eff \emptyset\\
\dec{fac} \defas \Rec\;f~n.
\ba[t]{@{}l}
\Let\;is\_zero \revto n = 0\;\In\\
\If\;is\_zero\;\Then\; \Return\;1\\
\Else\;\ba[t]{@{~}l}
\Let\; n' \revto n - 1 \;\In\\
\Let\; m \revto f~n' \;\In\\
n * m
\ea
\ea
\el
\]
%
The $\dec{fac}$ function computes $n!$ for any non-negative integer
$n$. If $n$ is negative then $\dec{fac}$ diverges as the function will
repeatedly select the $\Else$-branch in the conditional
expression. Thus this function is not total on its domain. Yet the
effect signature does not alert us about the potential divergence. In
fact, in this particular instance the effect row on the computation
type is empty, which might deceive the doctrinaire to think that this
function is `pure'. Whether this function is pure or impure depends on
the precise notion of purity -- which we have yet to choose. Shortly
we shall make clear the notion of purity that we have in mind; however,
first let us briefly illustrate how we might utilise the effect system
to track divergence.
The key to tracking divergence is to modify the \tylab{Rec} rule to inject
some primitive operation into the effect row.
%
\begin{mathpar}
\inferrule*[Lab=$\tylab{Rec}^\ast$]
{\typ{\Delta;\Gamma,f : A \to B\eff\{\dec{Div}:\ZeroType\}, x : A}{M : B\eff\{\dec{Div}:\ZeroType\}}}
{\typ{\Delta;\Gamma}{(\Rec \; f^{A \to B\eff\{\dec{Div}:\ZeroType\}} \, x .M) : A \to B\eff\{\dec{Div}:\ZeroType\}}}
\end{mathpar}
%
In this typing rule we have chosen to inject an operation named
$\dec{Div}$ into the effect row of the computation type on the
recursive binder $f$. The operation is primitive, because it can never
be directly invoked, rather, it occurs as a side-effect of applying a
$\Rec$-definition (this is also why we ascribe it type $\ZeroType$,
though, we may use whatever type we please).
%
Using this typing rule we get that
$\dec{fac} : \Int \to \Int \eff \{\dec{Div}:\ZeroType\}$. Consequently,
every application-site of $\dec{fac}$ must now permit the $\dec{Div}$
operation in order to type check.
% \begin{example}
To illustrate effect tracking in action, consider the following
suspended computation.
%
\[
\bl
\lambda\Unit. \dec{fac}~3
\el
\]
%
The computation calculates $3!$ when forced.
%
The typing derivation for this computation illustrates how the
application of $\dec{fac}$ causes its effect row to be propagated
outwards. Let
$\Gamma = \{\dec{fac} : \Int \to \Int \eff
\{\dec{Div}:\ZeroType\}\}$.
%
\begin{mathpar}
\inferrule*[Right={\tylab{Lam}}]
{\inferrule*[Right={\tylab{App}}]
{\typ{\emptyset;\Gamma,\Unit:\UnitType}{\dec{fac} : \Int \to \Int \eff \{\dec{Div}:\ZeroType\}}\\
\typ{\emptyset;\Gamma,\Unit:\UnitType}{3 : \Int}
}
{\typc{\emptyset;\Gamma,\Unit:\UnitType}{\dec{fac}~3 : \Int}{\{\dec{Div}:\ZeroType\}}}
}
{\typ{\emptyset;\Gamma}{\lambda\Unit.\dec{fac}~3} : \UnitType \to \Int \eff \{\dec{Div}:\ZeroType\}}
\end{mathpar}
%
The information that the computation applies a possibly divergent
function internally gets reflected externally in its effect
signature.\medskip
%
A possible inconvenience of the current formulation of
$\tylab{Rec}^\ast$ is that recursion cannot be mixed with other
computational effects, the reason being that the effect row on
$A \to B\eff \{\dec{Div}:\ZeroType\}$ is closed. Thus in a practical
general-purpose programming language implementation it is likely to be
more convenient to leave the tail of the effect row open so as to allow
recursion to be used in larger effect contexts. The rule formulation
is also rather coarse as it renders every $\Rec$-definition as
possibly divergent -- even definitions that are obviously
non-divergent such as the $\Rec$-variation of the identity function:
$\Rec\;f\,x.x$. A practical implementation could utilise a static
termination checker~\cite{Walther94} to obtain more fine-grained
tracking of divergence.
The question remains as to whether we should track divergence. In this
dissertation I choose not to track divergence in the effect
system. This choice is a slight departure from the real implementation
of Links~\cite{LindleyC12}. However, the focus of this part of the
dissertation is on programming with \emph{user-definable computational
effects}. Recursion is not a user-definable effect in our setting,
and therefore, we may regard divergence information as adding noise to
effect signatures. This choice ought not to alienate programmers as it
aligns with, say, the notion of purity employed by
Haskell~\cite{JonesABBBFHHHHJJLMPRRW99,Sabry98}.
% A
% solemnly sworn pure functional programmer might respond with a
% resounding ``\emph{yes!}'', whilst a pragmatic functional programmer
% might respond ``\emph{it depends}''. I side with the latter. My take
% is that, on one hand if we find ourselves developing safety-critical
% systems, a static approximation of the dynamic behaviour of functions
% can be useful, and thus, we ought to track divergence (and other
% behaviours). On the other hand, if we find ourselves doing general
% purpose programming, then it may be situational useful to know whether
% some function may exhibit divergent behaviour
% By fairly lightweight means we can obtain a finer analysis of
% $\Rec$-definitions by simply having an additional typing rule for
% the application of $\Rec$.
% %
% \begin{mathpar}
% \inferrule*[lab=$\tylab{AppRec}^\ast$]
% { E' = \{\dec{Div}:\ZeroType\} \uplus E\\
% \typ{\Delta}{E'}\\\\
% \typ{\Delta;\Gamma}{\Rec\;f^{A \to B \eff E}\,x.M : A \to B \eff E}\\
% \typ{\Delta;\Gamma}{W : A}
% }
% {\typ{\Delta;\Gamma}{(\Rec\;f^{A \to B \eff E}\,x.M)\,W : B \eff E'}}
% \end{mathpar}
% %
% \subsection{Notions of purity}
% \label{sec:notions-of-purity}
% The term `pure' is heavily overloaded in the programming literature.
% %
% \dhil{In this thesis we use the Haskell notion of purity.}
\section{Related work}
\paragraph{Row polymorphism} Row polymorphism was originally
introduced by \citet{Wand87} as a typing discipline for extensible
records. The style of row polymorphism used in this chapter is due to
\citet{Remy94}. It was designed to work well with type inference as
typically featured in the ML-family of programming
languages. \citeauthor{Remy94} also describes a slight variation of
this system, in which presence annotations may depend on a concrete
type, e.g. $\ell : \theta.\Int;R$ means that the label is
polymorphic in its presence; however, if it is present, then it has
type $\Int$.
Both of \citeauthor{Remy94}'s row systems have set semantics, i.e. a
row cannot contain duplicate labels. An alternative semantics based
on dictionaries is used by \citet{Leijen05}. In
\citeauthor{Leijen05}'s system labels may be duplicated, which
introduces a form of scoping for labels and makes it possible, for
example, to shadow fields in a record. There is no notion of presence
information in \citeauthor{Leijen05}'s system; as a result,
\citeauthor{Leijen05}-style rows simplify the overall type structure.
\citet{Leijen14} has used this system as the basis for the effect
system of Koka.
\citet{MorrisM19} have developed a unifying theory of rows, which
collects the aforementioned row systems under one umbrella. Their
system provides a general account of record extension and projection,
and dually, variant injection and branching.
\paragraph{Effect tracking} As mentioned in
Section~\ref{sec:back-to-directstyle} the original effect system was
developed by \citet{LucassenG88} to provide a lightweight facility for
static concurrency analysis. Since then effect systems have been
employed to perform a variety of static analyses,
e.g. \citet{TofteT94,TofteT97} describe a region-based memory
management system that makes use of a type and effect system to infer
and track lifetimes of regions; \citet{BentonK99} use a monadic effect
system to identify opportunities for optimisations in the intermediate
language of their ML to Java compiler;
%
and \citet{LindleyC12} use a variation of the row system presented in
this chapter to support abstraction and predictable code generation for
database programming in Links. Row types are used to give structural
types to SQL rows in queries, whilst their effect system is used to
differentiate between \emph{tame} and \emph{wild} functions, where a
tame function is one whose body can be translated and run directly on
the database, whereas a wild function cannot.
\section{Effect handler calculi}
\label{ch:unary-handlers}
%
Programming with effect handlers is a dichotomy of \emph{performing}
and \emph{handling} effectful operations --- or alternatively a
dichotomy of \emph{constructing} and \emph{deconstructing} effects. An
operation is a constructor of an effect. By itself an operation has no
predefined semantics. A handler deconstructs an effect by
pattern-matching on its operations. By matching on a particular
operation, a handler instantiates the operation with a particular
semantics of its own choosing. The key ingredient to make this work in
practice is \emph{delimited control}. Performing an operation reifies
the remainder of the computation up to the nearest enclosing handler
of the operation as a continuation. This continuation is exposed to
the programmer via the handler as a first-class value, and thus, it
may be invoked, discarded, or stored for later use at the discretion
of the programmer.
Effect handlers provide a structured and modular interface for
programming with delimited control. They are structured in the sense
that the invocation site of an operation is decoupled from the use
site of its continuation. A handler consists of a collection of
operation clauses, one for each operation it handles. Effect handlers
are modular in that a handler only captures and exposes continuations
for the operations it handles; other operation invocations pass
seamlessly through the handler so that they can be handled
by another suitable handler. This allows modular construction of
effectful programs, where multiple handlers can be composed to fully
interpret the effect signature of the whole program.
% There exists multiple flavours of effect handlers. The handlers
% introduced by \citet{PlotkinP09} are known as \emph{deep} handlers,
% and they are semantically defined as folds over computation
% trees. Dually, \emph{shallow} handlers are defined as case-splits over
% computation trees.
%
The purpose of this chapter is to augment the base calculus \BCalc{}
with effect handlers, and demonstrate their practical versatility by
way of a programming case study. The primary focus is on so-called
\emph{deep} and \emph{shallow} variants of handlers. In
Section~\ref{sec:unary-deep-handlers} we endow \BCalc{} with deep
handlers, which we put to use in
Section~\ref{sec:deep-handlers-in-action} where we implement a
\UNIX{}-style operating system. In
Section~\ref{sec:unary-shallow-handlers} we extend \BCalc{} with shallow
handlers, and subsequently we use them to extend the functionality of
the operating system example. Finally, in
Section~\ref{sec:unary-parameterised-handlers} we will look at
\emph{parameterised} handlers, which are a refinement of ordinary deep
handlers.
From here onwards I will make a slight change of terminology to
disambiguate programmatic continuations, i.e. continuations exposed to
the programmer, from continuations in continuation passing style
(Chapter~\ref{ch:cps}) and continuations in abstract machines
(Chapter~\ref{ch:abstract-machine}). In the remainder of this
dissertation I refer to programmatic continuations as `resumptions',
and reserve the term `continuation' for continuations concerning
implementation details.
\paragraph{Relation to prior work} The deep and shallow handler
calculi that are introduced in Section~\ref{sec:unary-deep-handlers},
Section~\ref{sec:unary-shallow-handlers}, and
Section~\ref{sec:unary-parameterised-handlers} are adapted with minor
syntactic changes from the following work.
%
\begin{enumerate}[i]
\item \bibentry{HillerstromL16}
\item \bibentry{HillerstromL18} \label{en:sec-handlers-L18}
\item \bibentry{HillerstromLA20} \label{en:sec-handlers-HLA20}
\end{enumerate}
%
The `pipes' example in Section~\ref{sec:unary-shallow-handlers}
appears in items \ref{en:sec-handlers-L18} and
\ref{en:sec-handlers-HLA20} above.
\section{Deep handlers}
\label{sec:unary-deep-handlers}
%
As our starting point we take the regular base calculus, \BCalc{},
without the recursion operator and extend it with deep handlers to
yield the calculus \HCalc{}. We elect to do so because deep handlers
do not require the power of an explicit fixpoint operator to be a
practical programming abstraction. Building \HCalc{} on top of
\BCalc{} with the recursion operator requires no change in semantics.
%
Deep handlers~\cite{PlotkinP09,Pretnar10} are defined by folds
(specifically \emph{catamorphisms}~\cite{MeijerFP91}) over computation
trees, meaning they provide a uniform semantics to the handled
operations of a given computation. In contrast, shallow handlers are
defined as case-splits over computation trees, and thus, allow a
nonuniform semantics to be given to operations. We will discuss this
point in more detail in Section~\ref{sec:unary-shallow-handlers}.
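%
To make the fold/case-split distinction concrete, the following is a
minimal, hedged OCaml sketch (the names and the restriction of
operation payloads and replies to integers are illustrative, not part
of the calculus): computation trees are modelled as a data type, and a
deep handler is precisely the catamorphism over such trees.
%
\begin{verbatim}
(* Computation trees: Return is a leaf carrying a final value; Do
   pairs an operation name with a resumption mapping each possible
   reply to a subtree. *)
type ('a, 'op) comp =
  | Return of 'a
  | Do of 'op * (int -> ('a, 'op) comp)

(* A deep handler is a fold: the recursive call re-applies the same
   handler inside the resumption, giving a uniform interpretation. *)
let rec handle_deep (ret : 'a -> 'b) (op : 'op -> (int -> 'b) -> 'b)
  : ('a, 'op) comp -> 'b = function
  | Return v -> ret v
  | Do (l, k) -> op l (fun x -> handle_deep ret op (k x))
\end{verbatim}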
\subsection{Performing effectful operations}
\label{sec:eff-language-perform}
An effectful operation is a purely syntactic construction, which has
no predefined dynamic semantics. In our calculus effectful operations
are a computational phenomenon, and thus, their introduction form is a
computation term. To type operation we augment the syntactic category
of value types with a new arrow.
%
\begin{syntax}
\slab{Value\textrm{ }types} &A,B \in \ValTypeCat &::=& \cdots \mid A \opto B\\
\slab{Computations} &M,N \in \CompCat &::=& \cdots \mid (\Do \; \ell~V)^E
\end{syntax}
%
The operation arrow, $\opto$, denotes the operation space. The
operation space arrow is similar to the function space arrow in that
the type $A$ denotes the domain type of the operation, i.e. the type
of the operation payload, and the codomain type $B$ denotes the return
type of the operation. Contrary to the function space constructor,
$\to$, the operation space constructor does not have an associated
effect row. As we will see later, the reason that the operation space
constructor does not have an effect row is that the effects of an
operation are conferred by its handler.
The intended behaviour of the new computation term $(\Do\; \ell~V)^E$
is that it performs some operation $\ell$ with value argument
$V$. Thus the $\Do$-construct is similar to the typical
exception-signalling $\keyw{throw}$ or $\keyw{raise}$ constructs found
in programming languages with support for exceptions. Indeed,
operationally an effectful operation may be thought of as a resumable
exception~\cite{Leijen17}. The term is annotated
with an effect row $E$, providing a way to make the current effect
context accessible during typing.
%
\begin{mathpar}
\inferrule*[Lab=\tylab{Do}]
{ \typ{\Delta}{E} \\
E = \{\ell : A \opto B; R\} \\
\typ{\Delta;\Gamma}{V : A}
}
{\typc{\Delta;\Gamma}{(\Do \; \ell \; V)^E : B}{E}}
\end{mathpar}
%
An operation invocation is only well-typed if the effect row $E$ is
well-kinded and mentions the operation with a present type, or put
differently: the current effect context must permit an instance of the
operation to occur. The argument type $A$ must be the same as the
domain type of the operation. The type of the whole term is the
(value) return type of the operation paired with the current effect
context.
We now have the basic machinery for writing effectful programs,
although we cannot evaluate those programs without handlers to ascribe
a semantics to the operations.
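%
For a concrete point of comparison, OCaml 5's native effect system
exhibits the same separation: declaring an operation gives it no
semantics of its own, and performing it is well-typed even in the
absence of a handler. A minimal, hedged sketch (the operation name
\texttt{Get} is illustrative):
%
\begin{verbatim}
(* OCaml 5's Effect module provides native operations. This
   declaration introduces an operation Get that carries no payload
   and returns an int, but it has no predefined semantics. *)
open Effect

type _ Effect.t += Get : int Effect.t

(* Well-typed, but stuck without an enclosing handler: evaluating
   stuck () raises Effect.Unhandled at runtime. *)
let stuck () = perform Get + 1
\end{verbatim}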
% \paragraph{Computation trees} We have the basic machinery for writing
% effectful programs, albeit we cannot evaluate those programs without
% handlers to ascribe a semantics to the operations.
% %
% For instance consider the signature of computations over some state at
% type $\beta$.
% %
% \[
% \State~\beta \defas \{\Get : \UnitType \opto \beta; \Put : \beta \opto \UnitType\}
% \]
% %
% \[
% \bl
% \dec{condupd} : \Record{\Bool;\beta} \to \beta \eff \State~\beta\\
% \dec{condupd}~\Record{b;v} \defas
% \bl
% \Let\; x \revto \Do\;\Get\,\Unit\;\In\\
% \If\;b\;\Then\;\;\Do\;\Put~v;\,x\;
% \Else\;x
% \el
% \el
% \]
% %
% This exact notation is due to \citet{Lindley14}, though the idea of
% using computation trees dates back to at least
% \citet{Kleene59,Kleene63}.
% \dhil{Introduce notation for computation trees.}
\subsection{Handling of effectful operations}
%
The elimination form for an effectful operation is an effect
handler. Effect handlers interpret the effectful segments of a
program.
%
The addition of handlers requires us to extend the type algebra of
$\BCalc$ with a kind for handlers and a new syntactic category for
handler types.
%
\begin{syntax}
\slab{Kinds} &K \in \KindCat &::=& \cdots \mid \Handler\\
\slab{Handler\textrm{ }types} &F \in \HandlerTypeCat &::=& C \Harrow D\\
\slab{Types} &T \in \TypeCat &::=& \cdots \mid F
\end{syntax}
%
The syntactic category of kinds is augmented with the kind $\Handler$
which we will ascribe to handler types $F$. The arrow, $\Harrow$,
denotes the handler space. The type structure suggests that a handler
is a transformer of computations, since by looking solely at the types
a handler takes a computation of type $C$ and returns another
computation of type $D$. As such, we may think of a handler as a sort
of generalised function that works over computations rather than bare
values (this observation is exploited in the \Frank{} programming
language, where a function is but a special case of a
handler~\cite{LindleyMM17,ConventLMM20}).
%
The following kinding rule checks whether a handler type is
well-kinded.
%
\begin{mathpar}
\inferrule*[Lab=\klab{Handler}]
{ \Delta \vdash C : \Comp \\
\Delta \vdash D : \Comp
}
{\Delta \vdash C \Harrow D : \Handler}
\end{mathpar}
%
With the type structure in place, we can move on to the term syntax
for handlers. Handlers extend the syntactic category of computations
with a new computation form as well as introducing a new syntactic
category of handler definitions.
%
\begin{syntax}
\slab{Computations} &M,N \in \CompCat &::=& \cdots \mid \Handle \; M \; \With \; H\\[1ex]
\slab{Handlers} &H \in \HandlerCat &::=& \{ \Return \; x \mapsto M \}
\mid \{ \OpCase{\ell}{p}{r} \mapsto N \} \uplus H\\
\slab{Terms} &t \in \TermCat &::=& \cdots \mid H
\end{syntax}
%
The handle construct $(\Handle \; M \; \With \; H)$ is the counterpart
to $\Do$. It runs computation $M$ using handler $H$. A handler $H$
consists of a return clause $\{\Return \; x \mapsto M\}$ and a
possibly empty set of operation clauses
$\{\OpCase{\ell}{p_\ell}{r_\ell} \mapsto N_\ell\}_{\ell \in \mathcal{L}}$.
%
The return clause $\{\Return \; x \mapsto M\}$ defines how to
interpret the final return value of a handled computation, i.e. a
computation that has been fully reduced to $\Return~V$ for some value
$V$. The variable $x$ is bound to the final return value in the body
$M$.
%
Each operation clause
$\{\OpCase{\ell}{p_\ell}{r_\ell} \mapsto N_\ell\}_{\ell \in
\mathcal{L}}$ defines how to interpret an invocation of some
operation $\ell$. The variables $p_\ell$ and $r_\ell$ are bound in the
body $N_\ell$. The binding occurrence $p_\ell$ binds the payload of
the operation and $r_\ell$ binds the resumption of the operation
invocation, which is the delimited continuation from the invocation
site of $\ell$ up to and including the enclosing handler.
Given a handler $H$, we often wish to refer to the clause for a
particular operation or the return clause; for these purposes we
define two convenient projections on handlers in the metalanguage.
\[
\ba{@{~}r@{~}c@{~}l@{~}l}
\hell &\defas& \{\OpCase{\ell}{p}{r} \mapsto N \}, &\quad \text{where } \{\OpCase{\ell}{p}{r} \mapsto N \} \in H\\
\hret &\defas& \{\Return\; x \mapsto N \}, &\quad \text{where } \{\Return\; x \mapsto N \} \in H\\
\ea
\]
%
The $\hell$ projection yields the singleton set consisting of the
operation clause in $H$ that handles the operation $\ell$, whilst
$\hret$ yields the singleton set containing the return clause of $H$.
%
We define the \emph{domain} of a handler as the set of operation
labels it handles, i.e.
%
\begin{equations}
\dom &:& \HandlerCat \to \LabelCat\\
\dom(\{\Return\;x \mapsto M\}) &\defas& \emptyset\\
\dom(\{\OpCase{\ell}{p}{r} \mapsto M\} \uplus H) &\defas& \{\ell\} \cup \dom(H)
\end{equations}
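%
It may be helpful to see these projections rendered as ordinary code.
The following hedged OCaml sketch (all type and function names are
illustrative, and payloads, replies, and results are collapsed to a
single type for simplicity) represents a handler definition as a
return clause paired with an association list of operation clauses;
$\hret$, $\hell$, and $\dom$ then become simple list operations.
%
\begin{verbatim}
(* An operation clause body, abstracted over payload and resumption. *)
type 'v clause = 'v -> ('v -> 'v) -> 'v

type 'v handler = {
  ret : 'v -> 'v;                        (* the return clause     *)
  ops : (string * 'v clause) list;       (* the operation clauses *)
}

let h_ret (h : 'v handler) : 'v -> 'v = h.ret
let h_op (h : 'v handler) (l : string) : 'v clause = List.assoc l h.ops
let dom (h : 'v handler) : string list = List.map fst h.ops
\end{verbatim}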
\subsection{Static semantics}
There are two typing rules for handlers. The first rule type checks
the $\Handle$-construct and the second rule type checks handler
definitions.
%
\begin{mathpar}
\inferrule*[Lab=\tylab{Handle}]
{
\typ{\Gamma}{M : C} \\
\typ{\Gamma}{H : C \Harrow D}
}
{\Gamma \vdash \Handle \; M \; \With\; H : D}
%\mprset{flushleft}
\inferrule*[Lab=\tylab{Handler}]
{{\bl
C = A \eff \{(\ell_i : A_i \opto B_i)_i; R\} \\
D = B \eff \{(\ell_i : P_i)_i; R\}\\
H = \{\Return\;x \mapsto M\} \uplus \{ \OpCase{\ell_i}{p_i}{r_i} \mapsto N_i \}_i
\el}\\\\
\typ{\Delta;\Gamma, x : A}{M : D}\\\\
[\typ{\Delta;\Gamma,p_i : A_i, r_i : B_i \to D}{N_i : D}]_i
}
{\typ{\Delta;\Gamma}{H : C \Harrow D}}
\end{mathpar}
%
The \tylab{Handle} rule is simply the application rule for handlers.
%
The \tylab{Handler} rule is where most of the work happens. The effect
rows on the input computation type $C$ and the output computation type
$D$ must mention every operation in the domain of the handler. In the
output row those operations may be either present ($\Pre{A}$), absent
($\Abs$), or polymorphic in their presence ($\theta$), whilst in the
input row they must be mentioned with a present type as those types
are used to type operation clauses.
%
In each operation clause the resumption $r_i$ must have the same
return type, $D$, as its handler. In the return clause the binder $x$
is ascribed the value type $A$ carried by the input computation type
$C$.
\subsection{Dynamic semantics}
We augment the operational semantics with two new reduction rules: one
for handling return values and another for handling operations.
%{\small{
\begin{reductions}
\semlab{Ret} &
\Handle \; (\Return \; V) \; \With \; H &\reducesto& N[V/x], \hfill\text{where } \hret = \{ \Return \; x \mapsto N \} \\
\semlab{Op} &
\Handle \; \EC[\Do \; \ell \, V] \; \With \; H
&\reducesto& N[V/p, \lambda y . \, \Handle \; \EC[\Return \; y] \; \With \; H/r], \\
\multicolumn{4}{@{}r@{}}{
\hfill\ba[t]{@{~}r@{~}l}
\text{where}& \hell = \{ \OpCase{\ell}{p}{r} \mapsto N \}\\
\text{and} & \ell \notin \BL(\EC)
\ea
}
\end{reductions}%}}%
%
The rule \semlab{Ret} invokes the return clause of the current handler
$H$ and substitutes $V$ for $x$ in the body $N$.
%
The rule \semlab{Op} handles an operation $\ell$ subject to two
conditions. The first condition ensures that the operation is only
captured by a handler if its handler definition $H$ contains a
corresponding operation clause for the operation. Otherwise the
operation passes seamlessly through the handler such that another
suitable handler can handle the operation. This phenomenon is known as
\emph{effect forwarding}. It is key to enable modular composition of
effectful computations.
%
The second condition ensures that the operation $\ell$ does not appear
in the \emph{bound labels} ($\BL$) of the inner context $\EC$. This
condition enforces that an operation is always handled by the nearest
enclosing suitable handler.
%
Formally, we define the notion of bound labels,
$\BL : \EvalCat \to \LabelCat$, inductively over the structure of
evaluation contexts.
%
\begin{equations}
\BL([~]) &=& \emptyset \\
\BL(\Let\;x \revto \EC\;\In\;N) &=& \BL(\EC) \\
\BL(\Handle\;\EC\;\With\;H) &=& \BL(\EC) \cup \dom(H) \\
\end{equations}
%
To illustrate the necessity of this condition consider the following
example with two nested handlers which both handle the same operation
$\ell$.
%
\[
\bl
\ba{@{~}r@{~}c@{~}l}
H_{\mathsf{inner}} &\defas& \{\OpCase{\ell}{p}{r} \mapsto r~42; \Return\;x \mapsto \Return~x\}\\
H_{\mathsf{outer}} &\defas& \{\OpCase{\ell}{p}{r} \mapsto r~0;\Return\;x \mapsto \Return~x \}
\ea\medskip\\
\Handle \;
\left(\Handle\; \Do\;\ell~\Record{}\;\With\; H_{\mathsf{inner}}\right)\;
\With\; H_{\mathsf{outer}}
\reducesto^+ \begin{cases}
\Return\;42 & \text{Innermost}\\
\Return\;0 & \text{Outermost}
\end{cases}
\el
\]
%
Without the bound label condition there are two possible results as
the choice of which handler to pick for $\ell$ is ambiguous, meaning
reduction would be nondeterministic. Conversely, with the bound label
condition we obtain that the above term reduces to $\Return\;42$,
because $\ell$ is bound in the computation term of the outermost
$\Handle$.
%
The decision to always select the nearest enclosing suitable handler
for an operation invocation is a conscious choice. In fact, it is the
\emph{only} natural and sensible choice as picking any other handler
than the nearest enclosing renders programming with effect handlers
anti-modular. Consider the other extreme of always selecting the
outermost suitable handler: then the meaning of any effectful program
fragment depends on the entire ambient context. For example, consider
using integer addition as the composition operator to compose the
inner handle expression from above with a copy of itself.
%
\[
\bl
\dec{fortytwo} \defas \Handle\;\Do\;\ell~\Unit\;\With\;H_{\mathsf{inner}} \medskip\\
\EC[\dec{fortytwo} + \dec{fortytwo}] \reducesto^+ \begin{cases}
\Return\; 84 & \text{when $\EC$ is empty}\\
? & \text{otherwise}
\end{cases}
\el
\]
%
Clearly, if the ambient context $\EC$ is empty, then we can derive the
result by reasoning locally about each constituent separately and
subsequently adding their results together to obtain the computation term
$\Return\;84$. Conversely, if the ambient context is nonempty, then we
need to account for the possibility that some handler for $\ell$
could be present in the context. For instance if
$\EC = \Handle\;[~]\;\With\;H_{\mathsf{outer}}$ then the result would
be $\Return\;0$, which we cannot derive locally from looking at the
immediate constituents. Thus we can argue that if we want programming
to remain modular and compositional, then we must necessarily always
select the nearest enclosing suitable handler for an operation.
%
The resumption $r$ includes both the captured evaluation context and
the handler. Invoking the resumption causes both the evaluation
context and the handler to be reinstalled, meaning subsequent invocations
of $\ell$ get handled by the same handler. This is a defining
characteristic of deep handlers.
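%
To see the deep semantics in executable form, the following hedged
sketch uses OCaml 5's \texttt{Effect.Deep} module, which implements
deep handlers: \texttt{continue} reinstalls the handler around the
captured context, so every subsequent \texttt{Tick} is handled
uniformly, and the wildcard clause realises effect forwarding. All
names are illustrative.
%
\begin{verbatim}
open Effect
open Effect.Deep

type _ Effect.t += Tick : int Effect.t

let count_three () =
  match_with (fun () -> perform Tick + perform Tick + perform Tick) ()
    { retc = (fun result -> result);      (* the return clause *)
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Tick -> Some (fun (k : (a, _) continuation) ->
            (* Resuming re-wraps the same handler: deep semantics. *)
            continue k 1)
        | _ -> None (* unknown operations are forwarded outwards *)) }

let () = assert (count_three () = 3)
\end{verbatim}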
The metatheoretic properties of $\BCalc$ transfer to $\HCalc$ with
little extra effort, although we must amend the definition of
computation normal forms as there are now two ways in which a
computation term can terminate: successfully returning a value or
getting stuck on an unhandled operation.
%
\begin{definition}[Computation normal forms]\label{def:comp-normal-form}
We say that a computation term $N$ is normal with respect to an
effect signature $E$, if $N$ is either of the form $\Return\;V$, or
$\EC[\Do\;\ell\,W]$ where $\ell \in E$ and $\ell \notin \BL(\EC)$.
\end{definition}
%
\begin{theorem}[Progress]
Suppose $\typ{}{M : C}$, then either there exists $\typ{}{N : C}$
such that $M \reducesto^+ N$ and $N$ is normal, or $M$ diverges.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
\begin{theorem}[Subject reduction]
Suppose $\typ{\Gamma}{M : C}$ and $M \reducesto M'$, then
$\typ{\Gamma}{M' : C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
\subsection{Effect sugar}
\label{sec:effect-sugar}
The row polymorphism formalism underlying the effect system is rigid
with regard to presence information. Effect rows which share the
same effect variable must mention exactly the same operations to be
complete, that is, they must state whether each operation is present,
absent, or polymorphic in its presence. Consequently, in
higher-order effectful programming this can cause duplication of
information, which in turn can cause effect signatures to become
overly verbose. Such verbosity is undesirable when the extra
information is redundant, and in practice it can be a real nuisance in
larger codebases.
%
We can retrospectively fix this issue with some syntactic sugar rather
than redesigning the entire effect system.
To this end, I will take inspiration from the effect system of Frank,
which allows eliding redundant information in many
cases~\cite{LindleyMM17}.
%
In the following I will describe an ad-hoc elaboration scheme for
effect rows, which is designed to guess the programmer's intent for
first-order and second-order functions, but it might not work so well for
third-order and above. The reason for focusing on first-order and
second-order functions is that many familiar and useful functions are
either first-order or second-order, and in the following sections we
will mostly be working with first-order and second-order functions
(although, it should be noted that there exist useful functions at
higher order, e.g. in Chapter~\ref{ch:handlers-efficiency} we shall
use third-order functions; for an example of a sixth-order function
see \citet{Okasaki98}).
First, let us consider the familiar second-order function $\map$ for
lists which is completely parametric in its effects.
%
\[
\ba{@{~}l@{~}l@{}l@{~}l}
&\map : \Record{\alpha \to \beta &;\List~\alpha} \to &\List~\beta\\
\equiv&\map : \Record{\alpha \to \beta \eff \{\varepsilon\}&;\List~\alpha}
\to &\List~\beta \eff \{\varepsilon\}
\ea
\]
%
For this type to be correct in $\HCalc$ (and $\BCalc$ for that matter)
we must annotate each computation type with its effect row.
%
These effect annotations do not convey any additional information,
because the function is entirely parametric in all effects, thus the
ink spent on the annotations is really wasted in this instance.
%
To fix this we simply need to instantiate each computation type with
the same effect variable $\varepsilon$.
A slightly more interesting example is a second-order function which
itself performs some operation, but is otherwise parametric in the
effects of its argument.
%
\[
\ba{@{~}l@{~}l@{}l@{~}l}
&(A \to B_1 \eff \{ &\varepsilon\}) &\to B_2 \eff \{\ell : A' \opto B';\varepsilon\}\\
\equiv&(A \to B_1 \eff \{\ell : A' \opto B';&\varepsilon\}) &\to B_2 \eff \{\ell : A' \opto B';\varepsilon\}
\ea
\]
%
To be type-correct both rows must mention the $\ell$
operation. However, this information is redundant on the functional
parameter. The idea here is to push the information of the ambient
effect row on $B_2$ inwards to the row on $B_1$ such that the functional argument
can be granted the ability to perform $\ell$.
%
The following infix function $\vartriangleleft$ implements the inward
push of information by copying operations from its right parameter to
its left parameter.
%
\[
\ba{@{~}r@{~}c@{~}l}
\vartriangleleft &:& \EffectCat \times \EffectCat \to \EffectCat\\
E \vartriangleleft \{\cdot\} &\defas& E\\
E \vartriangleleft \{\varepsilon\} &\defas& E\\
E \vartriangleleft \{\ell : P;R\} &\defas&
\begin{cases}
\{\ell : P\} \uplus (E \vartriangleleft \{R\}) & \text{if } \ell \notin E\\
E \vartriangleleft \{R\} & \text{otherwise}
\end{cases}\\
\ea
\]
%
This function essentially computes the union of the two effect rows.
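%
For instance, for distinct labels $\ell_1 \neq \ell_2$ the union is
left-biased: presence information already recorded in the left row
takes precedence.
%
\[
\bl
\{\ell_1 : A_1 \opto B_1; \varepsilon\} \vartriangleleft \{\ell_1 : \theta; \ell_2 : A_2 \opto B_2; \varepsilon\}\\
\quad= \{\ell_1 : A_1 \opto B_1; \ell_2 : A_2 \opto B_2; \varepsilon\}
\el
\]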
The most frequent case is a second-order function which handles the
effects of its argument.
%
\[
\ba{@{~}l@{~}l@{}l}
& (A \to B_1 \eff \{\ell : A' \opto B';\varepsilon\}) \to B_2 \eff \{ &\varepsilon\}\\
\equiv& (A \to B_1 \eff \{\ell : A' \opto B';\varepsilon\}) \to B_2 \eff \{\ell : \theta;&\varepsilon\}
\ea
\]
%
To capture the intuition that operations have been handled, we would
like to not mention the handled operations in the effect row attached
to $B_2$.
%
The idea is to propagate the information of the effect row attached to
$B_1$ outwards such that this information can be used to complete the
effect row on $B_2$. To complete the row we need to copy the
operations unique to the effect row of $B_1$ into the effect row of
$B_2$ and instantiate them with a fresh presence variable.
%
The following function $\vartriangleright$ propagates information from
its left parameter to its right parameter.
%
\[
\ba{@{~}r@{~}c@{~}l}
\vartriangleright &:& \EffectCat \times \EffectCat \to \EffectCat\\
\{\cdot\} \vartriangleright E &\defas& E\\
\{\varepsilon\} \vartriangleright E &\defas& E\\
\{\ell : A \opto B;R\} \vartriangleright E &\defas&
\begin{cases}
\{\ell : \theta\} \uplus (\{R\} \vartriangleright E) & \text{if } \ell \notin E\\
\{R\} \vartriangleright E & \text{otherwise}
\end{cases}\\
\{\ell : P;R\} \vartriangleright E &\defas&
\begin{cases}
\{\ell : P\} \uplus (\{R\} \vartriangleright E) & \text{if } \ell \notin E\\
\{R\} \vartriangleright E & \text{otherwise}
\end{cases}
\ea
\]
%
The only subtlety occurs when an operation is present in the left row
but not mentioned in the right row. In this case we instantiate the
operation with a fresh presence variable in the output row.
Propagation of information in either direction should only happen if
the effect rows share the same effect variable. To avoid erroneous
propagation of information we implement and use the following guarded
variations of $\vartriangleleft$ and $\vartriangleright$.
%
\[
\ba{@{}l@{\qquad}@{}l}
\ba[t]{@{~}r@{~}l@{~}c@{~}l}
\{R;\varepsilon\} \blacktriangleleft &\{R';\varepsilon\} &\defas&
\{R;\varepsilon\} \vartriangleleft \{R';\varepsilon\}\\
\{R;\varepsilon\} \blacktriangleleft &\{R';\varepsilon'\} &\defas&
\{R;\varepsilon\}
\ea &
\ba[t]{@{~}r@{~}l@{~}c@{~}l}
\{R;\varepsilon\} \blacktriangleright &\{R';\varepsilon\} &\defas&
\{R;\varepsilon\} \vartriangleright \{R';\varepsilon\}\\
\{R;\varepsilon\} \blacktriangleright &\{R';\varepsilon'\} &\defas&
\{R';\varepsilon'\}
\ea
\ea
\]
%
The following function $\inward{-}$ pushes the ambient effect row
$\eamb$ inwards through a given type. I omit the homomorphic cases as
there is only one interesting case.
%
\begin{equations}
\pcomp{-} &:& \CompTypeCat \times \EffectCat \to \CompTypeCat\\
\pcomp{A \eff E}_{\eamb} &\defas& \pval{A}_{\eamb} \eff E \blacktriangleleft \eamb
\end{equations}
%
The following function $\outward{-}$ combines and propagates the
effect rows of a type outward. Again, I omit the homomorphic cases.
%
\[
\ba{@{}l@{\qquad}@{}l}
\ba[t]{@{~}r@{~}c@{~}l}
\xval{-} &:& \ValTypeCat \to \EffectCat\\
\xval{\alpha} &\defas& \{\cdot\}\\
\xval{A \to C} &\defas& \xval{A} \blacktriangleright \xcomp{C} \\
\ea &
\ba[t]{@{~}r@{~}c@{~}l}
\xcomp{-} &:& \CompTypeCat \to \EffectCat\\
\xcomp{A \eff E} &\defas& \xval{A} \blacktriangleright E \smallskip\\
\ea \smallskip\\
\ba[t]{@{~}r@{~}c@{~}l}
\xpre{-} &:& \PresenceCat \to \EffectCat\\
% \xpre{\Pre{A}} &\defas& \xval{A}\\
\xpre{\Abs} &\defas& \xpre{\theta} \defas \{\cdot\}
\ea &
\ba[t]{@{~}r@{~}c@{~}l}
\xrow{-} &:& \RowCat \to \EffectCat\\
\xrow{\cdot} &\defas& \xrow{\rho} \defas \{\cdot\}\\
\xrow{\ell : P;R} &\defas& \xpre{P} \blacktriangleright \xrow{R}
\ea
\ea
\]
% \[
% \ba{@{}l@{\qquad}@{}l}
% \ba[t]{@{~}r@{~}c@{~}l}
% \xval{-} &:& \ValTypeCat \to \EffectCat\\
% \xval{\alpha} &\defas& \{\cdot\}\\
% \xval{\Record{R}} &\defas& \xval{[R]} \defas \xval{R}\\
% \xval{A \to C} &\defas& \xval{A} \blacktriangleright \xcomp{C} \\
% \xval{\forall \alpha^K.C} &\defas& \xcomp{C}
% \ea &
% \ba[t]{@{~}r@{~}c@{~}l}
% \xcomp{-} &:& \CompTypeCat \to \EffectCat\\
% \xcomp{A \eff E} &\defas& \xval{A} \blacktriangleright E \smallskip\\
% \xrow{-} &:& \RowCat \to \Effect\\
% \xrow{\cdot} &\defas& \xval{\rho} \defas \{\cdot\}\\
% \xrow{\ell : P;R} &\defas& \xval{P} \blacktriangleright \xval{R}
% \ea\\
% \multicolumn{2}{c}{
% \ba{@{~}r@{~}c@{~}l}
% \xpre{-} &:& \PresenceCat \to \EffectCat\\
% \xpre{\Pre{A}} &\defas& \xval{A}\\
% \xpre{\Abs} &\defas& \xpre{\theta} \defas \{\cdot\}
% \ea}
% \ea
% \]
%
% \[
% \ba{@{}l@{\qquad}@{}l}
% \ba[t]{@{~}r@{~}c@{~}l}
% \pval{-} &:& \ValTypeCat \times \EffectCat \to \ValTypeCat\\
% \pval{\alpha}_{\eamb} &\defas& \alpha\\
% \pval{\Record{R}}_{\eamb} &\defas& \Record{\prow{R}_{\eamb}}\\
% \pval{[R]}_{\eamb} &\defas& [\prow{R}_{\eamb}]\\
% \pval{A \to C}_{\eamb} &\defas& \pval{A}_{\eamb} \to \pcomp{C}_{\eamb} \\
% \pval{\forall \alpha^K.C}_{\eamb} &\defas& \forall\alpha^K.\pcomp{C}
% \ea &
% \ba[t]{@{~}r@{~}c@{~}l}
% \pcomp{-} &:& \CompTypeCat \times \EffectCat \to \CompTypeCat\\
% \pcomp{A \eff E}_{\eamb} &\defas& \pval{A}_{\eamb} \eff E \blacktriangleleft \eamb \smallskip\\
% \prow{-} &:& \RowCat \times \EffectCat \to \RowCat\\
% \prow{\cdot}_{\eamb} &\defas& \{\cdot\}\\
% \prow{\rho}_{\eamb} &\defas& \{\rho\}\\
% \prow{\ell : P;R}_{\eamb} &\defas& \ppre{P};\prow{R}
% \ea\\
% \multicolumn{2}{c}{
% \ba[t]{@{~}r@{~}c@{~}l}
% \ppre{-} &:& \PresenceCat \times \EffectCat \to \EffectCat\\
% \ppre{\Pre{A}}_{\eamb} &\defas& \pval{A}_{\eamb}\\
% \ppre{\Abs}_{\eamb} &\defas& \Abs\\
% \ppre{\theta} &\defas& \theta
% \ea}
% \ea
% \]
%
We combine all of the above functions to implement the effect row
elaboration for top-level function types.
%
\begin{equations}
\trval{-} &:& \ValTypeCat \to \ValTypeCat\\
\trval{A \to B \eff E} &\defas& \pval{A}_{E'} \to \pval{B}_{E'} \eff E'\\
\multicolumn{3}{l}{\quad\where~E' = (\xval{A} \blacktriangleright E) \vartriangleright (\xval{B} \blacktriangleright E)}
\end{equations}
%
The function $\trval{-}$ traverses the abstract syntax of its argument
twice. The first traversal propagates effect information outwards to
the ambient effect row $E$. The second traversal pushes the full
ambient information $E'$ inwards.
%
The construction of $E'$ makes use of the fact that
$E \vartriangleright E = E$.
%
As a remark, note that the function $\trval{-}$ does not have to
consider handler types, because they cannot appear at the top-level in
$\HCalc$. With this syntactic sugar in place we can program with
second-order effectful functions without having to write down
redundant information.
\section{Shallow handlers}
\label{sec:unary-shallow-handlers}
Shallow handlers are an alternative to deep handlers: they are defined
as case-splits over computation trees, whereas deep handlers are
defined as folds. Consequently, a shallow handler application unfolds
only a single layer of the computation tree.
%
Semantically, the difference between deep and shallow handlers is
analogous to the difference between the \citet{Church41} and
\citet{Scott62} encodings of data types, in the sense that
recursion is intrinsic to the former, whilst recursion is
extrinsic to the latter.
%
Thus a fixpoint operator is necessary to make programming with shallow
handlers practical.
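%
Continuing the hedged OCaml sketch from
Section~\ref{sec:unary-deep-handlers}, the shallow counterpart is a
single case-split: the operation clause receives the raw resumption,
whose codomain is an \emph{unhandled} computation, so any recursion
must be supplied explicitly by the programmer (names illustrative).
%
\begin{verbatim}
(* Computation trees as before. *)
type ('a, 'op) comp =
  | Return of 'a
  | Do of 'op * (int -> ('a, 'op) comp)

(* One layer only: k returns an unhandled computation (compare the
   codomain C of resumptions in the typing rule below), so the clause
   must decide how the rest of the tree is to be handled. *)
let handle_shallow (ret : 'a -> 'b)
    (op : 'op -> (int -> ('a, 'op) comp) -> 'b)
  : ('a, 'op) comp -> 'b = function
  | Return v -> ret v
  | Do (l, k) -> op l k
\end{verbatim}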
Shallow handlers offer more flexibility than deep handlers as they do
not hard wire a particular recursion scheme. Shallow handlers are
favourable when catamorphisms are not the natural solution to the
problem at hand.
%
A canonical example of when shallow handlers are desirable over deep
handlers is \UNIX{}-style pipes, where the natural implementation is
in terms of two mutually recursive functions (specifically
\emph{mutumorphisms}~\cite{Fokkinga90}), which is convoluted to
implement with deep
handlers~\cite{KammarLO13,HillerstromL18,HillerstromLA20}.
In this section we take the full $\BCalc$ as our starting point and
extend it with shallow handlers, resulting in the calculus $\SCalc$.
The calculus borrows some syntax and semantics from \HCalc{}, whose
presentation will not be duplicated in this section.
% Often deep handlers are attractive because they are semantically
% well-behaved and provide appropriate structure for efficient
% implementations using optimisations such as fusion~\cite{WuS15}, and
% as we saw in the previous they codify a wide variety of applications.
% %
% However, they are not always convenient for implementing other
% structural recursion schemes such as mutual recursion.
\subsection{Syntax and static semantics}
The syntax and semantics for effectful operation invocations are the
same as in $\HCalc$. Handler definitions and applications also have
the same syntax as in \HCalc{}, although we shall annotate the
application form for shallow handlers with a superscript $\dagger$ to
distinguish it from deep handler application.
%
\begin{syntax}
\slab{Computations} &M,N \in \CompCat &::=& \cdots \mid \ShallowHandle \; M \; \With \; H\\[1ex]
\end{syntax}
%
The static semantics of $\Handle^\dagger$ is the same as that of
$\Handle$.
%
\begin{mathpar}
\inferrule*[Lab=\tylab{Handle^\dagger}]
{
\typ{\Gamma}{M : C} \\
\typ{\Gamma}{H : C \Harrow D}
}
{\Gamma \vdash \ShallowHandle \; M \; \With\; H : D}
%\mprset{flushleft}
\inferrule*[Lab=\tylab{Handler^\dagger}]
{{\bl
C = A \eff \{(\ell_i : A_i \opto B_i)_i; R\} \\
D = B \eff \{(\ell_i : P_i)_i; R\}\\
H = \{\Return\;x \mapsto M\} \uplus \{ \OpCase{\ell_i}{p_i}{r_i} \mapsto N_i \}_i
\el}\\\\
\typ{\Delta;\Gamma, x : A}{M : D}\\\\
[\typ{\Delta;\Gamma,p_i : A_i, r_i : B_i \to C}{N_i : D}]_i
}
{\typ{\Delta;\Gamma}{H : C \Harrow D}}
\end{mathpar}
%
The \tylab{Handler^\dagger} rule is remarkably similar to the
\tylab{Handler} rule. In fact, the only difference is the typing of
resumptions $r_i$. The codomain of $r_i$ is $C$ rather than $D$,
meaning that a resumption returns a value of the same type as the
input computation. In general the type $C$ may be different from the
output type $D$, thus it is evident from this typing rule that the
handler does not guard invocations of resumptions $r_i$.
\subsection{Dynamic semantics}
There are two reduction rules.
%{\small{
\begin{reductions}
\semlab{Ret^\dagger} &
\ShallowHandle \; (\Return \; V) \; \With \; H &\reducesto& N[V/x], \hfill\text{where } \hret = \{ \Return \; x \mapsto N \} \\
\semlab{Op^\dagger} &
\ShallowHandle \; \EC[\Do \; \ell \, V] \; \With \; H
&\reducesto& N[V/p, \lambda y . \, \EC[\Return \; y]/r], \\
\multicolumn{4}{@{}r@{}}{
\hfill\ba[t]{@{~}r@{~}l}
\text{where}& \hell = \{ \OpCase{\ell}{p}{r} \mapsto N \}\\
\text{and} & \ell \notin \BL(\EC)
\ea
}
\end{reductions}%}}%
%
The rule \semlab{Ret^\dagger} is the same as the \semlab{Ret} rule for
deep handlers --- there is no difference in how the return value is
handled. The \semlab{Op^\dagger} rule is almost the same as the
\semlab{Op} rule, the crucial difference being the construction of the
resumption $r$. The resumption consists entirely of the captured
context $\EC$. Thus an invocation of $r$ does not reinstall its
handler as in the setting of deep handlers, meaning it is up to the
programmer to supply a handler for the next invocation of $\ell$ inside
$\EC$. This handler may be different from $H$.
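%
OCaml 5 also provides shallow handlers via \texttt{Effect.Shallow},
whose \texttt{continue\_with} resumes a continuation under an
explicitly supplied handler. The following hedged sketch (names
illustrative) answers successive \texttt{Tick} invocations from a
list, a nonuniform interpretation in which each resumption is
re-handled by a \emph{new} handler carrying the rest of the list.
%
\begin{verbatim}
open Effect
open Effect.Shallow

type _ Effect.t += Tick : int Effect.t

(* Resume continuation k with v, answering the next Tick with the
   head of the supply; the recursive call installs the next handler. *)
let rec feed : type a. int list -> (a, int) continuation -> a -> int =
  fun supply k v ->
    continue_with k v
      { retc = (fun result -> result);
        exnc = raise;
        effc = (fun (type c) (eff : c Effect.t) ->
          match eff with
          | Tick -> Some (fun (k' : (c, int) continuation) ->
              match supply with
              | x :: rest -> feed rest k' x
              | [] -> feed [] k' 0)
          | _ -> None) }

let () =
  let comp () = perform Tick + perform Tick + perform Tick in
  assert (feed [10; 20; 30] (fiber comp) () = 60)
\end{verbatim}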
The basic metatheoretic properties of $\SCalc$ are a carbon copy of
the basic properties of $\HCalc$.
%
\begin{theorem}[Progress]
Suppose $\typ{}{M : C}$, then either there exists $\typ{}{N : C}$
such that $M \reducesto^+ N$ and $N$ is normal, or $M$ diverges.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
\begin{theorem}[Subject reduction]
Suppose $\typ{\Gamma}{M : C}$ and $M \reducesto M'$, then
$\typ{\Gamma}{M' : C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
\section{Parameterised handlers}
\label{sec:unary-parameterised-handlers}
Parameterised handlers are a variation of ordinary deep handlers with
an embedded functional state cell. This state cell is only accessible
locally within the handler. The use of state within the handler is
opaque to both the ambient context and the context of the computation
being handled. Semantically, parameterised handlers are defined as
folds with state threading over computation trees.
We take the deep handler calculus $\HCalc$ as our starting point and
extend it with parameterised handlers to yield the calculus
$\HPCalc$. The parameterised handler extension also interacts nicely
with shallow handlers, and it can be added to $\SCalc$ with little
effort.
\subsection{Syntax and static semantics}
In addition to a computation, a parameterised handler also takes a
value as argument. This argument is the initial value of the state
cell embedded inside the handler.
%
\begin{syntax}
\slab{Handler\textrm{ }types} & F &::=& \cdots \mid \Record{C; A} \Harrow^\param D\\
\slab{Computations} & M,N &::=& \cdots \mid \ParamHandle\; M \;\With\; H^\param(W)\\
\slab{Parameterised\textrm{ }definitions} & H^\param &::=& q^A.~H
\end{syntax}
%
The syntactic category of handler types $F$ is extended with a new
kind of handler arrow for parameterised handlers. The left hand side
of the arrow is a pair, whose first component denotes the type of the
input computation and the second component denotes the type of the
handler parameter. The right hand side denotes the return type of the
handler.
%
The computations category is extended with a new application form for
handlers, which runs a computation $M$ under a parameterised handler
$H$ applied to the value $W$.
%
Finally, a new category is added for parameterised handler
definitions. A parameterised handler definition is a new binding form
$(q^A.~H)$, where $q$ is the name of the parameter, whose type is $A$,
and $H$ is an ordinary handler definition. The parameter $q$ is
accessible in the $\Return$ and operation clauses of $H$.
As with ordinary deep handlers and shallow handlers, there are two
typing rules: one for handler application and another for handler
definitions.
%
\begin{mathpar}
% Handle
\inferrule*[Lab=\tylab{Handle^\param}]
{
% \typ{\Gamma}{V : A} \\
\typ{\Gamma}{M : C} \\
\typ{\Gamma}{W : A} \\
\typ{\Gamma}{H^\param : \Record{C; A} \Harrow^\param D}
}
{\Gamma \vdash \ParamHandle \; M \; \With\; H^\param(W) : D}
\end{mathpar}
%
The $\tylab{Handle^\param}$ rule is similar to the $\tylab{Handle}$
and $\tylab{Handle^\dagger}$ rules, except that it has to account for
the parameter $W$, whose type has to be compatible with the second
component of the domain type of the handler definition $H^\param$.
%
The typing rule for parameterised handler definitions adapts the
corresponding typing rule $\tylab{Handler}$ for ordinary deep handlers
with the addition of a parameter.
%
\begin{mathpar}
% Parameterised handler
\inferrule*[Lab=\tylab{Handler^\param}]
{{\bl
C = A \eff \{(\ell_i : A_i \opto B_i)_i; R\} \\
D = B \eff \{(\ell_i : P_i)_i; R\} \\
H = \{\Return\;x \mapsto M\} \uplus \{ \OpCase{\ell_i}{p_i}{r_i} \mapsto N_i \}_i
\el}\\\\
\typ{\Delta;\Gamma, q : A', x : A}{M : D}\\\\
[\typ{\Delta;\Gamma,q : A', p_i : A_i, r_i : \Record{B_i;A'} \to D}{N_i : D}]_i
}
{\typ{\Delta;\Gamma}{(q^{A'} . H) : \Record{C;A'} \Harrow^\param D}}
\end{mathpar}
%%
The key differences between the \tylab{Handler} and
\tylab{Handler^\param} rules are that in the latter the return and
operation cases are typed with respect to the parameter $q$, and that
resumptions $r_i$ have type $\Record{B_i;A'} \to D$, that is a
parameterised resumption is a binary function, where the first
argument is the interpretation of an operation and the second argument
is the (updated) handler state. The return type of $r_i$ is the same
as the return type of the handler, meaning that an invocation of $r_i$
is guarded in the same way as an invocation of an ordinary deep
resumption.
\subsection{Dynamic semantics}
The two reduction rules for parameterised handlers adapt the reduction
rules for ordinary deep handlers with a parameter.
%
\begin{reductions}
\semlab{Ret^\param} &
\ParamHandle \; (\Return \; V) \; \With \; (q.H)(W) &\reducesto& N[V/x,W/q],\\
\multicolumn{4}{@{}r@{}}{
\hfill\text{where } \hret = \{ \Return \; x \mapsto N \}} \\
\semlab{Op^\param} &
\ParamHandle \; \EC[\Do \; \ell \, V] \; \With \; (q.H)(W)
&\reducesto& N[V/p,W/q,R/r],\\
\multicolumn{4}{@{}r@{}}{
\hfill\ba[t]{@{~}r@{~}l}
\text{where}& R = \lambda\Record{y;q'}.\ParamHandle\;\EC[\Return \; y]\;\With\;(q.H)(q')\\
\text{and} &\hell = \{ \OpCase{\ell}{p}{r} \mapsto N \}\\
\text{and} &\ell \notin \BL(\EC)
\ea
}
\end{reductions}
%
The rule $\semlab{Ret^\param}$ handles the return value of a
computation. Just as in the rule $\semlab{Ret}$, the return value $V$
is substituted for the binder $x$ in the return clause body
$N$. Furthermore, the value $W$ is substituted for the handler
parameter $q$ in $N$, meaning the handler parameter is accessible in
the return clause.
The rule $\semlab{Op^\param}$ handles an operation invocation. Both the
operation payload $V$ and handler argument $W$ are accessible inside
the case body $N$. As with ordinary deep handlers, the resumption
rewraps its handler, but with the slight twist that the parameterised
handler definition is applied to the updated parameter value $q'$
rather than the original value $W$. This achieves the effect of state
passing as the value of $q'$ becomes available upon the next
activation of the handler.
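%
As an executable illustration, parameterised handlers can be encoded
on top of deep handlers in state-passing style: every clause returns a
function of the parameter. The following hedged OCaml sketch (names
illustrative) uses this encoding to count handled operations; the
final application plays the role of supplying $W$.
%
\begin{verbatim}
open Effect
open Effect.Deep

type _ Effect.t += Tick : int Effect.t

(* Every clause returns a function of the parameter q : int, which
   threads q like the state cell embedded in (q. H). *)
let count_ticks (comp : unit -> 'a) : 'a * int =
  let handled =
    match_with comp ()
      { retc = (fun result -> fun q -> (result, q));
        exnc = raise;
        effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Tick -> Some (fun (k : (a, _) continuation) ->
              (* Resume with the old q; rewrap with updated q + 1. *)
              fun q -> continue k q (q + 1))
          | _ -> None) }
  in
  handled 0  (* the initial parameter value *)

let () =
  let result, ticks =
    count_ticks (fun () -> perform Tick + perform Tick) in
  assert (result = 1 && ticks = 2)
\end{verbatim}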
The metatheoretic properties of $\HCalc$ carry over to $\HPCalc$.
\begin{theorem}[Progress]
Suppose $\typ{}{M : C}$, then either there exists $\typ{}{N : C}$
such that $M \reducesto^+ N$ and $N$ is normal, or $M$ diverges.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
\begin{theorem}[Subject reduction]
Suppose $\typ{\Gamma}{M : C}$ and $M \reducesto M'$, then
$\typ{\Gamma}{M' : C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
\part{Implementation}
\label{p:implementation}
\chapter{Continuation-passing style}
\label{ch:cps}
Continuation-passing style (CPS) is a \emph{canonical} program
notation that makes every facet of control flow and data flow
explicit. In CPS every function takes an additional function-argument
called the \emph{continuation}, which represents the next computation
in evaluation position. CPS is canonical in the sense that it is
definable in pure $\lambda$-calculus without any further
primitives. As an informal illustration of CPS consider again the
evergreen factorial function from Section~\ref{sec:tracking-div}.
%
\[
\bl
\dec{fac} : \Int \to \Int\\
\dec{fac} \defas \lambda n.
\ba[t]{@{}l}
\Let\;isz \revto n = 0\;\In\\
\If\;isz\;\Then\; \Return\;1\\
\Else\;\ba[t]{@{~}l}
\Let\; n' \revto n - 1 \;\In\\
\Let\; m \revto \dec{fac}~n' \;\In\\
\Let\;res \revto n * m \;\In\\
\Return\;res
\ea
\ea
\el
\]
%
The above implementation of the function $\dec{fac}$ is given in
direct-style fine-grain call-by-value. In CPS notation the
implementation of this function changes as follows.
%
\[
\bl
\dec{fac}_{\dec{cps}} : \Int \to (\Int \to \alpha) \to \alpha\\
\dec{fac}_{\dec{cps}} \defas \lambda n.\lambda k.
(=_{\dec{cps}})~n~0~
(\lambda isz.
\ba[t]{@{~}l}
\If\;isz\;\Then\; k~1\\
\Else\;
(-_{\dec{cps}})~n~\bl 1\\
(\lambda n'.
\dec{fac}_{\dec{cps}}~\bl n'\\
(\lambda m. (*_{\dec{cps}})~n~\bl m\\
(\lambda res. k~res))))
\el
\el
\el
\ea
\el
\]
%
There are several worthwhile observations to make about the
differences between the two implementations $\dec{fac}$ and
$\dec{fac}_{\dec{cps}}$.
%
Firstly, note that their type signatures differ. The CPS version has an
additional formal parameter of type $\Int \to \alpha$ which is the
continuation. By convention the continuation parameter is named $k$ in
the implementation. As usual, the continuation represents the
remainder of computation. In this specific instance $k$ represents the
undelimited current continuation of an application of
$\dec{fac}_{\dec{cps}}$. Given a value of type $\Int$, the
continuation produces a result of type $\alpha$, which is the
\emph{answer type} of the entire program. Thus applying
$\dec{fac}_{\dec{cps}}~3$ to the identity function ($\lambda x.x$)
yields $6 : \Int$, whilst applying it to the predicate
$\lambda x. x > 2$ yields $\True : \Bool$.
% or put differently: it determines what to do with the result
% returned by an invocation of $\dec{fac}_{\dec{cps}}$.
%
Secondly, note that every $\Let$-binding in $\dec{fac}$ has become a
function application in $\dec{fac}_{\dec{cps}}$. The binding sequence
in the $\Else$-branch has been turned into a series of nested function
applications. The functions $=_{\dec{cps}}$, $-_{\dec{cps}}$, and
$*_{\dec{cps}}$ denote the CPS versions of equality testing,
subtraction, and multiplication respectively.
%
For clarity, I have meticulously written each continuation function on
its own line. For instance, the continuation of the
$-_{\dec{cps}}$-application is another application of
$\dec{fac}_{\dec{cps}}$, whose continuation is an application of
$*_{\dec{cps}}$, and its continuation is an application of the current
continuation, $k$, of $\dec{fac}_{\dec{cps}}$.
%
Each $\Return$-computation has been turned into an application of the
current continuation $k$. In the $\Then$-branch the continuation is
applied to $1$, whilst in the $\Else$-branch the continuation is
applied to the result obtained by multiplying $n$ and $m$.
%
Thirdly, note that every function application occurs in tail position
(recall Definition~\ref{def:tail-comp}). This is a characteristic
property of CPS transforms that make them feasible as a practical
implementation strategy, since programs in CPS notation require only a
constant amount of stack space to run, namely, a single activation
frame~\cite{Appel92}. Although, the pervasiveness of closures in CPS
means that CPS programs make heavy use of the heap for closure
allocation.
%
Some care must be taken when CPS transforming a program, as a naïve
transform may inflate the image with extraneous
terms~\cite{DanvyN05}. For example, in $\dec{fac}_{\dec{cps}}$ the
continuation term $(\lambda res.k~res)$ is redundant as it is simply
an $\eta$-expansion of the continuation $k$. A better transform
would simply pass $k$. Extraneous terms can severely impact the
runtime performance of a CPS program. A smart CPS transform recognises
and eliminates extraneous terms at translation
time~\cite{DanvyN03}. Extraneous terms come in various disguises as we
shall see later in this chapter.
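%
As a hedged OCaml rendering of the above (names illustrative; the
arithmetic primitives are left in direct style, and the
$\eta$-expanded continuation is already eliminated), note that every
recursive call is a tail call and that the answer type is the
caller's choice.
%
\begin{verbatim}
(* CPS factorial: the continuation k fixes the answer type 'a. *)
let rec fac_cps (n : int) (k : int -> 'a) : 'a =
  if n = 0 then k 1
  else fac_cps (n - 1) (fun m -> k (n * m))

let () =
  assert (fac_cps 3 (fun x -> x) = 6);        (* answer type int  *)
  assert (fac_cps 3 (fun x -> x > 2) = true)  (* answer type bool *)
\end{verbatim}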
The complete exposure of control flow makes CPS a good fit for
implementing control operators such as effect handlers. Moreover, CPS
is an established intermediate representation for compilers, which
makes it attractive as a practical compilation
target~\cite{Appel92,Kennedy07}.
The purpose of this chapter is to use the CPS formalism to develop a
universal implementation strategy for deep, shallow, and parameterised
effect handlers. Section~\ref{sec:target-cps} defines a suitable
target calculus $\UCalc$ for CPS transformed
programs. Section~\ref{sec:cps-cbv} demonstrates how to CPS transform
$\BCalc$-programs to $\UCalc$-programs. In Section~\ref{sec:fo-cps} we
develop a CPS transform for deep handlers through step-wise refinement
of the initial CPS transform for $\BCalc$. The resulting CPS transform
is adapted in Section~\ref{sec:cps-shallow} to support shallow
handlers. As a by-product we develop the notion of \emph{generalised
continuation}, which provides a versatile abstraction for
implementing effect handlers. We use generalised continuations to
implement parameterised handlers in Section~\ref{sec:cps-param}.
%
%
%\dhil{The focus of the introduction should arguably not be to explain CPS.}
%\dhil{Justify CPS as an implementation technique}
%\dhil{Give a side-by-side reduction example of $\dec{fac}$ and $\dec{fac}_{\dec{cps}}$.}
% \dhil{Define desirable properties of a CPS translation: properly tail-recursive, no static administrative redexes}
%
% \begin{definition}[Properly tail-recursive~\cite{Danvy06}]
% %
% A CPS translation $\cps{-}$ is properly tail-recursive if the
% continuation of every CPS transformed tail call $\cps{V\,W}$ within
% $\cps{\lambda x.M}$ is $k$, where
% \begin{equations}
% \cps{\lambda x.M} &=& \lambda x.\lambda k.\cps{M}\\
% \cps{V\,W} &=& \cps{V}\,\cps{W}\,k.
% \end{equations}
% \end{definition}
% \[
% \ba{@{~}l@{~}l}
% \pcps{(\lambda x.(\lambda y.\Return\;y)\,x)\,\Unit} &= (\lambda x.(\lambda y.\lambda k.k\,y)\,x)\,\Unit\,(\lambda x.x)\\
% &\reducesto ((\lambda y.\lambda k.k\,y)\,\Unit)\,(\lambda x.x)\\
% &\reducesto (\lambda k.k\,\Unit)\,(\lambda x.x)\\
% &\reducesto (\lambda x.x)\,\Unit\\
% &\reducesto \Unit
% \ea
% \]
\paragraph{Relation to prior work} This chapter is based on the
following work.
%
\begin{enumerate}[i]
\item \bibentry{HillerstromLAS17}\label{en:ch-cps-HLAS17}
\item \bibentry{HillerstromL18} \label{en:ch-cps-HL18}
\item \bibentry{HillerstromLA20} \label{en:ch-cps-HLA20}
\end{enumerate}
%
Section~\ref{sec:higher-order-uncurried-deep-handlers-cps} is
based on item \ref{en:ch-cps-HLAS17}, however, I have adapted it to
follow the notation and style of item \ref{en:ch-cps-HLA20}.
\section{Initial target calculus}
\label{sec:target-cps}
%
\begin{figure}
\flushleft
\textbf{Syntax}
\begin{syntax}
\slab{Values} &U, V, W \in \UValCat &::= & x \mid \lambda x.M \mid % \Rec\,g\,x.M
\mid \Record{} \mid \Record{V, W} \mid \ell
\smallskip \\
\slab{Computations} &M,N \in \UCompCat &::= & V \mid M\,W \mid \Let\; \Record{x,y} = V \; \In \; N\\
& &\mid& \Case\; V\, \{\ell \mapsto M; y \mapsto N\} \mid \Absurd\,V
\smallskip \\
\slab{Evaluation contexts} &\EC \in \UEvalCat &::= & [~] \mid \EC\;W \\
\end{syntax}
\textbf{Reductions}
\begin{reductions}
\usemlab{App} & (\lambda x . \, M) V &\reducesto& M[V/x] \\
% \usemlab{Rec} & (\Rec\,g\,x.M) V &\reducesto& M[\Rec\,g\,x.M/g,V/x]\\
\usemlab{Split} & \Let \; \Record{x,y} = \Record{V,W} \; \In \; N &\reducesto& N[V/x,W/y] \\
\usemlab{Case_1} &
\Case \; \ell \; \{ \ell \mapsto M; y \mapsto N\} &\reducesto& M \\
\usemlab{Case_2} &
\Case \; \ell \; \{ \ell' \mapsto M; y \mapsto N\} &\reducesto& N[\ell/y], \hfill\quad \text{if } \ell \neq \ell' \\
\usemlab{Lift} &
\EC[M] &\reducesto& \EC[N], \hfill \text{if } M \reducesto N \\
\end{reductions}
\textbf{Syntactic sugar}
\[
\begin{eqs}
\Let\;x=V\;\In\;N &\equiv & N[V/x]\\
\ell \; V & \equiv & \Record{\ell; V}\\
\Record{} & \equiv & \ell_{\Record{}} \\
\Record{\ell = V; W} & \equiv & \Record{\ell, \Record{V, W}}\\
\nil &\equiv & \ell_{\nil} \\
V \cons W & \equiv & \Record{\ell_{\cons}, \Record{V, W}}\\
\Case\;V\;\{\ell\;x \mapsto M; y \mapsto N \} &\equiv&
\ba[t]{@{~}l}
\Let\;y = V\;\In\; \Let\;\Record{z,x} = y\;\In \\
\Case\; z\;\{ \ell \mapsto M; z' \mapsto N \}
\ea\\
\Let\; \Record{\ell=x;y} = V\;\In\;N &\equiv&
\ba[t]{@{~}l}
\Let\; \Record{z,z'} = V\;\In\;\Let\; \Record{x,y} = z'\;\In \\
\Case\;z\;\{\ell \mapsto N; z'' \mapsto \ell_\bot \}
\ea
\end{eqs}
\]
\caption{Untyped target calculus for the CPS translations.}
\label{fig:cps-cbv-target}
\end{figure}
%
The syntax, semantics, and syntactic sugar for the target calculus
$\UCalc$ is given in Figure~\ref{fig:cps-cbv-target}. The calculus
largely amounts to an untyped variation of $\BCalc$, specifically
we retain the syntactic distinction between values ($V$) and
computations ($M$).
%
The values ($V$) comprise lambda abstractions ($\lambda x.M$),
% recursive functions ($\Rec\,g\,x.M$),
empty tuples ($\Record{}$), pairs ($\Record{V,W}$), and first-class
labels ($\ell$).
%
Computations ($M$) comprise values ($V$), applications ($M~V$), pair
elimination ($\Let\; \Record{x, y} = V \;\In\; N$), label elimination
($\Case\; V \;\{\ell \mapsto M; x \mapsto N\}$), and explicit marking
of unreachable code ($\Absurd$). A key difference from $\BCalc$ is
that the function position of an application is allowed to be a
computation (i.e., the application form is $M~W$ rather than
$V~W$). Later, when we refine the initial CPS translation, we will be
able to dispense with this relaxation.
The reduction semantics follows the previous calculi in that it is a
small-step context-based reduction semantics. Evaluation contexts
comprise the empty context and function application.
To make the notation more lightweight, we define syntactic sugar for
variant values, record values, list values, let binding, variant
eliminators, and record eliminators. We use pattern matching syntax
for deconstructing variants, records, and lists. For desugaring
records, we assume a failure constant $\ell_\bot$ (e.g. a divergent
term) to cope with the case of pattern matching failure.
% \dhil{Most of the primitives are Church encodable. Discuss the value
% of treating them as primitive rather than syntactic sugar (target
% languages such as JavaScript has similar primitives).}
\section{Transforming fine-grain call-by-value}
\label{sec:cps-cbv}
We start by giving a CPS translation of $\BCalc$ in
Figure~\ref{fig:cps-cbv}. Fine-grain call-by-value admits a
particularly simple CPS translation due to the separation of values
and computations. All constructs from the source language are
translated homomorphically into the target language $\UCalc$, except
for $\Return$ and $\Let$ (and type abstraction because the translation
performs type erasure). Lifting a value $V$ to a computation
$\Return~V$ is interpreted by passing the value to the current
continuation $k$. Sequencing computations with $\Let$ is translated by
applying the translation of $M$ to the translation of the continuation
$N$, which is ultimately applied to the current continuation $k$. In
addition, we explicitly $\eta$-expand the translation of a type
abstraction in order to ensure that value terms in the source calculus
translate to value terms in the target.
\begin{figure}
\flushleft
\textbf{Values} \\
\[
\bl
\begin{eqs}
\cps{-} &:& \ValCat \to \UValCat\\
\cps{x} &=& x \\
\cps{\lambda x.M} &=& \lambda x.\cps{M} \\
\cps{\Lambda \alpha.M} &=& \lambda k.\cps{M}~k \\
% \cps{\Rec\,g\,x.M} &=& \Rec\,g\,x.\cps{M}\\
\cps{\Record{}} &=& \Record{} \\
\cps{\Record{\ell = V; W}} &=& \Record{\ell = \cps{V}; \cps{W}} \\
\cps{\ell~V} &=& \ell~\cps{V} \\
\end{eqs}
\el
\]
\textbf{Computations}
\[
\bl
\begin{eqs}
\cps{-} &:& \CompCat \to \UCompCat\\
\cps{V\,W} &=& \cps{V}\,\cps{W} \\
\cps{V\,T} &=& \cps{V} \\
\cps{\Let\; \Record{\ell=x;y} = V \; \In \; N} &=& \Let\; \Record{\ell=x;y} = \cps{V} \; \In \; \cps{N} \\
\cps{\Case~V~\{\ell~x \mapsto M; y \mapsto N\}} &=&
\Case~\cps{V}~\{\ell~x \mapsto \cps{M}; y \mapsto \cps{N}\} \\
\cps{\Absurd~V} &=& \Absurd~\cps{V} \\
\cps{\Return~V} &=& \lambda k.k\,\cps{V} \\
\cps{\Let~x \revto M~\In~N} &=& \lambda k.\cps{M}(\lambda x.\cps{N}\,k) \\
\end{eqs}
\el
\]
\caption{First-order CPS translation of $\BCalc$.}
\label{fig:cps-cbv}
\end{figure}
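%
As a small worked example, consider applying the translation to the
computation $(\lambda x.(\lambda y.\Return\;y)\,x)\,\Unit$ and
supplying the identity function as the top-level continuation. Note
that every reduction step in the image is a tail application.
%
\[
\ba{@{~}l@{~}l}
\cps{(\lambda x.(\lambda y.\Return\;y)\,x)\,\Unit}\,(\lambda x.x) &= (\lambda x.(\lambda y.\lambda k.k\,y)\,x)\,\Unit\,(\lambda x.x)\\
&\reducesto ((\lambda y.\lambda k.k\,y)\,\Unit)\,(\lambda x.x)\\
&\reducesto (\lambda k.k\,\Unit)\,(\lambda x.x)\\
&\reducesto (\lambda x.x)\,\Unit\\
&\reducesto \Unit
\ea
\]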
\section{Transforming deep effect handlers}
\label{sec:fo-cps}
The translation of a computation term by the basic CPS translation in
Section~\ref{sec:cps-cbv} takes a single continuation parameter that
represents the context.
%
In the presence of effect handlers in the source language, it becomes
necessary to keep track of two kinds of contexts in which each
computation executes: a \emph{pure context} that tracks the state of
pure computation in the scope of the current handler, and an
\emph{effect context} that describes how to handle operations in the
scope of the current handler.
%
Correspondingly, we have both \emph{pure continuations} ($k$) and
\emph{effect continuations} ($h$).
%
As handlers can be nested, each computation executes in the context of
a \emph{stack} of pairs of pure and effect continuations.
On entry into a handler, the pure continuation is initialised to a
representation of the return clause and the effect continuation to a
representation of the operation clauses. As pure computation proceeds,
the pure continuation may grow, for example when executing a
$\Let$. If an operation is encountered then the effect continuation is
invoked.
%
The current continuation pair ($k$, $h$) is packaged up as a
\emph{resumption} and passed to the current handler along with the
operation and its argument. The effect continuation then either
handles the operation, invoking the resumption as appropriate, or
forwards the operation to an outer handler. In the latter case, the
resumption is modified to ensure that the context of the original
operation invocation can be reinstated upon invocation of the
resumption.
%
\subsection{Curried translation}
\label{sec:first-order-curried-cps}
We first consider a curried CPS translation that extends the
translation of Figure~\ref{fig:cps-cbv}. The extension to operations
and handlers is localised to the additional features because currying
conveniently lets us get away with a shift in interpretation: rather
than accepting a single continuation, translated computation terms now
accept an arbitrary even number of arguments representing the stack of
pure and effect continuations. Thus, we can conservatively extend the
translation in Figure~\ref{fig:cps-cbv} to cover $\HCalc$, where we
imagine there being some number of extra continuation arguments that
have been $\eta$-reduced. The translation of operations and handlers
is as follows.
%
\begin{equations}
\cps{-} &:& \CompCat \to \UCompCat\\
\cps{\Do\;\ell\;V} &\defas& \lambda k.\lambda h.h~\Record{\ell,\Record{\cps{V}, \lambda x.k~x~h}} \\
\cps{\Handle \; M \; \With \; H} &\defas& \cps{M}~\cps{\hret}~\cps{\hops} \medskip\\
\cps{-} &:& \HandlerCat \to \UCompCat\\
\cps{\{ \Return \; x \mapsto N \}} &\defas& \lambda x . \lambda h . \cps{N} \\
\cps{\{ \ell~p~r \mapsto N_\ell \}_{\ell \in \mathcal{L}}}
&\defas&
\lambda \Record{z,\Record{p,r}}. \Case~z~
\{ (\ell \mapsto \cps{N_\ell})_{\ell \in \mathcal{L}}; y \mapsto \hforward(y,p,r) \} \\
\hforward(y,p,r) &\defas& \lambda k. \lambda h. h\,\Record{y,\Record{p, \lambda x.\,r\,x\,k\,h}}
\end{equations}
%
The translation of $\Do\;\ell\;V$ abstracts over the current pure
($k$) and effect ($h$) continuations, passing an encoding of the
operation into the latter. The operation is encoded as a triple
consisting of the name $\ell$, parameter $\cps{V}$, and resumption
$\lambda x.k~x~h$, which passes the same effect continuation $h$ to
ensure deep handler semantics.
The translation of $\Handle~M~\With~H$ invokes the translation of $M$
with new pure and effect continuations for the return and operation
clauses of $H$.
%
The translation of a return clause is a term which garbage collects
the current effect continuation $h$.
%
The translation of a set of operation clauses is a function which
dispatches on encoded operations, and in the default case forwards to
an outer handler.
%
In the forwarding case, the resumption is extended by the parent
continuation pair to ensure that an eventual invocation of the
resumption reinstates the handler stack.
The translation of complete programs feeds the translated term the
identity pure continuation (which discards its handler argument), and
an effect continuation that is never intended to be called.
%
\begin{equations}
\pcps{-} &:& \CompCat \to \UCompCat\\
\pcps{M} &\defas& \cps{M}~(\lambda x.\lambda h.x)~(\lambda \Record{z,\_}.\Absurd~z) \\
\end{equations}
%
Conceptually, this translation encloses the translated program in a
top-level handler with an empty collection of operation clauses and an
identity return clause.
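To convey the operational flavour of the curried translation, the
following OCaml sketch (hypothetical and heavily simplified: it is
monomorphic, supports a single operation $\Ask$, and elides
forwarding, so that no recursive types are needed) implements the
translations of $\Do$, handlers, and top-level programs.
%
\begin{verbatim}
(* A pure continuation also receives the current effect continuation;
   an effect continuation receives an encoded operation carrying its
   payload and resumption. *)
type ans = int
type op = Ask of unit * (int -> ans)       (* payload, resumption *)
type eff_k = op -> ans
type 'a pure_k = 'a -> eff_k -> ans
type 'a comp = 'a pure_k -> eff_k -> ans

(* [[do Ask ()]] = fun k h -> h (Ask ((), fun x -> k x h)): the
   resumption reinstalls the same h, giving deep handler semantics. *)
let ask : int comp = fun k h -> h (Ask ((), fun x -> k x h))

(* [[handle M with {return x -> return x; Ask p r -> r 42}]], with
   the outer continuation pair (k, h) captured in the clauses. *)
let handle_reader (m : int comp) : int comp =
  fun k h ->
    m (fun x _ -> k x h)                   (* return clause *)
      (function Ask ((), r) -> r 42)       (* operation clause *)

(* Top level: identity pure continuation, unreachable effect one. *)
let run (m : int comp) : ans =
  m (fun x _ -> x) (fun _ -> failwith "absurd")

let () = assert (run (handle_reader ask) = 42)
\end{verbatim}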
A pleasing property of this particular CPS translation is that it is a
conservative extension of the CPS translation for $\BCalc$. Alas, this
translation also suffers from two displeasing properties, which make it
unviable in practice.
\begin{enumerate}
\item The image of the translation is not \emph{properly
  tail-recursive}~\citep{Danvy06,DanvyF92,Steele78}, meaning that not
  every function application occurs in tail position in the image, and
thus the image is not stackless. Consequently, the translation
cannot readily be used as the basis for an implementation. This
deficiency is essentially due to the curried representation of the
continuation stack.
\item The image of the translation yields static administrative
redexes, i.e. redexes that could be reduced statically. This is a
classic problem with CPS translations. This problem can be dealt
with by introducing a second pass to clean up the
  image~\cite{Plotkin75}. By clever means the clean-up pass and the
  translation pass can be fused together to make a one-pass
  translation~\cite{DanvyF92,DanvyN03}.
\end{enumerate}
The following minimal example readily illustrates both issues.
%
\begin{align*}
\pcps{\Return\;\Record{}}
= & (\lambda k.k\,\Record{})\,(\lambda x.\lambda h.x)\,(\lambda \Record{z,\_}.\Absurd\,z) \\
\reducesto& ((\lambda x.\lambda h.x)\,\Record{})\,(\lambda \Record{z,\_}.\Absurd\,z) \numberthis\label{eq:cps-admin-reduct-1}\\
\reducesto& (\lambda h.\Record{})\,(\lambda \Record{z,\_}.\Absurd\,z) \numberthis\label{eq:cps-admin-reduct-2}\\
\reducesto& \Record{}
\end{align*}
%
The second and third reductions simulate handling $\Return\;\Record{}$
at the top level. The second reduction partially applies the curried
function term $\lambda x.\lambda h.x$ to $\Record{}$, which must
return a value such that the third reduction can be
applied. Consequently, evaluation is not tail-recursive.
%
The lack of tail-recursion is also apparent in our relaxation of
fine-grain call-by-value in Figure~\ref{fig:cps-cbv-target} as the
function position of an application can be a computation.
%
In Section~\ref{sec:first-order-uncurried-cps} we will refine this
translation to be properly tail-recursive.
%
As for administrative redexes, observe that the first reduction is
administrative. It is an artefact introduced by the translation, and
thus it has nothing to do with the dynamic semantics of the original
term. We can eliminate such redexes statically. We will address this
issue in Section~\ref{sec:higher-order-uncurried-deep-handlers-cps}.
Nevertheless, we can show that the image of this CPS translation
simulates the preimage. Due to the presence of administrative
reductions, the simulation is not on the nose, but instead up to
congruence.
%
For reduction in the untyped target calculus we write
$\reducesto_{\textrm{cong}}$ for the smallest relation containing
$\reducesto$ that is closed under the term formation constructs.
%
\begin{theorem}[Simulation]
\label{thm:fo-simulation}
If $M \reducesto N$ then $\pcps{M} \reducesto_{\textrm{cong}}^+
\pcps{N}$.
\end{theorem}
\begin{proof}
The result follows by composing a call-by-value variant of
\citeauthor{ForsterKLP19}'s translation from effect handlers to
delimited continuations~\citeyearpar{ForsterKLP19} with
\citeauthor{MaterzokB12}'s CPS translation for delimited
continuations~\citeyearpar{MaterzokB12}.
\end{proof}
% \paragraph*{Remark}
% We originally derived this curried CPS translation for effect handlers
% by composing \citeauthor{ForsterKLP17}'s translation from effect
% handlers to delimited continuations~\citeyearpar{ForsterKLP17} with
% \citeauthor{MaterzokB12}'s CPS translation for delimited
% continuations~\citeyearpar{MaterzokB12}.
\subsection{Uncurried translation}
\label{sec:first-order-uncurried-cps}
%
%
\begin{figure}
\flushleft
\textbf{Syntax}
\begin{syntax}
\slab{Computations} &M,N \in \UCompCat &::= & \cdots \mid \XCancel{M\,W} \mid V\,W \mid U\,V\,W \smallskip \\
\XCancel{\slab{Evaluation contexts}} &\XCancel{\EC \in \UEvalCat} &::= & \XCancel{[~] \mid \EC\;W} \\
\end{syntax}
\textbf{Reductions}
\begin{reductions}
\usemlab{App_1} & (\lambda x . M) V &\reducesto& M[V/x] \\
\usemlab{App_2} & (\lambda x . \lambda y. \, M) V\, W &\reducesto& M[V/x,W/y] \\
\XCancel{\usemlab{Lift}} & \XCancel{\EC[M]} &\reducesto& \XCancel{\EC[N], \hfill \text{if } M \reducesto N}
\end{reductions}
\caption{Adjustments to the syntax and semantics of $\UCalc$.}
\label{fig:refined-cps-cbv-target}
\end{figure}
%
In this section we will refine the CPS translation for deep handlers
to make it properly tail-recursive. As remarked in the previous
section, the lack of tail-recursion is apparent in the syntax of the
target calculus $\UCalc$ as it permits an arbitrary computation term
in the function position of an application term.
%
As a first step we may restrict the syntax of the target calculus such
that the term in function position must be a value. With this
restriction the syntax of $\UCalc$ implements the property that any
term constructor features at most one computation term, and this
computation term always appears in tail position. This restriction
suffices to ensure that every function application will be in tail
position.
%
Figure~\ref{fig:refined-cps-cbv-target} contains the adjustments to
syntax and semantics of $\UCalc$. The target calculus now supports
both unary and binary application forms. As we shall see shortly,
binary application turns out to be convenient when we enrich the notion
of continuation. Both application forms are comprised only of value
terms. As a result the dynamic semantics of $\UCalc$ no longer makes
use of evaluation contexts. The reduction rule $\usemlab{App_1}$
applies to unary application and it is the same as the
$\usemlab{App}$-rule in Figure~\ref{fig:cps-cbv-target}. The new
$\usemlab{App_2}$-rule applies to binary application: it performs a
simultaneous substitution of the arguments $V$ and $W$ for the
parameters $x$ and $y$, respectively, in the function body $M$.
%
These changes to $\UCalc$ immediately invalidate the curried
translation from the previous section as the image of the translation
is no longer well-formed.
%
The crux of the problem is that the curried interpretation of
continuations causes the CPS translation to produce `large'
application terms, e.g. the translation rule for effect forwarding
produces a three-argument application term.
%
To rectify this problem we can adapt the technique of
\citet{MaterzokB12} to uncurry our CPS translation. Uncurrying
necessitates a change of representation for continuations: a
continuation is now an alternating list of pure continuation functions
and effect continuation functions. Thus, we move to an explicit
representation of the runtime handler stack.
%
The change of continuation representation means the CPS translation
for effect handlers is no longer a conservative extension. The
translation is adjusted as follows to account for the new
representation.
%
\begin{equations}
\cps{-} &:& \CompCat \to \UCompCat\\
\cps{\Return~V} &\defas& \lambda (k \cons ks).k\,\cps{V}\,ks \\
\cps{\Let~x \revto M~\In~N} &\defas& \lambda (k \cons ks).\cps{M}((\lambda x.\lambda ks'.\cps{N}(k \cons ks')) \cons ks)
\smallskip \\
\cps{\Do\;\ell\;V} &\defas& \lambda (k \cons h \cons ks).h\,\Record{\ell,\Record{\cps{V}, \lambda x.\lambda ks'.k\,x\,(h \cons ks')}}\,ks
\smallskip \\
\cps{\Handle \; M \; \With \; H} &\defas& \lambda ks . \cps{M} (\cps{\hret} \cons \cps{\hops} \cons ks) \medskip\\
\cps{-} &:& \HandlerCat \to \UCompCat\\
\cps{\{\Return \; x \mapsto N\}} &\defas& \lambda x.\lambda ks.\Let\; (h \cons ks') = ks \;\In\; \cps{N}\,ks'
\\
\cps{\{\ell \; p \; r \mapsto N_\ell\}_{\ell \in \mathcal{L}}}
&\defas&
\bl
\lambda \Record{z,\Record{p,r}}. \lambda ks. \Case \; z \;
\{( \bl\ell \mapsto \cps{N_\ell}\,ks)_{\ell \in \mathcal{L}};\,\\
y \mapsto \hforward((y,p,r),ks) \}\el \\
\el \\
\hforward((y,p,r),ks) &\defas& \bl
\Let\; (k' \cons h' \cons ks') = ks \;\In\; \\
h'\,\Record{y, \Record{p, \lambda x.\lambda ks''.\,r\,x\,(k' \cons h' \cons ks'')}}\,ks'\\
\el \medskip\\
\pcps{-} &:& \CompCat \to \UCompCat\\
\pcps{M} &\defas& \cps{M}~((\lambda x.\lambda ks.x) \cons (\lambda \Record{z,\Record{p,r}}. \lambda ks.\,\Absurd~z) \cons \nil)
\end{equations}
%
The other cases are as in the original CPS translation in
Figure~\ref{fig:cps-cbv}.
%
Since we now use a list representation for the stacks of
continuations, we have had to modify the translations of all the
constructs that manipulate continuations. For $\Return$ and $\Let$, we
extract the top continuation $k$ and manipulate it analogously to the
original translation in Figure~\ref{fig:cps-cbv}. For $\Do$, we
extract the top pure continuation $k$ and effect continuation $h$ and
invoke $h$ in the same way as the curried translation, except that we
explicitly maintain the stack $ks$ of additional continuations. The
translation of $\Handle$, however, pushes a continuation pair onto the
stack instead of supplying them as arguments. Handling of operations
is the same as before, except for explicit passing of the
$ks$. Forwarding now pattern matches on the stack to extract the next
continuation pair, rather than accepting them as arguments.
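The following OCaml sketch (hypothetical; monomorphised, with string
operation labels and forwarding elided) illustrates the uncurried
regime: the continuation is an explicit alternating stack of pure and
effect continuation functions, and every application in the image is
a tail call.
%
\begin{verbatim}
type v = Unit | Int of int                 (* simplified values *)
type frame =
  | K of (v -> frame list -> v)            (* pure continuation *)
  | H of (op -> frame list -> v)           (* effect continuation *)
and op = { label : string
         ; payload : v
         ; resume : v -> frame list -> v }

(* [[return V]] = fun (k :: ks) -> k [[V]] ks *)
let ret x = function K k :: ks -> k x ks | _ -> assert false

(* [[do l V]] = fun (k :: h :: ks) ->
     h <l, <[[V]], fun x ks' -> k x (h :: ks')>> ks *)
let do_op label payload = function
  | K k :: (H h as hf) :: ks ->
      h { label; payload; resume = (fun x ks' -> k x (hf :: ks')) } ks
  | _ -> assert false

(* [[handle M with H]] = fun ks -> [[M]] (h_ret :: h_ops :: ks) *)
let handle m h_ret h_ops ks = m (K h_ret :: H h_ops :: ks)

(* Top level: identity pure continuation, unreachable effect one. *)
let run m = m [ K (fun x _ -> x); H (fun _ _ -> failwith "absurd") ]

(* handle (do Ask ()) with {return x -> x; Ask _ r -> r 42} *)
let reader_ops o ks = match o.label with
  | "Ask" -> o.resume (Int 42) ks
  | _     -> failwith "forwarding elided in this sketch"
let reader_ret x = function
  | H _ :: ks -> ret x ks   (* pop this handler's effect frame *)
  | _ -> assert false

let () =
  assert (run (handle (do_op "Ask" Unit) reader_ret reader_ops) = Int 42)
\end{verbatim}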
%
% Proper tail recursion coincides with a refinement of the target
% syntax. Now applications are either of the form $V\,W$ or of the form
% $U\,V\,W$. We could also add a rule for applying a two argument lambda
% abstraction to two arguments at once and eliminate the
% $\usemlab{Lift}$ rule, but we defer this until our higher order
% translation in Section~\ref{sec:higher-order-uncurried-cps}.
Let us revisit the example from
Section~\ref{sec:first-order-curried-cps} to see first hand that our
refined translation makes the example properly tail-recursive.
%
\begin{equations}
\pcps{\Return\;\Record{}}
&= & (\lambda (k \cons ks).k\,\Record{}\,ks)\,((\lambda x.\lambda ks.x) \cons (\lambda \Record{z, \_}.\lambda ks.\Absurd\,z) \cons \nil) \\
&\reducesto& (\lambda x.\lambda ks.x)\,\Record{}\,((\lambda \Record{z,\_}.\lambda ks.\Absurd\,z) \cons \nil)\\
&\reducesto& \Record{}
\end{equations}
%
The reduction sequence in the image of this uncurried translation has
one fewer step (disregarding the administrative steps induced by
pattern matching) than in the image of the curried translation. The
`missing' step is precisely the reduction marked
\eqref{eq:cps-admin-reduct-2}, which was a partial application of the
initial pure continuation function that was not in tail
position. Note, however, that the first reduction (corresponding to
\eqref{eq:cps-admin-reduct-1}) remains administrative; this reduction
is entirely static, and as such it can be dealt with as part of the
translation.
%
\paragraph{Administrative redexes}
We can determine whether a redex in the image is administrative by
checking whether it corresponds to a redex in the preimage. If
there is no corresponding redex, then the redex is said to be
administrative. We can further classify an administrative redex as to
whether it is \emph{static} or \emph{dynamic}.
A static administrative redex is a by-product of the translation that
does not contribute to the implementation of the dynamic behaviour of
the preimage.
%
The separation between value and computation terms in fine-grain
call-by-value makes it evident where static administrative redexes can
arise. They arise from computation terms, which can clearly be seen
from the translation where each computation term induces a
$\lambda$-abstraction. Each induced $\lambda$-abstraction must
necessarily be eliminated by a unary application. These unary
applications are administrative; they do not correspond to reductions
in the preimage. The applications that do correspond to reductions in
the preimage are the binary (continuation) applications.
A dynamic administrative redex is a genuine implementation detail that
supports some part of the dynamic behaviour of the preimage. An
example of such a detail is the implementation of effect
forwarding. In $\HCalc$ effect forwarding involves no auxiliary
reductions: any operation invocation is instantaneously dispatched to
a suitable handler (if one exists).
%
The translation presented above realises effect forwarding by
explicitly applying the next effect continuation. This application is
an example of a dynamic administrative reduction. Not every dynamic
administrative reduction is necessary, though. For instance, the
implementation of resumptions as a composition of
$\lambda$-abstractions gives rise to administrative reductions upon
invocation. As we shall see in
Section~\ref{sec:first-order-explicit-resump} administrative
reductions due to resumption invocation can be dealt with by choosing
a more clever implementation of resumptions.
\subsection{Resumptions as explicit reversed stacks}
\label{sec:first-order-explicit-resump}
%
% \dhil{Show an example involving administrative redexes produced by resumptions}
%
Thus far resumptions have been represented as functions, and
forwarding has been implemented using function composition. The
composition of resumptions gives rise to unnecessary dynamic
administrative redexes, as function composition necessitates the
introduction of an additional lambda abstraction.
%
As an illustration of how and where these administrative redexes arise
let us consider an example with an operation $\Ask : \Unit \opto \Int$
and two handlers $H_\Reader$ and $H_\Other$ such that
$H_\Reader^\Ask = \{\OpCase{\Ask}{\Unit}{r} \mapsto r~42\}$ whilst
$\Ask \not\in \dom(H_\Other)$. We denote the top-level continuation by
$ks_\top$.
%
% \[
% \bl
% \Reader \defas \{\Ask : \Unit \opto \Int\} \smallskip\\
% H_{\Reader} : \alpha \eff \Reader \Harrow \alpha, \{ \OpCase{\Ask}{\Unit}{r} \mapsto r~42 \} \in H_{\Reader}\\
% H_{\Other} : \alpha \eff \varepsilon \Harrow \alpha, \Ask \not\in \dom(H_{\Reader})
% \el
% \]
%
\begin{derivation}
&\pcps{\Handle\; (\Handle\; \Do\;\Ask\,\Unit\;\With\;H_{\Other})\;\With\;H_{\Reader}}\\
% =& \reason{definition of $\cps{-}$}\\
% % &\lambda ks.\cps{\Handle\; \Do\;\Ask\,\Unit\;\With\;H_{\Other}}(\cps{H_{\Reader}^\mret} \cons H_{\Reader}^\mops \cons ks)\\
% % =& \reason{}\\
% &(\lambda ks.(\lambda ks'.\cps{\Do\;\Ask\,\Unit}(\cps{H_{\Other}^\mret} \cons \cps{H_{\Other}^\mops} \cons ks'))(\cps{H_{\Reader}^\mret} \cons H_{\Reader}^\mops \cons ks))\,ks_\top\\
=& \reason{definition of $\pcps{-}$}\\
&(\lambda ks.
\bl
(\lambda ks'.
\bl
(\lambda (k \cons h \cons ks'').h\,\Record{\Ask,\Record{\Unit,\lambda x.\lambda ks'''.k~x~(h \cons ks''')}}\,ks'')\\
(\cps{H_{\Other}^\mret} \cons \cps{H_{\Other}^\mops} \cons ks'))
\el\\
(\cps{H_{\Reader}^\mret} \cons H_{\Reader}^\mops \cons ks))\,ks_\top
\el\\
% \reducesto^\ast& \reason{apply continuation}\\
% & (\lambda (k \cons h \cons ks'').h\,\Record{\Ask,\Record{\Unit,\lambda x.\lambda ks'''.k~x~(h \cons ks''')}})(\cps{H_{\Other}^\mret} \cons \cps{H_{\Other}^\mops} \cons \cps{H_{\Reader}^\mret} \cons H_{\Reader}^\mops \cons ks_\top)\\
\reducesto^\ast & \reason{multiple applications of \usemlab{App}, activation of $H_\Other$}\\
& \cps{H_{\Other}^\mops}\,\Record{\Ask,\Record{\Unit,\lambda x.\lambda ks'''.\cps{H_{\Other}^\mret}~x~(\cps{H_{\Other}^\mops} \cons ks''')}}\,(\cps{H_{\Reader}^\mret} \cons H_{\Reader}^\mops \cons ks_\top)\\
\reducesto^\ast & \reason{effect forwarding to $H_\Reader$}\\
& \bl
H_{\Reader}^\mops\,\Record{\Ask,\Record{\Unit,\lambda x.\lambda ks''. r_\dec{admin}~x~(H_{\Reader}^\mret \cons H_{\Reader}^\mops \cons ks'')}}\,ks_\top\\
\where~r_\dec{admin} \defas \lambda x.\lambda
ks'''.\cps{H_{\Other}^\mret}~x~(\cps{H_{\Other}^\mops} \cons ks''')
\el\\
\reducesto^\ast & \reason{invocation of the administrative resumption} \\
& r_\dec{admin}~42~(H_{\Reader}^\mret \cons H_{\Reader}^\mops \cons ks_\top)\\
\reducesto^\ast & \reason{invocation of the resumption of the operation invocation site}\\
& \cps{H_{\Other}^\mret}~42~(\cps{H_{\Other}^\mops} \cons
H_{\Reader}^\mret \cons H_{\Reader}^\mops \cons ks_\top)
\end{derivation}
%
Effect forwarding introduces the administrative abstraction
$r_{\dec{admin}}$, whose sole purpose is to forward the interpretation
of the operation to the operation invocation site. In a certain sense
$r_{\dec{admin}}$ is a sort of identity frame. The insertion of
identities ought always to trigger alarm bells, as an identity
computation is typically extraneous.
%
The number of identity frames generated scales linearly with the
number of handlers the operation needs to pass through before reaching
a suitable handler.
We can avoid generating these administrative resumption redexes by
applying a variation of the technique that we used in the previous
section to uncurry the curried CPS translation.
%
Rather than representing resumptions as functions, we move to an
explicit representation of resumptions as \emph{reversed} stacks of
pure and effect continuations. By choosing to reverse the order of
pure and effect continuations, we can construct resumptions
efficiently using regular cons-lists. We augment the syntax and
semantics of $\UCalc$ with a computation term
$\Let\;r=\Res\,V\;\In\;N$ which allows us to convert these reversed
stacks to actual functions on demand.
%
\begin{reductions}
\usemlab{Res}
& \Let\;r=\Res\,(V_n \cons \dots \cons V_1 \cons \nil)\;\In\;N
& \reducesto
& N[\lambda x\,k.V_1\,x\,(V_2 \cons \dots \cons V_n \cons k)/r]
\end{reductions}
%
This reduction rule reverses the stack, extracts the top continuation
$V_1$, and prepends the remainder onto the current stack $k$. The
stack representing a resumption and the remaining stack $k$ are
reminiscent of the zipper data structure for representing cursors in
lists~\cite{Huet97}. Thus we may think of resumptions as representing
pointers into the stack of handlers.
%
The translations of $\Do$, handling, and forwarding need to be
modified to account for the change in representation of
resumptions.
%
\begin{equations}
\cps{-} &:& \CompCat \to \UCompCat\\
\cps{\Do\;\ell\;V}
&\defas& \lambda k \cons h \cons ks.\,h\, \Record{\ell,\Record{\cps{V}, h \cons k \cons \nil}}\, ks
\medskip\\
%
\cps{-} &:& \HandlerCat \to \UCompCat\\
\cps{\{(\ell \; p \; r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}}
&\defas& \bl
\lambda \Record{z,\Record{p,rs}}.\lambda ks.\Case \;z\; \{
\bl
(\ell \mapsto \Let\;r=\Res\;rs \;\In\; \cps{N_{\ell}}\, ks)_{\ell \in \mathcal{L}};\,\\
y \mapsto \hforward((y,p,rs),ks) \} \\
\el \\
\el \\
\hforward((y,p,rs),ks)
&\defas&\Let\; (k' \cons h' \cons ks') = ks \;\In\; h'\,\Record{y,\Record{p,h' \cons k' \cons rs}} \,ks'
\end{equations}
%
The translation of $\Do$ constructs an initial resumption stack,
operation clauses extract and convert the current resumption stack
into a function using the $\Res$ construct, and $\hforward$ augments
the current resumption stack with the current continuation pair.
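The following OCaml sketch (hypothetical; frames are bare continuation
functions) shows that under this representation converting a
resumption into a function costs a single list reversal, performed
only when the resumption is actually invoked, while forwarding is a
constant-time cons that materialises no closure.
%
\begin{verbatim}
(* Values are fixed to int to keep the sketch minimal. *)
type v = int
type frame = F of (v -> frame list -> v)   (* a continuation function *)

(* Res (Vn :: ... :: V1 :: []) yields
   fun x k -> V1 x (V2 :: ... :: Vn :: k) *)
let res (rs : frame list) : v -> frame list -> v =
  match List.rev rs with
  | F k1 :: rest -> fun x ks -> k1 x (rest @ ks)
  | [] -> invalid_arg "empty resumption"

(* Forwarding extends the reversed stack with the next continuation
   pair; no administrative lambda is built. *)
let forward (h' : frame) (k' : frame) (rs : frame list) : frame list =
  h' :: k' :: rs
\end{verbatim}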
%
\subsection{Higher-order translation for deep effect handlers}
\label{sec:higher-order-uncurried-deep-handlers-cps}
%
\begin{figure}
%
\textbf{Values}
%
\begin{displaymath}
\begin{eqs}
\cps{-} &:& \ValCat \to \UValCat\\
\cps{x} &\defas& x \\
\cps{\lambda x.M} &\defas& \dlam x\,ks.\Let\;(k \dcons h \dcons ks') = ks \;\In\;\cps{M} \sapp (\reflect k \scons \reflect h \scons \reflect ks') \\
% \cps{\Rec\,g\,x.M} &\defas& \Rec\;f\,x\,ks.\cps{M} \sapp \reflect ks\\
\cps{\Lambda \alpha.M} &\defas& \dlam \Unit\,ks.\Let\;(k \dcons h \dcons ks') = ks \;\In\;\cps{M} \sapp (\reflect k \scons \reflect h \scons \reflect ks') \\
\cps{\Record{}} &\defas& \Record{} \\
\cps{\Record{\ell = V; W}} &\defas& \Record{\ell = \cps{V}; \cps{W}} \\
\cps{\ell~V} &\defas& \ell~\cps{V} \\
\end{eqs}
\end{displaymath}
%
\textbf{Computations}
%
\begin{equations}
\cps{-} &:& \CompCat \to \SValCat^\ast \to \UCompCat\\
\cps{V\,W} &\defas& \slam \sks.\cps{V} \dapp \cps{W} \dapp \reify \sks \\
\cps{V\,T} &\defas& \slam \sks.\cps{V} \dapp \Record{} \dapp \reify \sks \\
\cps{\Let\; \Record{\ell=x;y} = V \; \In \; N} &\defas& \slam \sks.\Let\; \Record{\ell=x;y} = \cps{V} \; \In \; \cps{N} \sapp \sks \\
\cps{\Case~V~\{\ell~x \mapsto M; y \mapsto N\}} &\defas&
\slam \sks.\Case~\cps{V}~\{\ell~x \mapsto \cps{M} \sapp \sks; y \mapsto \cps{N} \sapp \sks\} \\
\cps{\Absurd~V} &\defas& \slam \sks.\Absurd~\cps{V} \\
\cps{\Return~V} &\defas& \slam \sk \scons \sks.\reify \sk \dapp \cps{V} \dapp \reify \sks \\
\cps{\Let~x \revto M~\In~N} &\defas& \slam \sk \scons \sks.\cps{M} \sapp
(\reflect (\dlam x\,\dhk.
\ba[t]{@{}l}
\Let\;(h \dcons \dhk') = \dhk\;\In\\
\cps{N} \sapp (\sk \scons \reflect h \scons \reflect \dhk')) \scons \sks)
\ea\\
\cps{\Do\;\ell\;V}
&\defas& \slam \sk \scons \sh \scons \sks.\reify \sh \dapp \Record{\ell,\Record{\cps{V}, \reify \sh \dcons \reify \sk \dcons \dnil}} \dapp \reify \sks\\
\cps{\Handle \; M \; \With \; H} &\defas& \slam \sks . \cps{M} \sapp (\reflect \cps{\hret} \scons \reflect \cps{\hops} \scons \sks)
%
\end{equations}
%
\textbf{Handler definitions}
%
\begin{equations}
\cps{-} &:& \HandlerCat \to \UValCat\\
\cps{\{\Return \; x \mapsto N\}} &\defas& \dlam x\, \dhk.
\ba[t]{@{~}l}
\Let\; (h \dcons \dk \dcons h' \dcons \dhk') = \dhk \;\In\\
\cps{N} \sapp (\reflect \dk \scons \reflect h' \scons \reflect \dhk')
\ea
\\
\cps{\{(\ell \; p \; r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}}
&\defas& \bl
\dlam \Record{z,\Record{p,\dhkr}}\,\dhk.\Case \;z\; \{
\ba[t]{@{}l@{}c@{~}l}
&(\ell \mapsto&
\ba[t]{@{}l}
\Let\;r=\Res\;\dhkr \;\In\\
\Let\;(\dk \dcons h \dcons \dhk') = \dhk \;\In\\
\cps{N_{\ell}} \sapp (\reflect \dk \scons \reflect h \scons \reflect \dhk'))_{\ell \in \mathcal{L}};
\ea\\
&y \mapsto& \hforward((y,p,\dhkr),\dhk) \} \\
\ea \\
\el \\
\hforward((y,p,\dhkr),\dhk)
&\defas&\Let\; (\dk' \dcons h' \dcons \dhk') = \dhk \;\In\; h' \dapp \Record{y,\Record{p,h' \dcons \dk' \dcons \dhkr}} \dapp \dhk'
\end{equations}
%
\textbf{Top level program}
%
\begin{equations}
\pcps{-} &:& \CompCat \to \UCompCat\\
\pcps{M} &=& \cps{M} \sapp (\reflect (\dlam x\,\dhk.x) \scons \reflect (\dlam z\,\dhk.\Absurd~z) \scons \snil) \\
\end{equations}
\caption{Higher-order uncurried CPS translation of $\HCalc$.}
\label{fig:cps-higher-order-uncurried}
\end{figure}
%
In the previous sections, we have seen step-wise refinements of the
initial curried CPS translation for deep effect handlers
(Section~\ref{sec:first-order-curried-cps}) to be properly
tail-recursive (Section~\ref{sec:first-order-uncurried-cps}) and to
avoid yielding unnecessary dynamic administrative redexes for
resumptions (Section~\ref{sec:first-order-explicit-resump}).
%
There is still one outstanding issue, namely, that the translation
yields static administrative redexes. In this section we will further
refine the CPS translation to eliminate all static administrative
redexes at translation time.
%
Specifically, the translation will be adapted to a higher-order
one-pass CPS translation~\citep{DanvyF90} that partially evaluates
administrative redexes at translation time.
%
Following \citet{DanvyN03}, I will use a two-level lambda calculus
notation to distinguish between \emph{static} lambda abstraction and
application in the meta language and \emph{dynamic} lambda abstraction
and application in the target language. To disambiguate syntax
constructors in the respective calculi I will mark static constructors
with a {\color{blue}$\overline{\text{blue overline}}$}, whilst dynamic
constructors are marked with a
{\color{red}$\underline{\text{red underline}}$}. The principal idea is
that redexes marked as static are reduced as part of the translation,
whereas those marked as dynamic are reduced at runtime. To facilitate
this notation I will write application explicitly using an infix
``at'' symbol ($@$) in both calculi.
\paragraph{Static terms}
%
As in the dynamic target language, continuations are represented as
alternating lists of pure continuation functions and effect
continuation functions. To ease notation I will make use of pattern
matching notation. The static meta language is generated by the
following productions.
%
\begin{syntax}
\slab{Static\text{ }patterns} &\sP \in \SPatCat &::=& \sks \mid \sk \scons \sP\\
\slab{Static\text{ }values} & \sV, \sW \in \SValCat &::=& \reflect V \mid \sV \scons \sW \mid \slam \sP. \sM\\
\slab{Static\text{ }computations} & \sM \in \SCompCat &::=& \sV \mid \sV \sapp \sW \mid \sV \dapp V \dapp W
\end{syntax}
%
The patterns comprise only static list deconstruction. We let $\sP$
range over static patterns.
%
The static values comprise reflected dynamic values, static lists, and
static lambda abstractions. We let $\sV, \sW$ range over meta language
values; by convention we shall use variables $\sk$ to denote
statically known pure continuations, $\sh$ to denote statically known
effect continuations, and $\sks$ to denote statically known
continuations.
%
I shall use $\sM$ to range over static computations, which comprise
static values, static application and binary dynamic application of a
static value to two dynamic values.
%
Static computations are subject to the following equational axioms.
%
\begin{equations}
(\slam \sks. \sM) \sapp \sV &\defas& \sM[\sV/\sks]\\
(\slam \sk \scons \sks. \sM) \sapp (\sV \scons \sW) &\defas& (\slam \sks. \sM[\sV/\sk]) \sapp \sW\\
\end{equations}
%
The first equation is static $\beta$-equivalence: it states that
applying a static lambda abstraction with binder $\sks$ and body $\sM$
to a static value $\sV$ is equal to substituting $\sV$ for $\sks$ in
$\sM$. The second equation provides a means for applying a static
lambda abstraction to a static list component-wise.
%
Reflected static values are reified as dynamic language values
$\reify \sV$ by induction on their structure.
%
\[
\ba{@{}l@{\qquad}c}
\reify \reflect V \defas V
&\reify (\sV \scons \sW) \defas \reify \sV \dcons \reify \sW
\ea
\]
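In an implementation of the translator the static layer simply lives
in the host language. The following OCaml sketch (hypothetical; the
residual term type is a toy) represents static continuations as
host-level data consumed at translation time, so that the static
$\beta$-rule above becomes ordinary pattern matching in the
translator, and $\reify$ emits residual list syntax.
%
\begin{verbatim}
(* Residual (dynamic) terms of the target calculus, toy rendition. *)
type term =
  | Var of string
  | App2 of term * term * term        (* binary application U V W *)
  | Cons of term * term               (* dynamic cons *)
  | Nil                               (* dynamic nil *)

(* Static values: reflected residual terms and static cons-cells. *)
type sval = Reflect of term | SCons of sval * sval

(* reify: turn a static value into a residual term. *)
let rec reify : sval -> term = function
  | Reflect t -> t
  | SCons (v, w) -> Cons (reify v, reify w)

(* The Return case of the translation: matching on SCons performs the
   static application (static-lam k :: ks. ...) @ ks at translation
   time, leaving no static redex in the output. *)
let cps_return (v : term) (ks : sval) : term =
  match ks with
  | SCons (k, ks') -> App2 (reify k, v, reify ks')
  | Reflect _ -> failwith "invariant: top of stack statically known"

let top =
  SCons (Reflect (Var "k_id"), SCons (Reflect (Var "h_abs"), Reflect Nil))
let _ = cps_return (Var "x") top
(* = App2 (Var "k_id", Var "x", Cons (Var "h_abs", Nil)) *)
\end{verbatim}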
%
\paragraph{Higher-order translation}
%
As we shall see, this translation manipulates the continuation in
intricate ways; and since we maintain the interpretation of the
continuation as an alternating list of pure continuation functions and
effect continuation functions, it is useful to define the `parity' of a
continuation as follows:
%
a continuation is said to be \emph{odd} if the top element is an
effect continuation function, otherwise it is said to be \emph{even}.
%
The complete CPS translation is given in
Figure~\ref{fig:cps-higher-order-uncurried}. In essence, it is the
same as the refined first-order uncurried CPS translation, although
the notation is slightly more involved due to the separation of static
and dynamic parts.
As before, the translation comprises three translation functions, one
for each syntactic category: values, computations, and handler
definitions. Amongst the three functions, the translation function for
computations stands out, because it is the only one that operates on
static continuations. Its type signature,
$\cps{-} : \CompCat \to \SValCat^\ast \to \UCompCat$, signifies that
it is a binary function, taking a $\HCalc$-computation term as its
first argument and a static continuation (a list of static values) as
its second argument, and ultimately produces a $\UCalc$-computation
term. Thus the computation translation function is able to manipulate
the continuation. In fact, the translation is said to be higher-order
because the continuation parameter is higher-order: it is a list of
functions.
To ensure that static continuation manipulation is well-defined the
translation maintains the invariant that the statically known
continuation stack ($\sks$) always contains at least two continuation
functions, i.e. a complete continuation pair consisting of a pure
continuation function and an effect continuation function.
%
This invariant guarantees that all translations are uniform
regardless of whether they appear statically within the scope of a
handler, and this
also simplifies the correctness proof
(Theorem~\ref{thm:ho-simulation}).
%
Maintaining this invariant has a cosmetic effect on the presentation
of the translation. This effect manifests in any place where a
dynamically known continuation stack is passed in (as a continuation
parameter $\dhk$), as it must be deconstructed using a dynamic
language $\Let$ to expose the continuation structure and subsequently
reconstructed as a static value with reflected variable names.
The translation of $\lambda$-abstractions provides an example of this
deconstruction and reconstruction in action. The dynamic continuation
$\dhk$ is deconstructed to expose the next pure continuation
function $\dk$ and effect continuation $h$, and the remainder of the
continuation $\dhk'$; these names are immediately reflected and put
back together to form a static continuation that is provided to the
translation of the body computation $M$.
The only translation rule that consumes a complete reflected
continuation pair is the translation of $\Do$. The effect continuation
function, $\sh$, is dynamically applied to an operation package and
the reified remainder of the continuation $\sks$. As usual, the
operation package contains the payload and the resumption, which is
represented as a reversed continuation slice.
%
The only other translation rules that manipulate the continuation are
$\Return$ and $\Let$, which only consume the pure continuation
function $\sk$. For example, the translation of $\Return$ is a dynamic
application of $\sk$ to the translation of the value $V$ and the
remainder of the continuation $\sks$.
%
The shape of $\sks$ is odd, meaning that the top element is an effect
continuation function. Thus the pure continuation $\sk$ has to account
for this odd shape. Fortunately, the possible instantiations of the
pure continuation are few. We can derive all possible
instantiations systematically by using the operational semantics of
$\HCalc$. According to the operational semantics the continuation of a
$\Return$-computation is either the continuation of a
$\Let$-expression or a $\Return$-clause (a bare top-level
$\Return$-computation is handled by the $\pcps{-}$ translation).
%
The translations of $\Let$-expressions and $\Return$-clauses each
account for odd continuations. For example, the translation of $\Let$
consumes the current pure continuation function and generates a
replacement: a pure continuation function which expects an odd dynamic
continuation $\dhk$, which it deconstructs to expose the effect
continuation $h$; $h$ is then used along with the current pure
continuation function in the translation of $N$. The modified continuation is passed to the
translation of $M$.
%
To provide a flavour of how this continuation manipulation functions
in practice, consider the following example term.
%
\begin{derivation}
&\pcps{\Let\;x \revto \Return\;V\;\In\;N}\\
=& \reason{definition of $\pcps{-}$}\\
&\ba[t]{@{}l}(\slam \sk \scons \sks.\cps{\Return\;V} \sapp
(\reflect(\dlam x\,ks.
\ba[t]{@{}l}
\Let\;(h \dcons ks') = ks \;\In\\
\cps{N} \sapp (\sk \scons \reflect h \scons \reflect ks')) \scons \sks)
\ea\\
\sapp (\reflect (\dlam x\,ks.x) \scons \reflect (\dlam z\,ks.\Absurd~z) \scons \snil))
\ea\\
=& \reason{definition of $\cps{-}$}\\
&\ba[t]{@{}l}(\slam \sk \scons \sks.(\slam \sk \scons \sks. \reify \sk \dapp \cps{V} \dapp \reify \sks) \sapp
(\reflect(\dlam x\,ks.
\ba[t]{@{}l}
\Let\;(h \dcons ks') = ks \;\In\\
\cps{N} \sapp (\sk \scons \reflect h \scons \reflect ks')) \scons \sks)
\ea\\
\sapp (\reflect (\dlam x\,ks.x) \scons \reflect (\dlam z\,ks.\Absurd~z) \scons \snil))
\ea\\
=& \reason{static $\beta$-reduction}\\
&(\slam \sk \scons \sks. \reify \sk \dapp \cps{V} \dapp \reify \sks)
\sapp
(\reflect(\dlam x\,\dhk.
\ba[t]{@{}l}
\Let\;(h \dcons \dhk') = \dhk \;\In\\
\cps{N} \sapp
\ba[t]{@{}l}
(\reflect (\dlam x\,\dhk.x) \scons \reflect h \scons \reflect \dhk'))\\
~~\scons \reflect (\dlam z\,\dhk.\Absurd~z) \scons \snil))
\ea
\ea\\
=& \reason{static $\beta$-reduction}\\
&\ba[t]{@{}l@{~}l}
&(\dlam x\,\dhk.
\Let\;(h \dcons \dhk') = \dhk \;\In\;
\cps{N} \sapp
(\reflect (\dlam x\,\dhk.x) \scons \reflect h \scons \reflect \dhk'))\\
\dapp& \cps{V} \dapp ((\dlam z\,\dhk.\Absurd~z) \dcons \dnil)\\
\ea\\
\reducesto& \reason{\usemlab{App_2}}\\
&\Let\;(h \dcons \dhk') = (\dlam z\,\dhk.\Absurd~z) \dcons \dnil \;\In\;
\cps{N[V/x]} \sapp
(\reflect (\dlam x\,\dhk.x) \scons \reflect h \scons \reflect \dhk'))\\
\reducesto^+& \reason{dynamic pattern matching and substitution}\\
&\cps{N[V/x]} \sapp
(\reflect (\dlam x\,\dhk.x) \scons \reflect (\dlam z\,\dhk.\Absurd~z) \scons \reflect \dnil)
\end{derivation}
%
The translation of $\Return$ provides the generated dynamic pure
continuation function with the odd continuation
$((\dlam z\,ks.\Absurd~z) \dcons \dnil)$. After the \usemlab{App_2}
reduction, the pure continuation function deconstructs the odd
continuation in order to bind the current effect continuation function
to the name $h$, which would have been used during the translation of
$N$.
The translation of $\Handle$ applies the translation of $M$ to the
current continuation extended with the translation of the
$\Return$-clause, acting as a pure continuation function, and the
translation of operation-clauses, acting as an effect continuation
function.
%
The translation of a $\Return$-clause discards the effect continuation
$h$ and in addition exposes the next pure continuation $\dk$ and
effect continuation $h'$ which are reflected to form a static
continuation for the translation of $N$.
%
The translation of operation clauses unpacks the provided operation
package to perform a case-split on the operation label $z$. The branch
for $\ell$ deconstructs the continuation $\dhk$ in order to expose the
continuation structure. The forwarding branch also deconstructs the
continuation, but for a different purpose; it augments the resumption
$\dhkr$ with the next pure and effect continuation functions.
Let us revisit the example from
Section~\ref{sec:first-order-curried-cps} to see that the higher-order
translation eliminates the static redex at translation time.
%
\begin{equations}
\pcps{\Return\;\Record{}}
&=& (\slam \sk \scons \sks. \sk \dapp \Record{} \dapp \reify \sks) \sapp (\reflect (\dlam x\,\dhk.x) \scons \reflect (\dlam z\,\dhk.\Absurd\;z) \scons \snil)\\
&=& (\dlam x\,\dhk.x) \dapp \Record{} \dapp (\reflect (\dlam z\,\dhk.\Absurd\;z) \dcons \dnil)\\
&\reducesto& \Record{}
\end{equations}
%
In contrast with the previous translations, the reduction sequence in
the image of this translation contains only a single dynamic reduction
(disregarding the dynamic administrative reductions arising from
continuation construction and deconstruction); both
\eqref{eq:cps-admin-reduct-1} and \eqref{eq:cps-admin-reduct-2}
reductions have been eliminated as part of the translation.
The elimination of static redexes coincides with a refinement of the
target calculus: unary application is no longer a necessary
primitive. Every unary application is dealt with by the metalanguage,
i.e. all unary applications are static.
\paragraph{Implicit lazy continuation deconstruction}
%
An alternative to the explicit deconstruction of continuations is to
implicitly deconstruct continuations on demand when static pattern
matching fails. I took this approach in \citet{HillerstromLAS17}. On
the one hand, this approach leads to a slightly slicker presentation. On
the other hand, it complicates the proof of correctness as one must
account for static pattern matching failure.
%
A practical argument in favour of the explicit eager continuation
deconstruction is that it is more accessible from an implementation
point of view. No implementation details are hidden away in side
conditions.
%
Also, it is not clear that lazy deconstruction has any advantage over
eager deconstruction, as the translation must reify the continuation
when it transitions from computations to values and reflect the
continuation when it transitions from values to computations, in which
case static pattern matching would fail.
\subsubsection{Correctness}
\label{sec:higher-order-cps-deep-handlers-correctness}
We establish the correctness of the higher-order uncurried CPS
translation via a simulation result in the style of
Plotkin~\cite{Plotkin75} (Theorem~\ref{thm:ho-simulation}). However,
before we can state and prove this result, we first establish several auxiliary
lemmas describing how translated terms behave. First, the higher-order
CPS translation commutes with substitution.
%
\begin{lemma}[Substitution]\label{lem:ho-cps-subst}
%
The higher-order uncurried CPS translation commutes with
substitution in value terms
%
\[
\cps{W}[\cps{V}/x] = \cps{W[V/x]},
\]
%
and with substitution in computation terms
\[
(\cps{M} \sapp (\sk \scons \sh \scons \sks))[\cps{V}/x]
= \cps{M[V/x]} \sapp (\sk \scons \sh \scons \sks)[\cps{V}/x],
\]
%
and with substitution in handler definitions
%
\begin{equations}
\cps{\hret}[\cps{V}/x]
&=& \cps{\hret[V/x]},\\
\cps{\hops}[\cps{V}/x]
&=& \cps{\hops[V/x]}.
\end{equations}
\end{lemma}
%
\begin{proof}
By mutual induction on the structure of $W$, $M$, $\hret$, and
$\hops$.
\end{proof}
%
It follows as a corollary that top-level substitution is well-behaved.
%
\begin{corollary}[Top-level substitution]
\[
\pcps{M}[\cps{V}/x] = \pcps{M[V/x]}.
\]
\end{corollary}
%
\begin{proof}
Follows immediately by the definitions of $\pcps{-}$ and
Lemma~\ref{lem:ho-cps-subst}.
\end{proof}
%
In order to reason about the behaviour of the \semlab{Op} rule, which
is defined in terms of an evaluation context, we need to extend the
CPS translation to evaluation contexts.
%
\begin{equations}
\cps{-} &:& \EvalCat \to \SValCat\\
\cps{[~]} &\defas& \slam \sks.\sks \\
\cps{\Let\; x \revto \EC \;\In\; N} &\defas& \slam \sk \scons \sks.\cps{\EC} \sapp
(\reflect(\dlam x\,ks.
\ba[t]{@{}l}
\Let\;(h \dcons ks') = ks\;\In\;\\
\cps{N} \sapp (\sk \scons \reflect h \scons \reflect ks')) \scons \sks)
\ea\\
\cps{\Handle\; \EC \;\With\; H} &\defas& \slam \sks. \cps{\EC} \sapp (\reflect\cps{\hret} \scons \reflect\cps{\hops} \scons \sks)
\end{equations}
%
The following lemma is the characteristic property of the CPS
translation on evaluation contexts.
%
It provides a means for decomposing an evaluation context, such that
we can focus on the computation contained within the evaluation
context.
%
\begin{lemma}[Decomposition]
\label{lem:decomposition}
%
\begin{equations}
\cps{\EC[M]} \sapp (\sV \scons \sW) &=& \cps{M} \sapp (\cps{\EC} \sapp (\sV \scons \sW)) \\
\end{equations}
%
\end{lemma}
%
\begin{proof}
By structural induction on the evaluation context $\EC$.
\end{proof}
%
Even though we have eliminated the static administrative redexes, we
still need to account for the dynamic administrative redexes that
arise from pattern matching against a reified continuation. To
properly account for these administrative redexes it is convenient to
treat list pattern matching as a primitive in $\UCalc$; therefore we
introduce a new reduction rule $\usemlab{SplitList}$ in $\UCalc$.
%
\begin{reductions}
\usemlab{SplitList} & \Let\; (k \dcons ks) = V \dcons W \;\In\; M &\reducesto& M[V/k, W/ks] \\
\end{reductions}
%
Note that this rule is isomorphic to the \usemlab{Split} rule with lists
encoded as right-nested pairs, using unit to denote nil.
%
We write $\areducesto$ for the compatible closure of
\usemlab{SplitList}.
We also need to be able to reason about the computational content of
reflection after reification. By definition we have that
$\reify \reflect V = V$; the following lemma lets us reason about the
inverse composition.
%
\begin{lemma}[Reflect after reify]
\label{lem:reflect-after-reify}
%
Reflection after reification may give rise to dynamic administrative
reductions, i.e.
%
\[
\cps{M} \sapp (\sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)
\areducesto^\ast \cps{M} \sapp (\sV_1 \scons \dots \scons \sV_n \scons \sW)
\]
\end{lemma}
%
\begin{proof}
By induction on the structure of $M$.
\end{proof}
%
We next observe that the CPS translation simulates forwarding.
%
\begin{lemma}[Forwarding]
\label{lem:forwarding}
If $\ell \notin \dom(H_1)$ then
%
\[
\cps{\hops_1} \dapp \Record{\ell,\Record{U, V}} \dapp (V_2 \dcons \cps{\hops_2} \dcons W)
\reducesto^+
\cps{\hops_2} \dapp \Record{\ell,\Record{U, \cps{\hops_2} \dcons V_2 \dcons V}} \dapp W
\]
%
\end{lemma}
%
\begin{proof}
By direct calculation.
\end{proof}
%
Now we show that the translation simulates the \semlab{Op}
rule.
%
\begin{lemma}[Handling]
\label{lem:handle-op}
If $\ell \notin BL(\EC)$ and $\hell = \{\ell\,p\,r \mapsto N_\ell\}$ then
%
\[
\bl
\cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\reflect\cps{\hret} \scons \reflect\cps{\hops} \scons \sV)) \reducesto^+\areducesto^\ast \\
\quad
(\cps{N_\ell} \sapp \sV)[\cps{V}/p, (\lambda y\,ks.\cps{\Return\;y} \sapp (\cps{\EC} \sapp (\reflect\cps{\hret} \scons \reflect\cps{\hops} \scons \reflect ks)))/r]
\el
\]
%
\end{lemma}
%
\begin{proof}
Follows from Lemmas~\ref{lem:decomposition},
\ref{lem:reflect-after-reify}, and \ref{lem:forwarding}.
\end{proof}
%
Finally, we have the ingredients to state and prove the simulation
result. The following theorem shows that the only extra behaviour
exhibited by a translated term is the bureaucracy of deconstructing
the continuation stack.
%
\begin{theorem}[Simulation]
\label{thm:ho-simulation}
If $M \reducesto N$ then $\pcps{M} \reducesto^+ \areducesto^* \pcps{N}$.
\end{theorem}
%
\begin{proof}
By case analysis on the reduction relation using Lemmas
\ref{lem:decomposition}--\ref{lem:handle-op}. The \semlab{Op} case
follows from Lemma~\ref{lem:handle-op}.
\end{proof}
%
% In common with most CPS translations, full abstraction does not
% hold. However, as our semantics is deterministic it is straightforward
% to show a backward simulation result.
% %
% \begin{corollary}[Backwards simulation]
% If $\pcps{M} \reducesto^+ \areducesto^* V$ then there exists $W$ such that
% $M \reducesto^* W$ and $\pcps{W} = V$.
% \end{corollary}
% %
% \begin{proof}
% TODO\dots
% \end{proof}
%
\section{Transforming shallow effect handlers}
\label{sec:cps-shallow}
In this section we will continue to build upon the higher-order
uncurried CPS translation
(Section~\ref{sec:higher-order-uncurried-deep-handlers-cps}) in order
to add support for shallow handlers. The dynamic nature of shallow
handlers poses an interesting challenge, because unlike deep resumption
capture, a shallow resumption capture discards the handler, leaving
behind a dangling pure continuation. The dangling pure continuation
must be `adopted' by whichever handler the resumption invocation occurs
under. This handler is determined dynamically by the context, meaning
the CPS translation must be able to modify continuation pairs.
In Section~\ref{sec:cps-shallow-flawed} I will discuss an attempt at a
`natural' extension of the higher-order uncurried CPS translation for
deep handlers, but for various reasons this extension is
flawed. However, the insights gained by attempting this extension
lead to yet another change of the continuation representation
(Section~\ref{sec:generalised-continuations}) resulting in the notion
of a \emph{generalised continuation}.
%
In Section~\ref{sec:cps-gen-conts} we will see how generalised
continuations provide a basis for implementing deep and shallow effect
handlers simultaneously, solving all of the problems encountered thus
far uniformly.
\subsection{A specious attempt}
\label{sec:cps-shallow-flawed}
%
Initially it is tempting to try to extend the interpretation of the
continuation representation in the higher-order uncurried CPS
translation for deep handlers to squeeze in shallow handlers. The main
obstacle one encounters is how to decouple a pure continuation from
its handler such that it can later be picked up by another handler.
To fully uninstall a handler, we must uninstall both the pure
continuation function corresponding to its return clause and the
effect continuation function corresponding to its operation clauses.
%
In the current setup it is impossible to reliably uninstall the
former: due to the translation of $\Let$-expressions it may be
embedded arbitrarily deep within the current pure continuation, and
the extensional representation of pure continuations means that we
cannot decompose them.
A quick fix to this problem is to treat pure continuation functions
arising from return clauses separately from pure continuation
functions arising from $\Let$-expressions.
%
Thus we can interpret the continuation as a sequence of triples
consisting of two pure continuation functions followed by an effect
continuation function, where the first pure continuation function
corresponds to the continuation induced by some $\Let$-expression and the
second corresponds to the return clause of the current handler.
%
To distinguish between the two kinds of pure continuations, we shall
write $\svhret$ for statically known pure continuations arising from
return clauses, and $\vhret$ for dynamically known ones. Similarly, we
write $\svhops$ and $\vhops$ for statically and dynamically known
effect continuations, respectively. With this notation in mind,
we may translate operation invocation and handler installation using
the new interpretation of the continuation representation as follows.
%
\begin{equations}
\cps{-} &:& \CompCat \to \SValCat^\ast \to \UCompCat \smallskip\\
\cps{\Do\;\ell\;V} &\defas& \slam \sk \scons \svhret \scons \svhops \scons \sks.
\reify\svhops \ba[t]{@{}l}
\dapp \Record{\ell, \Record{\cps{V}, \reify\svhops \dcons \reify\svhret \dcons \reify\sk \dcons \dnil}}\\
\dapp \reify \sks
\ea\smallskip\\
\cps{\ShallowHandle \; M \; \With \; H} &\defas&
\slam \sks . \cps{M} \sapp (\reflect\kid \scons \reflect\cps{\hret} \scons \reflect\cps{\hops}^\dagger \scons \sks) \medskip\\
\kid &\defas& \dlam x\, \dhk.\Let\; (\vhret \dcons \dhk') = \dhk \;\In\; \vhret \dapp x \dapp \dhk'
\end{equations}
%
The only change to the translation of operation invocation is the
extra bureaucracy induced by the additional pure continuation.
%
The translation of handler installation is a little more interesting
as it must make up an initial pure continuation in order to maintain
the sequence of triples interpretation of the continuation
structure. As the initial pure continuation we use the administrative
function $\kid$, which amounts to a dynamic variation of the
translation rule for the trivial computation term $\Return$: it
invokes the next pure continuation with whatever value it was
provided.
%
Although I will not demonstrate it here, the translation rules for
$\lambda$-abstractions, $\Lambda$-abstractions, and $\Let$-expressions
must also be adjusted accordingly to account for the extra
bureaucracy. The same is true for the translation of the
$\Return$-clause; it is rather straightforward to adapt it to the new
continuation interpretation.
%
\begin{equations}
\cps{-} &:& \HandlerCat \to \UValCat\\
\cps{\{\Return \; x \mapsto N\}} &\defas& \dlam x\, \dhk.
\ba[t]{@{}l}
\Let\; (\_ \dcons \dk \dcons \vhret \dcons \vhops \dcons \dhk') = \dhk \;\In\\
\cps{N} \sapp (\reflect \dk \scons \reflect \vhret \scons \reflect \vhops \scons \reflect \dhk')
\ea
\end{equations}
%
As before, the translation ensures that the associated effect
continuation is discarded (the first element of the dynamic
continuation $\dhk$). In addition the next continuation triple is
extracted and reified as a static continuation triple.
%
The interesting rule is the translation of operation clauses.
%
\begin{equations}
\cps{\{(\ell \; p \; r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}}^\dagger
&\defas&
\bl
\dlam \Record{z,\Record{p,\dhkr}}\,\dhk.\\
\qquad\Case \;z\; \{
\ba[t]{@{}l@{}c@{~}l}
(&\ell &\mapsto
\ba[t]{@{}l}
\Let\;(\dk \dcons \vhret \dcons \vhops \dcons \dhk') = \dhk\;\In\\
\Let\;(\_ \dcons \_ \dcons \dhkr') = \dhkr \;\In\\
\Let\;r = \Res\,(\hid \dcons \rid \dcons \dhkr') \;\In \\
\cps{N_{\ell}} \sapp (\reflect\dk \scons \reflect\vhret \scons \reflect\vhops \scons \reflect \dhk'))_{\ell \in \mathcal{L}} \\
\ea \\
&y &\mapsto \hforward((y,p,\dhkr),\dhk) \} \\
\ea
\el \medskip\\
\hforward((y, p, \dhkr), \dhk) &\defas& \bl
\Let\; (\dk \dcons \vhret \dcons \vhops \dcons \dhk') = \dhk \;\In \\
\vhops \dapp \Record{y, \Record{p, \vhops \dcons \vhret \dcons \dk \dcons \dhkr}} \dapp \dhk' \\
\el \smallskip\\
\hid &\defas& \dlam\,\Record{z,\Record{p,\dhkr}}\,\dhk.\hforward((z,p,\dhkr),\dhk) \smallskip\\
\rid &\defas& \dlam x\, \dhk.\Let\; (\vhops \dcons \dk \dcons \dhk') = \dhk \;\In\; \dk \dapp x \dapp \dhk'
% \pcps{-} &:& \CompCat \to \UCompCat\\
% \pcps{M} &\defas& \cps{M} \sapp (\reflect \kid \scons \reflect (\dlam x\,\dhk.x) \scons \reflect (\dlam z\,ks.\Absurd~z) \scons \snil) \\
\end{equations}
%
The main difference between this translation rule and the translation
rule for deep handler operation clauses is the realisation of
resumptions.
%
Recall that a resumption is represented as a reversed slice of a
continuation. Thus the deconstruction of the resumption $\dhkr$
effectively ensures that the current handler gets properly
uninstalled. However, it presents a new problem as the remainder
$\dhkr'$ is not a well-formed continuation slice, because the top
element is a pure continuation without a handler.
%
To rectify this problem, we can insert a dummy identity handler
composed from $\hid$ and $\rid$. The effect continuation $\hid$
forwards every operation, and the pure continuation $\rid$ is an
identity clause. Thus every operation and the return value will
effectively be handled by the next handler.
%
Unfortunately, the insertion of such identity handlers leads to memory
leaks~\cite{Kiselyov12,HillerstromL18}.
%
% \dhil{Give the counting example}
%
The use of identity handlers is symptomatic of the need for a more
general notion of resumptions. During resumption invocation the
dangling pure continuation should be composed with the current pure
continuation, which suggests the need for a shallow variation of the
resumption construction primitive $\Res$.
%
\[
\bl
\Let\; r = \Res^\dagger (\_ \dcons \_ \dcons \dk \dcons h_n^{\mathrm{ops}} \dcons h_n^{\mathrm{ret}} \dcons \dk_n \dcons \cdots \dcons h_1^{\mathrm{ops}} \dcons h_1^{\mathrm{ret}} \dcons \dk_1 \dcons \dnil)\;\In\;N \reducesto\\
\quad N[(\dlam x\,\dhk.
\ba[t]{@{}l}
\Let\; (\dk' \dcons \dhk') = \dhk\;\In\\
\dk_1 \dapp x \dapp (h_1^{\mathrm{ret}} \dcons h_1^{\mathrm{ops}} \dcons \cdots \dcons \dk_n \dcons h_n^{\mathrm{ret}} \dcons h_n^{\mathrm{ops}} \dcons (\dk' \circ \dk) \dcons \dhk'))/r]
\ea
\el
\]
%
where $\circ$ is defined to be function composition in continuation
passing style.
%
\[
g \circ f \defas \lambda x\,\dhk.
\ba[t]{@{}l}
\Let\;(\dk \dcons \dhk') = \dhk\; \In\\
f \dapp x \dapp ((\lambda x\,\dhk. g \dapp x \dapp (\dk \dcons \dhk)) \dcons \dhk')
\ea
\]
%
The idea is that $\Res^\dagger$ uninstalls the appropriate handler and
composes the dangling pure continuation $\dk$ with the next
\emph{dynamically determined} pure continuation $\dk'$, and reverses
the remainder of the resumption and composes it with the modified
dynamic continuation ($(\dk' \circ \dk) \dcons \dhk'$).
%
While the underlying idea is correct, this particular realisation of
the idea is problematic as the use of function composition
reintroduces a variation of the dynamic administrative redexes that we
dealt with in Section~\ref{sec:first-order-explicit-resump}.
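To see the problem concretely, consider a direct OCaml rendering of
$\circ$ (hypothetical; frames are bare continuation functions): each
composition must allocate a fresh closure in order to splice $g$ into
the continuation stack, and under repeated forwarding these closures
accumulate exactly like the administrative resumption frames of
Section~\ref{sec:first-order-explicit-resump}.
%
\begin{verbatim}
type v = int
type frame = F of (v -> frame list -> v)   (* a continuation function *)

(* g o f = fun x hk -> let (k :: hk') = hk in
             f x ((fun y hk'' -> g y (k :: hk'')) :: hk') *)
let compose (F g) (F f) : v -> frame list -> v =
  fun x hk ->
    match hk with
    | (F _ as k) :: hk' ->
        (* administrative closure: run g, then fall through to k *)
        f x (F (fun y hk'' -> g y (k :: hk'')) :: hk')
    | [] -> invalid_arg "ill-formed continuation"
\end{verbatim}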
%
In order to avoid generating these administrative redexes we need a
more intensional continuation representation.
%
Another telltale sign that we require a more intensional continuation
representation is the necessary use of the administrative function
$\kid$ in the translation of $\Handle$ as a placeholder for the empty
pure continuation.
%
In terms of aesthetics, the non-uniform continuation deconstructions
also suggest that we could benefit from a more structured
interpretation of continuations.
%
Although it is seductive to program with lists, it quickly gets
unwieldy.
\subsection{Generalised continuations}
\label{sec:generalised-continuations}
One problem is that the continuation representation used by the
higher-order uncurried translation for deep handlers is too
extensional to support shallow handlers efficiently. Specifically, the
representation of pure continuations needs to be more intensional to
enable composition of pure continuations without having to materialise
administrative continuation functions.
%
Another problem is that the continuation representation integrates the
return clause into the pure continuations, but the semantics of
shallow handlers demands that this return clause is discarded when any
of the operations is invoked.
The solution to the first problem is to reuse the key idea of
Section~\ref{sec:first-order-explicit-resump} to avoid administrative
continuation functions by representing a pure continuation as an
explicit list consisting of pure continuation functions. As a result
the composition of pure continuation functions can be realised as a
simple cons-operation.
%
The solution to the second problem is to pair the continuation
functions corresponding to the $\Return$-clause and operation clauses
in order to distinguish the pure continuation function induced by a
$\Return$-clause from those induced by $\Let$-expressions.
%
Plugging these two solutions together yields the notion of
\emph{generalised continuations}. A generalised continuation is a
list of \emph{continuation frames}. A continuation frame is a triple
$\Record{fs, \Record{\vhret, \vhops}}$, where $fs$ is a list of stack
frames representing the pure continuation for the computation
occurring between the current execution and the handler, $\vhret$ is
the (translation of the) return clause of the enclosing handler, and
$\vhops$ is the (translation of the) operation clauses.
%
The change of representation of pure continuations does mean that we
can no longer invoke them by simple function application. Instead, we
must inspect the structure of the pure continuation $fs$ and act
appropriately. To ease notation it is convenient introduce a new
computation form for pure continuation application $\kapp\;V\;W$ that
feeds a value $W$ into the continuation represented by $V$. There are
two reduction rules.
%
\begin{reductions}
\usemlab{KAppNil}
& \kapp\;(\dRecord{\dnil, \dRecord{\vhret, \vhops}} \dcons \dhk)\,W
& \reducesto
& \vhret\,W\,\dhk
\\
\usemlab{KAppCons}
& \kapp\;(\dRecord{f \cons fs, h} \dcons \dhk)\,W
& \reducesto
& f\,W\,(\dRecord{fs, h} \dcons \dhk)
\end{reductions}
%
%\dhil{Say something about skip frames?}
%
The first rule describes what happens when the pure continuation is
exhausted and the return clause of the enclosing handler is
invoked. The second rule describes the case when the pure continuation
has at least one element: this pure continuation function is invoked
and the remainder of the continuation is passed in as the new
continuation.
We must also change how resumptions (i.e. reversed continuations) are
converted into functions that can be applied. Resumptions for deep
handlers ($\Res\,V$) are similar to
Section~\ref{sec:first-order-explicit-resump}, except that we now use
$\kapp$ to invoke the continuation. Resumptions for shallow handlers
($\Res^\dagger\,V$) are more complex. Instead of taking all the frames
and reverse appending them to the current stack, we remove the current
handler $h$ and move the pure continuation
($f_1 \dcons \dots \dcons f_m \dcons \dnil$) into the next frame. This
captures the intended behaviour of shallow handlers: they are removed
from the stack once they have been invoked. The following two
reduction rules describe their behaviour.
%
\[
\ba{@{}l@{\quad}l}
\usemlab{Res}
& \Let\;r=\Res\,(V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N
\reducesto N[\dlam x\, \dhk.\kapp\;(V_1 \dcons \dots \dcons V_n \dcons \dhk)\,x/r] \\
\usemlab{Res^\dagger}
& \Let\;r=\Res^\dagger\,(\dRecord{f_1 \dcons \dots \dcons f_m \dcons \dnil, h} \dcons V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N \reducesto \\
& \qquad N[\dlam x\,\dhk.\bl
\Let\,\dRecord{fs',h'} \dcons \dhk' = \dhk\;\In\;\\
\kapp\,(V_1 \dcons \dots \dcons V_n \dcons \dRecord{f_1 \dcons \dots \dcons f_m \dcons fs', h'} \dcons \dhk')\,x/r]
\el
\ea
\]
%
These constructs along with their reduction rules are
macro-expressible in terms of the existing constructs.
%
I choose here to treat them as primitives in order to keep the
presentation relatively concise.
Essentially, a generalised continuation amounts to a sort of
\emph{defunctionalised} continuation, where $\kapp$ acts as an
interpreter for the continuation
structure~\cite{Reynolds98a,DanvyN01}.
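%
To make the defunctionalised reading concrete, the following minimal
OCaml sketch renders generalised continuations as data and $\kapp$ as
their interpreter, covering the \usemlab{KAppNil} and
\usemlab{KAppCons} rules together with deep resumption reification
(all names are illustrative, and dynamic values are left abstract).
%
\begin{verbatim}
(* Dynamic values, left abstract for the purposes of this sketch. *)
type value = Unit | Int of int

(* A continuation frame pairs a pure continuation -- a list of pure
   continuation functions -- with the return and operation clauses
   of its handler. *)
type frame = {
  fs   : (value -> cont -> value) list;  (* pure continuation *)
  hret : value -> cont -> value;         (* return clause *)
  hops : value -> cont -> value;         (* operation clauses *)
}
and cont = frame list

(* kapp is an interpreter for the continuation structure. *)
let kapp (k : cont) (w : value) : value =
  match k with
  | [] ->
    (* ruled out: a well-formed continuation is terminated by an
       identity handler frame *)
    assert false
  | { fs = []; hret; _ } :: ks ->
    hret w ks                                (* KAppNil *)
  | { fs = f :: fs'; hret; hops } :: ks ->
    f w ({ fs = fs'; hret; hops } :: ks)     (* KAppCons *)

(* Reifying a deep resumption (rule Res): the captured frames,
   stored in reverse order, are reattached on top of the
   continuation in force when the resumption is invoked. *)
let resume (captured_rev : cont) : value -> cont -> value =
  fun x ks -> kapp (List.rev_append captured_rev ks) x
\end{verbatim}
%
Composing a pure continuation function onto a pure continuation is
then merely a cons onto the \texttt{fs} field; no administrative
closure is materialised.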
\subsection{Dynamic terms: the target calculus revisited}
\label{sec:target-calculus-revisited}
\begin{figure}[t]
\textbf{Syntax}
\begin{syntax}
\slab{Values} &V, W \in \UValCat &::= & x \mid \dlam x\,\dhk.M \mid \Rec\,g\,x\,\dhk.M \mid \ell \mid \dRecord{V, W}
\smallskip \\
\slab{Computations} &M,N \in \UCompCat &::= & V \mid U \dapp V \dapp W \mid \Let\; \dRecord{x, y} = V \; \In \; N \\
& &\mid& \Case\; V\, \{\ell \mapsto M; x \mapsto N\} \mid \Absurd\;V\\
& &\mid& \kapp\,V\,W \mid \Let\;r=\Res^\depth\;V\;\In\;M
\end{syntax}
\textbf{Syntactic sugar}
\begin{displaymath}
\bl
\begin{eqs}
\Let\; x = V \;\In\; N &\equiv& N[V/x] \\
\ell\;V &\equiv& \dRecord{\ell, V} \\
\end{eqs}
\qquad
\begin{eqs}
\Record{} &\equiv& \ell_{\Record{}} \\
\Record{\ell=V; W} &\equiv& \ell\;\dRecord{V, W} \\
\end{eqs}
\qquad
\begin{eqs}
\dnil &\equiv& \ell_{\dnil} \\
V \dcons W &\equiv& \ell_{\dcons}\;\dRecord{V, W} \\
\end{eqs}
\smallskip \\
\ba{@{}c@{\quad}c@{}}
\Case\;V\;\{\ell\,x \mapsto M; y \mapsto N\} \equiv \\
\qquad \bl
\Let\; y=V \;\In\;
\Let\; \dRecord{z, x} = y \;\In \\
\Case\;z\;\{\ell \mapsto M; z \mapsto N\} \\
\el \\
\ea
\qquad
\ba{@{}l@{\quad}l@{}}
\Let\;\Record{\ell=x; y} = V \;\In\; N \equiv \\
\qquad \bl
\Let\; \dRecord{z, z'} = V \;\In\;
\Let\; \dRecord{x, y} = z' \;\In \\
\Case\; z \;\{\ell \mapsto N; z \mapsto \ell_{\bot}\} \\
\el \\
\ea \\
\el
\end{displaymath}
%
\textbf{Standard reductions}
%
\begin{reductions}
%% Standard reductions
\usemlab{App} & (\dlam x\,\dhk.M) \dapp V \dapp W &\reducesto& M[V/x, W/\dhk] \\
\usemlab{Rec} & (\Rec\,g\,x\,\dhk.M) \dapp V \dapp W &\reducesto& M[\Rec\,g\,x\,\dhk.M/g, V/x, W/\dhk] \smallskip\\
\usemlab{Split} & \Let \; \dRecord{x, y} = \dRecord{V, W} \; \In \; N &\reducesto& N[V/x, W/y] \\
\usemlab{Case_1} &
\Case \; \ell \,\{ \ell \; \mapsto M; x \mapsto N\} &\reducesto& M \\
\usemlab{Case_2} &
\Case \; \ell \,\{ \ell' \; \mapsto M; x \mapsto N\} &\reducesto& N[\ell/x], \hfill\quad \text{if } \ell \neq \ell' \smallskip\\
\end{reductions}
%
\textbf{Continuation reductions}
%
\begin{reductions}
\usemlab{KAppNil} &
\kapp \; (\dRecord{\dnil, \dRecord{v, e}} \dcons \dhk) \, V &\reducesto& v \dapp V \dapp \dhk \\
\usemlab{KAppCons} &
\kapp \; (\dRecord{\dlf \dcons \dlk, h} \dcons \dhk) \, V &\reducesto& \dlf \dapp V \dapp (\dRecord{\dlk, h} \dcons \dhk) \\
\end{reductions}
%
\textbf{Resumption reductions}
%
\[
\ba{@{}l@{\quad}l@{}}
\usemlab{Res} &
\Let\;r=\Res(V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N \reducesto \\
&\quad N[\dlam x\,\dhk. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons \dhk' = \dhk\;\In\\
\kapp\;(V_1 \dcons \dots \dcons V_n \dcons \dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhk')\;x/r]\el
\\
\usemlab{Res^\dagger} &
\Let\;r=\Res^\dagger(\dRecord{\dlf_1 \dcons \dots \dcons \dlf_m \dcons \dnil, h} \dcons V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N \reducesto\\
& \quad N[\dlam x\,\dhk. \bl
\Let\;\dRecord{\dlk', h'} \dcons \dhk' = \dhk \;\In \\
\kapp\;(V_1 \dcons \dots \dcons V_n \dcons \dRecord{\dlf_1 \dcons \dots \dcons \dlf_m \dcons \dlk', h'} \dcons \dhk')\;x/r] \\
\el
\ea
\]
%
\caption{Untyped target calculus supporting generalised continuations.}
\label{fig:cps-target-gen-conts}
\end{figure}
Let us revisit the target
calculus. Figure~\ref{fig:cps-target-gen-conts} depicts the untyped
target calculus with support for generalised continuations.
%
This is essentially the same as the target calculus used for the
higher-order uncurried CPS translation for deep effect handlers in
Section~\ref{sec:higher-order-uncurried-deep-handlers-cps}, except for
the addition of recursive functions. The calculus also includes the
$\kapp$ and $\Let\;r=\Res^\depth\;V\;\In\;N$ constructs described in
Section~\ref{sec:generalised-continuations}. There is a small
difference in the reduction rules for the resumption constructs: for
deep resumptions we do an additional pattern match on the current
continuation ($\dhk$). This is required to make the simulation proof
for the CPS translation with generalised continuations
(Section~\ref{sec:cps-gen-conts}) go through, as it gives the
functions that resumptions are converted to the same shape as the
translation of source-level functions -- necessary because the
operational semantics does not treat resumptions as distinct
first-class objects, but rather as a special kind of function.
\subsection{Translation with generalised continuations}
\label{sec:cps-gen-conts}
%
\begin{figure}
%
\textbf{Values}
%
\begin{equations}
\cps{-} &:& \ValCat \to \UValCat\\
\cps{x} &\defas& x\\
\cps{\lambda x.M} &\defas& \dlam x\,\dhk.\Let\;(\dk \dcons \dhk') = \dhk\;\In\;\cps{M} \sapp (\reflect\dk \scons \reflect \dhk') \\
\cps{\Lambda \alpha.M} &\defas& \dlam \Unit\,\dhk.\Let\;(\dk \dcons \dhk') = \dhk\;\In\;\cps{M} \sapp (\reflect\dk \scons \reflect \dhk') \\
\cps{\Rec\,g\,x.M} &\defas& \Rec\,g\,x\,\dhk.\Let\;(\dk \dcons \dhk') = \dhk\;\In\;\cps{M} \sapp (\reflect\dk \scons \reflect \dhk') \\
\multicolumn{3}{c}{
\cps{\Record{}} \defas \Record{}
\qquad
\cps{\Record{\ell = \!\!V; W}} \defas \Record{\ell = \!\cps{V}; \cps{W}}
\qquad
\cps{\ell\,V} \defas \ell\,\cps{V}
}
\end{equations}
%
\textbf{Computations}
%
\begin{equations}
\cps{-} &:& \CompCat \to \SValCat^\ast \to \UCompCat\\
\cps{V\,W} &\defas& \slam \shk.\cps{V} \dapp \cps{W} \dapp \reify \shk \\
\cps{V\,T} &\defas& \slam \shk.\cps{V} \dapp \Record{} \dapp \reify \shk \\
\cps{\Let\; \Record{\ell=x;y} = V \; \In \; N} &\defas& \slam \shk.\Let\; \Record{\ell=x;y} = \cps{V} \; \In \; \cps{N} \sapp \shk \\
\cps{\Case~V~\{\ell~x \mapsto M; y \mapsto N\}} &\defas&
\slam \shk.\Case~\cps{V}~\{\ell~x \mapsto \cps{M} \sapp \shk; y \mapsto \cps{N} \sapp \shk\} \\
\cps{\Absurd~V} &\defas& \slam \shk.\Absurd~\cps{V} \\
\end{equations}
\begin{equations}
\cps{\Return\,V} &\defas& \slam \shk.\kapp\;(\reify \shk)\;\cps{V} \\
\cps{\Let~x \revto M~\In~N} &\defas&
\bl\slam \sRecord{\shf, \sv} \scons \shk.
\ba[t]{@{}l}
\cps{M} \sapp (\sRecord{\bl\reflect((\dlam x\,\dhk.\bl\Let\;(\dk \dcons \dhk') = \dhk\;\In\\
\cps{N} \sapp (\reflect\dk \scons \reflect \dhk')) \el\\
\dcons \reify\shf), \sv} \scons \shk)\el
\ea
\el\\
\cps{\Do\;\ell\;V} &\defas&
\slam \sRecord{\shf, \sRecord{\svhret, \svhops}} \scons \shk.\,
\reify\svhops \bl\dapp \dRecord{\ell,\dRecord{\cps{V}, \dRecord{\reify \shf, \dRecord{\reify\svhret, \reify\svhops}} \dcons \dnil}}\\
\dapp \reify \shk\el \\
\cps{\Handle^\depth \, M \; \With \; H} &\defas&
\slam \shk . \cps{M} \sapp (\sRecord{\snil, \sRecord{\reflect \cps{\hret}, \reflect \cps{\hops}^\depth}} \scons \shk) \\
\end{equations}
%
\textbf{Handler definitions}
%
\begin{equations}
\cps{-} &:& \HandlerCat \to \UValCat\\
% \cps{H}^\depth &=& \sRecord{\reflect \cps{\hret}, \reflect \cps{\hops}^\depth}\\
\cps{\{\Return \; x \mapsto N\}} &\defas& \dlam x\,\dhk.\Let\;(\dk \dcons \dhk') = \dhk\;\In\;\cps{N} \sapp (\reflect\dk \scons \reflect \dhk') \\
\cps{\{(\ell \; p \; r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}}^\depth
&\defas&
\dlam \dRecord{z,\dRecord{p,\dhkr}}\,\dhk.
\Case \;z\; \{
\ba[t]{@{}l@{}c@{~}l}
(&\ell &\mapsto
\ba[t]{@{}l}
\Let\;r=\Res^\depth\,\dhkr\;\In\; \\
\Let\;(\dk \dcons \dhk') = \dhk\;\In\\
\cps{N_{\ell}} \sapp (\reflect\dk \scons \reflect \dhk'))_{\ell \in \mathcal{L}}
\ea\\
&y &\mapsto \hforward((y, p, \dhkr), \dhk) \} \\
\ea \\
\hforward((y, p, \dhkr), \dhk) &\defas& \bl
\Let\; \dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk \;\In \\
\vhops \dapp \dRecord{y,\dRecord{p, \dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhkr}} \dapp \dhk' \\
\el
\end{equations}
\textbf{Top-level program}
%
\begin{equations}
\pcps{-} &:& \CompCat \to \UCompCat\\
\pcps{M} &\defas& \cps{M} \sapp (\sRecord{\snil, \sRecord{\reflect \dlam x\,\dhk. x, \reflect \dlam \dRecord{z,\dRecord{p,\dhkr}}\,\dhk.\Absurd~z}} \scons \snil) \\
\end{equations}
%
\caption{Higher-order uncurried CPS translation for effect handlers.}
\label{fig:cps-higher-order-uncurried-simul}
\end{figure}
%
The CPS translation is given in
Figure~\ref{fig:cps-higher-order-uncurried-simul}. In essence, it is
the same as the CPS translation for deep effect handlers as described
in Section~\ref{sec:higher-order-uncurried-deep-handlers-cps}, though
it is adjusted to account for the generalised continuation
representation. For notational convenience, we write $\chi$ to denote
a statically known effect continuation frame
$\sRecord{\svhret,\svhops}$.
%
The translation of $\Return$ invokes the continuation $\shk$ using the
continuation application primitive $\kapp$.
%
The translations of deep and shallow handlers differ only in their use
of the resumption construction primitive.
A major aesthetic improvement due to the generalised continuation
representation is that continuation construction and deconstruction
are now uniform: only a single continuation frame is inspected at a
time.
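%
As a small sanity check, consider the top-level translation of a
computation that immediately returns. Unfolding $\pcps{\Return\;V}$
and reifying the initial continuation yields
%
\[
\kapp\;(\dRecord{\dnil, \dRecord{\dlam x\,\dhk. x,\; \dlam \dRecord{z,\dRecord{p,\dhkr}}\,\dhk.\Absurd~z}} \dcons \dnil)\;\cps{V}
\reducesto (\dlam x\,\dhk. x) \dapp \cps{V} \dapp \dnil
\reducesto \cps{V},
\]
%
where the first step is an instance of $\usemlab{KAppNil}$ and the
second of $\usemlab{App}$.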
\subsubsection{Correctness}
\label{sec:cps-gen-cont-correctness}
%
The correctness of this CPS translation
(Theorem~\ref{thm:ho-simulation-gen-cont}) follows closely the
correctness result for the higher-order uncurried CPS translation for
deep handlers (Theorem~\ref{thm:ho-simulation}). Save for the
syntactic difference, the most notable difference is the extension of
the operation handling lemma (Lemma~\ref{lem:handle-op-gen-cont}) to
cover shallow handling in addition to deep handling. Each lemma is
stated in terms of static continuations, where the shape of the top
element is always known statically, i.e., it is of the form
$\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons
\sW$. Moreover, the static values $\sV_{fs}$, $\sV_{\mret}$, and
$\sV_{\mops}$ are all reflected dynamic terms (i.e., of the form
$\reflect V$). This fact is used implicitly in the proofs. For brevity
we write $\sV_f$ to denote a statically known continuation frame
$\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}}$. The full
proof details are given in Appendix~\ref{sec:proofs-cps-gen-cont}.
%
\begin{lemma}[Substitution]\label{lem:subst-gen-cont}
%
The CPS translation commutes with substitution in value terms
%
\[
\cps{W}[\cps{V}/x] = \cps{W[V/x]},
\]
%
and with substitution in computation terms
\[
\ba{@{}l@{~}l}
% &(\cps{M} \sapp (\sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW))[\cps{V}/x]\\
% = &\cps{M[V/x]} \sapp (\sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons\sW)[\cps{V}/x],
&(\cps{M} \sapp (\sV_f \scons \sW))[\cps{V}/x]\\
= &\cps{M[V/x]} \sapp (\sV_f \scons \sW)[\cps{V}/x],
\ea
\]
%
and with substitution in handler definitions
%
\begin{equations}
\cps{\hret}[\cps{V}/x]
&=& \cps{\hret[V/x]},\\
\cps{\hops}[\cps{V}/x]
&=& \cps{\hops[V/x]}.
\end{equations}
\end{lemma}
%
In order to reason about the behaviour of the \semlab{Op} and
\semlab{Op^\dagger} rules, which are defined in terms of evaluation
contexts, we extend the CPS translation to evaluation contexts, using
the same translations as for the corresponding constructs in $\SCalc$.
%
\begin{equations}
\cps{[~]}
&=& \slam \shk. \shk \\
\cps{\Let\; x \revto \EC \;\In\; N}
&=&
\begin{array}[t]{@{}l}
\slam \sRecord{\shf, \sv} \scons \shk.\\
\quad \cps{\EC} \sapp (\bl\sRecord{\reflect((\dlam x\,\dhk.\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons \dhk'=\dhk\;\In\\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect \dhk')) \dcons \reify\shf),\el\\
\sv} \scons \shk)\el
\end{array}
\\
\cps{\Handle^\depth\; \EC \;\With\; H}
&=& \slam \shk.\cps{\EC} \sapp (\sRecord{[], \cps{H}^\depth} \scons \shk)
\end{equations}
%
The following lemma is the characteristic property of the CPS
translation on evaluation contexts.
%
This allows us to focus on the computation within an evaluation
context.
%
\begin{lemma}[Evaluation context decomposition]
\label{lem:decomposition-gen-cont}
\[
% \cps{\EC[M]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)
% =
% \cps{M} \sapp (\cps{\EC} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW))
\cps{\EC[M]} \sapp (\sV_f \scons \sW)
=
\cps{M} \sapp (\cps{\EC} \sapp (\sV_f \scons \sW))
\]
\end{lemma}
%
By definition, reifying a reflected dynamic value is the identity
($\reify \reflect V = V$), but we also need to reason about the
inverse composition. As a result of the invariant that the translation
always has static access to the top of the handler stack, the
translated terms are insensitive to whether the remainder of the stack
is statically known or is a reflected version of a reified stack. This
is captured by the following lemma. The proof is by induction on the
structure of $M$ (after generalising the statement to stacks of
arbitrary depth), and relies on the observation that translated terms
either access the top of the handler stack, or reify the stack to use
dynamically, whereupon the distinction between reflected and reified
becomes void. Again, this lemma holds when the top of the static
continuation is known.
%
\begin{lemma}[Reflect after reify]
\label{lem:reflect-after-reify-gen-cont}
%
\[
% \cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \reflect \reify \sW)
% =
% \cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW).
\cps{M} \sapp (\sV_f \scons \reflect \reify \sW)
=
\cps{M} \sapp (\sV_f \scons \sW).
\]
\end{lemma}
The next lemma states that the CPS translation correctly simulates
forwarding. The proof is by inspection of how the translation of
operation clauses treats non-handled operations.
%
\begin{lemma}[Forwarding]\label{lem:forwarding-gen-cont}
If $\ell \notin dom(H_1)$ then:
%
\[
\bl
\cps{\hops_1}^\delta \dapp \dRecord{\ell, \dRecord{V_p, V_{\dhkr}}} \dapp (\dRecord{V_{fs}, \cps{H_2}^\delta} \dcons W)
\reducesto^+ \qquad \\
\hfill
\cps{\hops_2}^\delta \dapp \dRecord{\ell, \dRecord{V_p, \dRecord{V_{fs}, \cps{H_2}^\delta} \dcons V_{\dhkr}}} \dapp W. \\
\el
\]
%
\end{lemma}
The following lemma is central to our simulation theorem. It
characterises the sense in which the translation respects the handling
of operations. Note how the values substituted for the resumption
variable $r$ in both cases are in the image of the translation of
$\lambda$-terms in the CPS translation. This is thanks to the precise
way that the reduction rules for resumption construction work in our
dynamic language, as described above.
%
\begin{lemma}[Handling]\label{lem:handle-op-gen-cont}
Suppose $\ell \notin BL(\EC)$ and $\hell = \{\ell\,p\,r \mapsto N_\ell\}$. If $H$ is deep then
%
% \[
% \bl
% \cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)) \reducesto^+ \\
% \quad (\cps{N_\ell} \sapp \sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)\\
% \qquad \quad [\cps{V}/p,
% \dlam x\,\dhk.\bl
% \Let\;\dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk\;\In\;\\
% \cps{\Return\;x} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\reflect \dlk, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
% \el\\
% \el
% \]
\[
\bl
\cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sV_f \scons \sW)) \reducesto^+ \\
\quad (\cps{N_\ell} \sapp (\sV_f \scons \sW))
[\cps{V}/p,
\dlam x\,\dhk.\bl
\Let\;\dRecord{\dlk, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk\;\In\;
\cps{\Return\;x}\\
\sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\reflect \dlk, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
\el\\
\el
\]
%
Otherwise if $H$ is shallow then
%
% \[
% \bl
% \cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}^\dagger} \scons \sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)) \reducesto^+ \\
% \quad (\cps{N_\ell} \sapp \sRecord{\sV_{fs},\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)\\
% \qquad [\cps{V}/p, \dlam x\,\dhk. \bl
% \Let\;\dRecord{\dlk, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk \;\In \\
% \cps{\Return\;x} \sapp (\cps{\EC} \sapp (\sRecord{\reflect \dlk, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
% \el \\
% \el
% \]
\[
\bl
\cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}^\dagger} \scons \sV_f \scons \sW)) \reducesto^+ \\
\quad (\cps{N_\ell} \sapp (\sV_f \scons \sW))
[\cps{V}/p, \dlam x\,\dhk. \bl
\Let\;\dRecord{\dlk, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk \;\In\;\cps{\Return\;x}\\
\sapp (\cps{\EC} \sapp (\sRecord{\reflect \dlk, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
\el \\
\el
\]
%
\end{lemma}
\medskip
With the aid of the above lemmas we can state and prove the main
result for the translation: a simulation result in the style of
\citet{Plotkin75}.
%
\begin{theorem}[Simulation]
\label{thm:ho-simulation-gen-cont}
If $M \reducesto N$ then
\[
\cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}}
\scons \sW) \reducesto^+ \cps{N} \sapp (\sRecord{\sV_{fs},
\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW).
\]
\end{theorem}
\begin{proof}
The proof is by case analysis on the reduction relation using Lemmas
\ref{lem:decomposition-gen-cont}--\ref{lem:handle-op-gen-cont}. In
particular, the \semlab{Op} and \semlab{Op^\dagger} cases follow
from Lemma~\ref{lem:handle-op-gen-cont}.
\end{proof}
In common with most CPS translations, full abstraction does not hold
(a function could count the number of handlers it is invoked within by
examining the continuation, for example). However, as the semantics is
deterministic it is straightforward to show a backward simulation
result.
%
\begin{lemma}[Backwards simulation]
If $\pcps{M} \reducesto^+ V$ then there exists $W$
such that $M \reducesto^\ast W$ and $\pcps{W} = V$.
\end{lemma}
%
\begin{corollary}
$M \reducesto^\ast V$ if and only if $\pcps{M} \reducesto^\ast \pcps{V}$.
\end{corollary}
\section{Transforming parameterised handlers}
\label{sec:cps-param}
%
\begin{figure}
% \textbf{Continuation reductions}
% %
% \begin{reductions}
% \usemlab{KAppNil} &
% \kapp \; (\dRecord{\dnil, \dRecord{q, v, e}} \dcons \dhk) \, V &\reducesto& v \dapp \dRecord{q,V} \dapp \dhk \\
% \usemlab{KAppCons} &
% \kapp \; (\dRecord{\dlf \dcons \dlk, h} \dcons \dhk) \, V &\reducesto& \dlf \dapp V \dapp (\dRecord{\dlk, h} \dcons \dhk) \\
% \end{reductions}
% %
% \textbf{Resumption reductions}
% %
% \[
% \ba{@{}l@{\quad}l@{}}
% \usemlab{Res}^\ddag &
% \Let\;r=\Res(V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N \reducesto \\
% &\quad N[\dlam x\,\dhk. \bl\Let\;\dRecord{fs, \dRecord{q,\vhret, \vhops}}\dcons \dhk' = \dhk\;\In\\
% \kapp\;(V_1 \dcons \dots \dcons V_n \dcons \dRecord{fs, \dRecord{q,\vhret, \vhops}} \dcons \dhk')\;x/r]\el
% \\
% \ea
% \]
%
\textbf{Computations}
%
\begin{equations}
\cps{-} &:& \CompCat \to \SValCat^\ast \to \UCompCat\\
% \cps{\Let~x \revto M~\In~N} &\defas&
% \bl\slam \sRecord{\shf, \sRecord{\xi, \svhret, \svhops}} \scons \shk.
% \ba[t]{@{}l}
% \cps{M} \sapp (\sRecord{\bl\reflect((\dlam x\,\dhk.\bl\Let\;(\dk \dcons \dhk') = \dhk\;\In\\
% \cps{N} \sapp (\reflect\dk \scons \reflect \dhk')) \el\\
% \dcons \reify\shf), \sRecord{\xi, \svhret, \svhops}} \scons \shk)\el
% \ea
% \el\\
\cps{\Do\;\ell\;V} &\defas&
\slam \sRecord{\shf, \sRecord{\xi, \svhret, \svhops}} \scons \shk.\,
\reify\svhops \bl\dapp \dRecord{\reify\xi, \ell,
\dRecord{\bl
\cps{V}, \dRecord{\reify \shf, \dRecord{\reify\xi,\reify\svhret, \reify\svhops}}\\
\dcons \dnil}}
\dapp \reify \shk\el\el \\
\end{equations}
\begin{equations}
\cps{\ParamHandle \, M \; \With \; (q.H)(W)} &\defas&
\slam \shk . \cps{M} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{W},\reflect \cps{\hret}^{\ddag}_q, \reflect \cps{\hops}^{\ddag}_q}} \scons \shk) \\
\end{equations}
\textbf{Handler definitions}
%
\begin{equations}
\cps{-} &:& \HandlerCat \times \UValCat \to \UValCat\\
% \cps{H}^\depth &=& \sRecord{\reflect \cps{\hret}, \reflect \cps{\hops}^\depth}\\
\cps{\{\Return \; x \mapsto N\}}^{\ddag}_q &\defas& \dlam \dRecord{q,x}\,\dhk.\Let\;(\dk \dcons \dhk') = \dhk\;\In\;\cps{N} \sapp (\reflect\dk \scons \reflect \dhk') \\
\cps{\{(\ell \; p \; r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}}^{\ddag}_q
&\defas&
\dlam \dRecord{q,z,\dRecord{p,\dhkr}}\,\dhk.
\Case \;z\; \{
\ba[t]{@{}l@{}c@{~}l}
(&\ell &\mapsto
\ba[t]{@{}l}
\Let\;r=\Res^\ddag\,\dhkr\;\In\; \\
\Let\;(\dk \dcons \dhk') = \dhk\;\In\\
\cps{N_{\ell}} \sapp (\reflect\dk \scons \reflect \dhk'))_{\ell \in \mathcal{L}}
\ea\\
&y &\mapsto \hforward((y, p, \dhkr), \dhk) \} \\
\ea \\
\hforward((y, p, \dhkr), \dhk) &\defas& \bl
\Let\; \dRecord{fs, \dRecord{q, \vhret, \vhops}} \dcons \dhk' = \dhk \;\In \\
\vhops \dapp \dRecord{q, y,\dRecord{p, \dRecord{fs, \dRecord{q,\vhret, \vhops}} \dcons \dhkr}} \dapp \dhk' \\
\el
\end{equations}
\textbf{Top-level program}
\begin{equations}
\pcps{M} &=& \cps{M} \sapp (\sRecord{\dnil, \sRecord{\reflect\dRecord{},\reflect \dlam \dRecord{q,x}\,\dhk. x, \reflect \dlam \dRecord{q,z}\,\dhk.\Absurd~z}} \scons \snil) \\
\end{equations}
\caption{CPS translation for parameterised handlers.}
\label{fig:param-cps}
\end{figure}
Generalised continuations provide a versatile implementation strategy
for effect handlers as exemplified in the previous section. In this
section we further emphasise the versatility of generalised
continuations by demonstrating how to adapt the continuation structure
to accommodate parameterised handlers. In order to support
parameterised handlers, each effect continuation must store the
current value of the handler parameter. Thus, an effect continuation
becomes a triple consisting of the parameter, return clause, and
operation clause(s). Furthermore, the return clause gets transformed
into a binary function that takes the current value of the handler
parameter as its first argument and the return value of the handled
computation as its second argument. Similarly, the operation clauses
are transformed into a binary function that takes the handler
parameter first and the operation package second. This strategy
effectively amounts to explicit state passing, as the parameter value
gets threaded through every handler continuation
function. Operationally, the pure continuation invocation rule
$\usemlab{KAppNil}$ requires a small adjustment to account for the
handler parameter.
%
\[
\kapp \; (\dRecord{\dnil, \dRecord{q, \vhret, \vhops}} \dcons \dhk) \, V \reducesto \vhret \dapp \Record{q,V} \dapp \dhk
\]
%
The return clause $\vhret$ is now applied to a pair consisting of the
current value of the handler parameter $q$ and the return value
$V$. Similarly, the resumption rule $\usemlab{Res}$ must also be
adapted to update the value of the handler parameter.
%
\[
\bl
\Let\;r=\Res^\ddag\,(\dRecord{fs, \dRecord{q,\vhret,\vhops}} \dcons V_n \dcons \dots \dcons V_1 \dcons \dnil)\;\In\;N\reducesto\\
\qquad N[\dlam \dRecord{q',x}\,\dhk.\kapp\;(V_1 \dcons \dots \dcons V_n \dcons \dRecord{fs, \dRecord{q',\vhret,\vhops}} \dcons \dhk)\;x/r]
\el
\]
%
The rule is not much different from the original $\usemlab{Res}$
rule. The difference is that this rule unpacks the current handler
parameter $q$ along with the return clause, $\vhret$, and operation
clauses, $\vhops$. The reduction constructs a resumption function,
whose first parameter $q'$ binds the updated value of the handler
parameter. The value $q'$ is packaged with the original $\vhret$ and
$\vhops$ such that the next activation of the handler sees the
parameter value $q'$ rather than $q$.
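%
To illustrate, the earlier OCaml sketch from
Section~\ref{sec:generalised-continuations} might be extended with
parameter-carrying frames along the following lines (again, all names
are illustrative).
%
\begin{verbatim}
(* Frames now carry the current parameter value; the clauses take
   the parameter as an additional first argument. *)
type value = Unit | Int of int

type frame = {
  fs   : (value -> cont -> value) list;    (* pure continuation *)
  q    : value;                            (* handler parameter *)
  hret : value -> value -> cont -> value;  (* parameter, then result *)
  hops : value -> value -> cont -> value;  (* parameter, then package *)
}
and cont = frame list

(* KAppNil threads the parameter through to the return clause;
   KAppCons is unchanged. *)
let kapp (k : cont) (w : value) : value =
  match k with
  | [] -> assert false
  | { fs = []; q; hret; _ } :: ks -> hret q w ks
  | ({ fs = f :: fs'; _ } as fr) :: ks ->
    f w ({ fr with fs = fs' } :: ks)

(* Invoking a parameterised resumption supplies an updated parameter
   q', which replaces the value stored in the handler's own frame. *)
let resume (top : frame) (rest_rev : cont)
  : value -> value -> cont -> value =
  fun q' x ks ->
    kapp (List.rev_append rest_rev ({ top with q = q' } :: ks)) x
\end{verbatim}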
The CPS translation is updated accordingly to account for the triple
effect continuation structure. This involves updating the cases that
scrutinise the effect continuation structure as it now includes the
additional state value. The cases that need to be updated are shown in
Figure~\ref{fig:param-cps}. We write $\xi$ to denote static handler
parameters.
%
% The translation of $\Let$ unpacks and repacks the effect continuation
% to maintain the continuation length invariant.
The translation of $\Do$ invokes the effect continuation
$\reify \svhops$ with a triple consisting of the value of the handler
parameter, the operation, and the operation payload. The parameter is
also pushed onto the reversed resumption stack. This is necessary to
account for the case where the effect continuation $\reify \svhops$
does not handle operation $\ell$.
% An alternative option is to push the parameter back
% on the resumption stack during effect forwarding. However that means
% the resumption stack will be nonuniform as the top element sometimes
% will be a pair.
%
The translations of the return and operation clauses are
parameterised by the name of the binder for the handler
parameter. Each translation yields a function that takes a pair as
input in addition to the current continuation. The forwarding case is
adjusted in the same way as the translation of $\Do$. The current
continuation $\dhk$ is deconstructed in order to identify the next
effect continuation $\vhops$ and its parameter $q$. Then $\vhops$ is
invoked with the updated resumption stack and the value of its
parameter $q$.
%
The top-level translation adds a `dummy' unit value, which is ignored
by both the pure continuation and the effect continuation.% The amended
% CPS translation for parameterised handlers is not a zero cost
% translation for shallow and ordinary deep handlers as they will have
% to thread a ``dummy'' parameter value through.
We could avoid the use of such dummy values entirely if the target
language had proper sums with which to tag effect continuation
frames. Obviously, this entails performing a case analysis every time
an effect continuation frame is deconstructed.
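%
For instance, in an ML-like target with proper sums one might tag
frames along the following lines (a hypothetical sketch in the style
of the earlier fragments).
%
\begin{verbatim}
type value = Unit | Int of int

type frame = { fs : (value -> cont -> value) list; h : clauses }
and clauses =
  | Plain of (value -> cont -> value)           (* return clause *)
           * (value -> cont -> value)           (* operation clauses *)
  | Param of value                              (* current parameter *)
           * (value -> value -> cont -> value)  (* return clause *)
           * (value -> value -> cont -> value)  (* operation clauses *)
and cont = frame list

(* kapp dispatches on the frame kind once the pure continuation
   is exhausted. *)
let kapp (k : cont) (w : value) : value =
  match k with
  | [] -> assert false
  | { fs = []; h = Plain (hret, _) } :: ks -> hret w ks
  | { fs = []; h = Param (q, hret, _) } :: ks -> hret q w ks
  | ({ fs = f :: fs'; _ } as fr) :: ks ->
    f w ({ fr with fs = fs' } :: ks)
\end{verbatim}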
\section{Related work}
\label{sec:cps-related-work}
\paragraph{CPS transforms for effect handlers}
%
The one-pass higher-order CPS translation for deep, shallow, and
parameterised handlers draws on insights from the literature on CPS
translations for delimited control operators such as shift and
reset~\citep{DanvyF90,DanvyF92,DanvyN03,MaterzokB12}.
%
% \citet{DybvigJS07} develop a lean monadic framework for implementing
% multi-prompt delimited continuations.
% \paragraph{CPS translations for handlers}
%
Other CPS translations for handlers use a monadic approach. For
example, \citet{Leijen17} implements deep and parameterised handlers
in Koka~\citep{Leijen14} by translating them into a free monad
primitive in the runtime. \citeauthor{Leijen17} uses a selective CPS
translation to lift code into the monad. The selective aspect is
important in practice to avoid overhead in code that does not use
effect handlers.
%
Scala Effekt~\citep{BrachthauserS17,BrachthauserSO20} provides an
implementation of effect handlers as a library for the Scala
programming language. The implementation is based closely on the
monadic delimited control framework of \citet{DybvigJS07}.
%
A variation of the Scala Effekt library is used to implement effect
handlers as an interface for programming with delimited continuations
in Java~\citep{BrachthauserSO18}. The implementation of delimited
continuations depends on special bytecode instructions, inserted via a
selective type-driven CPS translation.
The Effekt language (which is distinct from the Effekt library)
implements handlers by a translation into capability-passing style,
which may more informatively be dubbed \emph{handler-passing style} as
handlers are passed downwards to the invocation sites of their
respective operations~\cite{SchusterBO20,BrachthauserSO20b}. The
translation into capability-passing style is realised by way of an
effect-type-directed iterated CPS transform, which introduces a
continuation argument per handler in scope~\cite{SchusterBO20}. The
idea of iterated CPS is due to \citet{DanvyF90}, who used it to
develop a CPS transform for shift and reset.
%
\citet{XieBHSL20} have devised an \emph{evidence-passing translation}
for deep effect handlers. The basic idea is similar to
capability-passing style: evidence for handlers is passed downwards to
their operations in the shape of a vector, containing the handlers in
scope, which is threaded through computations. \citet{XieL20} have
realised handlers in evidence-passing style as a Haskell library.
There are clear connections between the CPS translations presented in
this chapter and the continuation monad implementation of
\citet{KammarLO13}. Whereas \citeauthor{KammarLO13} present a
practical Haskell implementation depending on sophisticated features
such as type classes, which to some degree obscures the essential
structure, here we have focused on a foundational formal treatment.
%
\citeauthor{KammarLO13} obtain impressive performance results by
taking advantage of the second-class nature of type classes in Haskell
coupled with the aggressive fusion optimisations that GHC
performs~\citep{WuS15}.
\paragraph{Plotkin's colon translation}
%
The original method for proving the correctness of a CPS
translation is by way of a simulation result. Simulation states that
every reduction sequence in a given source program is mimicked by its
CPS transformation.
%
Static administrative redexes in the image of a CPS translation
provide hurdles for proving simulation, since these redexes do not
arise in the source program.
%
\citet{Plotkin75} uses the so-called \emph{colon translation} to
overcome static administrative reductions.
%
Informally, it is defined such that given some source term $M$ and
some continuation $k$, then the term $M : k$ is the result of
performing all static administrative reductions on $\cps{M}\,k$, that
is to say $\cps{M}\,k \areducesto^* M : k$.
%
Thus this translation makes it possible to bypass administrative
reductions and instead focus on the reductions inherited from the
source program.
%
The colon translation captures precisely the intuition that drives CPS
transforms, namely, that if in the source $M \reducesto^\ast \Return\;V$
then in the image $\cps{M}\,k \reducesto^\ast k\,\cps{V}$.
%\dhil{Check whether the first pass marks administrative redexes}
% CPS The colon translation captures the
% intuition tThe colon translation is itself a CPS translation which
% yields
% In his seminal work, \citet{Plotkin75} devises CPS translations for
% call-by-value lambda calculus into call-by-name lambda calculus and
% vice versa. \citeauthor{Plotkin75} establishes the correctness of his
% translations by way of simulations, which is to say that every
% reduction sequence in a given source program is mimicked by the
% transformed program.
% %
% His translations generate static administrative redexes, and as argued
% previously in this chapter from a practical view point this is an
% undesirable property in practice. However, it is also an undesirable
% property from a theoretical view point as the presence of
% administrative redexes interferes with the simulation proofs.
% To handle the static administrative redexes, \citeauthor{Plotkin75}
% introduced the so-called \emph{colon translation} to bypass static
% administrative reductions, thus providing a means for focusing on
% reductions induced by abstractions inherited from the source program.
% %
% The colon translation is itself a CPS translation, that given a source
% expression, $e$, and some continuation, $K$, produces a CPS term such
% that $\cps{e}K \reducesto e : K$.
% \citet{DanvyN03} used this insight to devise a one-pass CPS
% translation that contracts all administrative redexes at translation
% time.
% \paragraph{Partial evaluation}
\paragraph{ANF vs CPS}
\paragraph{Selective CPS transforms}
\dhil{TODO \citet{Nielsen01} \citet{DanvyH92} \citet{DanvyH93} \citet{Leijen17}}
\chapter{Abstract machine semantics}
\label{ch:abstract-machine}
%\dhil{The text is this chapter needs to be reworked}
Abstract machine semantics are an operational semantics that makes
program control more apparent than context-based reduction
semantics. In some sense abstract machine semantics are a lower-level
semantics than reduction semantics, as they provide a model of
computation based on \emph{abstract machines}, which capture some core
aspects of how actual computers might go about executing programs.
%
Abstract machines come in different styles and flavours, though a
common trait is that they are defined in terms of
\emph{configurations}. A configuration includes the essentials for
describing the machine state, i.e. some abstract notion of a call
stack, memory, a program counter, etc.
In this chapter I will demonstrate an application of generalised
continuations (Section~\ref{sec:generalised-continuations}) to
abstract machines that emphasises the usefulness of generalised
continuations to implement various kinds of effect handlers. The key
takeaway from this application is that it is possible to plug the
generalised continuation structure into a standard framework to
achieve a simultaneous implementation of deep, shallow, and
parameterised effect handlers.
%
Specifically, I will change the continuation structure of a standard
\citeauthor{FelleisenF86}-style \emph{CEK machine} to fit generalised
continuations.
The CEK machine (CEK is an acronym for Control, Environment,
Kontinuation~\cite{FelleisenF86}) is an abstract machine with an
explicit environment, modelling the idea that processor registers name
values just as an environment associates names with values. Thus by
using the CEK formalism we depart from the substitution-based model of
computation used in the preceding chapters and move towards a more
`realistic' model of computation (realistic in the sense of emulating
how a computer executes a program). Another significant difference is
that in the CEK formalism evaluation contexts are no longer
syntactically intertwined with the source program. Instead evaluation
contexts are managed separately through the continuation of the CEK
machine.
% In this chapter we will demonstrate an application of generalised
% continuations (Section~\ref{sec:generalised-continuations}) to
% \emph{abstract machines}. An abstract machine is a model of
% computation that makes program control more apparent than standard
% reduction semantics. Abstract machines come in different styles and
% flavours, though, a common trait is that they closely model how an
% actual computer might go about executing a program, meaning they
% embody some high-level abstract models of main memory and the
% instruction fetch-execute cycle of processors~\cite{BryantO03}.
\paragraph{Relation to prior work} The work in this chapter is based
on work in the following previously published papers.
%
\begin{enumerate}[i]
\item \bibentry{HillerstromL16} \label{en:ch-am-HL16}
\item \bibentry{HillerstromL18} \label{en:ch-am-HL18}
\item \bibentry{HillerstromLA20} \label{en:ch-am-HLA20}
\end{enumerate}
%
The particular presentation in this chapter is adapted from
item~\ref{en:ch-am-HLA20}.
\section{Configurations with generalised continuations}
\label{sec:machine-configurations}
Syntactically, the CEK machine consists of three components: 1) the
control component, which holds the term currently being evaluated;
2) the environment component, which maps free variables to machine
values; and 3) the continuation component, which describes what to
evaluate next (in some of the literature the control component is
known as the `control string').
%
Intuitively, the continuation component captures the idea of a call
stack from actual programming language implementations.
The abstract machine is formally defined in terms of configurations. A
configuration $\cek{M \mid \env \mid \shk \circ \shk'} \in \MConfCat$
is a triple consisting of a computation term $M \in \CompCat$, an
environment $\env \in \MEnvCat$, and a pair of generalised
continuations $\kappa,\kappa' \in \MGContCat$.
%
The complete abstract machine syntax is given in
Figure~\ref{fig:abstract-machine-syntax-gencont}.
%
The control and environment components are completely standard as they
are similar to the components in \citeauthor{FelleisenF86}'s original
CEK machine modulo the syntax of the source language.
%
However, the structure of the continuation component is new. This
component comprises two generalised continuations, where the latter
continuation $\kappa'$ is an entirely administrative object that
materialises only during operation invocations as it is used to
construct the reified segment of the continuation up to an appropriate
enclosing handler. For the most part $\kappa'$ is empty; therefore we
will write $\cek{M \mid \env \mid \shk}$ as syntactic sugar for
$\cek{M \mid \env \mid \shk \circ []}$ where $[]$ is the empty
continuation (an alternative is to syntactically differentiate between
regular and administrative configurations by having both three-place
and four-place configurations as, for example, \citet{BiernackaBD03}
do).
%
An environment is either empty, written $\emptyset$, or an extension
of some other environment $\env$, written $\env[x \mapsto v]$, where
$x$ is the name of a variable and $v$ is a machine value.
%
The machine values consist of function closures, recursive function
closures, type function closures, records, variants, and reified
continuations. The three abstraction forms are paired with an
environment that binds the free variables in their bodies. The
records and variants are transliterated from the value forms of the
source calculi. Figure~\ref{fig:abstract-machine-val-interp} defines
the value interpretation function, which turns any source language
value into a corresponding machine value.
%
A continuation $\shk$ is a stack of generalised continuation frames
$[\shf_1, \dots, \shf_n]$. As in
Section~\ref{sec:generalised-continuations} each continuation frame
$\shf = (\slk, \chi)$ consists of a pure continuation $\slk$,
corresponding to a sequence of let bindings, interpreted under some
handler, which in this context is represented by the handler closure
$\chi$.
%
A pure continuation is a stack of pure frames. A pure frame
$(\env, x, N)$ closes a let-binding $\Let \;x=[~] \;\In\;N$ over
environment $\env$. The pure continuation structure is similar to the
continuation structure of \citeauthor{FelleisenF86}'s original CEK
machine.
%
There are three kinds of handler closures, one for each kind of
handler. A deep handler closure is a pair $(\env, H)$ which closes a
deep handler definition $H$ over environment $\env$. Similarly, a
shallow handler closure $(\env, H^\dagger)$ closes a shallow handler
definition over environment $\env$. Finally, a parameterised handler
closure $(\env, (q.\,H))$ closes a parameterised handler definition
over environment $\env$. As a syntactic shorthand we write $H^\depth$
to range over deep, shallow, and parameterised handler
definitions. Sometimes $H^\depth$ will range over just two kinds of
handler definitions; it will be clear from the context which handler
definition is omitted.
%
We extend the clause projection notation to handler closures and
generalised continuation frames, i.e.
%
\[
\ba{@{~}l@{~}c@{~}l@{~}l}
\theta^{\mathrm{ret}} &\defas& \chi^{\mathrm{ret}} \defas \hret, &\quad \text{where } \theta = (\sigma, \chi) \text{ and } \chi = (\env, H^\depth)\\
\theta^{\ell} &\defas& \chi^{\ell} \defas \hell, &\quad \text{where } \theta = (\sigma, \chi) \text{ and } \chi = (\env, H^\depth)
\ea
\]
%
Values are annotated with types where appropriate to facilitate type
reconstruction in order to make the results of
Section~\ref{subsec:machine-correctness} easier to state.
%
%
\begin{figure}[t]
\flushleft
\begin{syntax}
\slab{Configurations} & \conf \in \MConfCat &::= & \cek{M \mid \env \mid \shk \circ \shk'} \\
\slab{Value\textrm{ }environments} &\env \in \MEnvCat &::= & \emptyset \mid \env[x \mapsto v] \\
\slab{Values} &v, w \in \MValCat &::= & (\env, \lambda x^A . M) \mid (\env, \Rec\,g^{A \to C}\,x.M)\\
& &\mid& (\env, \Lambda \alpha^K . M) \\
& &\mid& \Record{} \mid \Record{\ell = v; w} \mid (\ell\, v)^R \\
& &\mid& \shk^A \mid (\shk, \slk)^A \medskip\\
\slab{Continuations} &\shk \in \MGContCat &::= & \nil \mid \shf \cons \shk \\
\slab{Continuation\textrm{ }frames} &\shf \in \MGFrameCat &::= & (\slk, \chi) \\
\slab{Pure\textrm{ }continuations} &\slk \in \MPContCat &::= & \nil \mid \slf \cons \slk \\
\slab{Pure\textrm{ }continuation\textrm{ }frames} &\slf \in \MPFrameCat &::= & (\env, x, N) \\
\slab{Handler\textrm{ }closures} &\chi \in \MHCloCat &::= & (\env, H) \mid (\env, H^\dagger) \mid (\env, (q.\,H)) \medskip \\
\end{syntax}
\caption{Abstract machine syntax.}
\label{fig:abstract-machine-syntax-gencont}
\end{figure}
%
\begin{figure}
\[
\bl
\multicolumn{1}{c}{\val{-} : \ValCat \times \MEnvCat \to \MValCat}\\[1ex]
\ba[t]{@{}r@{~}c@{~}l@{}}
\val{x}{\env} &\defas& \env(x) \\
\val{\lambda x^A.M}{\env} &\defas& (\env, \lambda x^A.M) \\
\val{\Rec\,g^{A \to C}\,x.M}{\env} &\defas& (\env, \Rec\,g^{A \to C}\,x.M) \\
\val{\Lambda \alpha^K.M}{\env} &\defas& (\env, \Lambda \alpha^K.M) \\
\ea
\qquad
\ba[t]{@{}r@{~}c@{~}l@{}}
\val{\Record{}}{\env} &\defas& \Record{} \\
\val{\Record{\ell = V; W}}{\env} &\defas& \Record{\ell = \val{V}{\env}; \val{W}{\env}} \\
\val{(\ell\, V)^R}{\env} &\defas& (\ell\, \val{V}{\env})^R \\
\ea
\el
\]
\caption{Value interpretation definition.}
\label{fig:abstract-machine-val-interp}
\end{figure}
%
\section{Generalised continuation-based machine semantics}
\label{sec:machine-transitions}
%
\begin{figure}[p]
\rotatebox{90}{
\begin{minipage}{0.99\textheight}%
\[
\bl
%\multicolumn{1}{c}{\stepsto \subseteq \MConfCat \times \MConfCat}\\[1ex]
\ba{@{}l@{\quad}r@{~}c@{~}l@{~~}l@{}}
% \mlab{Init} & \multicolumn{3}{@{}c@{}}{M \stepsto \cek{M \mid \emptyset \mid [(\nil, (\emptyset, \{\Return\;x \mapsto \Return\;x\}))]}} \\[1ex]
% App
&&\multicolumn{2}{@{}l}{\stepsto\, \subseteq\! \MConfCat \times \MConfCat}\\
\mlab{App} & \cek{ V\;W \mid \env \mid \shk}
&\stepsto& \cek{ M \mid \env'[x \mapsto \val{W}{\env}] \mid \shk},
&\text{if }\val{V}{\env} = (\env', \lambda x^A.M) \\
\mlab{AppRec} & \cek{ V\;W \mid \env \mid \shk}
&\stepsto& \cek{ M \mid \env'[g \mapsto (\env', \Rec\,g^{A \to C}\,x.M), x \mapsto \val{W}{\env}] \mid \shk},
&\text{if }\val{V}{\env} = (\env', \Rec\,g^{A \to C}\,x.M) \\
% TyApp
\mlab{AppType} & \cek{ V\,T \mid \env \mid \shk}
&\stepsto& \cek{ M[T/\alpha] \mid \env' \mid \shk},
&\text{if }\val{V}{\env} = (\env', \Lambda \alpha^K . \, M) \\
% Deep resumption application
\mlab{Resume} & \cek{ V\;W \mid \env \mid \shk}
&\stepsto& \cek{ \Return \; W \mid \env \mid \shk' \concat \shk},
&\text{if }\val{V}{\env} = (\shk')^A \\
% Shallow resumption application
\mlab{Resume^\dagger} & \cek{ V\,W \mid \env \mid (\slk, \chi) \cons \shk}
&\stepsto&
\cek{\Return\; W \mid \env \mid \shk' \concat ((\slk' \concat \slk, \chi) \cons \shk)},
&\text{if } \val{V}{\env} = (\shk', \slk')^A \\
% Deep resumption application
\mlab{Resume^\param} & \cek{ V\,\Record{W;W'} \mid \env \mid \shk}
&\stepsto& \cek{ \Return \; W \mid \env \mid \shk' \concat [(\sigma,(\env'[q \mapsto \val{W'}\env],q.\,H))] \concat \shk},&\\
&&&\quad\text{if }\val{V}{\env} = (\shk' \concat [(\sigma,(\env',q.\,H))])^A \\
%
\mlab{Split} & \cek{ \Let \; \Record{\ell = x;y} = V \; \In \; N \mid \env \mid \shk}
&\stepsto& \cek{ N \mid \env[x \mapsto v, y \mapsto w] \mid \shk},
&\text{if }\val{V}{\env} = \Record{\ell=v; w} \\
% Case
\mlab{Case} & \cek{ \Case\; V\, \{ \ell~x \mapsto M; y \mapsto N\} \mid \env \mid \shk}
&\stepsto& \left\{\ba{@{}l@{}}
\cek{ M \mid \env[x \mapsto v] \mid \shk}, \\
\cek{ N \mid \env[y \mapsto \ell'\, v] \mid \shk}, \\
\ea \right.
&
\ba{@{}l@{}}
\text{if }\val{V}{\env} = \ell\, v \\
\text{if }\val{V}{\env} = \ell'\, v \text{ and } \ell \neq \ell' \\
\ea \\
% Let - eval M
\mlab{Let} & \cek{ \Let \; x \revto M \; \In \; N \mid \env \mid (\slk, \chi) \cons \shk}
&\stepsto& \cek{ M \mid \env \mid ((\env,x,N) \cons \slk, \chi) \cons \shk} \\
% Handle
\mlab{Handle^\depth} & \cek{ \Handle^\depth \, M \; \With \; H^\depth \mid \env \mid \shk}
&\stepsto& \cek{ M \mid \env \mid (\nil, (\env, H^\depth)) \cons \shk} \\
\mlab{Handle^\param} & \cek{ \Handle^\param \, M \; \With \; (q.\,H)(W) \mid \env \mid \shk}
&\stepsto& \cek{ M \mid \env \mid (\nil, (\env[q \mapsto \val{W}\env], q.\,H)) \cons \shk} \\
% Return - let binding
\mlab{PureCont} &\cek{ \Return \; V \mid \env \mid ((\env',x,N) \cons \slk, \chi) \cons \shk}
&\stepsto& \cek{ N \mid \env'[x \mapsto \val{V}{\env}] \mid (\slk, \chi) \cons \shk} \\
% Return - handler
\mlab{GenCont} & \cek{ \Return \; V \mid \env \mid (\nil, (\env',H^\delta)) \cons \shk}
&\stepsto& \cek{ M \mid \env'[x \mapsto \val{V}{\env}] \mid \shk},
&\text{if } \hret = \{\Return\; x \mapsto M\} \\
% Deep
\mlab{Do^\depth} & \cek{ (\Do \; \ell \; V)^E \mid \env \mid ((\slk, (\env', H^\depth)) \cons \shk) \circ \shk'}
&\stepsto& \cek{M \mid \env'[p \mapsto \val{V}{\env},
r \mapsto (\shk' \concat [(\slk, (\env', H^\depth))])^B] \mid \shk},\\
&&&\quad\text{if } \ell : A \to B \in E \text{ and } \hell = \{\OpCase{\ell}{p}{r} \mapsto M\} \\
% Shallow
\mlab{Do^\dagger} & \cek{ (\Do \; \ell \; V)^E \mid \env \mid ((\slk, (\env', H^\dagger)) \cons \shk) \circ \shk'} &\stepsto& \cek{M \mid \env'[p \mapsto \val{V}{\env},
r \mapsto (\shk', \slk)^B] \mid \shk},\\
&&&\quad\text{if } \ell : A \to B \in E \text{ and } \hell = \{\OpCase{\ell}{p}{r} \mapsto M\} \\
% Forward
\mlab{Forward} & \cek{ (\Do \; \ell \; V)^E \mid \env \mid (\theta \cons \shk) \circ \shk'}
&\stepsto& \cek{ (\Do \; \ell \; V)^E \mid \env \mid \shk \circ (\shk' \concat [\theta])},
&\text{if } \gell = \emptyset
\ea
\el
\]
\caption{Abstract machine transitions.}
\label{fig:abstract-machine-semantics-gencont}
\end{minipage}
}
\end{figure}
%
\begin{figure}
\[
\bl
\ba{@{~}l@{\quad}l@{~}l}
\multicolumn{2}{l}{\textbf{Initial continuation}}\\
\multicolumn{3}{l}{\quad\shk_0 \defas [(\nil, (\emptyset, \{\Return\;x \mapsto \Return\;x\}))]}
\medskip\\
%
\textbf{Initialisation} & \stepsto \subseteq \CompCat \times \MConfCat\\
\quad\mlab{Init} & \multicolumn{2}{l}{\quad M \stepsto \cek{M \mid \emptyset \mid \shk_0}}
\medskip\\
%
\textbf{Finalisation} & \stepsto \subseteq \MConfCat \times \ValCat\\
\quad\mlab{Halt} & \multicolumn{2}{l}{\quad\cek{\Return\;V \mid \env \mid \nil} \stepsto \val{V}\env}
\ea
\el
\]
\caption{Machine initialisation and finalisation.}
\label{fig:machine-init-final}
\end{figure}
%
The semantics of the abstract machine is defined in terms of a
transition relation $\stepsto \subseteq \MConfCat \times \MConfCat$ on
machine configurations. The definition of the transition relation is
given in Figure~\ref{fig:abstract-machine-semantics-gencont}.
%
A fair amount of the transition rules involve manipulating the
continuation. We adopt the same stack notation conventions used in the
CPS translation with generalised continuations
(Section~\ref{sec:cps-gen-conts}) and write $\nil$ for an empty stack,
$x \cons s$ for the result of pushing $x$ on top of stack $s$, and
$s \concat s'$ for the concatenation of stack $s$ on top of $s'$. We
use pattern matching to deconstruct stacks.
The first eight rules enact the elimination of values.
%
The first three rules concern closures (\mlab{App}, \mlab{AppRec},
\mlab{AppType}); they all work essentially the same way. For example,
the \mlab{App} rule uses the value interpretation function $\val{-}$
to interpret the abstractor $V$ in the machine environment $\env$ to
obtain a closure. The body $M$ of the closure is placed in the control
component. Before the closure environment $\env'$ is installed as the
new machine environment, it is extended with a binding of the formal
parameter of the abstraction to the interpretation of the argument $W$
in the previous environment $\env$. The rule \mlab{AppRec} behaves
almost the same; the only difference is that it also binds the
variable $g$ to the recursive closure in the environment. The rule
\mlab{AppType} does not extend the environment; instead the type is
substituted directly into the body. In all three rules the
continuation component remains untouched.
The resumption rules (\mlab{Resume}, \mlab{Resume^\dagger},
\mlab{Resume^\param}), however, manipulate the continuation component
as they implement the context restorative behaviour of deep, shallow,
and parameterised resumption application respectively. The
\mlab{Resume} rule handles deep resumption invocations. A deep
resumption is syntactically a generalised continuation, and therefore
it can be directly composed with the machine continuation. Following a
deep resumption invocation the argument gets placed in the control
component, whilst the reified continuation $\kappa'$ representing the
resumption gets concatenated with the machine continuation $\kappa$
in order to restore the captured context.
%
The rule \mlab{Resume^\dagger} realises shallow resumption
invocations. Syntactically, a shallow resumption consists of a pair
whose first component is a dangling pure continuation $\sigma'$, which
is leftover after removal of its nearest enclosing handler, and the
second component contains a reified generalised continuation
$\kappa'$. The dangling pure continuation gets adopted by the top-most
handler $\chi$ as $\sigma'$ gets appended onto the pure continuation
$\sigma$ running under $\chi$. The resulting continuation gets
composed with the reified continuation $\kappa'$.
%
The rule \mlab{Resume^\param} implements the behaviour of
parameterised resumption invocations. Syntactically, a parameterised
resumption is a generalised continuation, just like an ordinary deep
resumption. The primary difference between \mlab{Resume}
and \mlab{Resume^\param} is that in the latter rule the top-most frame
of $\kappa'$ contains a parameterised handler definition, whose
parameter $q$ needs to be updated following an invocation. The handler
closure environment $\env'$ gets extended by a mapping of $q$ to the
interpretation of the argument $W'$ such that this value of $q$ is
available during the next activation of the handler. Following the
environment update the reified continuation gets reconstructed and
appended onto the current machine continuation.
The rules $\mlab{Split}$ and $\mlab{Case}$ concern record destruction
and variant scrutiny, respectively. Record destruction binds both
the variable $x$ to the value $v$ at label $\ell$ in the record $V$
and the variable $y$ to the tail of the record in the current
environment $\env$.
%
Case splitting dispatches to the first branch with the variable $x$
bound to the variant payload in the environment if the label of the
variant $V$ matches $\ell$, otherwise it dispatches to the second
branch with the variable $y$ bound to the interpretation of $V$ in the
environment.
The rules \mlab{Let}, \mlab{Handle^\depth}, and \mlab{Handle^\param}
augment the current continuation with let bindings and handlers. The
rule \mlab{Let} puts the computation $M$ of a let expression into the
control component and extends the current pure continuation with the
closure of the (source) continuation of the let expression.
%
The \mlab{Handle^\depth} rule covers both ordinary deep and shallow
handler installation. The computation $M$ is placed in the control
component, whilst the continuation is extended by an additional
generalised frame with an empty pure continuation and the closure of
the handler $H$.
%
The rule \mlab{Handle^\param} covers installation of parameterised
handlers. The only difference here is that the parameter $q$ is bound
to the interpretation of $W$ in the handler closure environment.
The current continuation gets shrunk by rules \mlab{PureCont} and
\mlab{GenCont}. If the current pure continuation is nonempty then the
rule \mlab{PureCont} binds the returned value in the top-most pure
frame; otherwise the rule \mlab{GenCont} invokes the return clause of
the current handler.
The forwarding continuation is used by rules \mlab{Do^\depth},
\mlab{Do^\dagger}, and \mlab{Forward}. The rule \mlab{Do^\depth}
covers operation invocations under deep and parameterised handlers. If
the top-most handler handles the operation $\ell$, then the
corresponding clause computation $M$ gets placed in the control
component, and the handler environment $\env'$ is installed with
bindings of the operation payload and the resumption. The resumption
is the forwarding continuation $\kappa'$ extended by the current
generalised continuation frame.
%
The rule \mlab{Do^\dagger} is much like \mlab{Do^\depth}, except it
constructs a shallow resumption, discarding the current handler but
keeping the current pure continuation.
%
The rule \mlab{Forward} appends the current continuation
frame onto the end of the forwarding continuation.
As a slight abuse of notation, we overload $\stepsto$ to inject
computation terms into an initial machine configuration as well as to
project final values. Figure~\ref{fig:machine-init-final} depicts the
structure of the initial machine continuation and two additional
pseudo transitions. The initial continuation consists of a single
generalised continuation frame with an empty pure continuation running
under an identity handler. The \mlab{Init} rule provides a canonical
way to map a computation term onto a configuration, whilst \mlab{Halt}
provides a way to extract the final value of some computation from a
configuration.
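%
To complement the formal definitions, here is a small self-contained
OCaml rendering of the machine for a representative fragment with deep
handlers only; the term datatype, \texttt{run}, and all other names
are illustrative simplifications rather than the formal syntax of this
thesis.
%
\begin{verbatim}
type var = string
type label = string

type comp =
  | Return of value_t
  | App of value_t * value_t
  | Let of var * comp * comp               (* let x <- M in N *)
  | Do of label * value_t                  (* do l V *)
  | Handle of comp * handler               (* handle M with H *)
and value_t =
  | Var of var
  | Lam of var * comp
and handler = {
  ret : var * comp;                        (* return x |-> M *)
  ops : (label * (var * var * comp)) list; (* l p r |-> M *)
}

module Env = Map.Make (String)

type mvalue =
  | Clo of env * var * comp                (* function closure *)
  | Resumption of gcont                    (* reified deep resumption *)
and env = mvalue Env.t
and pframe = env * var * comp              (* closes let x <- [] in N *)
and gframe = pframe list * (env * handler) (* pure cont + handler closure *)
and gcont = gframe list

(* Value interpretation. *)
let interp (env : env) : value_t -> mvalue = function
  | Var x -> Env.find x env
  | Lam (x, m) -> Clo (env, x, m)

(* The transition function; kappa' is the forwarding continuation,
   only non-empty while an operation searches for its handler. *)
let rec step (m : comp) (env : env) (kappa : gcont) (kappa' : gcont)
  : mvalue =
  match m, kappa with
  | App (v, w), _ ->
    (match interp env v with
     | Clo (env', x, body) ->                               (* App *)
       step body (Env.add x (interp env w) env') kappa kappa'
     | Resumption r ->                                      (* Resume *)
       step (Return w) env (r @ kappa) kappa')
  | Let (x, m1, n), (pk, chi) :: ks ->                      (* Let *)
    step m1 env (((env, x, n) :: pk, chi) :: ks) kappa'
  | Handle (m1, h), _ ->                                    (* Handle *)
    step m1 env (([], (env, h)) :: kappa) kappa'
  | Return v, [] -> interp env v                            (* Halt *)
  | Return v, ((env', x, n) :: pk, chi) :: ks ->            (* PureCont *)
    step n (Env.add x (interp env v) env') ((pk, chi) :: ks) kappa'
  | Return v, ([], (env', h)) :: ks ->                      (* GenCont *)
    let (x, n) = h.ret in
    step n (Env.add x (interp env v) env') ks kappa'
  | Do (l, v), ((_, (henv, h)) as theta) :: ks ->
    (match List.assoc_opt l h.ops with
     | Some (p, r, n) ->                                    (* Do *)
       let env'' =
         Env.add r (Resumption (kappa' @ [theta]))
           (Env.add p (interp env v) henv)
       in
       step n env'' ks []
     | None ->                                              (* Forward *)
       step m env ks (kappa' @ [theta]))
  | (Let _ | Do _), [] -> failwith "ill-formed configuration"

(* Initialisation: run M under the identity handler. *)
let run (m : comp) : mvalue =
  let id_handler = { ret = ("x", Return (Var "x")); ops = [] } in
  step m Env.empty [ ([], (Env.empty, id_handler)) ] []
\end{verbatim}
%
Shallow and parameterised handlers can be accommodated by extending
the representation of resumptions with the dangling pure continuation
and the handler parameter respectively, mirroring the rules
\mlab{Resume^\dagger} and \mlab{Resume^\param}.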
\subsection{Putting the machine into action}
\newcommand{\chiid}{\ensuremath{\chi_{\text{id}}}}
\newcommand{\kappaid}{\ensuremath{\kappa_{\text{id}}}}
\newcommand{\incr}{\dec{incr}}
\newcommand{\Incr}{\dec{Incr}}
\newcommand{\prodf}{\dec{prod}}
\newcommand{\consf}{\dec{cons}}
%
To gain a better understanding of how the abstract machine concretely
transitions between configurations we will consider a small program
consisting of a deep, parameterised, and shallow handler.
%
For the deep handler we will use the $\nondet$ handler from
Section~\ref{sec:tiny-unix-time} which handles invocations of the
operation $\Fork : \UnitType \opto \Bool$; it is reproduced here in
fine-grain call-by-value syntax.
%
\[
\bl
% H_\nondet : \alpha \eff \{\Choose : \UnitType \opto \Bool\} \Harrow \List~\alpha\\
% H_\nondet \defas
% \ba[t]{@{~}l@{~}c@{~}l}
% \Return\;x &\mapsto& [x]\\
% \OpCase{\Choose}{\Unit}{resume} &\mapsto& resume~\True \concat resume~\False
% \ea \smallskip\\
\nondet : (\UnitType \to \alpha \eff \{\Fork : \UnitType \opto \Bool\}) \to \List~\alpha\\
\nondet~m \defas \bl
\Handle\;m\,\Unit\;\With\\
~\ba{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& [x]\\
\OpCase{\Fork}{\Unit}{resume} &\mapsto&
\bl
\Let\;xs \revto resume~\True\;\In\\
\Let\;ys \revto resume~\False\;\In\;
xs \concat ys
\el
\ea
\el
\el
\]
%
As for the parameterised handler, we will use a handler that
implements a simple counter supporting one operation,
$\Incr : \UnitType \opto \Int$, which increments the value of the
counter and returns the previous value. It is defined as follows.
%
\[
\bl
% H_\incr : \Record{\Int;\alpha \eff \{\Incr : \UnitType \opto \Int\}} \Harrow^\param \alpha\\
% H_\incr \defas
% i.\,\ba[t]{@{~}l@{~}c@{~}l}
% \Return\;x &\mapsto& x\\
% \OpCase{\Incr}{\Unit}{resume} &\mapsto& resume\,\Record{i+1;i}
% \ea \smallskip\\
\incr : \Record{\Int;\UnitType \to \alpha\eff \{\Incr : \UnitType \opto \Int\}} \to \alpha\\
\incr\,\Record{i_0;m} \defas
\bl
\ParamHandle\;m\,\Unit\;\With\\
~\left(i.\,\ba{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;x\\
\OpCase{\Incr}{\Unit}{resume} &\mapsto& \Let\;i' \revto i+1\;\In\;resume\,\Record{i';i}
\ea\right)~i_0
\el
\el
\]
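%
As an aside, the parameter-passing discipline of $\incr$ can be
sketched in OCaml 5, whose native deep handlers live in the
\texttt{Effect} module. The following is a minimal illustrative
sketch, not part of the formal development: the parameter is threaded
by interpreting the handled computation as a function
$\Int \to \alpha$, and the names \texttt{incr} and \texttt{example}
are mine.
%
\begin{verbatim}
open Effect
open Effect.Deep

type _ Effect.t += Incr : int Effect.t

(* A parameterised handler in parameter-passing style: the handled
   computation is interpreted as a function int -> 'a that receives
   the current counter value. *)
let incr : type a. int -> (unit -> a) -> a =
  fun i0 m ->
    match_with m ()
      { retc = (fun x -> fun (_ : int) -> x);
        exnc = raise;
        effc = (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Incr -> Some (fun (k : (b, int -> a) continuation) ->
              (* Return the previous value i; pass i + 1 onwards. *)
              fun i -> continue k i (i + 1))
          | _ -> None) }
      i0

let example () =
  incr 1 (fun () ->
    let j = perform Incr in
    let k = perform Incr in
    (j, k))
(* example () = (1, 2) *)
\end{verbatim}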
%
We will use the $\Pipe$ and $\Copipe$ shallow handlers from
Section~\ref{sec:pipes} to construct a small pipeline.
%
\[
\bl
\Pipe : \Record{\UnitType \to \alpha \eff \{ \Yield : \beta \opto \UnitType \}; \UnitType \to \alpha\eff\{ \Await : \UnitType \opto \beta \}} \to \alpha \\
\Pipe\, \Record{p; c} \defas
\bl
\ShallowHandle\; c\,\Unit \;\With\; \\
~\ba[m]{@{}l@{~}c@{~}l@{}}
\Return~x &\mapsto& \Return\;x \\
\OpCase{\Await}{\Unit}{resume} &\mapsto& \Copipe\,\Record{resume; p} \\
\ea
\el\medskip\\
\Copipe : \Record{\beta \to \alpha\eff\{ \Await : \UnitType \opto \beta\}; \UnitType \to \alpha\eff\{ \Yield : \beta \opto \UnitType\}} \to \alpha \\
\Copipe\, \Record{c; p} \defas
\bl
\ShallowHandle\; p\,\Unit \;\With\; \\
~\ba[m]{@{}l@{~}c@{~}l@{}}
\Return~x &\mapsto& \Return\;x \\
\OpCase{\Yield}{y}{resume} &\mapsto& \Pipe\,\Record{resume; \lambda \Unit. c\, y} \\
\ea \\
\el \\
\el
\]
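%
As an aside, the mutual recursion between $\Pipe$ and $\Copipe$
transcribes almost directly into OCaml 5 via its
\texttt{Effect.Shallow} interface. The sketch below is illustrative
only and assumes integer elements; \texttt{step\_cons} and
\texttt{step\_prod} (my names) play the roles of $\Pipe$ and
$\Copipe$, and, unlike in the calculi of this thesis, OCaml's
resumptions are one-shot.
%
\begin{verbatim}
open Effect
open Effect.Shallow

type _ Effect.t += Yield : int -> unit Effect.t
type _ Effect.t += Await : int Effect.t

(* Run the consumer until it performs Await, then switch to the
   producer; each function hands the other its peer's resumption. *)
let rec step_cons : type b r.
  (b, r) continuation -> b -> (unit, r) continuation -> r =
  fun cons v prod ->
    continue_with cons v
      { retc = Fun.id; exnc = raise;
        effc = (fun (type c) (eff : c Effect.t) ->
          match eff with
          | Await -> Some (fun (k : (c, r) continuation) ->
              step_prod prod () k)
          | _ -> None) }

(* Run the producer until it performs Yield, then feed the consumer. *)
and step_prod : type b r.
  (b, r) continuation -> b -> (int, r) continuation -> r =
  fun prod v cons ->
    continue_with prod v
      { retc = Fun.id; exnc = raise;
        effc = (fun (type c) (eff : c Effect.t) ->
          match eff with
          | Yield n -> Some (fun (k : (c, r) continuation) ->
              step_cons cons n k)
          | _ -> None) }

(* Plug a producer and a consumer together, starting the consumer. *)
let pipe (prod : unit -> 'r) (cons : unit -> 'r) : 'r =
  step_cons (fiber cons) () (fiber prod)
\end{verbatim}
%
For instance, \texttt{pipe (fun () -> perform (Yield 1); perform
(Yield 2); 0) (fun () -> perform Await + perform Await)} evaluates to
\texttt{3}; when the consumer returns, the producer's remaining work
is simply discarded.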
%
We use the following producer and consumer computations for the
pipes.
%
\[
\bl
\prodf : \UnitType \to \alpha \eff \{\Incr : \UnitType \opto \Int; \Yield : \Int \opto \UnitType\}\\
\prodf\,\Unit \defas
\bl
\Let\;j \revto \Do\;\Incr\,\Unit\;\In\;
\Let\;x \revto \Do\;\Yield~j\;
\In\;\dec{prod}\,\Unit
\el\smallskip\\
\consf : \UnitType \to \Int \eff \{\Fork : \UnitType \opto \Bool; \Await : \UnitType \opto \Int\}\\
\consf\,\Unit \defas
\bl
\Let\;b \revto \Do\;\Fork\,\Unit\;\In\;
\Let\;x \revto \Do\;\Await\,\Unit\;\In\\
% \Let\;y \revto \Do\;\Await\,\Unit\;\In\\
\If\;b\;\Then\;x*2\;\Else\;x*x
\el
\el
\]
%
The producer computation $\prodf$ invokes the operation $\Incr$ to
increment and retrieve the previous value of some counter. This value
is supplied as the payload to an invocation of $\Yield$.
%
The consumer computation $\consf$ first performs an invocation of
$\Fork$ to duplicate the stream, and then it performs an invocation of
$\Await$ to retrieve some value. The return value of $\consf$ depends
on whether the instance runs in the original stream or the forked
stream. The original stream multiplies the retrieved value by $2$, and
the duplicate squares the value.
%
Finally, the top-level computation plugs all of the above together.
%
\begin{equation}
\nondet\,(\lambda\Unit.\incr\,\Record{1;\lambda\Unit.\Pipe\,\Record{\prodf;\consf}})\label{eq:abs-prog}
\end{equation}
%
%
Function interpretation is somewhat heavy notation-wise, as
environments need to be built. To make the notation a bit more
lightweight I will not define the initial environments for closures
explicitly. By convention I will subscript initial environments with
the name of the function, e.g. $\env_\consf$ denotes the initial
environment for the closure of $\consf$. Extensions of initial
environments will use superscripts to differentiate themselves,
e.g. $\env_\consf'$ is an extension of $\env_\consf$. As a final
environment simplification, I will take the initial environments to
contain the bindings for the parameters of their closures; that is, an
initial environment is really the environment for the body of its
closure. In a similar fashion, I will use superscripts and subscripts
to differentiate handler closures, e.g. $\chi^\dagger_\Pipe$ denotes
the handler closure for the shallow handler definition in $\Pipe$. The
environment of a handler closure is to be understood
implicitly. Furthermore, the definitions above should be understood to
be implicitly $\Let$-sequenced, with \eqref{eq:abs-prog} as the tail
computation. Evaluation of this sequence gives rise to a `toplevel'
environment, which binds the closures for the definitions. I shall use
$\env_0$ to denote this environment.
The machine executes the top-level computation in an initial
configuration with the top-level environment $\env_0$. The first few
transitions install the three handlers in order: $\nondet$,
$\incr$, and $\Pipe$.
%
\begin{derivation}
&\nondet\,(\lambda\Unit.\incr\,\Record{1;\lambda\Unit.\Pipe\,\Record{\prodf;\consf}})\\
\stepsto& \reason{\mlab{Init} with $\env_0$}\\
&\cek{\nondet\,(\lambda\Unit.\incr\,\Record{1;\lambda\Unit.\Pipe\,\Record{\prodf;\consf}}) \mid \env_0 \mid \sks_0}\\
\stepsto^+& \reason{$3\times$(\mlab{App}, \mlab{Handle^\delta})}\\
&% \bl
\cek{c\,\Unit \mid \env_\Pipe \mid (\nil,\chi^\dagger_\Pipe) \cons (\nil, \chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0}\\
% \text{where }
% \bl
% % \env_\Pipe = \env_0[c \mapsto (\env_0, \consf), p \mapsto (\env_0, \prodf)]\\
% % \chi^\dagger_\Pipe = (\env_\Pipe, H^\dagger_\Pipe)\\
% % \env_\incr = \env_0[m \mapsto (\env_0, \lambda\Unit.\Pipe\cdots),i \mapsto 0]\\
% % \chi^\param_\incr = (\env_\incr, H^\param_\incr)\\
% % \env_\nondet = \env_0[m \mapsto (\env_0, \lambda\Unit.\incr \cdots)]\\
% % \chi_\nondet = (\env_\nondet, H_\nondet)
% \el
% \el\\
\end{derivation}
%
At this stage the continuation consists of four frames. The first
three frames each correspond to an installed handler, whereas the
last frame is the identity handler. The control component focuses the
application of the consumer computation provided as an argument to
$\Pipe$. The next few transitions get us to the first operation
invocation.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{App}, \mlab{Let}}\\
&\bl
\cek{\Do\;\Fork\,\Unit \mid \env_\consf \mid ([(\env_\consf,b,\Let\;x \revto \cdots)],\chi^\dagger_\Pipe) \cons (\nil, \chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0}\\
\el\\
\stepsto^+& \reason{\mlab{Forward}, \mlab{Forward}}\\
&\bl
\cek{\Do\;\Fork\,\Unit \mid \env_\consf \mid [(\nil, \chi_\nondet),(\nil,\chiid)] \circ \kappa'}\\
\text{where } \kappa' = [([(\env_\consf,b,\Let\;x \revto \cdots)],\chi^\dagger_\Pipe),(\nil, \chi^\param_\incr)]
\el\\
\end{derivation}
%
The pure continuation under $\chi^\dagger_\Pipe$ has been augmented
with the pure frame corresponding to the $\Let$-binding of the
invocation of $\Fork$. The operation invocation causes the machine to
initiate a search for a suitable handler, as the top-most handler
$\Pipe$ does not handle $\Fork$. The machine performs two
$\mlab{Forward}$ transitions, which move the two top-most frames from
the program continuation onto the forwarding continuation.
%
As a result, the now top-most frame of the program continuation
contains a suitable handler for $\Fork$. Thus the following
transitions transfer control to the $\Fork$-case inside the $\nondet$
handler.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{Do}, \mlab{Let}}\\
&\bl
\cek{resume~\True \mid \env_\nondet' \mid \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\nondet' &=& \env_\nondet[resume \mapsto \kappa' \concat [(\nil, \chi_\nondet)]]\\
\kappa_0' &=& [([(\env_\nondet',xs,\Let\;ys\revto\cdots)],\chiid)]
\ea
\el
\end{derivation}
%
The $\mlab{Do}$ transition is responsible for activating the handler,
and the $\mlab{Let}$ transition focuses the first resumption
invocation. The resumption $resume$ is bound in the environment to the
forwarding continuation $\kappa'$ extended with the frame for the
current handler. The pure continuation running under the identity
handler gets extended with the $\Let$-binding containing the first
resumption invocation. The next transitions reassemble the program
continuation and focus control on the invocation of $\Await$.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{Resume}, \mlab{PureCont}, \mlab{Let}}\\
&\bl
\cek{\Do\;\Await\,\Unit \mid \env_\consf' \mid ([(\env_\consf',x,\If\;b\cdots)], \chi^\dagger_\Pipe) \cons (\nil,\chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\consf' &=& \env_\consf[b \mapsto \True]\\
\ea
\el
\end{derivation}
%
At this stage the context of $\consf$ has been restored with $b$
bound to the value $\True$. The pure continuation running under
$\Pipe$ has been extended with a pure frame corresponding to the
continuation of the $\Let$-binding of the $\Await$
invocation. Handling of this invocation requires no use of the
forwarding continuation as the top-most frame contains a suitable
handler.
%
\begin{derivation}
\stepsto& \reason{\mlab{Do^\dagger}}\\
&\bl
\cek{\Copipe\,\Record{resume;p} \mid \env_\Pipe' \mid (\nil,\chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\Pipe' &=& \env_\Pipe[resume \mapsto (\nil, [(\env_\consf',x,\If\;b\cdots)])]\\
\ea
\el
\end{derivation}
%
Now the $\Await$-case of the $\Pipe$ handler has been activated. The
resumption $resume$ is bound to the shallow resumption in the
environment. The generalised continuation component of the shallow
resumption is empty, because no forwarding was involved in locating
the handler. The next transitions install the $\Copipe$ handler and
run the producer computation.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{App}, \mlab{Handle^\dagger}}\\
&\bl
\cek{p\,\Unit \mid \env_\Copipe' \mid (\nil, \chi^\dagger_\Copipe) \cons (\nil, \chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\Copipe' &=& \env_\Copipe[c \mapsto (\nil, [(\env_\consf',x,\If\;b\cdots)])]\\
% \chi_\Copipe &=& (\env_\Copipe,H^\dagger_\Copipe)\\
\ea
\el\\
\stepsto^+& \reason{\mlab{AppRec}, \mlab{Let}}\\
&\cek{\Do\;\Incr\,\Unit \mid \env_\prodf \mid ([(\env_\prodf,j,\Let\;x\revto\cdots)],\chi^\dagger_\Copipe) \cons (\nil, \chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\stepsto^+& \reason{\mlab{Forward}, \mlab{Do^\param}, \mlab{Let}, \mlab{App}, \mlab{PureCont}}\\
&\bl
\cek{resume\,\Record{i';i} \mid \env_\incr' \mid (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\incr' &=& \env_\incr[\bl
i \mapsto 1, i' \mapsto 2,\\
resume \mapsto [([(\env_\prodf,j,\Let\;x\revto\cdots)],\chi^\dagger_\Copipe),(\nil,\chi^\param_\incr)]]
\el
\ea
\el
\end{derivation}
%
The producer computation performs the $\Incr$ operation, which
requires one $\mlab{Forward}$ transition in order to locate a suitable
handler for it. The $\Incr$-case of the $\incr$ handler increments the
counter $i$ by one. The environment binds the current value of the
counter. The following $\mlab{Resume^\param}$ transition updates the
counter value to be that of $i'$ and continues the producer
computation.
%
\begin{derivation}
\stepsto^+&\reason{\mlab{Resume^\param}, \mlab{PureCont}, \mlab{Let}}\\
&\bl
\cek{\Do\;\Yield~j \mid \env_{\prodf}'' \mid ([(\env_{\prodf}'',x,\prodf\,\Unit)],\chi^\dagger_\Copipe) \cons (\nil,\chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\prodf'' &=& \env_\prodf[j \mapsto 1]\\
\ea
\el\\
\stepsto& \reason{\mlab{Do^\dagger}}\\
&\bl
\cek{\Pipe\,\Record{resume;\lambda\Unit.c\,y} \mid \env_\Pipe' \mid (\nil,\chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\Pipe' &=& \env_\Pipe[y \mapsto 1, resume \mapsto (\nil,[(\env_{\prodf}'',x,\prodf\,\Unit)])]
\ea
\el\\
\stepsto^+& \reason{\mlab{App}, \mlab{Handle^\dagger}, \mlab{Resume^\dagger}, \mlab{PureCont}}\\
&\bl
\cek{\If\;b\;\Then\;x * 2\;\Else\;x*x \mid \env_\consf'' \mid (\nil, \chi^\dagger_\Pipe) \cons (\nil,\chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0'}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\consf'' &=& \env_\consf'[x \mapsto 1]
\ea
\el
\end{derivation}
%
The $\Yield$ operation causes another instance of the $\Pipe$ handler
to be installed in place of the $\Copipe$ handler. The
$\mlab{Resume^\dagger}$ transition occurs because the consumer
argument provided to $\Pipe$ is the resumption captured by the
original instance of $\Pipe$, thus
invoking it causes the context of the original consumer computation to
be restored. Since $b$ is $\True$ the $\If$-expression will dispatch
to the $\Then$-branch, meaning the computation will ultimately return
$2$. This return value gets propagated through the handler stack.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{Case}, \mlab{App}, \mlab{GenCont}, \mlab{GenCont}, \mlab{GenCont}}\\
&\cek{\Return\;[x] \mid \env_\nondet[x \mapsto 2] \mid \kappa_0'}
\end{derivation}
%
The $\Return$-clauses of the $\Pipe$ and $\incr$ handlers are
identities, and thus, the return value $x$ passes through
unmodified. The $\Return$-case of $\nondet$ lifts the value into a
singleton list. Next the pure continuation is invoked, which restores
the handling context of the first operation invocation $\Fork$.
%
\begin{derivation}
\stepsto^+& \reason{\mlab{PureCont}, \mlab{Let}}\\
&\bl
\cek{resume~\False \mid \env_\nondet'' \mid \kappa_0''}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\nondet'' &=& \env_\nondet'[xs \mapsto [2]]\\
\kappa_0'' &=& [([(\env_\nondet'',ys,xs \concat ys)],\chiid)]
\ea
\el\\
\stepsto^+& \reason{\mlab{Resume}, \mlab{PureCont}, \mlab{Let}}\\
&\bl
\cek{\Do\;\Await\,\Unit \mid \env_\consf''' \mid ([(\env_\consf''',x,\If\;b\cdots)], \chi^\dagger_\Pipe) \cons (\nil, \chi^\param_\incr) \cons (\nil, \chi_\nondet) \cons \kappa_0''}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\consf''' &=& \env_\consf''[b \mapsto \False]\\
\ea
\el\\
\end{derivation}
%
The second invocation of the resumption $resume$ interprets $\Fork$ as
$\False$. The consumer computation is effectively restarted with $b$
bound to $\False$. The previous transitions will be repeated.
%
\begin{derivation}
\stepsto^+ & \reason{same reasoning as above}\\
&\bl
\cek{resume\,\Record{i';i} \mid \env_\incr'' \mid (\nil, \chi_\nondet) \cons \kappa_0''}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\incr'' &=& \env_\incr[\bl
i \mapsto 2, i' \mapsto 3,\\
resume \mapsto [([(\env_\prodf'',j,\Let\;x\revto\cdots)],\chi^\dagger_\Copipe),(\nil,\chi^\param_\incr)]]\el
\ea
\el
\end{derivation}
%
After a number of transitions the parameterised handler $\incr$ will
be activated again. The counter variable $i$ is bound to the value
computed during the previous activation of the handler. The machine
proceeds as before and eventually reaches the concatenation
application inside the $\Fork$-case.
%
\begin{derivation}
\stepsto^+& \reason{same reasoning as above}\\
&\bl
\cek{xs \concat ys \mid \env_\nondet''' \mid \kappa_0}\\
\text{where }
\ba[t]{@{~}r@{~}c@{~}l}
\env_\nondet''' &=& \env_\nondet''[ys \mapsto [4]]
\ea
\el\\
\stepsto^+& \reason{\mlab{App}, \mlab{GenCont}, \mlab{Halt}}\\
& [2,4]
\end{derivation}
%
\section{Realisability and efficiency implications}
\label{subsec:machine-realisability}
A practical benefit of the abstract machine semantics over the
context-based small-step reduction semantics with explicit
substitutions is that it provides either a blueprint for a high-level
interpreter-based implementation or an outline for how stacks should
be manipulated in a low-level implementation along with a more
practical and precise cost model. The cost model is more practical in
the sense of modelling how actual hardware might go about executing
instructions, and it is more precise as it eliminates the declarative
aspect of the contextual semantics induced by the \semlab{Lift}
rule. For example, the asymptotic cost of handler lookup is unclear in
the contextual semantics, whereas the abstract machine clearly tells
us that handler lookup involves a linear search through the machine
continuation.
The abstract machine is readily realisable using standard persistent
functional data structures such as lists and
maps~\cite{Okasaki99}. The concrete choice of data structures required
to realise the abstract machine is not set in stone: although its
definition is suggestive of particular data structures, it leaves
space for interpretation.
%
For example, generalised continuations can be implemented using lists,
arrays, or even heaps. However, the concrete choice of data structure
impacts the asymptotic time and space complexity of the primitive
operations on continuations: continuation augmentation ($\cons$) and
concatenation ($\concat$).
%
For instance, an implementation based on a singly linked list admits a
constant time implementation of continuation augmentation, as this
operation corresponds directly to list cons; however, it admits only a
linear time implementation of continuation concatenation. A fixed-size
array fares worse, as either operation may require resizing and
copying the contents, and thus runs in linear time in the extreme
case.
%
Alternatively, an implementation based on a \citeauthor{Hughes86}
list~\cite{Hughes86} reverses the costs, as a \citeauthor{Hughes86}
list uses functions to represent cons cells: concatenation is simply
function composition, but accessing any element, including the head,
always takes linear time in the size of the list. In practice, this
difference in efficiency means we can trade fast interpretation of
$\Let$-bindings and $\Handle$-computations for `slow' handling and
context restoration, or vice versa, depending on what we expect to
occur more frequently.
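%
To make this trade-off concrete, the following OCaml fragment sketches
a \citeauthor{Hughes86} list; the representation and names are
illustrative, not part of the machine definition.
%
\begin{verbatim}
(* A Hughes list represents a list as a function that prepends its
   elements to a given tail: cons and append are constant time. *)
type 'a hlist = 'a list -> 'a list

let nil : 'a hlist = fun tail -> tail
let cons (x : 'a) (xs : 'a hlist) : 'a hlist = fun tail -> x :: xs tail
let append (xs : 'a hlist) (ys : 'a hlist) : 'a hlist =
  fun tail -> xs (ys tail)                 (* function composition *)

(* Observing elements forces the whole representation: linear time. *)
let to_list (xs : 'a hlist) : 'a list = xs []
let head (xs : 'a hlist) : 'a = List.hd (to_list xs)
\end{verbatim}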
The pervasiveness of $\Let$-bindings in fine-grain call-by-value means
that the top-most pure continuation is likely to be augmented and
shrunk repeatedly, thus it is sensible simply to represent generalised
continuations as singly linked lists in order to provide constant time
pure continuation augmentation (handler installation would be constant
time too). However, the continuation component contains two
generalised continuations. In the rule \mlab{Forward} the forwarding
continuation is extended using concatenation, thus we may choose to
represent the forwarding continuation as a \citeauthor{Hughes86} list
for greater efficiency. A consequence of this choice is that upon
resumption invocation we must convert the forwarding continuation into
a singly linked list such that it can be concatenated with the program
continuation. Both the conversion and the concatenation require a full
linear traversal of the forwarding continuation.
%
A cleverer choice is to represent both continuations using
\citeauthor{Huet97}'s zipper data structure~\cite{Huet97}, which
essentially boils down to using a pair of singly linked lists, where
the first component contains the program continuation, and the second
component contains the forwarding continuation. We can make a
non-asymptotic improvement by representing the forwarding continuation
as a reversed list, such that we may interpret the concatenation
operation ($\concat$) in \mlab{Forward} as regular cons ($\cons$). In
the \mlab{Resume^\delta} rules we must then interpret concatenation as
reverse append, which needs to traverse the forwarding continuation
only once.
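%
The following OCaml fragment sketches this zipper representation; the
types of pure continuation frames and handler closures are left
abstract, and the names are illustrative only.
%
\begin{verbatim}
(* A generalised continuation frame: a pure continuation paired with
   a handler closure; both component types are kept abstract here. *)
type ('p, 'h) frame = 'p list * 'h

(* The machine continuation as a zipper: the forwarding continuation
   is stored in reverse order. *)
type ('p, 'h) zipper =
  { program    : ('p, 'h) frame list;
    forwarding : ('p, 'h) frame list (* reversed *) }

(* Forward: concatenation becomes a cons onto the reversed list. *)
let forward (z : ('p, 'h) zipper) : ('p, 'h) zipper =
  match z.program with
  | [] -> invalid_arg "forward: no handler found"
  | f :: rest -> { program = rest; forwarding = f :: z.forwarding }

(* Resume: reverse append traverses the forwarding continuation
   exactly once. *)
let resume (z : ('p, 'h) zipper) : ('p, 'h) frame list =
  List.rev_append z.forwarding z.program
\end{verbatim}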
\paragraph{Continuation copying}
A convenient consequence of using persistent functional data
structures to realise the abstract machine is that multi-shot
resumptions become efficient, as continuation copying becomes a
constant time operation. However, if we were only interested in
one-shot or linearly used resumptions, then we may wish to use
in-place mutation to achieve greater efficiency. In-place mutation
does not exclude support for multi-shot resumptions; however, with
mutable data structures the resumption needs to be copied before
use. One possible way to copy resumptions is to expose an explicit
copy instruction in the source language. Alternatively, if the source
language is equipped with a linear type system, then the linear type
information can be leveraged to insert copy instructions automatically
prior to resumption invocations.
% The structure of a generalised continuation lends itself to a
% straightforward implementation using a persistent singly-linked
% list. A particularly well-suited data structure for the machine
% continuation is \citeauthor{Huet97}'s Zipper data
% structure~\cite{Huet97}, which is essentially a pair of lists.
% copy-on-write environments
% The definition of abstract machine in this chapter is highly
% suggestive of the choice of data structures required for a realisation
% of the machine. The machine presented in this chapter can readily be
% realised using standard functional data structures such as lists and
% maps~\cite{Okasaki99}.
\section{Simulation of the context-based reduction semantics}
\label{subsec:machine-correctness}
\begin{figure}[t]
\flushleft
\newcommand{\contapp}[2]{#1 #2}
\newcommand{\contappp}[2]{#1(#2)}
%% \newcommand{\contapp}[2]{#1[#2]}
%% \newcommand{\contapp}[2]{#1\mathbin{@}#2}
%% \newcommand{\contappp}[2]{#1\mathbin{@}(#2)}
%
\textbf{Configurations}
\begin{displaymath}
\inv{\cek{M \mid \env \mid \shk \circ \shk'}} \defas \contappp{\inv{\shk' \concat \shk}}{\inv{M}\env}
\defas \contappp{\inv{\shk'}}{\contapp{\inv{\shk}}{\inv{M}\env}}
\end{displaymath}
%
\textbf{Pure continuations}
\begin{displaymath}
\contapp{\inv{[]}}{M} \defas M \qquad \contapp{\inv{((\env, x, N) \cons \slk)}}{M}
\defas \contappp{\inv{\slk}}{\Let\; x \revto M \;\In\; \inv{N}(\env \res \{x\})}
\end{displaymath}
%
\textbf{Continuations}
\begin{displaymath}
\contapp{\inv{[]}}{M}
\defas M \qquad
\contapp{\inv{(\slk, \chi) \cons \shk}}{M}
\defas \contapp{\inv{\shk}}{(\contappp{\inv{\chi}}{\contappp{\inv{\slk}}{M}})}
\end{displaymath}
%
\textbf{Handler closures}
\begin{displaymath}
\contapp{\inv{(\env, H^\depth)}}{M}
\defas \Handle^\depth\;M\;\With\;\inv{H^\depth}\env
\end{displaymath}
%
\textbf{Computation terms}
\begin{equations}
\inv{V\,W}\env &\defas& \inv{V}\env\,\inv{W}{\env} \\
\inv{V\,T}\env &\defas& \inv{V}\env\,T \\
\inv{\Let\;\Record{\ell = x; y} = V \;\In\;N}\env
&\defas& \Let\;\Record{\ell = x; y} =\inv{V}\env \;\In\; \inv{N}(\env \res \{x, y\}) \\
\inv{\Case\;V\,\{\ell\;x \mapsto M; y \mapsto N\}}\env
&\defas& \Case\;\inv{V}\env \,\{\ell\;x \mapsto \inv{M}(\env \res \{x\}); y \mapsto \inv{N}(\env \res \{y\})\} \\
\inv{\Return\;V}\env &\defas& \Return\;\inv{V}\env \\
\inv{\Let\;x \revto M \;\In\;N}\env
&\defas& \Let\;x \revto\inv{M}\env \;\In\; \inv{N}(\env \res \{x\}) \\
\inv{\Do\;\ell\;V}\env
&\defas& \Do\;\ell\;\inv{V}\env \\
\inv{\Handle^\depth\;M\;\With\;H}\env
&\defas& \Handle^\depth\;\inv{M}\env\;\With\;\inv{H}\env \\
\end{equations}
\textbf{Handler definitions}
\begin{equations}
\inv{\{\Return\;x \mapsto M\}}\env
&\defas& \{\Return\;x \mapsto \inv{M}(\env \res \{x\})\} \\
\inv{\{\OpCase{\ell}{p}{r} \mapsto M\} \uplus H^\depth}\env
&\defas& \{\OpCase{\ell}{p}{r} \mapsto \inv{M}(\env \res \{p, r\})\} \uplus \inv{H^\depth}\env \\
\inv{(q.\,H)}\env &\defas& \inv{H}(\env \res \{q\})
\end{equations}
\textbf{Value terms and values}
\begin{displaymath}
\ba{@{}c@{}}
\begin{eqs}
\inv{x}\env &\defas& \inv{v}, \quad \text{ if }\env(x) = v \\
\inv{x}\env &\defas& x, \quad \text{ if }x \notin \dom(\env) \\
\inv{\lambda x^A.M}\env &\defas& \lambda x^A.\inv{M}(\env \res \{x\}) \\
\inv{\Lambda \alpha^K.M}\env &\defas& \Lambda \alpha^K.\inv{M}\env \\
\inv{\Record{}}\env &\defas& \Record{} \\
\inv{\Record{\ell=V; W}}\env &\defas& \Record{\ell=\inv{V}\env; \inv{W}\env} \\
\inv{(\ell\;V)^R}\env &\defas& (\ell\;\inv{V}\env)^R \\
\end{eqs}
\quad
\begin{eqs}
\inv{\shk^A} &\defas& \lambda x^A.\inv{\shk}(\Return\;x) \\
\inv{(\shk, \slk)^A} &\defas& \lambda x^A.\inv{\slk}(\inv{\shk}(\Return\;x)) \\
\inv{(\env, \lambda x^A.M)} &\defas& \lambda x^A.\inv{M}(\env \res \{x\}) \\
\inv{(\env, \Lambda \alpha^K.M)} &\defas& \Lambda \alpha^K.\inv{M}\env \\
\inv{\Record{}} &\defas& \Record{} \\
\inv{\Record{\ell=v; w}} &\defas& \Record{\ell=\inv{v}; \inv{w}} \\
\inv{(\ell\;v)^R} &\defas& (\ell\;\inv{v})^R \\
\end{eqs} \smallskip\\
\inv{\Rec\,g^{A \to C}\,x.M}\env \defas \Rec\,g^{A \to C}\,x.\inv{M}(\env \res \{g, x\})
\defas \inv{(\env, \Rec\,g^{A \to C}\,x.M)} \\
\ea
\end{displaymath}
\caption{Mapping from abstract machine configurations to terms.}
\label{fig:config-to-term}
\end{figure}
%
We now show that the base abstract machine is correct with respect to
the combined context-based small-step semantics of $\HCalc$, $\SCalc$,
and $\HPCalc$ via a simulation result.
Initial states provide a canonical way to map a computation term onto
the abstract machine.
%
A more interesting question is how to map an arbitrary configuration
to a computation term.
%
Figure~\ref{fig:config-to-term} describes such a mapping $\inv{-}$
from configurations to terms via a collection of mutually recursive
functions defined on configurations, continuations, handler closures,
computation terms, handler definitions, value terms, and machine
values. The mapping makes use of a domain operation and a restriction
operation on environments.
%
\begin{definition}
The domain of an environment is defined recursively as follows.
%
\[
\bl
\dom : \MEnvCat \to \VarCat\\
\ba{@{}l@{~}c@{~}l}
\dom(\emptyset) &\defas& \emptyset\\
\dom(\env[x \mapsto v]) &\defas& \{x\} \cup \dom(\env)
\ea
\el
\]
%
We write $\env \res \{x_1, \dots, x_n\}$ for the restriction of
environment $\env$ to $\dom(\env) \res \{x_1, \dots, x_n\}$.
\end{definition}
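%
Realised with, for instance, OCaml's standard \texttt{Map} module,
both operations are one-liners; the sketch below is merely
illustrative.
%
\begin{verbatim}
module Env = Map.Make (String)

(* dom(env): the variables bound by the environment. *)
let dom (env : 'v Env.t) : string list =
  List.map fst (Env.bindings env)

(* Restriction: drop the bindings for the given variables. *)
let restrict (env : 'v Env.t) (xs : string list) : 'v Env.t =
  List.fold_left (fun e x -> Env.remove x e) env xs
\end{verbatim}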
%
The $\inv{-}$ function enables us to classify the abstract machine
reduction rules according to how they relate to the context-based
small-step semantics.
%
Both the rules \mlab{Let} and \mlab{Forward} are administrative in the
sense that $\inv{-}$ is invariant under either rule.
%
This leaves the $\beta$-rules \mlab{App}, \mlab{AppRec}, \mlab{TyApp},
\mlab{Resume^\delta}, \mlab{Split}, \mlab{Case}, \mlab{PureCont}, and
\mlab{GenCont}. Each of these corresponds directly with performing a
reduction in the small-step semantics. We extend the notion of
transition to account for administrative steps.
%
\begin{definition}[Auxiliary reduction relations]
We write $\stepsto_{\textrm{a}}$ for administrative steps and
$\simeq_{\textrm{a}}$ for the symmetric closure of
$\stepsto_{\textrm{a}}^*$. We write $\stepsto_\beta$ for
$\beta$-steps and $\Stepsto$ for a sequence of steps of the form
$\stepsto_{\textrm{a}}^\ast \stepsto_\beta$.
\end{definition}
%
The following lemma describes how we can simulate each reduction in
the small-step reduction semantics by a sequence of administrative
steps followed by one $\beta$-step in the abstract machine.
%
\begin{lemma}
\label{lem:machine-simulation}
Suppose $M$ is a computation and $\conf$ is a configuration such that
$\inv{\conf} = M$, then if $M \reducesto N$ there exists $\conf'$ such
that $\conf \Stepsto \conf'$ and $\inv{\conf'} = N$, or if
$M \not\reducesto$ then $\conf \not\Stepsto$.
\end{lemma}
%
\begin{proof}
By induction on the derivation of $M \reducesto N$.
\end{proof}
%
The correspondence here is rather strong: there is a one-to-one
mapping between $\reducesto$ and the quotient relation of $\Stepsto$
and $\simeq_{\textrm{a}}$. % The inverse of the lemma is straightforward
% as the semantics is deterministic.
%
Notice that Lemma~\ref{lem:machine-simulation} does not require that
$M$ be well-typed. This is mostly a convenience to simplify the
lemma. The lemma is used in the following theorem, where it is applied
only to well-typed terms.
%
\begin{theorem}[Simulation]\label{thm:handler-simulation}
If $\typc{}{M : A}{E}$ and $M \reducesto^+ N$ such that $N$ is
normal with respect to $E$, then
$\cek{M \mid \emptyset \mid \kappa_0} \stepsto^+ \conf$ such that
$\inv{\conf} = N$, or if $M \not\reducesto$ then
$\cek{M \mid \emptyset \mid \kappa_0} \not\stepsto$.
\end{theorem}
%
\begin{proof}
By repeated application of Lemma~\ref{lem:machine-simulation}.
\end{proof}
\section{Related work}
The literature on abstract machines is vast and rich. I describe here
the basic structure of a few selected abstract machines.
\paragraph{Handler machines} Chronologically, the machine presented in
this chapter was the first abstract machine specifically designed for
effect handlers to appear in the literature. Subsequently, this
machine has been extended and used to explain the execution model for
the Multicore OCaml
implementation~\cite{SivaramakrishnanDWKJM21}. Their primary extension
captures the finer details of the OCaml runtime as it models the
machine continuation as a heterogeneous sequence consisting of
interleaved OCaml and C frames.
An alternative machine has been developed by \citet{BiernackiPPS19}
for the Helium language. Although their machine is based on
\citeauthor{BiernackaBD05}'s definitional abstract machine for the
control operators shift and reset~\cite{BiernackaBD05}, the
continuation structure of the resulting machine is essentially the
same as that of a generalised continuation. The primary difference is
that in their presentation a generalised frame is either a pair
consisting of a handler closure and a pure continuation (as in the
presentation in this chapter) or a coercion paired with a pure
continuation.
\paragraph{SECD machine} \citeauthor{Landin64}'s SECD machine was the
first abstract machine for $\lambda$-calculus viewed as a programming
language~\cite{Landin64,Danvy04}. The machine is named after its
structure as it consists of a \emph{stack} component,
\emph{environment} component, \emph{control} component, and a
\emph{dump} component. The stack component maintains a list of
intermediate values. The environment maps free variables to values. The
control component holds a list of directives that manipulate the stack
component. The dump acts as a caller-saved register as it maintains a
list of partial machine state snapshots. Prior to a closure
application, the machine snapshots the state of the stack,
environment, and control components such that this state can be
restored once the stack has been reduced to a single value and the
control component is empty. The structure of the SECD machine lends
itself to a simple realisation of the semantics of
\citeauthor{Landin98}'s J operator, as its behaviour can be realised
by reifying the dump as a value.
%
\citet{Plotkin75} proved the correctness of the machine in the style
of a simulation result with respect to a reduction
semantics~\cite{AgerBDM03}.
The SECD machine is a precursor to the CEK machine, as the latter can
be viewed as a streamlined variation of the SECD machine, where the
continuation component unifies the stack and dump components of the
SECD machine.
%
For a deep dive into the operational details of
\citeauthor{Landin64}'s SECD machine, the reader may consult
\citet{Danvy04}, who dissects the SECD machine; as a follow-up to that
work, \citet{DanvyM08} perform several rational deconstructions and
reconstructions of the SECD machine with the J operator.
\paragraph{Krivine machine} The \citeauthor{Krivine07} machine is
named after its designer \citet{Krivine07}. It is designed for
call-by-name $\lambda$-calculus computation, as it performs reduction
to weak head normal form~\cite{Krivine07,Leroy90}.
%
The structure of the \citeauthor{Krivine07} machine is similar to that
of the CEK machine as it features a control component, which focuses
the current term under evaluation; an environment component, which
binds variables to closures; and a stack component, which contains a
list of closures.
%
Evaluation of an application term pushes the argument along with the
current environment onto the stack and continues to evaluate the
abstractor term. Dually, evaluation of a $\lambda$-abstraction places
the body in the control component; subsequently, the machine pops the
top-most closure from the stack and extends the current environment
with this closure~\cite{DouenceF07}.
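%
A minimal OCaml transcription of the machine, over de Bruijn terms,
fits in a dozen lines; the sketch below is mine and elides the
refinements found in the literature.
%
\begin{verbatim}
(* Call-by-name Krivine machine over de Bruijn terms. *)
type term = Var of int | Lam of term | App of term * term
type closure = Clo of term * env
and env = closure list

let rec run (t : term) (e : env) (stack : closure list) : closure =
  match t, stack with
  | Var n, _ ->                             (* look up a closure      *)
      let (Clo (t', e')) = List.nth e n in run t' e' stack
  | App (f, a), _ ->
      run f e (Clo (a, e) :: stack)         (* push the argument      *)
  | Lam b, c :: rest ->
      run b (c :: e) rest                   (* pop argument into env  *)
  | Lam _, [] -> Clo (t, e)                 (* weak head normal form  *)

let eval (t : term) : closure = run t [] []
\end{verbatim}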
\citet{Krivine07} has also designed a variation of the machine which
supports a call-by-name variation of the callcc control operator. In
this machine continuations have the same representation as the stack
component, and they can be stored on the stack. Then the continuation
capture mechanism of callcc can be realised by popping and installing
the top-most closure from the stack, and then saving the tail of the
stack as the continuation object, which is to be placed on top of the
stack. An application of a continuation can be realised by replacing
the current stack with the stack embedded inside the continuation
object~\cite{Krivine07}.
\paragraph{ZINC machine} The ZINC machine is a strict variation of
\citeauthor{Krivine07}'s machine, though it was designed independently
by \citet{Leroy90}. The machine is used as the basis for the OCaml
byte code interpreter~\cite{Leroy90,LeroyDFGRV20}.
%
There are some cosmetic differences between \citeauthor{Krivine07}'s
machine and the ZINC machine. For example, the latter decomposes the
stack component into an argument stack, holding arguments to function
calls, and a return stack, which holds closures.
%
A peculiar implementation detail of the ZINC machine that affects the
semantics of the OCaml language is that, for $n$-ary function
application to be efficient, function arguments are evaluated
right-to-left rather than left-to-right as is customary in
call-by-value languages~\cite{Leroy90}. The OCaml manual leaves the
evaluation order for function arguments
unspecified~\cite{LeroyDFGRV20}. However, for a long time the native
code compiler for OCaml would emit code utilising left-to-right
evaluation order for function arguments; consequently, the compilation
method could affect the semantics of a program, as the evaluation
order could be observed using effects, e.g. by raising an
exception~\cite{CartwrightF92}. Anecdotally, Damien Doligez told me in
person at ICFP 2017 that unofficially the compiler has been aligned
with the byte code interpreter such that code running on either
implementation exhibits the same semantics. Even though the evaluation
order remains unspecified in the manual, any observable order other
than right-to-left evaluation is now considered a bug (subject to some
exceptions, notably the short-circuiting logical operators).
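%
The observability of argument evaluation order is easy to demonstrate;
the following illustrative OCaml snippet prints \texttt{BA} under
right-to-left evaluation and \texttt{AB} under left-to-right
evaluation.
%
\begin{verbatim}
let pair a b = (a, b)

let () =
  (* The order of the two prints reveals the compiler's argument
     evaluation order. *)
  let _ = pair (print_string "A"; 1) (print_string "B"; 2) in
  print_newline ()
\end{verbatim}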
\paragraph{Mechanical machine derivations}
%
There are deep mathematical connections between environment-based
abstract machine semantics and standard reduction semantics with
explicit substitutions.
%
For example, \citet{AgerBDM03,AgerDM04,AgerBDM03a} relate abstract
machines and functional evaluators by way of a two-way derivation that
consists of closure conversion, transformation into CPS, and
defunctionalisation of continuations.
%
\citet{BiernackaD07} demonstrate how to formally derive an abstract
machine from a small-step reduction strategy. Their presentation has
been formalised by \citet{Swierstra12} in the dependently-typed
programming language Agda.
%
\citet{HuttonW04} demonstrate how to calculate a
correct-by-construction abstract machine from a given specification
using structural induction. Notably, their example machine supports
basic computational effects in the form of exceptions.
%
\citet{AgerDM05} also extended their technique to derive abstract
machines from monadic-style effectful evaluators.
\part{Expressiveness}
\label{p:expressiveness}
\chapter{Interdefinability of effect handlers}
\label{ch:deep-vs-shallow}
On the surface, shallow handlers seem to offer more flexibility than
deep handlers as they do not enforce a particular recursion scheme
over effectful computations. An interesting hypothesis worth
investigating is whether this flexibility is a mere programming
convenience or whether it enables shallow handlers to implement
programs that would otherwise be impossible to implement with deep
handlers. Put slightly differently, the hypothesis to test is whether
the two kinds of handlers can implement one another. To test this sort
of hypothesis we first need to pin down what it means for `something
to be able to implement something else'.
For example, in Section~\ref{sec:pipes} I asserted that shallow
handlers provide the natural basis for implementing pipes, suggesting
that an implementation based on deep handlers would be fiddly. If we
were to consider the wider design space of programming language
features, then it turns out that deep handlers offer a direct
implementation of pipes by shifting recursion from terms to the level
of types (the interested reader may consult either \citet{KammarLO13}
or \citet{HillerstromL18} for the precise details). Thus in some sense
pipes are implementable with deep handlers; however, this particular
implementation strategy is not realisable in the $\HCalc$-calculus,
since it has no notion of recursive types, meaning we cannot use this
strategy to argue that deep handlers can implement pipes in our
setting.
%
We will restrict our attention to the calculi $\HCalc$, $\SCalc$, and
$\HPCalc$ and use the notion of \emph{typeability-preserving
macro-expressiveness} to determine whether handlers are
interdefinable~\cite{ForsterKLP19}. In our particular setting,
typeability-preserving macro-expressiveness asks whether there exists
a \emph{local} transformation that can transform one kind of handler
into another kind of handler, whilst preserving typeability in the
image of the transformation. By mandating that the transformation be local
we rule out the possibility of rewriting the entire program in, say,
CPS notation to implement deep and shallow handlers as in
Chapter~\ref{ch:cps}.
%
In this chapter we use the notion of typeability-preserving
macro-expressiveness to show that shallow handlers and general
recursion can simulate deep handlers up to congruence, and that deep
handlers can simulate shallow handlers up to administrative
reductions. % The
% latter construction generalises the example of pipes implemented
% using deep handlers that we gave in Section~\ref{sec:pipes}.
%
\paragraph{Relation to prior work} The results in this chapter have
been published previously in the following papers.
%
\begin{enumerate}[i]
\item \bibentry{HillerstromL18} \label{en:ch-def-HL18}
\item \bibentry{HillerstromLA20} \label{en:ch-def-HLA20}
\end{enumerate}
%
The results of Sections~\ref{sec:deep-as-shallow} and
\ref{sec:shallow-as-deep} appear in item \ref{en:ch-def-HL18}, whilst
the result of Section~\ref{sec:param-desugaring} appears in item
\ref{en:ch-def-HLA20}.
\section{Deep as shallow}
\label{sec:deep-as-shallow}
\newcommand{\dstrans}[1]{\mathcal{S}\llbracket #1 \rrbracket}
The implementation of deep handlers using shallow handlers (and
recursive functions) is by a direct local translation, similar to how
one would implement a fold (catamorphism) in terms of general
recursion. Each handler is wrapped in a recursive function and each
resumption has its body wrapped in a call to this recursive function.
%
Formally, the translation $\dstrans{-}$ is defined as the homomorphic
extension of the following equations to all terms and substitutions.
%
\[
\bl
\dstrans{-} : \CompCat \to \CompCat\\
\dstrans{\Handle \; M \; \With \; H} \defas
(\Rec~h~f.\ShallowHandle\; f\,\Unit \; \With \; \dstrans{H}_h)\,(\lambda \Unit{}.\dstrans{M}) \medskip\\
% \dstrans{H}h &=& \dstrans{\hret}h \uplus \dstrans{\hops}h \\
\dstrans{-} : \HandlerCat \times \ValCat \to \HandlerCat\\
\ba{@{}l@{~}c@{~}l}
\dstrans{\{ \Return \; x \mapsto N\}}_h &\defas&
\{ \Return \; x \mapsto \dstrans{N} \}\\
\dstrans{\{ \OpCase{\ell}{p}{r} \mapsto N_\ell \}_{\ell \in \mathcal{L}}}_h &\defas&
\{ \OpCase{\ell}{p}{r} \mapsto
\bl
\Let \; r \revto \Return \; \lambda x.h\,(\lambda \Unit{}.r\,x)\\
\In\;\dstrans{N_\ell} \}_{\ell \in \mathcal{L}}
\el
\ea
\el
\]
%
The translation of $\Handle$ uses a $\Rec$-abstraction to introduce a
fresh name $h$ for the handler $H$. This name is used by the
translation of the handler definitions. The translation of
$\Return$-clauses is the identity, and thus ignores the handler
name. However, the translation of operation clauses uses the name to
simulate a deep resumption by guarding invocations of the shallow
resumption $r$ with $h$.
In order to exemplify the translation, let us consider a variation of
the $\environment$ handler from Section~\ref{sec:tiny-unix-env}, which
handles an operation $\Ask : \UnitType \opto \Int$.
%
\[
\ba{@{~}l@{~}l}
&\mathcal{D}\left\llbracket
\ba[m]{@{}l}
\Handle\;\Do\;\Ask\,\Unit + \Do\;\Ask\,\Unit\;\With\\
\quad\ba[m]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;x\\
\OpCase{\Ask}{\Unit}{r} &\mapsto& r~42
\ea
\ea
\right\rrbracket \medskip\\
=& \bl
(\Rec\;env~f.
\bl
\ShallowHandle\;f\,\Unit\;\With\\
\quad\ba[t]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;x\\
\OpCase{\Ask}{\Unit}{r} &\mapsto&
\bl
\Let\;r \revto \Return\;\lambda x.env~(\lambda\Unit.r~x)\;\In\\
r~42)~(\lambda\Unit.\Do\;\Ask\,\Unit + \Do\;\Ask\,\Unit)
\el
\ea
\el
\el
\ea
\]
%
The deep semantics are simulated by generating the name $env$ for the
shallow handler and recursively applying the handler under the
modified resumption.
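%
The same construction transcribes into OCaml 5 using its
\texttt{Effect.Shallow} interface; the sketch below is illustrative
(and one-shot, unlike the calculus), with \texttt{deep\_ask} playing
the role of the recursive function $env$.
%
\begin{verbatim}
open Effect
open Effect.Shallow

type _ Effect.t += Ask : int Effect.t

(* A deep handler for Ask, recovered from a shallow handler plus
   recursion: deep_ask re-wraps every resumption in itself, just as
   the translation guards the shallow resumption with h. *)
let rec deep_ask : type a r. (a, r) continuation -> a -> r =
  fun k v ->
    continue_with k v
      { retc = Fun.id;
        exnc = raise;
        effc = (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Ask -> Some (fun (k : (b, r) continuation) -> deep_ask k 42)
          | _ -> None) }

let result = deep_ask (fiber (fun () -> perform Ask + perform Ask)) ()
(* result = 84 *)
\end{verbatim}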
The translation commutes with substitution and preserves typeability.
%
\begin{lemma}\label{lem:dstrans-subst}
Let $\sigma$ denote a substitution. The translation $\dstrans{-}$
commutes with substitution, i.e.
%
\[
\dstrans{V}\dstrans{\sigma} = \dstrans{V\sigma},\quad
\dstrans{M}\dstrans{\sigma} = \dstrans{M\sigma},\quad
\dstrans{H}\dstrans{\sigma} = \dstrans{H\sigma}.
\]
%
\end{lemma}
%
\begin{proof}
By induction on the structures of $V$, $M$, and $H$.
\end{proof}
\begin{theorem}
If $\Delta; \Gamma \vdash M : C$ then $\Delta; \Gamma \vdash
\dstrans{M} : \dstrans{C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
In order to obtain a simulation result, we allow reduction in the
simulated term to be performed under lambda abstractions (and indeed
anywhere in a term), which is necessary because of the redefinition of
the resumption to wrap the handler around its body.
%
Nevertheless, the simulation proof makes minimal use of this power,
merely using it to rename a single variable.
%
% We write $R_{\Cong}$ for the compatible closure of relation
% $R$, that is the smallest relation including $R$ and closed under term
% constructors for $\SCalc$.
%% , otherwise known as \emph{reduction up to
%% congruence}.
\begin{theorem}[Simulation up to congruence]
If $M \reducesto N$ then $\dstrans{M} \reducesto_{\Cong}^+
\dstrans{N}$.
\end{theorem}
\begin{proof}
By case analysis on $\reducesto$ using
Lemma~\ref{lem:dstrans-subst}. The interesting case is
$\semlab{Op}$, which is where we apply a single $\beta$-reduction,
renaming a variable, under the $\lambda$-abstraction representing
the resumption. The proof of this case is as follows.
%
\begin{derivation}
& \dstrans{\Handle\;\EC[\Do\;\ell~V]\;\With\;H}\\
=& \reason{definition of $\dstrans{-}$}\\
& (\Rec\;h\;f.\ShallowHandle\;f\,\Unit\;\With\;\dstrans{H}_h)\,(\lambda\Unit.\dstrans{\EC}[\Do\;\ell~\dstrans{V}])\\
\reducesto^+& \reason{\semlab{Rec}, \semlab{App}, \semlab{Op^\dagger} with $\hell = \{\OpCase{\ell}{p}{r} \mapsto N\}$}\\
& (\Let\;r \revto \Return\;\lambda x.h\,(\lambda\Unit.r~x)\;\In\;\dstrans{N})[\lambda y.\dstrans{\EC}[\Return\;y]/r,\dstrans{V}/p]\\
=& \reason{definition of [-]}\\
&\Let\;r \revto \Return\;\lambda x.h\,(\lambda\Unit.(\lambda y.\dstrans{\EC}[\Return\;y])~x)\;\In\;\dstrans{N}[\dstrans{V}/p]\\
\reducesto_\Cong & \reason{\semlab{App} reduction under $\lambda x.\cdots$}\\
&\Let\;r \revto \Return\;\lambda x.h\,(\lambda\Unit.\dstrans{\EC}[\Return\;x])\;\In\;\dstrans{N}[\dstrans{V}/p]\\
\reducesto& \reason{\semlab{Let} and Lemma~\ref{lem:dstrans-subst}}\\
% & \dstrans{N}[\dstrans{V}/p,\lambda x.h\,(\lambda\Unit.\dstrans{\EC}[\Return\;x])/r]\\
% =& \reason{}\\
&\dstrans{N[V/p,\lambda x.h\,(\lambda\Unit.\EC[\Return\;x])/r]}
\end{derivation}
\end{proof}
\section{Shallow as deep}
\label{sec:shallow-as-deep}
\newcommand{\sdtrans}[1]{\mathcal{D}\llbracket #1 \rrbracket}
Implementing shallow handlers in terms of deep handlers is slightly
more involved than the other way round.
%
It amounts to the encoding of a case split by a fold and involves a
translation on handler types as well as handler terms.
%
Formally, the translation $\sdtrans{-}$ is defined as the homomorphic
extension of the following equations to all types, terms, type
environments, and substitutions.
%
\[
\bl
\sdtrans{-} : \HandlerTypeCat \to \HandlerTypeCat\\
\sdtrans{A\eff E_1 \Rightarrow B\eff E_2} \defas
\sdtrans{A\eff E_1} \Rightarrow \Record{\UnitType \to \sdtrans{A \eff E_1}; \UnitType \to \sdtrans{B \eff E_2}} \eff \sdtrans{E_2} \medskip \medskip\\
% \sdtrans{C \Rightarrow D} \defas
% \sdtrans{C} \Rightarrow \Record{\UnitType \to \sdtrans{C}; \UnitType \to \sdtrans{D}} \medskip \medskip\\
\sdtrans{-} : \CompCat \to \CompCat\\
\sdtrans{\ShallowHandle \; M \; \With \; H} \defas
\ba[t]{@{}l}
\Let\;z \revto \Handle \; \sdtrans{M} \; \With \; \sdtrans{H} \; \In\\
\Let\;\Record{f; g} = z \;\In\;
g\,\Unit
\ea \medskip\\
\sdtrans{-} : \HandlerCat \to \HandlerCat\\
% \sdtrans{H} &=& \sdtrans{\hret} \uplus \sdtrans{\hops} \\
\ba{@{}l@{~}c@{~}l}
\sdtrans{\{\Return \; x \mapsto N\}} &\defas&
\{\Return \; x \mapsto \Return\; \Record{\lambda \Unit.\Return\; x; \lambda \Unit.\sdtrans{N}}\} \\
\sdtrans{\{\OpCase{\ell}{p}{r} \mapsto N\}_{\ell \in \mathcal{L}}} &\defas& \{
\bl
\OpCase{\ell}{p}{r} \mapsto\\
\qquad\bl\Let \;r \revto
\lambda x. \Let\; z \revto r~x\;\In\;
\Let\; \Record{f; g} = z\; \In\; f\,\Unit\;\In\\
\Return\;\Record{\lambda \Unit.\Let\; x \revto \Do\;\ell~p\; \In\; r~x; \lambda \Unit.\sdtrans{N}}\}_{\ell \in \mathcal{L}}
\el
\el
\ea
\el
\]
%
As is evident from the translation of handler types, each shallow
handler is encoded as a deep handler that returns a pair of thunks. It
is worth noting that the handler construction is actually pure, yet we
need to annotate the pair with the translated effect signature
$\sdtrans{E_2}$, because the calculus has no notion of effect
subtyping. Technically, we could insert an administrative identity
handler to coerce the effect signature. There are practical reasons
for avoiding administrative handlers, though: as we shall discuss
momentarily, this transformation already carries an inordinate
administrative overhead, which the introduction of administrative
identity handlers would only compound. The first component of the pair
forwards all operations, acting as the identity on computations. The
second component interprets a single operation before reverting to
forwarding.
%
The following example illustrates the translation on an instance of
the $\Pipe$ operator from Section~\ref{sec:pipes} using the consumer
computation $\Do\;\Await\,\Unit + \Do\;\Await\,\Unit$ and the
suspended producer computation
$\Rec\;ones\,\Unit.\Do\;\Yield~1;ones\,\Unit$.
%
\[
\ba{@{~}l@{~}l}
&\mathcal{D}\left\llbracket
\ba[m]{@{}l}
\ShallowHandle\;\Do\;\Await\,\Unit + \Do\;\Await\,\Unit\;\With\\
\quad\ba[m]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;x\\
\OpCase{\Await}{\Unit}{r} &\mapsto& \Copipe\,\Record{r;\Rec\;ones\,\Unit.\Do\;\Yield~1;ones\,\Unit}
\ea
\ea
\right\rrbracket \medskip\\
=& \bl
\Let\;z \revto \bl
\Handle\;(\lambda\Unit.\Do\;\Await\,\Unit + \Do\;\Await\,\Unit)\,\Unit\;\With\\
\quad\ba[t]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;\Record{\lambda\Unit.\Return\;x;\lambda\Unit.\Return\;x}\\
\OpCase{\Await}{\Unit}{r} &\mapsto&\\
\multicolumn{3}{l}{
\quad\bl
\Let\;r \revto \lambda x.\Let\;z \revto r~x\;\In\;\Let\;\Record{f;g} = z\;\In\;f\,\Unit\;\In\\
\Return\;\Record{\bl
\lambda\Unit.\Let\;x \revto \Do\;\ell~p\;\In\;r~x;\\
\lambda\Unit. \sdtrans{\Copipe}\,\Record{r;\Rec\;ones\,\Unit.\Do\;\Yield~1;ones\,\Unit}}\el
\el}
\ea
\el\\
\In\;\Let\;\Record{f;g} = z\;\In\;g\,\Unit
\el
\ea
\]
%
Evaluation of both the left hand side and right hand side of the
equals sign yields the value $2 : \Int$. The $\Return$-case in the
image contains a redundant pair, because the $\Return$-case of $\Pipe$
is the identity. The translation of the $\Await$-case sets up the
forwarding component and handling component of the pair of thunks.
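%
For a concrete rendering, the pair-of-thunks encoding can be sketched
in OCaml 5, whose native handlers (\texttt{Effect.Deep}) are deep. The
sketch below is illustrative only: it specialises the translation to a
single operation $\Await$ with an identity return clause, and OCaml's
resumptions are one-shot, unlike the calculus.
%
\begin{verbatim}
open Effect
open Effect.Deep

type _ Effect.t += Await : int Effect.t

(* Handle only the first Await in m with h: the deep handler returns
   a pair of thunks (forward, handle-once), mirroring the
   translation. *)
let shallow_await (m : unit -> 'a) (h : (int -> 'a) -> 'a) : 'a =
  let thunks =
    match_with m ()
      { retc = (fun x -> ((fun () -> x), (fun () -> x)));
        exnc = raise;
        effc = (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Await -> Some (fun (k : (b, _) continuation) ->
              (* After resuming, revert to the forwarding component. *)
              let resume x = fst (continue k x) () in
              ((fun () -> resume (perform Await)),  (* forward      *)
               (fun () -> h resume)))               (* handle once  *)
          | _ -> None) }
  in
  snd thunks ()
\end{verbatim}
%
Nesting two applications behaves like the source calculus: handling
$\Do\;\Await\,\Unit + \Do\;\Await\,\Unit$ with an inner handler that
resumes with $1$ and an outer handler that resumes with $2$ yields
$3$, the second $\Await$ having been forwarded to the outer handler.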
The distinction between deep and shallow handlers is that the latter
is discharged after handling a single operation, whereas the former is
persistent and apt for continual operation interpretation. The
persistence of deep handlers means that any handler in the image of
the translation remains in place for the duration of the handled
computation after handling a single operation, which has noticeable
asymptotic performance implications. Each activation of a handler in
the image introduces another layer of indirection that any subsequent
operation invocation has to follow. Supposing some source program
contains $n$ handlers and performs $k$ operation invocations, then the
image introduces $k$ additional handlers, meaning the total number of
handlers in the image is $n+k$. Viewed through the practical lens of
the CPS translation (Chapter~\ref{ch:cps}) or abstract machine
(Chapter~\ref{ch:abstract-machine}), it means that in the worst case
handler lookup takes $\BigO(n+k)$ time. For example, consider the
extreme case where $n = 1$: handler lookup takes $\BigO(1)$ time in
the source, but in the image it takes $\BigO(k)$ time.
%
Thus this translation is more of theoretical significance than of
practical interest. It also demonstrates that typeability-preserving
macro-expressiveness is a rather coarse-grained notion of
expressiveness, as it blindly considers whether some construct is
computable using another construct without considering the
computational cost.
The translation commutes with substitution and preserves typeability.
%
\begin{lemma}\label{lem:sdtrans-subst}
Let $\sigma$ denote a substitution. The translation $\sdtrans{-}$
commutes with substitution, i.e.
%
\[
\sdtrans{V}\sdtrans{\sigma} = \sdtrans{V\sigma},\quad
\sdtrans{M}\sdtrans{\sigma} = \sdtrans{M\sigma},\quad
\sdtrans{H}\sdtrans{\sigma} = \sdtrans{H\sigma}.
\]
%
\end{lemma}
%
\begin{proof}
By induction on the structures of $V$, $M$, and $H$.
\end{proof}
%
\begin{theorem}
If $\Delta; \Gamma \vdash M : C$ then $\sdtrans{\Delta};
\sdtrans{\Gamma} \vdash \sdtrans{M} : \sdtrans{C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
\newcommand{\admin}{admin}
\newcommand{\approxa}{\gtrsim}
As with the implementation of deep handlers as shallow handlers, the
implementation is again given by a typeability-preserving local
translation. However, this time the administrative overhead is more
significant. Reduction up to congruence is insufficient and we require
a more semantic notion of administrative reduction.
\begin{definition}[Administrative evaluation contexts]\label{def:admin-eval}
An evaluation context $\EC \in \EvalCat$ is administrative,
$\admin(\EC)$, when the following two criteria hold.
\begin{enumerate}
\item For all values $V \in \ValCat$, we have: $\EC[\Return\;V] \reducesto^\ast
\Return\;V$
\item For all evaluation contexts $\EC' \in \EvalCat$, operations
$\ell \in \BL(\EC) \backslash \BL(\EC')$, and values
$V \in \ValCat$:
%
\[
\EC[\EC'[\Do\;\ell\;V]] \reducesto_\Cong^\ast \Let\; x \revto \Do\;\ell\;V \;\In\; \EC[\EC'[\Return\;x]].
\]
\end{enumerate}
\end{definition}
%
The intuition is that an administrative evaluation context behaves
like the empty evaluation context up to some amount of administrative
reduction, which can only proceed once the term in the context becomes
sufficiently evaluated.
%
Values annihilate the evaluation context and handled operations are
forwarded.
%
%% The forwarding handler is a technical device which allows us to state
%% the second property in a uniform way by ensuring that the operation is
%% handled at least once.
\begin{definition}[Approximation up to administrative reduction]\label{def:approx-admin}
Define $\approxa$ as the compatible closure of the following inference
rules.
%
\begin{mathpar}
\inferrule*
{ }
{M \approxa M}
\inferrule*
{M \reducesto M' \\ M' \approxa N}
{M \approxa N}
\inferrule*
{\admin(\EC) \\ M \approxa N}
{\EC[M] \approxa N}
\end{mathpar}
%
We say that $M$ approximates $N$ up to administrative reduction if $M
\approxa N$.
\end{definition}
%
Approximation up to administrative reduction captures the property
that administrative reduction may occur anywhere within a term.
%
The following lemma states that the forwarding component of the
translation is administrative.
%
\begin{lemma}\label{lem:sdtrans-admin}
For all shallow handlers $H$, the following context is administrative
%
\[
\Let\; z \revto
\Handle\; [~] \;\With\; \sdtrans{H}\;
\In\;
\Let\; \Record{f;\_} = z\; \In\; f\,\Unit.
\]
%
\end{lemma}
%
\begin{proof}
We have to check both conditions of Definition~\ref{def:admin-eval}.
\begin{enumerate}
\item Follows by direct calculation.
% \item We need to show that for all $V \in \ValCat$
% %
% \[
% \Let\; z \revto
% \Handle\; \Return\;V \;\With\; \sdtrans{H}\;
% \In\;
% \Let\; \Record{f;\_} = z\; \In\; f\,\Unit \reducesto^\ast \Return\;V.
% \]
% %
% We show this by direct calculation using the assumption $\hret = \{\Return\;x \mapsto N\}$.
% \begin{derivation}
% &\Let\; z \revto
% \Handle\; \Return\;V \;\With\; \sdtrans{H}\;
% \In\;
% \Let\; \Record{f;\_} = z\; \In\; f\,\Unit\\
% \reducesto^+& \reason{\semlab{Lift}, \semlab{Ret}, \semlab{Let}, \semlab{Split}}\\
% % &\Let\;\Record{f;\_} = \Record{\lambda\Unit.\Return\;x;\lambda\Unit.\sdtrans{N}}\;\In\;f\,\Unit
% & (\lambda\Unit.\Return\;V)\,\Unit \reducesto \Return\;V
% \end{derivation}
\item % We need to show that for all evaluation contexts
% $\EC' \in \EvalCat$, operations
% $\ell \in \BL(\EC) \backslash \BL(\EC')$, and values
% $V \in \ValCat$
% %
% \[
% \ba{@{~}l@{~}l}
% &\Let\; z \revto
% \Handle\; \EC'[\Do\;\ell\;V]\;\With\;\sdtrans{H} \;\In\;\Record{f;\_} = z\;\In\;f\,\Unit\\
% \reducesto_\Cong^\ast& \Let\; x \revto \Do\;\ell\;V \;\In\; \Let\; z \revto \Handle\; \EC'[\Return\;x]\;\With\;\sdtrans{H}\;\In\;\Let\;\Record{f;\_} = z\;\In\;f\,\Unit
% \ea
% \]
%
Follows by direct calculation using the assumption that
$\ell \notin \BL(\EC')$.
\begin{derivation}
&\Let\; z \revto
\Handle\; \EC'[\Do\;\ell\;V]\;\With\;\sdtrans{H}\;\In\;
\Let\;\Record{f;\_} = z\;\In\;f\,\Unit\\
\reducesto& \reason{\semlab{Op} using assumption $\ell \notin \BL(\EC')$}\\
&\bl \Let\; z \revto
\bl
\Let\;r\revto
\lambda x.\bl\Let\;z \revto (\lambda x.\Handle\;\EC'[\Return\;x]\;\With\;\sdtrans{H})~x\\
\In\;\Let\;\Record{f;g} = z\;\In\;f\,\Unit
\el\\
\In\;\Return\;\Record{\lambda\Unit.\Let\;x\revto \Do\;\ell~V\;\In\;r~x;\lambda\Unit.\sdtrans{N}}
\el\\
\In\;\Let\;\Record{f;\_} = z\;\In\;f\,\Unit
\el\\
\reducesto^+& \reason{\semlab{Let}, \semlab{Split}, \semlab{App}}\\
&(\Let\;x\revto \Do\;\ell~V\;\In\;r~x)[(\lambda x.\bl
\Let\;z \revto (\lambda x.\Handle\;\EC'[\Return\;x]\;\With\;\sdtrans{H})\,x\\
\In\;\Let\;\Record{f;g} = z\;\In\;f\,\Unit)/r]\el\\
\reducesto_\Cong &\reason{\semlab{App} tail position reduction}\\
&\bl
\Let\;x \revto \Do\;\ell~V\;\In\;\Let\;z \revto (\lambda x.\Handle\;\EC'[\Return\;x]\;\With\;\sdtrans{H})\,x\; \In\\\Let\;\Record{f;g} = z\;\In\;f\,\Unit
\el\\
\reducesto_\Cong& \reason{\semlab{App} reduction under binder}\\
&\bl \Let\;x \revto \Do\;\ell~V\;\In\;\Let\;z \revto \Handle\;\EC'[\Return\;x]\;\With\;\sdtrans{H}\; \In\\\Let\;\Record{f;g} = z\;\In\;f\,\Unit\el
\end{derivation}
\end{enumerate}
\end{proof}
%
\begin{theorem}[Simulation up to administrative reduction]
If $M' \approxa \sdtrans{M}$ and $M \reducesto N$ then there exists
$N'$ such that $N' \approxa \sdtrans{N}$ and $M' \reducesto^+ N'$.
\end{theorem}
%
\begin{proof}
% By case analysis on $\reducesto$ and induction on $\approxa$ using
% Lemma~\ref{lem:sdtrans-subst} and Lemma~\ref{lem:sdtrans-admin}.
%
By induction on $M' \approxa \sdtrans{M}$ and case analysis on
$M \reducesto N$ using Lemma~\ref{lem:sdtrans-subst} and
Lemma~\ref{lem:sdtrans-admin}.
%
The interesting case is reflexivity of $\approxa$, where
$M \reducesto N$ is an instance of $\semlab{Op^\dagger}$; we spell
this case out below.
In the reflexivity case we have $M' \approxa \sdtrans{M}$, where
$M = \ShallowHandle\;\EC[\Do\;\ell~V]\;\With\;H$ and
$N = N_\ell[V/p,\lambda y.\EC[\Return\;y]/r]$ such that
$M \reducesto N$ where $\ell \notin \BL(\EC)$ and
$H^\ell = \{\OpCase{\ell}{p}{r} \mapsto N_\ell\}$.
%
Hence by reflexivity of $\approxa$ we have
$M' = \sdtrans{\ShallowHandle\;\EC[\Do\;\ell~V]\;\With\;H}$. Now we
can compute $N'$ by direct calculation starting from $M'$ yielding
\begin{derivation}
& \sdtrans{\ShallowHandle\;\EC[\Do\;\ell~V]\;\With\;H}\\
=& \reason{definition of $\sdtrans{-}$}\\
&\bl
\Let\;z \revto \Handle\;\sdtrans{\EC}[\Do\;\ell~\sdtrans{V}]\;\With\;\sdtrans{H}\;\In\\
\Let\;\Record{f;g} = z\;\In\;g\,\Unit
\el\\
\reducesto^+& \reason{\semlab{Op} using assumption $\ell \notin \BL(\sdtrans{\EC})$, \semlab{Let}, \semlab{Let}}\\
&\bl
\Let\;\Record{f;g} = \Record{
\bl
\lambda\Unit.\Let\;x \revto \Do\;\ell~\sdtrans{V}\;\In\;r~x;\\
\lambda\Unit.\sdtrans{N_\ell}}[\lambda x.
\bl
\Let\;z \revto (\lambda y.\Handle\;\sdtrans{\EC}[\Return\;y]\;\With\;\sdtrans{H})~x\;\In\\
\Let\;\Record{f;g} = z\;\In\;f\,\Unit/r,\sdtrans{V}/p]\;\In\; g\,\Unit
\el
\el\\
\el\\
\reducesto^+ &\reason{\semlab{Split}, \semlab{App}}\\
&\sdtrans{N_\ell}[\lambda x.
\bl
\Let\;z \revto (\lambda y.\Handle\;\sdtrans{\EC}[\Return\;y]\;\With\;\sdtrans{H})~x\;\In\\
\Let\;\Record{f;g} = z\;\In\;f\,\Unit/r,\sdtrans{V}/p]
\el\\
=& \reason{by Lemma~\ref{lem:sdtrans-subst}}\\
&\sdtrans{N_\ell[\lambda x.
\bl
\Let\;z \revto (\lambda y.\Handle\;\EC[\Return\;y]\;\With\;H)~x\;\In\\
\Let\;\Record{f;g} = z\;\In\;f\,\Unit/r,V/p]}
\el
\end{derivation}
%
Take the final term to be $N'$. If the resumption
$r \notin \FV(N_\ell)$ then the two terms $N'$ and
$\sdtrans{N_\ell[V/p,\lambda y.\EC[\Return\;y]/r]}$ are
identical, and thus the result follows immediately by reflexivity of
the $\approxa$-relation. Otherwise the proof reduces to showing that
the larger resumption term simulates the smaller one, i.e.\ (note
that we lift the $\approxa$-relation to value terms).
%
\[
(\bl
\lambda x.\Let\;z \revto (\lambda y.\Handle\;\sdtrans{\EC}[\Return\;y]\;\With\;\sdtrans{H})~x\;\In\\
\Let\;\Record{f;g} = z \;\In\;f\,\Unit) \approxa (\lambda y.\sdtrans{\EC}[\Return\;y]).
\el
\]
%
We use the congruence rules to apply a single $\semlab{App}$ on the
left hand side to obtain
%
\[
(\bl
\lambda x.\Let\;z \revto \Handle\;\sdtrans{\EC}[\Return\;x]\;\With\;\sdtrans{H}\;\In\\
\Let\;\Record{f;g} = z \;\In\;f\,\Unit) \approxa (\lambda y.\sdtrans{\EC}[\Return\;y]).
\el
\]
%
Now the trick is to define the following context
%
\[
\EC' \defas \Let\;z \revto \Handle\; [\,]\;\With\;\sdtrans{H}\;\In\;\Let\;\Record{f;g} = z \;\In\;f\,\Unit.
\]
%
The context $\EC'$ is an administrative evaluation context by
Lemma~\ref{lem:sdtrans-admin}. Now it follows by
Definition~\ref{def:approx-admin} that
$(\lambda x.\EC'[\sdtrans{\EC}[\Return\;x]]) \approxa
(\lambda y.\sdtrans{\EC}[\Return\;y])$.
%
\end{proof}
\section{Parameterised handlers as ordinary deep handlers}
\label{sec:param-desugaring}
\newcommand{\PD}[1]{\mathcal{P}\cps{#1}}
%
As mentioned in Section~\ref{sec:unary-parameterised-handlers},
parameterised handlers codify the parameter-passing idiom. They may be
seen as an optimised form of parameter-passing deep handlers. We now
show formally that parameterised handlers are special instances of
ordinary deep handlers.
%
We define a local transformation $\PD{-}$ which translates
parameterised handlers into ordinary deep handlers. Formally, the
translation is defined on terms, types, environments, and
substitutions. We omit the homomorphic cases and show only the
interesting cases.
%
\[
\bl
\PD{-} : \HandlerTypeCat \to \HandlerTypeCat\\
\PD{\Record{C; A} \Rightarrow^\param B \eff E} \defas \PD{C} \Rightarrow (\PD{A} \to \PD{B \eff E})\eff \PD{E} \medskip\\
\PD{-} : \CompCat \to \CompCat\\
\PD{\ParamHandle\;M\;\With\;(q.\,H)(W)} \defas \left(\Handle\; \PD{M}\; \With\; \PD{H}_q\right)~\PD{W} \medskip\\
\PD{-} : \HandlerCat \times \ValCat \to \HandlerCat\\
\ba{@{}l@{~}c@{~}l}
\PD{\{\Return\;x \mapsto M\}}_q &\defas& \{\Return\;x \mapsto \lambda q. \PD{M}\}\\
\PD{\{\OpCase{\ell}{p}{r} \mapsto M\}}_q &\defas&
\{\OpCase{\ell}{p}{r} \mapsto \lambda q.
\Let\; r \revto \Return\;\lambda \Record{x;q'}. r~x~q'\;
\In\; \PD{M}\}
\ea
\el
\]
%
The parameterised $\ParamHandle$ construct becomes an application of a
$\Handle$ construct to the translation of the parameter. The
translations of the $\Return$ and operation clauses are parameterised
by the name of the handler parameter, as each clause body is enclosed
in a $\lambda$-abstraction whose formal parameter is the handler
parameter $q$. As a result the ordinary deep resumption $r$ is a
curried function. However, the uses of $r$ in $M$ expect a binary
function. To repair this discrepancy, we construct an uncurried
interface to $r$ by wrapping it in a binary $\lambda$-abstraction.
%
To illustrate the translation in action consider the following example
program that adds the results obtained by performing two invocations
of some stateful operation $\dec{Incr} : \UnitType \opto \Int$, which
increments some global counter and returns its prior value.
%
\[
\ba{@{~}l@{~}l}
&\mathcal{P}\left\llbracket
\ba[m]{@{}l}
\ParamHandle\;\Do\;\dec{Incr}\,\Unit + \Do\;\dec{Incr}\,\Unit\;\With\\
\left( q.\ba[m]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \Return\;\Record{x;q}\\
\OpCase{\dec{Incr}}{\Unit}{r} &\mapsto& r\,\Record{q;q+1}
\ea\right)~40
\ea
\right\rrbracket \medskip\\
=& \left(
\ba[m]{@{}l}
\Handle\;\Do\;\dec{Incr}\,\Unit + \Do\;\dec{Incr}\,\Unit\;\With\\
\quad\ba[t]{@{~}l@{~}c@{~}l}
\Return\;x &\mapsto& \lambda q.\Return\;\Record{x;q}\\
\OpCase{\dec{Incr}}{\Unit}{r} &\mapsto& \lambda q. \Let\;r \revto \Return\;\lambda\Record{x;q'}.r~x~q'\;\In\;r\,\Record{q;q+1}
\ea
\ea\right)~40
\ea
\]
%
Evaluation of the program on either side of the equals sign yields
$\Record{81;42} : \Int \times \Int$. The translation desugars the
parameterised handler into an ordinary deep handler that makes use of
the parameter-passing idiom to maintain the state of the handled
computation~\cite{Pretnar15}.
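To connect the translation with a more familiar functional reading,
the following Haskell sketch renders the example above over a
free-monad encoding of the single operation $\dec{Incr}$. The
encoding and the names (\texttt{Comp}, \texttt{runIncr}) are
illustrative assumptions, not part of the formal development.
%
\begin{verbatim}
-- Illustrative sketch (assumed encoding, not the formal calculus):
-- computations over a single operation Incr : One -> Int.
data Comp a = Return a | Incr (Int -> Comp a)

instance Functor Comp where fmap f m = m >>= (Return . f)
instance Applicative Comp where
  pure      = Return
  mf <*> mx = mf >>= \f -> fmap f mx
instance Monad Comp where
  Return x >>= k = k x
  Incr r   >>= k = Incr (\v -> r v >>= k)

-- The image of the translation: each clause returns a function that
-- expects the current parameter q, and the resumption is applied to
-- the operation result and the updated parameter.
runIncr :: Comp a -> Int -> (a, Int)
runIncr (Return x) q = (x, q)                 -- return x |-> \q.<x; q>
runIncr (Incr r)   q = runIncr (r q) (q + 1)  -- Incr <> r |-> \q. r <q; q+1>
\end{verbatim}
%
For instance, \texttt{runIncr (do x <- Incr Return; y <- Incr Return;
pure (x + y)) 40} evaluates to \texttt{(81,42)}, matching the example;
the recursive calls to \texttt{runIncr} mirror the rehandling of the
resumption body by the deep handler.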
The translation commutes with substitution and preserves typeability.
%
\begin{lemma}\label{lem:pd-subst}
Let $\sigma$ denote a substitution. The translation $\PD{-}$
commutes with substitution, i.e.
%
\[
\PD{V}\PD{\sigma} = \PD{V\sigma},\quad
\PD{M}\PD{\sigma} = \PD{M\sigma},\quad
\PD{(q.\,H)}\PD{\sigma} = \PD{(q.\,H)\sigma}.
\]
%
\end{lemma}
%
\begin{proof}
By induction on the structures of $V$, $M$, and $q.H$.
\end{proof}
%
\begin{theorem}
If $\Delta; \Gamma \vdash M : C$ then $\PD{\Delta};
\PD{\Gamma} \vdash \PD{M} : \PD{C}$.
\end{theorem}
%
\begin{proof}
By induction on the typing derivations.
\end{proof}
%
This translation of parameterised handlers simulates the native
semantics. As with the simulation of deep handlers via shallow
handlers in Section~\ref{sec:deep-as-shallow}, the simulation is not
quite on the nose: the image simulates the source only up to
congruence, owing to the need to reduce an application of a pure
function to a variable.
%
\begin{theorem}[Simulation up to congruence]
\label{thm:param-simulation}
If $M \reducesto N$ then $\PD{M} \reducesto^+_{\Cong} \PD{N}$.
\end{theorem}
%
\begin{proof}
By case analysis on the relation $\reducesto$ using
Lemma~\ref{lem:pd-subst}. The interesting case is
$\semlab{Op^\param}$, where we need to reduce under the
$\lambda$-abstraction representing the parameterised resumption.
% \begin{description}
% \item[Case]
% $M = \ParamHandle\; \Return\;V\;\With\;(q.~H)(W) \reducesto
% N[V/x,W/q]$, where $\hret = \{ \Return\; x \mapsto N \}$.
% %
% \begin{derivation}
% &\PD{\ParamHandle\;\Return\;V\;\With\;(q.~H)(W)}\\
% =& \reason{definition of $\PD{-}$}\\
% &(\Handle\;\Return\;\PD{V}\;\With\;\PD{H}_q)~\PD{W}\\
% \reducesto& \reason{$\semlab{Ret}$ with $\PD{\hret}_q = \{\Return~x \mapsto \lambda q. \PD{N}\}$}\\
% &(\lambda q. \PD{N}[\PD{V}/x])~\PD{W}\\
% \reducesto& \reason{$\semlab{App}$}\\
% &\PD{N}[\PD{V}/x,\PD{W}/q]\\
% =& \reason{lemma~\ref{lem:pd-subst}}\\
% &\PD{N[V/x,W/q]}
% \end{derivation}
% \item[Case] $M = \ParamHandle\; \EC[\Do\;\ell~V]\;\With\;(q.~H)(W) \reducesto \\
% \qquad
% N[V/p,W/q,\lambda \Record{x,q'}. \ParamHandle\;\EC[\Return\;x]\;\With\;(q.~H)(q')/r]$, where $\hell = \{ \OpCase{\ell}{p}{r} \mapsto N \}$.
\begin{derivation}
&\PD{\ParamHandle\;\EC[\Do\;\ell~V]\;\With\;(q.~H)(W)}\\
=& \reason{definition of $\PD{-}$}\\
&(\Handle\;\PD{\EC}[\Do\;\ell~\PD{V}]\;\With\;\PD{H}_q)~\PD{W}\\
\reducesto& \reason{$\semlab{Op}$ with $\hell = \{\OpCase{\ell}{p}{r} \mapsto N\}$}\\
&((\lambda q. \bl
\Let\;r \revto \Return\;\lambda \Record{x;q'}. r~x~q'\;\In \\
\PD{N})[\PD{V}/p,\lambda x.\Handle\;\PD{\EC}[\Return\;x]\;\With\;\PD{H}_q/r])~\PD{W}\\
\el \\
=& \reason{definition of $[-]$}\\
&(\lambda q. \bl
\Let\; r \revto \Return\;\lambda \Record{x;q'}. (\lambda x. \Handle\;\PD{\EC}[\Return\;x]\;\With\; \PD{H}_q)~x~q'\;\In \\
\PD{N}[\PD{V}/p])\,\PD{W}\\
\el \\
\reducesto& \reason{\semlab{App}}\\
&\bl
\Let\; r \revto \Return\;\lambda \Record{x;q'}. (\lambda x. \Handle\;\PD{\EC}[\Return\;x]\;\With\; \PD{H}_q)~x~q'\;\In\\
\PD{N}[\PD{V}/p,\PD{W}/q]
\el\\
\reducesto_\Cong& \reason{\semlab{App} under $\lambda\Record{x;q'}.\cdots$}\\
&\bl
\Let\; r \revto \Return\;\lambda \Record{x;q'}.(\Handle\;\PD{\EC}[\Return\;x]\;\With\; \PD{H}_q)~q'\;\In\\
\PD{N}[\PD{V}/p,\PD{W}/q]
\el\\
\reducesto& \reason{\semlab{Let}}\\
&\PD{N}[\bl
\PD{V}/p,\PD{W}/q, \\
\lambda \Record{x;q'}. (\Handle\;\PD{\EC}[\Return\;x]\;\With\; \PD{H}_q)~q'/r]\\
\el \\
=& \reason{definition of $\PD{-}$ and Lemma~\ref{lem:pd-subst}}\\
&\PD{N[V/p,W/q,\lambda \Record{x;q'}. \ParamHandle\;\EC[\Return\;x]\;\With\; (q.~H)(q')/r]}
\end{derivation}
% \end{description}
\end{proof}
\section{Related work}
Precisely how effect handlers fit into the landscape of programming
language features is largely unexplored in the literature. The most
relevant related work in this area is due to my collaborators and
myself on the inherited efficiency of effect handlers
(cf.\ Chapter~\ref{ch:handlers-efficiency}) and \citet{ForsterKLP17}, who
investigate various relationships between effect handlers, delimited
control in the form of shift/reset, and monadic reflection using the
notions of typeability-preserving macro-expressiveness and untyped
macro-expressiveness~\cite{ForsterKLP17,ForsterKLP19}. They show that
in an untyped setting all three are interdefinable, whereas in a
simply typed setting effect handlers cannot macro-express
either. \citet{PirogPS19} build upon the work of
\citeauthor{ForsterKLP17} as they show that with sufficient
polymorphism effect handlers and delimited control can simulate one
another.
The work of \citet{Shan04,Shan07} is related in spirit to the work
presented in this chapter. \citeauthor{Shan04} shows that static and
dynamic notions of delimited control are interdefinable in an untyped
setting. The work in this chapter has a similar flavour to
\citeauthor{Shan04}'s work as we can view deep handlers as a kind of
static control facility and shallow handlers as a kind of dynamic
control facility. In order to simulate dynamic control using static
control, \citeauthor{Shan04}'s translation makes use of recursive
delimited continuations to construct the dynamic context surrounding
and including the invocation context. A recursive continuation allows
the captured context and continuation invocation context to coincide.
% \chapter{Computability, complexity, and expressivness}
% \label{ch:expressiveness}
% \section{Notions of expressiveness}
% Felleisen's macro-expressiveness, Longley's type-respecting
% expressiveness, Kammar's typeability-preserving expressiveness.
\chapter{Asymptotic speedup with effect handlers}
\label{ch:handlers-efficiency}
%
When extending some programming language $\LLL \subset \LLL'$ with
some new feature, it is desirable to know exactly how the new feature
impacts the language. At a bare minimum it is useful to know whether
the extended language $\LLL'$ is unsound as a result of admitting the
new feature (although some languages are deliberately designed to be
unsound~\cite{BiermanAT14}). More fundamentally, it may be useful for
theoreticians and practitioners alike to know whether the extended
language is more expressive than the base language, as this may
inform programming practice.
%
Specifically, it may be of interest to know whether the extended
language $\LLL'$ affords any \emph{essential} gain in expressivity
over the base language $\LLL$. Questions about essential expressivity
fall under three headings.
% There are various ways in which we can consider how some new feature
% impacts the expressiveness of its host language. For instance,
% \citet{Felleisen91} considers the question of whether a language
% $\LLL$ admits a translation into a sublanguage $\LLL'$ in a way which
% respects not only the behaviour of programs but also aspects of their
% global or local syntactic structure. If the translation of some
% $\LLL$-program into $\LLL'$ requires a complete global restructuring,
% we may say that $\LLL'$ is in some way less expressive than $\LLL$.
% Effect handlers are capable of codifying a wealth of powerful
% programming constructs and features such as exceptions, state,
% backtracking, coroutines, await/async, inversion of control, and so
% on.
%
% Partial continuations as the difference of continuations a duumvirate of control operators
% Thus, effect handlers are expressive enough to implement a wide
% variety of other programming abstractions.
% %
% We may wonder about the exact nature of this expressiveness, i.e. do
% effect handlers exhibit any \emph{essential} expressivity?
% In today's programming languages we find a wealth of powerful
% constructs and features --- exceptions, higher-order store, dynamic
% method dispatch, coroutines, explicit continuations, concurrency
% features, Lisp-style `quote' and so on --- which may be present or
% absent in various combinations in any given language. There are of
% course many important pragmatic and stylistic differences between
% languages, but here we are concerned with whether languages may differ
% more essentially in their expressive power, according to the selection
% of features they contain.
% One can interpret this question in various ways. For instance,
% \citet{Felleisen91} considers the question of whether a language
% $\LLL$ admits a translation into a sublanguage $\LLL'$ in a way which
% respects not only the behaviour of programs but also aspects of their
% (global or local) syntactic structure. If the translation of some
% $\LLL$-program into $\LLL'$ requires a complete global restructuring,
% we may say that $\LLL'$ is in some way less expressive than $\LLL$.
% Start with talking about the power of backtracking (folklore)
% giving a precise and robust mathematical characterisation of this phenomenon
% However, in this chapter we will look at even more fundamental
% expressivity differences that would not be bridged even if
% whole-program translations were admitted. These fall under two
% headings.
% Questions regarding essential expressivity differences fall under two
% headings.
%
\begin{description}
\item[Programmability] Are there programmable operations that can be
done more easily in $\LLL'$ than in $\LLL$?
\item[Computability] Are there operations of a given type
that are programmable in $\LLL'$ but not expressible at all in $\LLL$?
\item[Complexity] Are there operations programmable in $\LLL'$
with some asymptotic runtime bound (e.g. `$\BigO(n^2)$') that cannot be
achieved in $\LLL$?
\end{description}
%
% We may also ask: are there examples of \emph{natural, practically
% useful} operations that manifest such differences? If so, this
% might be considered as a significant advantage of $\LLL$ over $\LLL'$.
% If the `operations' we are asking about are ordinary first-order
% functions, that is both their inputs and outputs are of ground type
% (strings, arbitrary-size integers etc), then the situation is easily
% summarised. At such types, all reasonable languages give rise to the
% same class of programmable functions, namely the Church-Turing
% computable ones. As for complexity, the runtime of a program is
% typically analysed with respect to some cost model for basic
% instructions (e.g.\ one unit of time per array access). Although the
% realism of such cost models in the asymptotic limit can be questioned
% (see, e.g., \citet[Section~2.6]{Knuth97}), it is broadly taken as read
% that such models are equally applicable whatever programming language
% we are working with, and moreover that all respectable languages can
% represent all algorithms of interest; thus, one does not expect the
% best achievable asymptotic run-time for a typical algorithm (say in
% number theory or graph theory) to be sensitive to the choice of
% programming language, except perhaps in marginal cases.
% The situation changes radically, however, if we consider
% \emph{higher-order} operations: programmable operations whose inputs
% may themselves be programmable operations. Here it turns out that
% both what is computable and the efficiency with which it can be
% computed can be highly sensitive to the selection of language features
% present. This is in fact true more widely for \emph{abstract data
% types}, of which higher-order types can be seen as a special case: a
% higher-order value will be represented within the machine as ground
% data, but a program within the language typically has no access to
% this internal representation, and can interact with the value only by
% applying it to an argument.
% Most work in this area to date has focused on computability
% differences. One of the best known examples is the \emph{parallel if}
% operation which is computable in a language with parallel evaluation
% but not in a typical `sequential' programming
% language~\cite{Plotkin77}. It is also well known that the presence of
% control features or local state enables observational distinctions
% that cannot be made in a purely functional setting: for instance,
% there are programs involving call/cc that detect the order in which a
% (call-by-name) `+' operation evaluates its arguments
% \citep{CartwrightF92}. Such operations are `non-functional' in the
% sense that their output is not determined solely by the extension of
% their input (seen as a mathematical function
% $\N_\bot \times \N_\bot \rightarrow \N_\bot$);
% %%
% however, there are also programs with `functional' behaviour that can
% be implemented with control or local state but not without them
% \citep{Longley99}. More recent results have exhibited differences
% lower down in the language expressivity spectrum: for instance, in a
% purely functional setting \textit{\`a la} Haskell, the expressive
% power of \emph{recursion} increases strictly with its type level
% \citep{Longley18a}, and there are natural operations computable by
% low-order recursion but not by high-order iteration
% \citep{Longley19}. Much of this territory, including the mathematical
% theory of some of the natural notions of higher-order computability
% that arise in this way, is mapped out by \citet{LongleyN15}.
% Relatively few results of this character have so far been established
% on the complexity side. \citet{Pippenger96} gives an example of an
% `online' operation on infinite sequences of atomic symbols
% (essentially a function from streams to streams) such that the first
% $n$ output symbols can be produced within time $\BigO(n)$ if one is
% working in an `impure' version of Lisp (in which mutation of `cons'
% pairs is admitted), but with a worst-case runtime no better than
% $\Omega(n \log n)$ for any implementation in pure Lisp (without such
% mutation). This example was reconsidered by \citet{BirdJdM97} who
% showed that the same speedup can be achieved in a pure language by
% using lazy evaluation. Another candidate is the familiar $\log n$
% overhead involved in implementing maps (supporting lookup and
% extension) in a pure functional language \cite{Okasaki99}, although to
% our knowledge this situation has not yet been subjected to theoretical
% scrutiny. \citet{Jones01} explores the approach of manifesting
% expressivity and efficiency differences between certain languages by
% artificially restricting attention to `cons-free' programs; in this
% setting, the classes of representable first-order functions for the
% various languages are found to coincide with some well-known
% complexity classes.
The purpose of this chapter is to give a clear example of an essential
complexity difference. Specifically, we will show that if we take a
typical PCF-like base language, $\BPCF$, and extend it with effect
handlers, yielding $\HPCF$, then there exists a class of programs that
have asymptotically more efficient realisations in $\HPCF$ than is
possible in $\BPCF$, thereby establishing that effect handlers enable
an asymptotic speedup for some programs.
%
% The purpose of this chapter is to give a clear example of such an
% inherent complexity difference higher up in the expressivity spectrum.
To this end, we consider the following \emph{generic count} problem,
parametric in $n$: given a boolean-valued predicate $P$ on the space
$\mathbb{B}^n$ of boolean vectors of length $n$, return the number of
such vectors $q$ for which $P\,q = \True$. We shall consider boolean
vectors of any length to be represented by the type $\Nat \to \Bool$;
thus for each $n$, we are asking for an implementation of a certain
third-order function.
%
\[ \Count_n : ((\Nat \to \Bool) \to \Bool) \to \Nat \]
%
A \naive implementation strategy is simply to apply $P$ to each of the
$2^n$ vectors in turn. However, one can do better with a curious
approach due to \citet{Berger90}, which achieves the effect of `pruned
search' where the predicate allows it. This should be taken as a
warning that counter-intuitive phenomena can arise in this territory.
Nonetheless, under the mild condition that $P$ must inspect all $n$
components of the given vector before returning, both these approaches
have an $\Omega(n 2^n)$ runtime. Moreover, we shall show that in
$\BPCF$, a typical call-by-value language without advanced control
features, one cannot improve on this: \emph{any} implementation of
$\Count_n$ must necessarily take time $\Omega(n2^n)$ on \emph{any}
such predicate $P$. Conversely, in the extended language $\HPCF$ it
becomes possible to bring the runtime down to $\BigO(2^n)$: an
asymptotic gain of a factor of $n$.
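For concreteness, the following Haskell sketch spells out the \naive{}
strategy; it is an assumed rendering with hypothetical names
(\texttt{naiveCount}, \texttt{vecs}) rather than one of the formal
$\BPCF$ programs studied later.
%
\begin{verbatim}
-- Illustrative sketch of the naive strategy (assumed rendering):
-- predicates are pure functions on boolean vectors, with vectors
-- represented as functions Int -> Bool.
naiveCount :: Int -> ((Int -> Bool) -> Bool) -> Int
naiveCount n p = length [q | q <- vecs n, p q]
  where
    -- all 2^n boolean vectors of length n (components >= n unused)
    vecs :: Int -> [Int -> Bool]
    vecs 0 = [const True]
    vecs k = [ \i -> if i == k - 1 then b else q i
             | q <- vecs (k - 1), b <- [False, True] ]

-- Each of the 2^n applications p q recomputes p from scratch, so if
-- p inspects all n components the total work is Omega(n * 2^n).
\end{verbatim}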
% The \emph{generic search} problem is just like the generic count
% problem, except rather than counting the vectors $q$ such that $P\,q =
% \True$, it returns the list of all such vectors.
% %
% The $\Omega(n 2^n)$ runtime for purely functional implementations
% transfers directly to generic search, as generic count reduces to
% generic search composed with computing the length of the resulting
% list.
% %
% In Section~\ref{sec:count-vs-search} we illustrate that the
% $\BigO(2^n)$ runtime for generic count with effect handlers also
% transfers to generic search.
The key to enabling the speedup is \emph{backtracking} via multi-shot
resumptions. The idea is to memorise the control state at each
component inspection to make it possible to quickly backtrack to a
prior inspection and make a different decision as soon as one
possible result has been computed.
%
Concretely, suppose for example $n = 3$, and suppose that the predicate
$P$ always inspects the components of its argument in the order
$0,1,2$.
%
A \naive implementation of $\Count_3$ might start by applying the
given predicate $P$ to $q_0 = (\True,\True,\True)$, and then to
$q_1 = (\True,\True,\False)$. Note that there is some duplication
here: the computations of $P\,q_0$ and $P\,q_1$ will proceed
identically up to the point where the value of the final component is
requested. Ideally, we would record the state of the computation of
$P\,q_0$ at just this point, so that we can later resume this
computation with $\False$ supplied as the final component value in
order to obtain the value of $P\,q_1$. Of course, a bespoke search
function implementation would apply this backtracking behaviour in a
standard manner for some \emph{particular} choice of $P$ (e.g. the
$n$-queens problem); but to apply this idea of resuming previous
subcomputations in the \emph{generic} setting (i.e. uniformly in $P$)
requires some special control feature such as effect handlers with
multi-shot resumptions.
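The following Haskell sketch conveys the idea. It is an assumed
free-monad encoding with a single operation
$\dec{Branch} : \One \to \Bool$ standing in for effect handler syntax;
the names (\texttt{Comp}, \texttt{count}, \texttt{oddP}) are
hypothetical, and the formal $\HPCF$ implementation is developed in
Section~\ref{sec:generic-search}.
%
\begin{verbatim}
-- Illustrative sketch (assumed encoding): a point \i -> do Branch <>
-- branches at every component inspection.
data Comp a = Return a | Branch (Bool -> Comp a)

instance Functor Comp where fmap f m = m >>= (Return . f)
instance Applicative Comp where
  pure      = Return
  mf <*> mx = mf >>= \f -> fmap f mx
instance Monad Comp where
  Return x >>= k = k x
  Branch r >>= k = Branch (\b -> r b >>= k)

-- The handler invokes the resumption r twice (multi-shot), so the
-- predicate's work before each branch point is shared between the
-- two resumed runs: O(2^n) overall rather than Omega(n * 2^n).
count :: Comp Bool -> Int
count (Return b) = if b then 1 else 0
count (Branch r) = count (r True) + count (r False)

-- Example predicate: an odd number of the three components is true.
oddP :: (Int -> Comp Bool) -> Comp Bool
oddP q = do
  b0 <- q 0
  b1 <- q 1
  b2 <- q 2
  Return ((b0 /= b1) /= b2)

-- count (oddP (\_ -> Branch Return)) evaluates to 4.
\end{verbatim}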
%
Obviously, one can remove the need for a special control feature by
changing the type of the predicate $P$, but such a change shifts the
perspective. The intention is precisely to show that the languages
differ in an essential way as regards their power to manipulate
data of type $(\Nat \to \Bool) \to \Bool$.
The idea of using first-class control to achieve backtracking is
fairly well known in the literature~\cite{KiselyovSFA05}, and there is
a clear programming intuition that this yields a speedup unattainable
in languages without such control features.
% Our main contribution in this paper is to
% provide, for the first time, a precise mathematical theorem that pins
% down this fundamental efficiency difference, thus giving formal
% substance to this intuition. Since our goal is to give a realistic
% analysis of the efficiency achievable in various settings without
% getting bogged down in inessential implementation details, we shall
% work concretely and operationally with the languages in question,
% using a CEK-style abstract machine semantics as our basic model of
% execution time, and with some specific programs in these languages.
% In the first instance, we formulate our results as a comparison
% between a purely functional base language (a version of call-by-value
% PCF) and an extension with first-class control; we then indicate how
% these results can be extended to base languages with other features
% such as mutable state.
\paragraph{Relation to prior work} This chapter is based entirely on
the following previously published paper.
\begin{enumerate}
\item[~] \bibentry{HillerstromLL20}
\end{enumerate}
The contents of Sections~\ref{sec:calculi},
\ref{sec:abstract-machine-semantics}, \ref{sec:generic-search},
\ref{sec:pure-counting}, \ref{sec:robustness}, and
\ref{sec:experiments} are almost verbatim copies of Sections 3, 4, 5,
6, 7, and 8 of the above paper. I have made a few stylistic
adjustments to make the sections fit with the rest of this
dissertation.
% In summary, our purpose is to exhibit an efficiency gap which, in our
% view, manifests a fundamental feature of the programming language
% landscape, challenging a common assumption that all real-world
% programming languages are essentially `equivalent' from an asymptotic
% point of view. We believe that such results are important not only
% for a rounded understanding of the relative merits of existing
% languages, but also for informing future language design.
% For their convenience as structured delimited control operators we
% adopt effect handlers as our universal control abstraction of choice,
% but our results adapt mutatis mutandis to other first-class control
% abstractions such as `call/cc'~\cite{AbelsonHAKBOBPCRFRHSHW85}, `control'
% ($\mathcal{F}$) and 'prompt' ($\textbf{\#}$)~\citep{Felleisen88}, or
% `shift' and `reset'~\citep{DanvyF90}.
% The rest of the paper is structured as follows.
% \begin{itemize}
% \item Section~\ref{sec:handlers-primer} provides an introduction to
% effect handlers as a programming abstraction.
% \item Section~\ref{sec:calculi} presents a PCF-like language
% $\BPCF$ and its extension $\HPCF$ with effect handlers.
% \item Section~\ref{sec:abstract-machine-semantics} defines abstract
% machines for $\BPCF$ and $\HPCF$, yielding a runtime cost model.
% \item Section~\ref{sec:generic-search} introduces generic count and
% some associated machinery, and presents an implementation in
% $\HPCF$ with runtime $\BigO(2^n)$.
% \item Section~\ref{sec:pure-counting} establishes that any generic
% count implementation in $\BCalc$ must have runtime $\Omega(n2^n)$.
% \item Section~\ref{sec:robustness} shows that our results scale to
% richer settings including support for a wider class of predicates,
% the adaptation from generic count to generic search, and an
% extension of the base language with state.
% \item Section~\ref{sec:experiments} evaluates implementations of
% generic search based on $\BPCF$ and $\HPCF$ in Standard ML.
% \item Section \ref{sec:conclusions} concludes.
% \end{itemize}
% %
% The languages $\BPCF$ and $\HPCF$ are rather minimal versions of
% previously studied systems --- we only include the machinery needed
% for illustrating the generic search efficiency phenomenon.
% %
% Auxiliary results are included in the appendices of the extended
% version of the paper~\citep{HillerstromLL20}.
%%
%% Base calculus
%%
\section{Simply-typed base and handler calculi}
\label{sec:calculi}
In this section, we present a base language $\BPCF$ and its extension
with effect handlers $\HPCF$, both of which amount to simply-typed
variations of $\BCalc$ and $\HCalc$,
respectively. Sections~\ref{sec:base-calculus}--\ref{sec:handler-machine}
essentially recast the developments of
Chapters~\ref{ch:base-language}, \ref{ch:unary-handlers}, and
\ref{ch:abstract-machine} to fit the calculi $\BPCF$ and $\HPCF$. I
reproduce the details here, even though they are mostly the same as
in the previous chapters, save for a few changes such as a crucial
design decision in Section~\ref{sec:handlers-calculus}, which makes it
possible to implement continuation reification as a constant-time
operation.
\subsection{Base calculus}
\label{sec:base-calculus}
The base calculus $\BPCF$ is a fine-grain
call-by-value~\cite{LevyPT03} variation of PCF~\cite{Plotkin77}.
%
Fine-grain call-by-value is similar to A-normal
form~\cite{FlanaganSDF93} in that every intermediate computation is
named, but unlike A-normal form it is closed under reduction.
\begin{figure}
\begin{syntax}
\slab{Types} &A,B,C,D\in\TypeCat &::= & \Nat \mid \One \mid A \to B \mid A \times B \mid A + B \\
\slab{Type\textrm{ }environments} &\Gamma\in\TyEnvCat &::= & \cdot \mid \Gamma, x:A \\
\slab{Values} &V,W\in\ValCat &::= & x \mid k \mid c \mid \lambda x^A .\, M \mid \Rec \; f^{A \to B}\, x.M \\
& &\mid& \Unit \mid \Record{V, W} \mid (\Inl\, V)^B \mid (\Inr\, W)^A\\
% & & &
\slab{Computations} &M,N\in\CompCat
&::= & V\,W
\mid \Let\; \Record{x,y} = V \; \In \; N \\
& &\mid&\Case \; V \;\{ \Inl \; x \mapsto M; \Inr \; y \mapsto N\}\\
& &\mid& \Return\; V
\mid \Let \; x \revto M \; \In \; N \\
\end{syntax}
\caption{Syntax of $\BPCF$.}\label{fig:bpcf}
\end{figure}
%
Figure~\ref{fig:bpcf} depicts the type syntax, type environment
syntax, and term syntax of $\BPCF$.
%
The ground types are $\Nat$ and $\One$ which classify natural number
values and the unit value, respectively. The function type $A \to B$
classifies functions that map values of type $A$ to values of type
$B$. The binary product type $A \times B$ classifies pairs of values
whose first and second components have types $A$ and $B$
respectively. The sum type $A + B$ classifies tagged values of either
type $A$ or $B$.
%
Type environments $\Gamma$ map variables to their types.
We let $k$ range over natural numbers and $c$ range over primitive
operations on natural numbers ($+, -, =$).
%
We let $x, y, z$ range over term variables.
%
For convenience, we also use $f$, $g$, and $h$ for variables of
function type, $i$ and $j$ for variables of type $\Nat$, and $r$ to
denote resumptions.
%
% The value terms are standard.
Value terms comprise variables ($x$), the unit value ($\Unit$),
natural number literals ($k$), primitive constants ($c$), lambda
abstraction ($\lambda x^A . \, M$), recursion
($\Rec \; f^{A \to B}\, x.M$), pairs ($\Record{V, W}$), and left
($(\Inl~V)^B$) and right ($(\Inr~W)^A$) injections.
%
% We will occasionally blur the distinction between object and meta
% language by writing $A$ for the meta level type of closed value terms
% of type $A$.
%
All elimination forms are computation terms. Abstraction is eliminated
using application ($V\,W$).
%
The product eliminator $(\Let \; \Record{x,y} = V \; \In \; N)$ splits
a pair $V$ into its constituents and binds them to $x$ and $y$,
respectively. Sums are eliminated by a case split ($\Case\; V\;
\{\Inl\; x \mapsto M; \Inr\; y \mapsto N\}$).
%
A trivial computation $(\Return\;V)$ returns value $V$. The sequencing
expression $(\Let \; x \revto M \; \In \; N)$ evaluates $M$ and binds
the result value to $x$ in $N$.
\begin{figure*}
\raggedright\textbf{Values}
\begin{mathpar}
% Variable
\inferrule*[Lab=\tylab{Var}]
{x : A \in \Gamma}
{\typv{\Gamma}{x : A}}
% Unit
\inferrule*[Lab=\tylab{Unit}]
{ }
{\typv{\Gamma}{\Unit : \One}}
% n : Nat
\inferrule*[Lab=\tylab{Nat}]
{ k \in \mathbb{N} }
{\typv{\Gamma}{k : \Nat}}
% c : A
\inferrule*[Lab=\tylab{Const}]
{c : A \to B}
{\typv{\Gamma}{c : A \to B}}
\\
% Abstraction
\inferrule*[Lab=\tylab{Lam}]
{\typ{\Gamma, x : A}{M : B}}
{\typv{\Gamma}{\lambda x^A .\, M : A \to B}}
% Recursion
\inferrule*[Lab=\tylab{Rec}]
{\typ{\Gamma, f : A \to B, x : A}{M : B}}
{\typv{\Gamma}{\Rec\; f^{A \to B}\,x .\, M : A \to B}}
\\
% Products
\inferrule*[Lab=\tylab{Prod}]
{ \typv{\Gamma}{V : A} \\
\typv{\Gamma}{W : B}
}
{\typv{\Gamma}{\Record{V,W} : A \times B}}
% Left injection
\inferrule*[Lab=\tylab{Inl}]
{\typv{\Gamma}{V : A}}
{\typv{\Gamma}{(\Inl\,V)^B : A + B}}
% Right injection
\inferrule*[Lab=\tylab{Inr}]
{\typv{\Gamma}{W : B}}
{\typv{\Gamma}{(\Inr\,W)^A : A + B}}
\end{mathpar}
\textbf{Computations}
\begin{mathpar}
% Application
\inferrule*[Lab=\tylab{App}]
{\typv{\Gamma}{V : A \to B} \\
\typv{\Gamma}{W : A}
}
{\typ{\Gamma}{V\,W : B}}
% Split
\inferrule*[Lab=\tylab{Split}]
{\typv{\Gamma}{V : A \times B} \\
\typ{\Gamma, x : A, y : B}{N : C}
}
{\typ{\Gamma}{\Let \; \Record{x,y} = V\; \In \; N : C}}
% Case
\inferrule*[Lab=\tylab{Case}]
{ \typv{\Gamma}{V : A + B} \\
\typ{\Gamma,x : A}{M : C} \\
\typ{\Gamma,y : B}{N : C}
}
{\typ{\Gamma}{\Case \; V \;\{\Inl\; x \mapsto M; \Inr \; y \mapsto N \} : C}}
\\
% Return
\inferrule*[Lab=\tylab{Return}]
{\typv{\Gamma}{V : A}}
{\typ{\Gamma}{\Return \; V : A}}
% Let
\inferrule*[Lab=\tylab{Let}]
{\typ{\Gamma}{M : A} \\
\typ{\Gamma, x : A}{N : C}
}
{\typ{\Gamma}{\Let \; x \revto M\; \In \; N : C}}
\end{mathpar}
\caption{Typing rules for $\BPCF$.}
\label{fig:typing}
\end{figure*}
The typing rules are given in Figure~\ref{fig:typing}.
%
We require two typing judgements: one for values and the other for
computations.
%
The judgement $\typ{\Gamma}{\square : A}$ states that a $\square$-term
has type $A$ under type environment $\Gamma$, where $\square$ is
either a value term ($V$) or a computation term ($M$).
%
The constants have the following types.
%
{
\begin{mathpar}
\{(+), (-)\} : \Nat \times \Nat \to \Nat
(=) : \Nat \times \Nat \to \One + \One
\end{mathpar}}
%
\begin{figure*}
\begin{reductions}
\semlab{App} & (\lambda x^A . \, M) V &\reducesto& M[V/x] \\
\semlab{AppRec} & (\Rec\; f^A \,x.\, M) V &\reducesto& M[(\Rec\;f^A\,x .\,M)/f,V/x]\\
\semlab{Const} & c~V &\reducesto& \Return\;(\const{c}\,(V)) \\
\semlab{Split} & \Let \; \Record{x,y} = \Record{V,W} \; \In \; N &\reducesto& N[V/x,W/y] \\
\semlab{Case\textrm{-}inl} &
\Case \; (\Inl\, V)^B \; \{\Inl \; x \mapsto M;\Inr \; y \mapsto N\} &\reducesto& M[V/x] \\
\semlab{Case\textrm{-}inr} &
\Case \; (\Inr\, V)^A \; \{\Inl \; x \mapsto M; \Inr \; y \mapsto N\} &\reducesto& N[V/y]\\
\semlab{Let} &
\Let \; x \revto \Return \; V \; \In \; N &\reducesto& N[V/x] \\
\semlab{Lift} &
\EC[M] &\reducesto& \EC[N], \hfill \text{if }M \reducesto N \\
\end{reductions}
\begin{syntax}
\slab{Evaluation\textrm{ }contexts} & \mathcal{E} \in \EvalCat &::=& [\,] \mid \Let \; x \revto \mathcal{E} \; \In \; N
\end{syntax}
\caption{Contextual small-step operational semantics.}
\label{fig:small-step}
\end{figure*}
%
We give a small-step operational semantics for $\BPCF$ with
\emph{evaluation contexts} in the style of \citet{Felleisen87}. The
reduction rules are given in Figure~\ref{fig:small-step}.
%
We write $M[V/x]$ for $M$ with $V$ substituted for $x$ and $\const{c}$
for the usual interpretation of constant $c$ as a meta-level function
on closed values. The reduction relation $\reducesto$ is defined on
computation terms. The statement $M \reducesto N$ reads: term $M$
reduces to term $N$ in one step.
%
% We write $R^+$ for the transitive closure of relation $R$ and $R^*$
% for the reflexive, transitive closure of relation $R$.
\paragraph{Notation}
%
We elide type annotations when clear from context.
%
For convenience we often write code in direct-style assuming the
standard left-to-right call-by-value elaboration into fine-grain
call-by-value~\citep{Moggi91, FlanaganSDF93}.
%
For example, the expression $f\,(h\,w) + g\,\Unit$ is syntactic sugar
for:
%
{
\[
\ba[t]{@{~}l}
\Let\; x \revto h\,w \;\In\;
\Let\; y \revto f\,x \;\In\;
\Let\; z \revto g\,\Unit \;\In\;
y + z
\ea
\]}%
%
We define sequencing of computations in the standard way.
%
{
\[
M;N \defas \Let\;x \revto M \;\In\;N, \quad \text{where $x \notin FV(N)$}
\]}%
%
We make use of standard syntactic sugar for pattern matching. For
instance, we write
%
{
\[
\lambda\Unit.M \defas \lambda x^{\One}.M, \quad \text{where $x \notin FV(M)$}
\]}%
%
for suspended computations, and if the binder has a type other than
$\One$, we write:
%
{
\[
\lambda\_^A.M \defas \lambda x^A.M, \quad \text{where $x \notin FV(M)$}
\]}%
%
We use the standard encoding of booleans as a sum:
{
\begin{mathpar}
\Bool \defas \One + \One
\True \defas \Inl~\Unit
\False \defas \Inr~\Unit
\If\;V\;\Then\;M\;\Else\;N \defas \Case\;V\;\{\Inl~\Unit \mapsto M; \Inr~\Unit \mapsto N\}
\end{mathpar}}%
%
% Handlers extension
%
\subsection{Handler calculus}
\label{sec:handlers-calculus}
We now define $\HPCF$ as an extension of $\BPCF$.
%
{
\begin{syntax}
\slab{Operation\textrm{ }symbols} &\ell \in \mathcal{L} & & \\
\slab{Signatures} &\Sigma\in\CatName{Sig} &::=& \cdot \mid \{\ell : A \to B\} \cup \Sigma\\
\slab{Handler\textrm{ }types} &F \in \HandlerTypeCat &::=& C \Rightarrow D\\
\slab{Computations} &M, N \in \CompCat &::=& \dots \mid \Do \; \ell \; V
\mid \Handle \; M \; \With \; H \\
\slab{Handlers} &H&::=& \{ \Return \; x \mapsto M \}
\mid \{ \OpCase{\ell}{p}{r} \mapsto N \} \uplus H\\
\end{syntax}}%
%
We assume a countably infinite set $\mathcal{L}$ of operation symbols
$\ell$.
%
An effect signature $\Sigma$ is a map from operation symbols to their
types; thus each operation symbol in a signature is distinct. An
operation type $A \to B$ classifies operations that take an argument
of type $A$ and return a result of type $B$.
%
We write $\dom(\Sigma) \subseteq \mathcal{L}$ for the set of operation
symbols in a signature $\Sigma$.
%
A handler type $C \Rightarrow D$ classifies effect handlers that
transform computations of type $C$ into computations of type $D$.
%
Following \citet{Pretnar15}, we assume a global signature for every
program.
%
Computations are extended with operation invocation ($\Do\;\ell\;V$)
and effect handling ($\Handle\; M \;\With\; H$).
%
Handlers are constructed from one success clause $(\{\Return\; x \mapsto
M\})$ and one operation clause $(\{ \OpCase{\ell}{p}{r} \mapsto N \})$ for
each operation $\ell$ in $\Sigma$.
%
Following \citet{PlotkinP13}, we adopt the convention that a handler
with missing operation clauses (with respect to $\Sigma$) is syntactic
sugar for one in which all missing clauses perform explicit
forwarding:
%
\[
\{\OpCase{\ell}{p}{r} \mapsto \Let\; x \revto \Do \; \ell \, p \;\In\; r \, x\}.
\]
%
\paragraph{Remark}
This convention makes effect forwarding explicit, whereas in
$\HCalc$ effect forwarding was implicit. As we shall see shortly, an
important semantic consequence of making effect forwarding explicit
is that the abstract machine model in
Section~\ref{sec:handler-machine} has no rule for effect forwarding;
forwarding instead happens as a sequence of explicit $\Do$
invocations in the term language. As a result, we can treat
continuation reification as a constant-time operation, because a
$\Do$ invocation reifies only the top-most continuation frame.\medskip
\begin{figure*}
\raggedright
\textbf{Computations}
\begin{mathpar}
\inferrule*[Lab=\tylab{Do}]
{(\ell : A \to B) \in \Sigma \\ \typ{\Gamma}{V : A} }
{\typ{\Gamma}{\Do \; \ell \; V : B}}
\inferrule*[Lab=\tylab{Handle}]
{\typ{\Gamma}{M : C} \\
\Gamma \vdash H : C \Rightarrow D}
{\typ{\Gamma}{\Handle \; M \; \With \; H : D}}
\end{mathpar}
\textbf{Handlers}
\begin{mathpar}
\inferrule*[Lab=\tylab{Handler}]
{ \hret = \{\Return \; x \mapsto M\} \\
[\hell = \{\OpCase{\ell}{p}{r} \mapsto N_\ell\}]_{\ell \in dom(\Sigma)} \\\\
\typ{\Gamma, x : C}{M : D} \\
[\typ{\Gamma, p : A_\ell, r : B_\ell \to D}{N_\ell : D}]_{(\ell : A_\ell \to B_\ell) \in \Sigma}
}
{{\Gamma} \vdash {H : C \Rightarrow D}}
\end{mathpar}
\caption{Additional typing rules for $\HPCF$.}
\label{fig:typing-handlers}
\end{figure*}
The typing rules for $\HPCF$ are those of $\BPCF$
(Figure~\ref{fig:typing}) plus three additional rules for operations,
handling, and handlers given in Figure~\ref{fig:typing-handlers}.
%
The \tylab{Do} rule ensures that an operation invocation is only
well-typed if the operation $\ell$ appears in the effect signature
$\Sigma$ and the argument type $A$ matches the type of the provided
argument $V$. The result type $B$ determines the type of the
invocation.
%
The \tylab{Handle} rule types handler application.
%
The \tylab{Handler} rule ensures that the bodies of the success clause
and the operation clauses all have the output type $D$. The type of
$x$ in the success clause must match the input type $C$. The type of
the parameter $p$ ($A_\ell$) and resumption $r$ ($B_\ell \to D$) in
operation clause $\hell$ is determined by the type of $\ell$; the
return type of $r$ is $D$, as the body of the resumption will itself
be handled by $H$.
%
We write $\hret$ and $\hell$ for projecting the success and operation
clauses, respectively.
{
\[
\ba{@{~}r@{~}c@{~}l@{~}l}
\hret &\defas& \{\Return\, x \mapsto N \}, &\quad \text{where } \{\Return\, x \mapsto N \} \in H\\
\hell &\defas& \{\OpCase{\ell}{p}{r} \mapsto N \}, &\quad \text{where } \{\OpCase{\ell}{p}{r} \mapsto N \} \in H
\ea
\]}%
We extend the operational semantics to $\HPCF$. Specifically, we add
two new reduction rules: one for handling return values and another
for handling operation invocations.
%
{
\begin{reductions}
\semlab{Ret} & \Handle \; (\Return \; V) \; \With \; H &\reducesto& N[V/x], \qquad
\text{where } \hret = \{ \Return \; x \mapsto N \} \smallskip\\
\semlab{Op} & \Handle \; \EC[\Do \; \ell \, V] \; \With \; H &\reducesto& N[V/p,(\lambda y.\Handle \; \EC[\Return \; y] \; \With \; H)/r],\\
\multicolumn{4}{@{}r@{}}{
\hfill\text{where } \hell = \{ \OpCase{\ell}{p}{r} \mapsto N \}
}
\end{reductions}}%
%
The first rule invokes the success clause.
%
The second rule handles an operation via the corresponding operation
clause.
%
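To convey the operational content of these two rules, the following
Haskell sketch interprets them over a free-monad representation of
computations. The encoding and the names (\texttt{Comp},
\texttt{Handler}, \texttt{handle}) are assumptions made for
illustration; operation payloads are fixed to \texttt{Int} purely for
brevity.
%
\begin{verbatim}
-- Illustrative sketch (assumed encoding): computation trees over a
-- signature of labelled operations, with payloads fixed to Int.
data Comp l a = Ret a | Do l Int (Int -> Comp l a)

data Handler l a b = Handler
  { onRet :: a -> b                           -- success clause
  , onOp  :: l -> Int -> (Int -> b) -> b }    -- operation clauses

-- (Ret): handling (return V) invokes the success clause.
-- (Op):  handling E[do l V] passes l's clause the payload and the
--        resumption \y -> handle (E[return y]) h, which is itself
--        handled by h again -- the deep-handler discipline.
handle :: Comp l a -> Handler l a b -> b
handle (Ret v)    h = onRet h v
handle (Do l v k) h = onOp h l v (\y -> handle (k y) h)

-- The explicit-forwarding convention corresponds to an operation
-- clause that re-performs the operation and resumes (instantiating
-- the answer type b at a computation type):
forward :: l -> Int -> (Int -> Comp l c) -> Comp l c
forward = Do
\end{verbatim}
%
Under this reading, nesting handlers as \texttt{handle (handle m h1)
h2} sends the operations of \texttt{m} to \texttt{h1} first, and only
explicitly forwarded operations reach \texttt{h2}.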
If we were \naively to extend evaluation contexts with the handle
construct then our semantics would become nondeterministic, as the
$\semlab{Op}$ rule could pick an arbitrary handler in scope.
%
In order to ensure that the semantics is deterministic, we instead add
a distinct form of evaluation context for effectful computation, which
we call handler contexts.
%
{
\begin{syntax}
\slab{Handler\textrm{ }contexts} & \HC \in \CatName{HCtx} &::= & [\,] \mid \Handle \; \HC \; \With \; H
\mid \Let\;x \revto \HC\; \In\; N\\
\end{syntax}}%
%
We replace the $\semlab{Lift}$ rule with a corresponding rule for
handler contexts.
{
\[
\HC[M] ~\reducesto~ \HC[N], \qquad\hfill\text{if } M \reducesto N
\]}%
%
The separation between pure evaluation contexts $\EC$ and handler
contexts $\HC$ ensures that the $\semlab{Op}$ rule always selects the
innermost handler.
We now characterise normal forms and state the standard type soundness
property of $\HPCF$.
%
\begin{definition}[Computation normal forms]
A computation term $N$ is normal with respect to $\Sigma$ if $N =
\Return\;V$ for some $V$, or $N = \EC[\Do\;\ell\,W]$ for some $\ell
\in \dom(\Sigma)$, $\EC$, and $W$.
\end{definition}
%
\begin{theorem}[Type Soundness]
If $\typ{}{M : C}$, then either there exists $\typ{}{N : C}$ such
that $M \reducesto^* N$ and $N$ is normal with respect to $\Sigma$,
or $M$ diverges.
\end{theorem}
%%
%% Abstract machine semantics
%%
\subsection{The role of types}
Readers familiar with backtracking search algorithms may wonder where
types come into the expressiveness picture.
%
Types will not play a direct role in our proofs but rather in the
characterisation of which programs can be meaningfully compared. In
particular, types are used to rule out global approaches such as
continuation passing style (CPS): without types one could obtain an
efficient pure generic count program by CPS transforming the entire
program.
Readers familiar with effect handlers may wonder why our handler
calculus does not include an effect type system.
%
As types frame the comparison of programs between languages, we
require that types be fixed across languages; hence $\HPCF$ does not
include effect types.
%
Future work includes reconciling effect typing with our approach to
expressiveness.
\section{A practical model of computation}
\label{sec:abstract-machine-semantics}
Thus far we have introduced the base calculus $\BPCF$ and its
extension with effect handlers $\HPCF$.
%
For each calculus we have given a \emph{small-step operational
semantics} which uses a substitution model for evaluation. Whilst
this model is semantically pleasing, it falls short of providing a
realistic account of practical computation, as substitution is an
expensive operation. Instead we shall use a slightly simpler variation
of the abstract machine from Chapter~\ref{ch:abstract-machine}, as it
provides a more practical model of computation (it is simpler because
the source language is simpler).
\subsection{Base machine}
\label{sec:base-abstract-machine}
\newcommand{\Conf}{\dec{Conf}}
\newcommand{\EConf}{\dec{EConf}}
\newcommand{\MVal}{\dec{MVal}}
The base machine operates on configurations of the form
$\cek{M \mid \gamma \mid \sigma}$. The first component contains the
computation currently being evaluated. The second component contains
the environment $\gamma$ which binds free variables. The third
component contains the continuation which instructs the machine how to
proceed once evaluation of the current computation is complete.
%
The syntax of abstract machine states is as follows.
{
\begin{syntax}
\slab{Configurations} & \conf \in \Conf &::=& \cek{M \mid \env \mid \sigma} \\
% & &\mid& \cekop{M \mid \env \mid \kappa \mid \kappa'} \\
\slab{Environments} &\env \in \Env &::=& \emptyset \mid \env[x \mapsto v] \\
\slab{Machine\textrm{ }values} &v, w \in \MValCat &::= & x \mid n \mid c \mid \Unit \mid \Record{v, w} \\
& &\mid& (\env, \lambda x^A .\, M) \mid (\env, \Rec\, f^{A \to B}\,x . \, M)\\
& &\mid& (\Inl\, v)^B \mid (\Inr\,w)^A \\
\slab{Pure\textrm{ }continuations} &\sigma \in \MPContCat &::=& \nil \mid (\env, x, N) \cons \sigma \\
\end{syntax}}%
%
Values consist of function closures, constants, pairs, and left or
right tagged values.
%
We refer to continuations of the base machine as \emph{pure}.
%
A pure continuation is a stack of pure continuation frames. A pure
continuation frame $(\env, x, N)$ closes a let-binding $\Let \;x
\revto [~] \;\In\;N$ over environment $\env$.
%
We write $\nil$ for an empty pure continuation and $\phi \cons \sigma$
for the result of pushing the frame $\phi$ onto $\sigma$. We use
pattern matching to deconstruct pure continuations.
%
\begin{figure*}
\raggedright
\textbf{Transition relation} $\qquad\qquad~~\,\stepsto\, \subseteq\! \MConfCat \times \MConfCat$
\[
\bl
\ba{@{}l@{\quad}r@{~}c@{~}l@{~~}l@{}}
% App
\mlab{App} & \cek{ V\;W \mid \env \mid \sigma}
&\stepsto& \cek{ M \mid \env'[x \mapsto \val{W}{\env}] \mid \sigma},\\
&&& \quad\text{ if }\val{V}{\env} = (\env', \lambda x^A . \, M)\\
% App rec
\mlab{AppRec} & \cek{ V\;W \mid \env \mid \sigma}
&\stepsto& \cek{ M \mid \env'[\bl
f \mapsto (\env', \Rec\,f^{A \to B}\,x. M), \\
x \mapsto \val{W}{\env}] \mid \sigma},\\
\el \\
&&& \quad\text{ if }\val{V}{\env} = (\env', \Rec\, f^{A \to B}\, x. M)\\
% Constant
\mlab{Const} & \cek{ V~W \mid \env \mid \sigma}
&\stepsto& \cek{ \Return\; (\const{c}\,(\val{W}\env)) \mid \env \mid \sigma},\\
&&& \quad\text{ if }\val{V}{\env} = c \\
\ea \medskip\\
\ba{@{}l@{\quad}r@{~}c@{~}l@{~~}l@{}}
\mlab{Split} & \cek{ \Let \; \Record{x,y} = V \; \In \; N \mid \env \mid \sigma}
&\stepsto& \cek{ N \mid \env[x \mapsto v, y \mapsto w] \mid \sigma}, \\
&&& \quad\text{ if }\val{V}{\env} = \Record{v, w} \\
% Case left
\mlab{CaseL} & \ba{@{}l@{}l@{}}
\cekl \Case\; V\, \{&\Inl\, x \mapsto M; \\
&\Inr\, y \mapsto N\} \mid \env \mid \sigma \cekr \\
\ea
&\stepsto& \ba[t]{@{~}l}\cek{ M \mid \env[x \mapsto v] \mid \sigma},\\
\quad\text{ if }\val{V}{\env} = \Inl\, v\ea \\
% Case right
\mlab{CaseR} & \ba{@{}l@{}l@{}}
\cekl \Case\; V\, \{&\Inl\, x \mapsto M; \\
&\Inr\, y \mapsto N\} \mid \env \mid \sigma \cekr \\
\ea
&\stepsto& \ba[t]{@{~}l}\cek{ N \mid \env[y \mapsto v] \mid \sigma},\\
\quad\text{ if }\val{V}{\env} = \Inr\, v \ea\\
% Let - eval M
\mlab{Let} & \cek{ \Let \; x \revto M \; \In \; N \mid \env \mid \sigma}
&\stepsto& \cek{ M \mid \env \mid (\env,x,N) \cons \sigma} \\
% Return - let binding
\mlab{PureCont} &\cek{ \Return \; V \mid \env \mid (\env',x,N) \cons \sigma}
&\stepsto& \cek{ N \mid \env'[x \mapsto \val{V}{\env}] \mid \sigma} \\
\ea
\el
\]
\textbf{Value interpretation} $\qquad\qquad~~\,\val{-} : \ValCat \times \MEnvCat \to \MValCat$
\[
\bl
\begin{eqs}
\val{x}{\env} &\defas& \env(x) \\
\val{\Unit{}}{\env} &\defas& \Unit{} \\
\end{eqs}
\qquad\qquad
\begin{eqs}
\val{n}{\env} &\defas& n \\
\val{c}\env &\defas& c \\
\end{eqs}
\qquad\qquad
\begin{eqs}
\val{\lambda x^A.M}{\env} &\defas& (\env, \lambda x^A.M) \\
\val{\Rec\, f^{A \to B}\, x.M}{\env} &\defas& (\env, \Rec\,f^{A \to B}\, x.M) \\
\end{eqs}
\medskip \\
\begin{eqs}
\val{\Record{V, W}}{\env} &\defas& \Record{\val{V}{\env}, \val{W}{\env}} \\
\end{eqs}
\qquad\qquad
\ba{@{}r@{~}c@{~}l@{}}
\val{(\Inl\, V)^B}{\env} &\defas& (\Inl\; \val{V}{\env})^B \\
\val{(\Inr\, V)^A}{\env} &\defas& (\Inr\; \val{V}{\env})^A \\
\ea
\el
\]
\caption{Abstract machine semantics for $\BPCF$.}
\label{fig:abstract-machine-semantics}
\end{figure*}
The abstract machine semantics is given in
Figure~\ref{fig:abstract-machine-semantics}.
%
The transition relation ($\stepsto$) makes use of the value
interpretation ($\val{-}$) from value terms to machine values.
%
The machine is initialised by placing a term in a configuration
alongside the empty environment ($\emptyset$) and identity
pure continuation ($\nil$).
%
The rules (\mlab{App}), (\mlab{AppRec}), (\mlab{Const}),
(\mlab{Split}), (\mlab{CaseL}), and (\mlab{CaseR}) eliminate values.
%
The (\mlab{Let}) rule extends the current pure continuation with let
bindings.
%
The (\mlab{PureCont}) rule pops the top frame of the pure continuation
and extends the environment with the returned value.
%
Given an input of a well-typed closed computation term $\typ{}{M :
A}$, the machine will either diverge or return a value of type $A$.
%
A final state is given by a configuration of the form $\cek{\Return\;V
  \mid \env \mid \nil}$, in which case the final return value is given
by the denotation $\val{V}{\env}$ of $V$ under the environment $\env$.
%
\paragraph{Correctness}
%
The base machine faithfully simulates the operational semantics for
$\BPCF$; most transitions correspond directly to $\beta$-reductions,
but $\mlab{Let}$ performs an administrative step to bring the
computation $M$ into evaluation position.
%
The proof of correctness is similar to the proof of
Theorem~\ref{thm:handler-simulation} and the required proof gadgetry
is the same. The full details are published in Appendix A of
\citet{HillerstromLL20a}.
% We formally state and prove the correspondence in
% Appendix~\ref{sec:base-machine-correctness}, relying on an
% inverse map $\inv{-}$ from configurations to
% terms~\citep{HillerstromLA20}.
%
\newcommand{\contapp}[2]{#1 #2}
\newcommand{\contappp}[2]{#1(#2)}
\subsection{Handler machine}
\label{sec:handler-machine}
\newcommand{\HClosure}{\dec{HClo}}
%
We now extend the $\BPCF$ machine with the functionality needed to
evaluate $\HPCF$ terms. The resulting machine is almost the same as
the machine in Chapter~\ref{ch:abstract-machine}, though this machine
supports only deep handlers.
%
The syntax is extended as follows.
%
{
\begin{syntax}
\slab{Configurations} &\conf \in \Conf &::=& \cek{M \mid \env \mid \kappa}\\
\slab{Resumptions} &\rho \in \dec{Res} &::=& (\sigma, \chi)\\
\slab{Continuations} &\kappa \in \MGContCat &::=& \nil \mid \rho \cons \kappa\\
\slab{Handler\textrm{ }closures} &\chi \in \MGFrameCat &::=& (\env, H) \\
\slab{Machine\textrm{ }values} &v, w \in \MValCat &::=& \cdots \mid \rho \\
\end{syntax}}%
%
The notion of configurations changes slightly in that the continuation
component is replaced by a generalised continuation
$\kappa \in \MGContCat$; a continuation is now a list of
resumptions. A resumption is a pair of a pure continuation (as in the
base machine) and a handler closure ($\chi$).
%
A handler closure consists of an environment and a handler definition,
where the former binds the free variables that occur in the latter.
%
The identity continuation is a singleton list containing the identity
resumption, which is an empty pure continuation paired with the
identity handler closure:
%
{
\[
\kappa_0 \defas [(\nil, (\emptyset, \{\Return\;x \mapsto x\}))]
\]}%
%
Machine values are augmented to include resumptions, since an
operation invocation causes the topmost frame of the machine
continuation to be reified (and bound to the resumption parameter in
the operation clause).
%
The handler machine adds transition rules for handlers, and modifies
$(\mlab{Let})$ and $(\mlab{PureCont})$ from the base machine to account
for the richer continuation
structure. Figure~\ref{fig:abstract-machine-semantics-handlers}
depicts the new and modified rules.
%
The $(\mlab{Handle})$ rule pushes a handler closure along with an
empty pure continuation onto the continuation stack.
%
The $(\mlab{GenCont})$ rule transfers control to the success clause
of the current handler once the pure continuation is empty.
%
The $(\mlab{Op})$ rule transfers control to the matching
operation clause on the topmost handler, and during the process it
reifies the handler closure. Finally, the $(\mlab{Resume})$ rule
applies a reified handler closure, by pushing it onto the continuation
stack.
%
The handler machine has two possible final states: either it yields a
value or it gets stuck on an unhandled operation.
\begin{figure*}
\raggedright
\textbf{Transition relation}
\[
\bl
\ba{@{}l@{~}r@{~}c@{~}l@{~~}l@{}}
% Resume resumption
\mlab{Resume} & \cek{ V\;W \mid \env \mid \kappa}
&\stepsto& \cek{ \Return \; W \mid \env \mid (\sigma, \chi) \cons \kappa},\\
&&&\quad\text{ if }\val{V}{\env} = (\sigma, \chi) \\
% Let - eval M
\mlab{Let} & \cek{ \Let \; x \revto M \; \In \; N \mid \env \mid (\sigma, \chi) \cons \kappa}
&\stepsto& \cek{ M \mid \env \mid ((\env,x,N) \cons \sigma, \chi) \cons \kappa} \\
% Apply (machine) continuation - let binding
\mlab{PureCont} &\cek{ \Return \; V \mid \env \mid ((\env',x,N) \cons \sigma, \chi) \cons \kappa}
&\stepsto& \cek{ N \mid \env'[x \mapsto \val{V}{\env}] \mid (\sigma, \chi) \cons \kappa} \\
% Handle
\mlab{Handle} & \cek{ \Handle \; M \; \With \; H \mid \env \mid \kappa}
&\stepsto& \cek{ M \mid \env \mid (\nil, (\env, H)) \cons \kappa} \\
% Return - handler
\mlab{GenCont} & \cek{ \Return \; V \mid \env \mid (\nil, (\env',H)) \cons \kappa}
&\stepsto& \cek{ M \mid \env'[x \mapsto \val{V}{\env}] \mid \kappa},\\
&&&\quad\text{ if } \hret = \{\Return\; x \mapsto M\} \\
% Handle op
\mlab{Op} & \cek{ \Do \; \ell~V \mid \env \mid (\sigma, (\env', H)) \cons \kappa }
&\stepsto& \cek{ M \mid \env'[\bl
p \mapsto \val{V}\env, \\
r \mapsto (\sigma, (\env', H))] \mid \kappa }, \\
\el \\
&&&\quad\bl
\text{ if } \ell : A \to B \in \Sigma\\
\text{ and } \hell = \{\OpCase{\ell}{p}{r} \mapsto M\}
\el\\
\ea
\el
\]
\caption{Abstract machine semantics for $\HPCF$.}
\label{fig:abstract-machine-semantics-handlers}
\end{figure*}
\paragraph{Correctness}
%
The handler machine faithfully simulates the operational semantics of
$\HPCF$.
%
The proof of correctness is almost a carbon copy of the proof of
Theorem~\ref{thm:handler-simulation}. The full details are published
in Appendix B of \citet{HillerstromLL20a}.% Extending the result for
% the base machine, we formally state and prove the correspondence in
% Appendix~\ref{sec:handler-machine-correctness}.
\subsection{Realisability and asymptotic complexity}
\label{sec:realisability}
As discussed in Section~\ref{subsec:machine-realisability} the machine
is readily realisable using standard persistent functional data
structures.
%
Pure continuations on the base machine and generalised continuations
on the handler machine can be implemented using linked lists with a
time complexity of $\BigO(1)$ for the extension operation
$(\_\cons\_)$.
%
The topmost pure continuation on the handler machine may also be
extended in time $\BigO(1)$, as extending it only requires reaching
under the topmost handler closure.
%
Environments, $\env$, can be realised using a map, with a time
complexity of $\BigO(\log|\env|)$ for extension and
lookup~\citep{Okasaki99}.
The worst-case time complexity of a single machine transition is
exhibited by the rules that involve operations on the environment;
every other operation is constant time. Hence the worst-case time
complexity of a transition is $\BigO(\log|\env|)$.
%
The value interpretation function $\val{-}\env$ is defined
structurally on values. Its worst-case time complexity is exhibited by
a nesting of pairs of variables $\val{\Record{x_1,\dots,x_n}}\env$,
which has complexity $\BigO(n\log|\env|)$.
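As a hedged illustration of these representation choices, the
following Haskell fragment realises environments as persistent maps
and generalised continuations as linked lists (the names and types
here are ours, chosen for illustration; they are not part of the
machine definition):
\begin{verbatim}
import qualified Data.Map as M

type Env v    = M.Map String v   -- extension/lookup: O(log |env|)
type PureK f  = [f]              -- frames pushed with (:) in O(1)
type GenK f h = [(PureK f, h)]   -- resumptions: (pure cont, handler closure)

-- Extending the topmost pure continuation reaches under the topmost
-- handler closure and is still O(1).
pushFrame :: f -> GenK f h -> GenK f h
pushFrame phi ((sigma, chi) : kappa) = (phi : sigma, chi) : kappa
pushFrame _   []                     = error "empty continuation"
\end{verbatim}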
\paragraph{Continuation copying} On the handler machine the topmost
continuation frame can be copied in constant time due to the
persistent runtime and the layout of machine continuations. An
alternative design would be to make the runtime non-persistent
%
in which case copying a continuation frame $((\sigma, \_) \cons
\_)$ would be a $\BigO(|\sigma|)$ time operation.
\paragraph{Primitive operations on naturals}
%
Our model assumes that arithmetic operations on arbitrary natural
numbers take $\BigO(1)$ time. This is common practice in the study of
algorithms when the main interest lies
elsewhere~\citep[Section~2.2]{CormenLRS09}. If desired, one could
adopt a more refined cost model that accounted for the bit-level
complexity of arithmetic operations; however, doing so would affect
both of the languages we wish to compare equally, and thus would add
nothing but noise to the overall analysis.
%%
%% Generic search
%%
\section{Predicates, decision trees, and generic count}
\label{sec:generic-search}
We now come to the crux of the chapter. In this section and the next, we
prove that $\HPCF$ supports implementations of certain operations
with an asymptotic runtime bound that cannot be achieved in $\BPCF$
(Section~\ref{sec:pure-counting}).
%
While the positive half of this claim essentially consolidates a
known piece of folklore, the negative half appears to be new.
%
To establish our result, it will suffice to exhibit a single
`efficient' program in $\HPCF$, then show that no equivalent program
in $\BPCF$ can achieve the same asymptotic efficiency. We take
\emph{generic search} as our example.
Generic search is a modular search procedure that takes as input
a predicate $P$ on some multi-dimensional search space,
and finds all points of the space satisfying $P$.
Generic search is agnostic to the specific instantiation of $P$,
and as a result is applicable across a wide spectrum of domains.
Classic examples such as Sudoku solving~\citep{Bird06}, the
$n$-queens problem~\citep{BellS09} and graph colouring
can be cast as instances of generic search, and similar ideas have
been explored in connection with Nash equilibria and
exact real integration~\citep{Simpson98, Daniels16}.
For simplicity, we will restrict attention to search spaces of the form $\B^n$,
the set of bit vectors of length $n$.
To exhibit our phenomenon in the simplest
possible setting, we shall actually focus on the \emph{generic count} problem:
given a predicate $P$ on some $\B^n$, return the \emph{number of} points
of $\B^n$ satisfying $P$. However, we shall explain why our results
are also applicable to generic search proper.
We shall view $\B^n$ as the set of functions $\N_n \to \B$,
where $\N_n \defas \{0,\dots,n-1\}$.
In both $\BPCF$ and $\HPCF$ we may represent such functions by terms of type $\Nat \to \Bool$.
We will often informally write $\Nat_n$ in place of $\Nat$ to indicate that
only the values $0,\dots,n-1$ are relevant, but this convention has no
formal status since our setup does not support dependent types.
To summarise, in both $\BPCF$ and $\HPCF$ we will be working with the types
%
{
\[
\begin{twoeqs}
\Point & \defas & \Nat \to \Bool & \hspace*{2.0em} &
\Point_n & \defas & \Nat_n \to \Bool \\
\Predicate & \defas & \Point \to \Bool & &
\Predicate_n & \defas & \Point_n \to \Bool
\end{twoeqs}
\]
}
%
and will be looking for programs
%
{
\[
\Count_n : \Predicate_n \to \Nat
\]}%
%
such that for suitable terms $P$ representing semantic predicates $\Pi: \B^n \to \B$,
$\Count_n~P$ finds the number of points of $\B^n$ satisfying $\Pi$.
Before formalising these ideas more closely, let us look at some examples,
which will also illustrate the machinery of \emph{decision trees} that we will be using.
\subsection{Examples of points, predicates and trees}
\label{sec:predicates-points}
Consider first the following terms of type $\Point$:
{
\begin{mathpar}
\dec{q}_0 \defas \lambda \_. \True
\dec{q}_1 \defas \lambda i. i=0
\dec{q}_2 \defas \lambda i.\,
\If\;i = 0\;\Then\;\True\;
\Else\;\If\;i = 1\;\Then\;\False\;
\Else\;\bot
\end{mathpar}}%
(Here $\bot$ is the diverging term $(\Rec\; f\,i.f\,i)\,\Unit$.)
Then $\dec{q}_0$ represents $\langle{\True,\dots,\True}\rangle \in \B^n$ for any $n$;
$\dec{q}_1$ represents $\langle{\True,\False,\dots,\False}\rangle \in \B^n$ for any $n \geq 1$;
and $\dec{q}_2$ represents $\langle{\True,\False}\rangle \in \B^2$.
Next some predicates.
First, the following terms all represent the constant true predicate $\B^2 \to \B$:
{
\begin{mathpar}
\dec{T}_0 \defas \lambda q. \True
\dec{T}_1 \defas \lambda q.(q\,1; q\,0; \True)
\dec{T}_2 \defas \lambda q.(q\,0; q\,0; \True)
\end{mathpar}}%
These illustrate that in the course of evaluating a predicate term $P$ at a point $\dec{q}$,
for each $i<n$ the value of $\dec{q}$ at $i$ may be inspected zero, one or many times.
Likewise, the following all represent the `identity' predicate $\B^1 \to \B$
(here $\&\&$ is shortcut `and'):
{
\begin{mathpar}
\dec{I}_0 \defas \lambda q. q\,0
\dec{I}_1 \defas \lambda q.\, \If\;q\,0\; \Then\; \True \; \Else\; \False
\dec{I}_2 \defas \lambda q. (q\,0) \,\&\&\, (q\,0)
\end{mathpar}}%
Slightly more interestingly, for each $n$ we have the following program which determines
whether a point contains an odd number of $\True$ components:
%
{
\[
\dec{odd}_n \defas \lambda q.\, \dec{fold}\otimes\False~(\dec{map}~q~[0,\dots,n-1])
\]}%
%
Here $\dec{fold}$ and $\dec{map}$ are the standard combinators on lists, and $\otimes$ is exclusive-or.
Applying $\dec{odd}_2$ to $\dec{q}_0$ yields $\False$;
applying it to $\dec{q}_1$ or $\dec{q}_2$ yields $\True$.
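For concreteness, here is a hedged Haskell rendering of
$\dec{odd}_n$, representing points as functions of type
\texttt{Int -> Bool} (an illustrative sketch only; the names are
ours):
\begin{verbatim}
-- odd_n: fold exclusive-or over the point's first n components.
-- On booleans, exclusive-or is inequality.
oddN :: Int -> ((Int -> Bool) -> Bool)
oddN n q = foldr xor False (map q [0 .. n-1])
  where xor = (/=)

-- oddN 2 (const True) == False;  oddN 2 (== 0) == True
\end{verbatim}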
%
\medskip
%
\begin{figure}
\centering
\begin{subfigure}{0.1\textwidth}
\begin{center}
% \vspace*{-2.5ex}
\scalebox{1.3}{\TTZeroModel}
\vspace*{13.5ex}
\end{center}
\caption{$\dec{T}_0$}
\label{fig:tt0-tree}
\end{subfigure}
%
\begin{subfigure}{0.3\textwidth}
\begin{center}
\scalebox{1.3}{\ShortConjModel}
\end{center}
\caption{$\dec{I}_2$}
\label{fig:div1-tree}
\end{subfigure}
%
\begin{subfigure}{0.4\textwidth}
\begin{center}
\scalebox{1.3}{\XORTwoModel}
\end{center}
\caption{$\dec{odd}_2$}
\label{fig:xor2-tree}
\end{subfigure}
\caption{Examples of decision trees.}
\label{fig:example-models}
\end{figure}
We can think of a predicate term $P$ as participating in a `dialogue'
with a given point $Q : \Point_n$.
The predicate may \emph{query} $Q$ at some coordinate $k$;
$Q$ may \emph{respond} with $\True$ or $\False$ and this returned value
may influence the future course of the dialogue.
After zero or more such query/response pairs, the predicate may return a
final \emph{answer} ($\True$ or $\False$).
The set of possible dialogues with a given term $P$ may be organised
in an obvious way into an unrooted binary \emph{decision tree}, in
which each internal node is labelled with a query $\query k$ (with
$k<n$), and with left and right branches corresponding to the
responses $\True$, $\False$ respectively. Any point will thus
determine a path through the tree, and each leaf is labelled with an
answer $\ans \True$ or $\ans \False$ according to whether the
corresponding point or points satisfy the predicate.
Decision trees for a sample of the above predicate terms are depicted
in Figure~\ref{fig:example-models}; the relevant formal definitions
are given in the next subsection. In the case of $\dec{I}_2$, one of
the $\ans \False$ leaves will be `unreachable' if we are working in
$\BPCF$ (but reachable in a language supporting mutable state).
We think of the edges in the tree as corresponding to portions of
computation undertaken by $P$ between queries, or before delivering
the final answer. The tree is unrooted (i.e.\ starts with an edge
rather than a node) because in the evaluation of $P\,Q$ there is
potentially some `thinking' done by $P$ even before the first query or
answer is reached. For the purpose of our runtime analysis, we will
also consider \emph{timed} variants of these decision trees, in which
each edge is labelled with the number of computation steps involved.
It is possible that for a given $P$ the construction of a decision
tree may hit trouble, because at some stage $P$ either goes undefined
or gets stuck at an unhandled operation. It is also possible that the
decision tree is infinite because $P$ can keep asking queries forever.
However, we shall be restricting our attention to terms representing
\emph{total} predicates: those with finite decision trees in which
every path leads to a leaf.
In order to present our complexity results in a simple and clear form,
we will give special prominence to certain well-behaved decision
trees. For $n \in \N$, we shall say a tree is \emph{$n$-standard} if
it is total (i.e.\ every maximal path leads to a leaf labelled with an
answer) and along any path to a leaf, each coordinate $k<n$ is queried
once and only once. Thus, an $n$-standard decision tree is a complete
binary tree of depth $n+1$, with $2^n - 1$ internal nodes and $2^n$
leaves. However, there is no constraint on the order of the queries,
which indeed may vary from one path to another. One pleasing property
of this notion is that for a predicate term with an $n$-standard
decision tree, the number of points in $\B^n$ satisfying the predicate
is precisely the number of $\ans \True$ leaves in the tree.
Of the examples we have given, the tree for $\dec{T}_0$ is 0-standard;
those for $\dec{I}_0$ and $\dec{I}_1$ are 1-standard; that for
$\dec{T}_1$ is 2-standard; and that for $\dec{odd}_n$ is $n$-standard.
The trees for $\dec{T}_2$ and $\dec{I}_2$ are not $n$-standard for any
$n$, since each repeats the query $\query 0$ along some path.
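To make path-following and leaf-counting concrete, the following
Haskell sketch models total decision trees algebraically (purely
illustrative; the formal development below instead represents trees
as partial functions from addresses to labels):
\begin{verbatim}
-- A total decision tree: an answer leaf, or a query ?k with a
-- subtree for each response (True-branch first).
data DTree = Ans Bool | Query Int DTree DTree

-- Follow the path determined by a semantic point pi : Int -> Bool;
-- this implements the path evaluation used to define denotations below.
follow :: DTree -> (Int -> Bool) -> Bool
follow (Ans b)       _  = b
follow (Query k t f) pi = follow (if pi k then t else f) pi

-- For an n-standard tree, the number of points satisfying the
-- predicate is exactly the number of (Ans True) leaves.
countTrue :: DTree -> Int
countTrue (Ans b)       = if b then 1 else 0
countTrue (Query _ t f) = countTrue t + countTrue f
\end{verbatim}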
\subsection{Formal definitions}
\label{sec:predicate-models}
We now formalise the above notions. We will present our definitions
in the setting of $\HPCF$, but everything can clearly be relativised
to $\BPCF$ with no change to the meaning in the case of $\BPCF$
terms. For the purpose of this subsection we fix $n \in \N$, set
$\N_n \defas \{0,\ldots,n-1\}$, and use $k$ to range over $\N_n$. We
write $\B$ for the set of booleans, which we shall identify with the
(encoded) boolean values of $\HPCF$, and use $b$ to range over $\B$.
As suggested by the foregoing discussion, we will need to work with
both syntax and semantics. For points, the relevant definitions are
as follows.
\begin{definition}[$n$-points]\label{def:semantic-n-point}
A closed value $Q : \Point$ is said to be a \emph{syntactic $n$-point} if:
%
{
\[
\forall k \in \N_n.\,\exists b \in \B.~ Q~k \reducesto^\ast \Return\;b
\]}%
%
A \emph{semantic $n$-point} $\pi$ is simply a mathematical function
$\pi: \N_n \to \B$. (We shall also write $\pi \in \B^n$.) Any
syntactic $n$-point $Q$ is said to \emph{denote} the semantic
$n$-point $\val{Q}$ given by:
%
{
\[
\forall k \in \N_n,\, b \in \B.~ \val{Q}(k) = b \,\Iff\, Q~k \reducesto^\ast \Return\;b
\]}%
%
Any two syntactic $n$-points $Q$ and $Q'$ are said to be
\emph{distinct} if $\val{Q} \neq \val{Q'}$.
%
\end{definition}
By default, the unqualified term \emph{$n$-point} will from now on
refer to syntactic $n$-points.
Likewise, we wish to work with predicates both syntactically and
semantically. By a \emph{semantic $n$-predicate} we shall mean simply
a mathematical function $\Pi: \B^n \to \B$. One slick way to define
syntactic $n$-predicates would be as closed terms $P:\Predicate$ such
that for every $n$-point $Q$, $P\,Q$ evaluates to either
$\Return\;\True$ or $\Return\;\False$. For our purposes, however, we
shall favour an approach to $n$-predicates via \emph{decision trees},
which will yield more information on their behaviour.
We will model decision trees as certain partial functions from
\emph{addresses} to \emph{labels}. An address will specify the
position of a node in the tree via the path that leads to it, while a
label will represent the information present at a node. Formally:
\begin{definition}[untimed decision tree]\label{def:decision-tree}
~
\begin{enumerate}[(i)]
\item The address set $\Addr$ is simply the set $\B^\ast$ of finite
lists of booleans. If $bs,bs' \in \Addr$, we write
$bs \sqsubseteq bs'$ (resp.\ $bs \sqsubset bs'$) to mean that $bs$
is a prefix (resp.\ proper prefix) of $bs'$.
\item The label set $\Lab$ consists of \emph{queries} parameterised
by a natural number and \emph{answers} parameterised by a boolean:
{
\[
\Lab \defas \{\query k \mid k \in \N \} \cup \{\ans b \mid b \in \B \}
\]
}%
\item An (untimed) decision tree is a partial function
$\tree : \Addr \pto \Lab$ such that:
\begin{itemize}
\item The domain of $\tree$ (written $dom(\tree)$) is prefix closed.
\item Answer nodes are always leaves:
if $\tree(bs) = \ans b$ then $\tree(bs')$ is undefined whenever $bs \sqsubset bs'$.
\end{itemize}
\end{enumerate}
\end{definition}
As our goal is to reason about the time complexity of generic count
programs and their predicates, it is also helpful to decorate decision
trees with timing data that records the number of machine steps taken
for each piece of computation performed by a predicate:
\begin{definition}[timed decision tree]\label{def:timed-decision-tree}
A timed decision tree is a partial function $\tree : \Addr \pto
\Lab \times \N$ such that its first projection $bs \mapsto \tree(bs).1$
is a decision tree.
%
We write $\tl$ for the first projection ($bs \mapsto \tree(bs).1$) and
$\ts$ for the second projection ($bs \mapsto \tree(bs).2$) of a timed
decision tree.
\end{definition}
Here we think of $\steps(\tree)(bs)$ as the computation time
associated with the edge whose \emph{target} is the node addressed by
$bs$.
We now come to the method for associating a specific tree with a given
term $P$. One may think of this as a kind of denotational semantics,
but here we shall extract a tree from a term by purely operational
means using our abstract machine model. The key idea is to try
applying $P$ to a distinguished free variable $q: \Point$, which we
think of as an `abstract point'. Whenever $P$ wants to interrogate its
argument at some index $i$, the computation will get stuck at some
term $q\,i$: this both flags up the presence of a query node in the
decision tree, and allows us to explore the subsequent behaviour under
both possible responses to this query.
The core of our definition is couched in terms of abstract machine configurations.
We write $\Conf_q$ for the set of $\HPCF$ configurations possibly involving $q$
(but no other free variables).
We write $a \simeq b$ for Kleene equality: either both $a$ and $b$ are
undefined or both are defined and $a = b$.
It is convenient to define the timed tree and then extract the untimed one from it:
\begin{definition}\label{def:model-construction}
~
\begin{enumerate}[(i)]
\item Define $\tr: \Conf_q \to \Addr \pto (\Lab \times \N)$ to be
the minimal family of partial functions satisfying the following
equations:
%
{
\begin{mathpar}
\ba{@{}r@{~}c@{~}l@{\qquad}l@{}}
\tr(\cek{\Return\;W \mid \env \mid \nil})\, \nil &~=~& (!b, 0),
&\text{if }\val{W}\env = b \smallskip\\
%% SL: the following clauses are useless as the value term returned
%% will *always* be a variable!
%
%% \tr(\cek{\Return\;\True \mid \env \mid \nil})\, \nil &~=~& (!\True, 0) \smallskip\\
%% \tr(\cek{\Return\;\False \mid \env \mid \nil})\, \nil &~=~& (!\False, 0) \smallskip\\
\tr(\cek{z\,V \mid \env \mid \kappa})\, \nil &~=~& (?\val{V}{\env}, 0),
&\text{if } \gamma(z) = q \smallskip\\
\tr(\cek{z\,V \mid \env \mid \kappa})\, (b \cons bs) &~\simeq~& \tr(\cek{\Return\;b \mid \env \mid \kappa})\,bs,
& \text{if } \gamma(z) = q \smallskip\\
\tr(\cek{M \mid \env \mid \kappa})\, bs &~\simeq~& \mathsf{inc}\,(\tr(\cek{M' \mid \env' \mid \kappa'})\, bs), &\\
&&\multicolumn{2}{l}{\quad\text{if } \cek{M \mid \env \mid \kappa} \stepsto \cek{M' \mid \env' \mid \kappa'}}
\ea
\end{mathpar}}%
%
Here $\mathsf{inc}(\ell, s) = (\ell, s + 1)$, and in all of the above equations
$\gamma(q) = \gamma'(q) = q$.
Clearly $\tr(\conf)$ is a timed decision tree for any $\conf \in \Conf_q$.
%
\item The timed decision tree of a computation term is obtained by
placing it in the initial configuration:
%
$\tr(M) \defas \tr(\cek{M, \emptyset[q \mapsto q], \kappa_0})$.
%
\item The timed decision tree of a closed value $P:\Predicate$ is
$\tr(P\,q)$. Since $q$ plays the role of a dummy argument, we will
usually omit it and write $\tr(P)$ for $\tr(P\,q)$.
\item The untimed decision tree $\tru(P)$ is obtained from $\tr(P)$
via first projection: $\tru(P) = \labs(\tr(P))$.
\end{enumerate}
\end{definition}
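The spirit of this construction can also be rendered compactly with
the continuation monad, answering every query with both booleans. The
sketch below assumes predicates written monadically, and repeats the
\texttt{DTree} type from the earlier sketch; it is an illustration of
Definition~\ref{def:model-construction}, not a replacement for it:
\begin{verbatim}
import Control.Monad.Cont

data DTree = Ans Bool | Query Int DTree DTree

-- Extract the (untimed) decision tree of a monadic predicate: each
-- query node records both possible continuations of the dialogue.
toTree :: ((Int -> Cont DTree Bool) -> Cont DTree Bool) -> DTree
toTree pred = runCont (pred query) Ans
  where query k = cont (\resume -> Query k (resume True) (resume False))
\end{verbatim}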
If the execution of a configuration $\conf$ runs forever or gets stuck
at an unhandled operation, then $\tr(\conf)(bs)$ will be undefined for
all $bs$. Although this is admitted by our definition of decision
tree, we wish to exclude such behaviours for the terms we accept as
valid predicates. Specifically, we frame the following definition:
\begin{definition} \label{def:n-predicate}
A decision tree $\tree$ is an \emph{$n$-predicate tree} if it satisfies the following:
\begin{itemize}
\item For every query $\query k$ appearing in $\tree$, we have $k \in \N_n$.
\item Every query node has both children present:
\[ \forall bs \in \Addr,\, k \in \N_n,\, b \in \B.~ \tree(bs) = \query k \Implies \snoc{bs}{b} \in dom(\tree) \]
\item All paths in $\tree$ are finite (so every maximal path terminates in an answer node).
\end{itemize}
A closed term $P: \Predicate$ is a \emph{(syntactic) $n$-predicate} if $\tru(P)$ is an $n$-predicate tree.
\end{definition}
If $\tree$ is an $n$-predicate tree, clearly any semantic $n$-point
$\pi$ gives rise to a path $b_0 b_1 \dots $ through $\tree$, given
inductively by: {
\[ \forall j.~ \mbox{if~} \tau(b_0\dots b_{j-1}) = \query k_j \mbox{~then~} b_j = \pi(k_j) \]
}%
This path will terminate at some answer node $b_0 b_1 \dots b_{r-1}$ of $\tree$,
and we may write $\tree \bullet \pi \in \B$ for the answer at this leaf.
\begin{proposition} \label{prop:pred-tree}
If $P$ is an $n$-predicate and $Q$ is an $n$-point, then
$P\,Q \reducesto^\ast \Return\;b$ where $b = \tru(P) \bullet \val{Q}$.
\end{proposition}
\begin{proof}
By interleaving the computation for the relevant path through
$\tru(P)$ with computations for queries to $Q$, and appealing to the
correspondence between the small-step reduction and abstract machine
semantics. We omit the routine details.
\end{proof}
It is thus natural to define the \emph{denotation} of an $n$-predicate
$P$ to be the semantic $n$-predicate $\val{P}$ given by
$\val{P}(\pi) = \tru(P) \bullet \pi$.
As mentioned earlier, we shall also be interested in a more constrained
class of trees and predicates:
\begin{definition}[$n$-standard trees and predicates]
An $n$-predicate tree $\tree$ is said to be $n$-standard if the following hold:
\begin{itemize}
\item The domain of $\tree$ is precisely $\Addr_n$, the set of bit vectors of length $\leq n$.
\item There are no repeated queries along any path in $\tree$:
\[ \forall bs, bs' \in dom(\tree),\, k \in \N_n.~ bs \sqsubseteq bs' \wedge \tree(bs)=\tau(bs')=\query k \Implies bs=bs' \]
\end{itemize}
A timed decision tree $\tree$ is $n$-standard if its underlying untimed
decision tree ($bs \mapsto \tree(bs).1$) is so.
An $n$-predicate $P$ is $n$-standard if $\tr(P)$ is $n$-standard.
\end{definition}
Clearly, in an $n$-standard tree, each of the $n$ queries
$\query 0,\dots, \query(n-1)$ appears exactly once on the path to any
leaf, and there are $2^n$ leaves, all of them answer nodes.
\subsection{Specification of counting programs}
\label{sec:counting}
We can now specify what it means for a program
$\Countprog : \Predicate \to \Nat$ to implement counting.
\begin{definition} \label{def:counting-function}
(i) The \emph{count} of a semantic $n$-predicate $\Pi$, written $\sharp \Pi$,
is simply the number of semantic $n$-points $\pi \in \B^n$ for which $\Pi(\pi)=\True$.
(ii) If $P$ is any $n$-predicate, we say that $\Countprog$ \emph{correctly counts} $P$ if
$\Countprog\,P \reducesto^\ast \Return\;m$, where $m = \sharp \val{P}$.
\end{definition}
This definition gives us the flexibility to talk about counting
programs that operate on various classes of predicates, allowing us to
state our results in their strongest natural form. On the positive
side, we shall shortly see that there is a single `efficient' program
in $\HPCF$ that correctly counts all $n$-standard $\HPCF$
predicates for every $n$; in Section~\ref{sec:beyond} we improve this
to one that correctly counts \emph{all} $n$-predicates of $\HPCF$.
On the negative side, we shall show that an $n$-indexed family of
counting programs written in $\BPCF$, even if only required to work
correctly on $n$-standard $\BPCF$ predicates, can never compete
with our $\HPCF$ program for asymptotic efficiency even in the most
favourable cases.
\subsection{Efficient generic count with effects}
\label{sec:effectful-counting}
Now we are ready to implement a generic count function using effect
handlers. In fact, the implementation is so generic that it works on
all $n$-standard predicates.
The program uses a variation of the handler for nondeterministic
computation from Section~\ref{sec:tiny-unix-time}. The main idea is
to implement points as nondeterministic computations using the
$\Branch$ operation such that the handler may respond to every query
twice, by invoking the provided resumption with $\True$ and
subsequently $\False$. The key insight is that the resumption
restarts computation at the invocation site of $\Branch$, which means
that prior computation need not be repeated. In other words, the
resumption ensures that common portions of computations prior to any
query are shared between both branches.
We stipulate that $\Branch : \One \to \Bool \in \Sigma$ is a
distinguished operation that may not be handled in the definition of
any input predicate (it must be forwarded according to the default
convention).
%
The algorithm is then as follows.
%
{
\[
\bl
\ECount : ((\Nat \to \Bool) \to \Bool) \to \Nat\\
\ECount\,pred \defas
\bl
\Handle\; pred\,(\lambda\_. \Do\; \Branch\; \Unit)\; \With\\
\quad\ba[t]{@{}l@{\hspace{1.5ex}}c@{\hspace{1.5ex}}l@{}}
\Return\, x &\mapsto& \If\; x\; \Then\;\Return\; 1 \;\Else\;\Return\; 0 \\
\OpCase{\Branch}{\Unit}{r} &\mapsto&
\ba[t]{@{}l}
\Let\;x_\True \revto r~\True\; \In\\
\Let\;x_\False \revto r~\False\;\In\;
x_\True + x_\False \\
\ea
\ea \\
\el
\el
\]}%
%
The handler applies predicate $pred$ to a single `generic point'
defined using $\Branch$. The boolean return value is interpreted as a
single solution, whilst $\Branch$ is interpreted by alternately
supplying $\True$ and $\False$ to the resumption and summing the
results. The sharing enabled by the use of the resumption is exactly
the `magic' we need to make it possible to implement generic count
more efficiently in $\HPCF$ than in $\BPCF$.
%
A curious feature of $\ECount$ is that it works for all $n$-standard
predicates without having to know the value of $n$. This is because
the generic point $(\Superpoint)$ informally serves as a
`superposition' of all possible points.
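Readers who prefer running code may find the following Haskell
approximation helpful. It simulates the multi-shot resumption with
the standard continuation monad, so predicates must be written
monadically; it is a sketch of the idea behind $\ECount$, not a
transcription of the $\HPCF$ program:
\begin{verbatim}
import Control.Monad.Cont

-- Branch is interpreted by resuming with True and then with False
-- and summing the two counts; computation performed before a query
-- is shared by both branches.
effCount :: ((Int -> Cont Int Bool) -> Cont Int Bool) -> Int
effCount pred = runCont (pred genericPoint) (\x -> if x then 1 else 0)
  where genericPoint _ = cont (\resume -> resume True + resume False)

-- e.g. effCount (\q -> (/=) <$> q 0 <*> q 1) == 2   (the odd_2 predicate)
\end{verbatim}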
We may now articulate the crucial correctness and efficiency
properties of $\ECount$.
\begin{theorem}\label{thm:complexity-effectful-counting}
The following hold for any $n \in \N$ and any $n$-standard predicate $P$ of $\HPCF$:
%
\begin{enumerate}
\item $\ECount$ correctly counts $P$.
\item The number of machine steps required to evaluate $\ECount~P$ is
%
{
\[
\left( \displaystyle\sum_{bs \in \Addr_n} \steps(\tr(P))(bs) \right) ~+~ \BigO(2^n)
\]}%
\end{enumerate}
\end{theorem}
%
\begin{proof}[Proof outline]
Suppose $bs \in \Addr_n$, with $|bs|=j$. From the construction of
$\tr(P)$, one may easily read off a configuration $\conf_{bs}$ whose
execution is expected to compute the count for the subtree below
node $bs$, and we can explicitly describe the form $\conf_{bs}$ will
have. We write $\dec{Hyp}(bs)$ for the claim that $\conf_{bs}$
correctly counts this subtree, and does so within the following
number of steps: {\small
\[
\left( \displaystyle\sum_{bs' \in \Addr_n,\; bs' \sqsupset bs} \steps(\tr(P))(bs') \right) ~+~ 9 * (2^{n-j} - 1) + 2*2^{n-j}
\]
}%
%
The $9*(2^{n-j}-1)$ expression is the number of machine steps
contributed by the $\Branch$-case inside the handler, whilst the
$2*2^{n-j}$ expression is the number of machine steps contributed by
the $\Return$-case.
%
We prove $\dec{Hyp}(bs)$ by a laborious but routine downwards
induction on the length of $bs$. The proof combines counting of
explicit machine steps with `oracular' appeals to the assumed
behaviour of $P$ as modelled by $\tr(P)$. Once
$\dec{Hyp}(\nil)$ is established, both halves of the theorem
follow easily.
%
The proof details and development of the proof gadgets are in
Appendix~\ref{sec:positive-theorem}.
\end{proof}
%
The above formula can clearly be simplified for certain reasonable
classes of predicates. For instance, suppose we fix some constant
$c \in \N$, and let $\mathcal{P}_{n,c}$ be the class of all
$n$-standard predicates $P$ for which all the edge times
$\steps(\tr(P))(bs)$ are bounded by $c$. (Clearly, many reasonable
predicates will belong to $\mathcal{P}_{n,c}$ for some modest value of
$c$.) Since the number of sequences $bs$ in question is less than
$2^{n+1}$, we may read off from the above formula that for predicates
in $\mathcal{P}_{n,c}$, the runtime of $\ECount$ is $\BigO(c2^n)$.
Alternatively, should we wish to use the finer-grained cost model that
assigns an $\BigO(\log |\gamma|)$ runtime to each abstract machine step
(see Section~\ref{sec:realisability}), we may note that any
environment $\gamma$ arising in the computation contains at most $n$
entries introduced by the let-bindings in $\ECount$, and (if $P \in
\mathcal{P}_{n,c}$) at most $\BigO(cn)$ entries introduced by $P$.
Thus, the time for each step in the computation remains $\BigO(\log c
+ \log n)$, and the total runtime for $\ECount$ is $\BigO(c 2^n (\log
c + \log n))$.
One might also ask about the execution time for an implementation of
$\HPCF$ that performs genuine copying of continuations, as in systems
such as MLton~\cite{Fluet20}.
%
As MLton copies the entire continuation (stack), whose size is
$\BigO(n)$, at each of the $2^n$ branches, continuation copying alone
takes time $\BigO(n2^n)$ and the effectful implementation offers no
performance benefit (Tables~\ref{tbl:results-mlton-queens} and
\ref{tbl:results-mlton-integration}).
%
More refined implementations \citep{FarvardinR20, FlattD20} that are
able to take advantage of delimited control operators or sharing in
copies of the stack can bring the complexity of continuation copying
back down to $\BigO(2^n)$.
Finally, one might consider another dimension of cost, namely the
space used by $\ECount$.
%
Consider a class $\mathcal{Q}_{n,c,d}$ of $n$-standard predicates $P$
for which the edge times in $\tr(P)$ never exceed $c$ and the sizes of
pure continuations never exceed $d$.
%
If we consider any $P \in \mathcal{Q}_{n,c,d}$ then the total number
of environment entries is bounded by $cn$, taking up space
$\BigO(cn(\log cn))$.
%
We must also account for the pure continuations. There are $n$ of
these, each taking at most $d$ space.
%
Thus the total space is $\BigO(n(d + c(\log c + \log n)))$.
\section{Pure generic count: a lower bound}
\label{sec:pure-counting}
\newcommand{\naivecount}{\dec{naivecount}}
\newcommand{\lazycount}{\dec{lazycount}}
\newcommand{\BergerCount}{\dec{BergerCount}}
\newcommand{\bestshot}{\dec{bestshot}}
\newcommand{\FF}{\mathcal{F}}
\newcommand{\GG}{\mathcal{G}}
We have shown that there is an implementation of generic count in
$\HPCF$ with a runtime bound of $\BigO(2^n)$ for certain well-behaved
predicates. We now prove that no implementation in $\BPCF$ can match
this: in fact, we establish a lower bound of $\Omega(n2^n)$ for the
runtime of any counting program on \emph{any} $n$-standard predicate.
This mathematically rigorous characterisation of the efficiency gap
between languages with and without effect handlers is the objective of
this chapter.
% One might ask at this point whether the claimed lower bound could not
% be obviated by means of some known continuation passing style (CPS) or
% monadic transform of effect handlers
% \cite{HillerstromLAS17,Leijen17}. This can indeed be done, but only by
% dint of changing the type of our predicates $P$ which would defeat the
% purpose of our enquiry. We want to investigate the relative power of
% various languages for manipulating predicates that are given to us in
% a certain way which we do not have the luxury of choosing.
To get a feel for the issues that the proof must address, let us
consider how one might construct a counting program in $\BPCF$. The
\naive approach, of course, would be simply to apply the given
predicate $P$ to all $2^n$ possible $n$-points in turn, keeping a
count of those on which $P$ yields true. It is a routine exercise to
implement this approach in $\BPCF$, yielding (parametrically in $n$) a
program
%
{
\[
\naivecount_n ~: ((\Nat_n \to \Bool) \to \Bool) \to \Nat
\]}%
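A hedged Haskell sketch of this naive strategy, with points as
functions of type \texttt{Int -> Bool} (all names are ours):
\begin{verbatim}
-- All 2^n points of B^n, each as a function Int -> Bool.
allPoints :: Int -> [Int -> Bool]
allPoints n = [ \i -> bs !! i | bs <- sequence (replicate n [False, True]) ]

-- Apply the predicate to every point in turn, counting successes.
naiveCount :: Int -> ((Int -> Bool) -> Bool) -> Int
naiveCount n pred = length (filter pred (allPoints n))
\end{verbatim}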
%
Since the evaluation of an $n$-standard predicate on an individual
$n$-point must clearly take time $\Omega(n)$, we have that the
evaluation of $\naivecount_n$ on any $n$-standard predicate $P$ must
take time $\Omega(n2^n)$. If $P$ is not $n$-standard, the $\Omega(n)$
lower bound need not apply, but we may still say that the evaluation
of $\naivecount_n$ on \emph{any} predicate $P$ (at level $n$) must
take time $\Omega(2^n)$.
One might at first suppose that these properties are inevitable for
any implementation of generic count within $\BPCF$, or indeed any
purely functional language: surely, the only way to learn something
about the behaviour of $P$ on every possible $n$-point is to apply $P$
to each of these points in turn? It turns out, however, that the
$\Omega(2^n)$ lower bound can sometimes be circumvented by
implementations that cleverly exploit \emph{nesting} of calls to $P$.
%
The germ of the idea may be illustrated within $\BPCF$ itself.
Suppose that we first construct some program
%
{
\[
\bestshot_n ~: ((\Nat_n \to \Bool) \to \Bool) \to (\Nat_n \to \Bool)
\]}%
%
which, given a predicate $P$, returns some $n$-point $Q$ such that
$P~Q$ evaluates to true, if such a point exists, and any point at all
if no such point exists.
%
(In other words, $\bestshot_n$ embodies Hilbert's choice operator
$\varepsilon$ on predicates.)
%
It is once again routine to construct such a program by \naive means;
and we may moreover assume that for any $P$, the evaluation of
$\bestshot_n\;P$ takes only constant time, all the real work being
deferred until the argument of type $\Nat_n$ is supplied.
Now consider the following program:
%
{
\[
\lazycount_n \defas \lambda pred.\; \If \; pred~(\bestshot_n\;pred)\; \Then\; \naivecount_n\;pred\; \Else\; \Return\;0
\]}%
%
Here the term $pred~(\bestshot_n~pred)$ serves to test whether there
exists an $n$-point satisfying $pred$: if there is not, our count
program may return $0$ straightaway. It is thus clear that
$\lazycount_n$ is a correct implementation of generic count, and also
that if $pred$ is the predicate $\lambda q.\False$ then
$\lazycount_n\;pred$ returns $0$ within $O(1)$ time, thus violating
the $\Omega(2^n)$ lower bound suggested above.
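In Haskell, laziness provides exactly this deferral; a
self-contained, purely illustrative sketch:
\begin{verbatim}
-- All 2^n points of B^n (as in the naive sketch above).
allPoints :: Int -> [Int -> Bool]
allPoints n = [ \i -> bs !! i | bs <- sequence (replicate n [False, True]) ]

-- Hilbert choice by naive search; laziness defers the search until
-- the returned point is first applied to an index.
bestShot :: Int -> ((Int -> Bool) -> Bool) -> (Int -> Bool)
bestShot n pred = head (filter pred (allPoints n) ++ allPoints n)

lazyCount :: Int -> ((Int -> Bool) -> Bool) -> Int
lazyCount n pred
  | pred (bestShot n pred) = length (filter pred (allPoints n))
  | otherwise              = 0

-- lazyCount n (\_ -> False) returns 0 without running the search:
-- the predicate never inspects its argument.
\end{verbatim}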
This might seem like a footling point, as $\lazycount_n$ offers this
efficiency gain \emph{only} on (certain implementations of) the
constantly false predicate. However, it turns out that by a recursive
application of this nesting trick, we may arrive at a generic count
program that spectacularly defies the $\Omega(2^n)$ lower bound for an
interesting class of (non-$n$-standard) predicates, and indeed proves
quite viable for counting solutions to `$n$-queens' and similar
problems. We shall refer to this program as $\BergerCount$, as it is
modelled largely on Berger's PCF implementation of the so-called
\emph{fan functional}~\citep{Berger90, LongleyN15}. This program is of
interest in its own right and is briefly presented in
Appendix~\ref{sec:berger-count}. It actually requires a mild
extension of $\BPCF$ with a `memoisation' primitive to achieve the
effect of call-by-need evaluation; but such a language can still be
seen as purely `functional' in the same sense as Haskell.
In the meantime, however, the moral is that the use of \emph{nesting}
can lead to surprising phenomena which sometimes defy intuition
(\citet{Escardo07} gives some striking further examples). What we now
wish to show is that for \emph{$n$-standard} predicates, the \naive
lower bound of $\Omega(n2^n)$ cannot in fact be circumvented. The
example of $\BergerCount$ both highlights the need for a rigorous
proof of this and tells us that such a proof will need to pay
particular attention to the possibility of nesting.
We now proceed to the proof itself. We here present the argument in
the basic setting of $\BPCF$; later we will see how a more delicate
argument applies to languages with mutable state
(Section~\ref{sec:mutable-state}).
As a first step, we note that where lower bounds are concerned, it
will suffice to work with the small-step operational semantics of
$\BPCF$ rather than the more elaborate abstract machine model
employed in Section~\ref{sec:base-abstract-machine}. This is because,
as observed in Section~\ref{sec:base-abstract-machine}, there is a
tight correspondence between these two execution models such that for
the evaluation of any closed term, the number of abstract machine
steps is always at least the number of small-step reductions. Thus,
if we are able to show that the number of small-step reductions for
any generic count program in $\BPCF$ on any $n$-standard predicate
is $\Omega(n2^n)$, this will establish the desired lower bound on the
runtime.
Let us suppose, then, that $\Countprog$ is a program of $\BPCF$ that correctly counts
all $n$-standard predicates of $\BPCF$ for some specific $n$.
We now establish a key lemma, which vindicates the \naive intuition
that if $P$ is $n$-standard, the only way for $\Countprog$ to discover the correct
value for $\sharp \val{P}$ is to perform $2^n$ separate applications $P~Q$
(allowing for the possibility that these applications need not
be performed `in turn' but might be nested in some complex way).
\begin{lemma}[No shortcuts]\label{lem:no-shortcuts}
Suppose $\Countprog$ correctly counts all $n$-standard predicates of $\BPCF$.
If $P$ is an $n$-standard predicate,
then $\Countprog$ applies $P$ to at least $2^n$ distinct $n$-points.
More formally, for any of the $2^n$ possible semantic $n$-points
$\pi : \N_n \to \B$, there is a term $\EC[P~Q]$ appearing in the
small-step reduction of $\Countprog~P$ such that $Q$ is an $n$-point and $\val{Q} = \pi$.
\end{lemma}
\begin{proof}
Suppose for a contradiction that $\pi$ is some semantic $n$-point
such that no application $P~Q$ with $\val{Q}=\pi$ ever arises in the
course of computing $\Countprog~P$. Let $\tree$ be the untimed
decision tree for $P$. Let $l$ be the maximal path through $\tau$
associated with $\pi$: that is, the one we construct by responding
to each query $\query k$ with $\pi(k)$. Then $l$ is a leaf node such
that $\tree(l) = \ans (\tree \bullet \pi)$. We now let $\tree'$ be
the tree obtained from $\tree$ by simply negating this answer value
at $l$.
It is a simple matter to construct a $\BPCF$ $n$-standard predicate
$P'$ whose decision tree is $\tree'$. This may be done just by
mirroring the structure of $\tree'$ by nested $\If$ statements; we
omit the easy details.
Since the numbers of true-leaves in $\tree$ and $\tree'$ differ by
1, it is clear that if $\Countprog$ indeed correctly counts all
$n$-standard predicates, then the values returned by $\Countprog~P$
and $\Countprog~P'$ will have an absolute difference of 1. On the
other hand, we shall argue that if the computation of $\Countprog~P$
never actually `visits' the leaf $l$ in question, then $\Countprog$
will be unable to detect any difference between $P$ and $P'$.
The situation is reminiscent of Milner's \emph{context
lemma}~\cite{Milner77}, which (loosely) says that essentially the
only way to observe a difference between two programs is to apply
them to some argument on which they differ. Traditional proofs of
the context lemma reason by induction on length of reduction
sequences, and our present proof is closely modelled on these.
We shall make frequent use of term contexts $M[-]$ with a hole of
type $\Predicate$ (which may appear zero, one or more times in $M$)
in order to highlight particular occurrences of $P$ within a term.
The following definition enables us to talk about computations that
avoid the critical point $\pi$:
\begin{definition}[Safe terms]\label{def:safe}
If $M[-]$ is such a context of ground type, let us say $M[-]$ is \emph{safe} if
\begin{itemize}
\item $M[P]$ is closed, and $M[P] \reducesto^\ast \Return\;W$ for some closed
ground type value $W$;
\item For any term $\EC[P~Q]$ appearing in the reduction of $M[P]$, where the
applicand $P$ in $P~Q$ is a residual of one of the abstracted occurrences in $M[P]$,
we have that $\val{Q} \neq \pi$.
\end{itemize}
We may express this as `$M[P]$ is safe' when it is clear which occurrences of $P$
we intend to abstract.
\end{definition}
For example, our current hypotheses imply that $\Countprog~P$ is safe
(formally, $\Countprog'[-] \defas \Countprog\;-$ is safe).
%
We may now prove the following:
\begin{lemma}\label{lem:replacement}
~
\begin{enumerate}[(i)]
\item Suppose $Q[-] : \Point$ and $k : \Nat$ are values such that
$Q[P]~k$ is safe, and suppose $Q[P]~k \reducesto^m \Return\;b$
where $m \in \N$. Then also $Q[P']~k \reducesto^\ast \Return\;b$.
\item Suppose $P~Q[P]$ is safe and $P~Q[P] \reducesto^m
\Return\;b$. Then also
$P'~Q[P'] \reducesto^\ast \Return\;b$.
\end{enumerate}
\end{lemma}
We prove these claims by simultaneous induction on the computation
length $m$. Both claims are vacuous when $m=0$ as neither $Q[P]~k$
nor $P~Q[P]$ is a $\Return$ term. We therefore assume $m>0$ where
both claims hold for all $m'<m$.
(i) Let $p:\Predicate$ be a distinguished free variable, and
consider the behaviour of $Q[p]~k$. If this reduces to a value
$\Return\,W$, then also $Q[P]~k \reducesto^\ast\Return\,W$, whence
$W = b$ and also $Q[P']~k \reducesto^\ast \Return\;b$ as required.
Otherwise, the reduction of $Q[p]~k$ will get stuck at some term
$M_0 = \EC_0[p~Q_0[p], p]$.
%
Here the first hole in $\EC_0[-,-]$ is in the evaluation position,
and the second hole abstracts all remaining occurrences of $p$
within $M_0$. We may also assume that $Q_0[-]$ abstracts all
occurrences of $p$ in $Q_0[p]$.
Correspondingly, the reduction of $Q[P]~k$ will reach
$\EC_0[P~Q_0[P], P]$ and then proceed with the embedded reduction of
$P~Q_0[P]$. Note that $P~Q_0[P]$ will be safe because $Q[P]~k$ is.
So let us suppose that $P~Q_0[P] \reducesto^\ast \Return\;b_0$,
whence $Q[P]~k \reducesto^\ast \EC_0[\Return\;b_0, P]$.
We may now investigate the subsequent reduction behaviour of
$Q[P]~k$ by considering the reduction of $\EC_0[\Return\;b_0, p]$.
Once again, this may reduce to a value $\Return\;W$, in which case
$W = b$ and our computation is complete. Otherwise, the reduction
of $\EC_0[\Return\;b_0, p]$ will get stuck at some $M_1 =
\EC_1[p~Q_1[p], p]$, and we may again proceed as above.
By continuing in this way, we may analyse the reduction of $Q[P]~k$
as follows.
%
{
\begin{mathpar}
\begin{eqs}
Q[P]~k & \reducesto^\ast & \EC_0[P~Q_0[P], P] ~\reducesto^\ast \EC_0[\Return\;b_0, P]
\reducesto^\ast \EC_1[P~Q_1[P], P]\\
&\reducesto^\ast & \EC_1[\Return\;b_1, P]
\reducesto^\ast \dots
\reducesto^\ast \EC_{r-1}[P~Q_{r-1}[P], P]\\
&\reducesto^\ast& \EC_{r-1}[\Return\;b_{r-1}, P]
\reducesto~ \Return\;b
\end{eqs}
\end{mathpar}
}%
Here the terms $P~Q_j[P]$ will be safe, and the reductions $P~Q_j[P]
\reducesto^\ast \Return\;b_j$ each have length $<m$. We may
therefore apply part~(ii) of the induction hypothesis and conclude
that also $P'~Q_j[P'] \reducesto^\ast \Return\;b_j$.
%
Furthermore, the remaining segments of the above computation are all
obtained as instantiations of `generic' reduction sequences
involving $p$, so these segments will remain valid if $p$ is
instantiated to $P'$. Reassembling everything, we have a valid
reduction sequence:
%
%
{
\begin{mathpar}
\begin{eqs}
Q[P']~k & \reducesto^\ast & \EC_0[P'~Q_0[P'], P'] \reducesto^\ast~ \EC_0[\Return\;b_0, P']
\reducesto^\ast \EC_1[P'~Q_1[P'], P']\\
& \reducesto^\ast & \EC_1[\Return\;b_1, P']
\reducesto^\ast \dots
\reducesto^\ast \EC_{r-1}[P'~Q_{r-1}[P'], P']\\
&\reducesto^\ast & \EC_{r-1}[\Return\;b_{r-1}, P']
\reducesto \Return\;b
\end{eqs}
\end{mathpar}
}%
%
This establishes the induction step for part~(i).
(ii) We may apply a similar analysis to the computation of $P~Q[P]$
to detect the places where $Q[P]$ is applied to an argument. We do
this by considering the reduction behaviour of $P~q$, where
$q:\Point$ is the distinguished variable that featured in
Definition~\ref{def:model-construction}. In this way we may analyse
the computation of $P~Q[P]$ as:
%
{
\begin{mathpar}
\begin{eqs}
P~Q[P] & ~\reducesto^\ast~ & \EC_0[Q[P]~k_0, Q[P]] ~\reducesto^\ast~ \EC_0[\Return\;b_0, Q[P]]
~\reducesto^\ast~ \EC_1[Q[P]~k_1, Q[P]] ~\reducesto^\ast~ \dots \\
& ~\reducesto^\ast~ & \EC_{r-1}[Q[P]~k_{r-1}, Q[P]] ~\reducesto^\ast~ \EC_{r-1}[\Return\;b_{r-1}, Q[P]]
~\reducesto~ \Return\;b
\end{eqs}
\end{mathpar}}
%
where for each $j$, the first hole in $\EC_j[-,-]$ is in evaluation
position, the term $Q[P]~k_j$ is safe, the reduction
$Q[P]~k_j \reducesto^\ast \Return\;b_j$ has length $<m$, and the
remaining portions of computation are instantiations of generic
reductions involving $q$. By part~(i) of the induction hypothesis we
may conclude that also $Q[P']~k_j \reducesto^\ast \Return\;b_j$ for
each $j$, and for the remaining segments of computation we may
instantiate $q$ to $Q[P']$. We thus obtain a computation exhibiting
that $P~Q[P'] \reducesto^\ast \Return\;b$.
It remains to show that the applicand $P$ may be replaced by $P'$
here without affecting the result. The idea here is that the
booleans $b_0,\dots,b_{r-1}$ trace out a path through the decision
tree for $P$; but since $P~Q[P]$ is safe, we have that $\val{Q[P]}
\neq \pi$, and so this path does \emph{not} lead to the critical
leaf $l$. We now have everything we need to establish that $P'~Q[P']
\reducesto^\ast \Return\;b$ as required.
More formally, in view of the correspondence between small-step
reduction and abstract machine semantics, we may readily correlate
the above computation of $P~Q[P]$ with an exploration of the path
$bs = b_0 \dots b_{r-1}$ in $\tau = \tru(P)$, leading to a leaf with
label $\ans b$.
%
Since $P$ is $n$-standard, this correlation shows that $r=n$, that
for each $j$ we have $\tau(b_0\ldots b_{j-1}) = \query k_j$, and
that $\{ k_0,\ldots,k_{r-1} \} = \{ 0,\dots,n-1 \}$. Furthermore,
we have already ascertained that the values of $Q[P]$ and $Q[P']$ at
$k_j$ are both $b_j$, whence $\val{Q[P]} = \val{Q[P']} = \pi'$ where
$\pi'(k_j)=b_j$ for all $j$. But $P~Q[P]$ is safe, so in particular
$\pi' = \val{Q[P]} \neq \pi$. We therefore also have
$\tau'(b_0 \dots b_{j-1}) = \query k_j$ for each $j < r$ and
$\tau'(b_0 \dots b_{r-1}) = {\ans b}$. Since $\tau' = \tru(P')$ and
$\val{Q[P']} = \pi'$, we may conclude by
Proposition~\ref{prop:pred-tree} that
$P'~Q[P'] \reducesto^\ast \Return\;b$. This completes the proof of
Lemma~\ref{lem:replacement}.
To finish off the proof of Lemma~\ref{lem:no-shortcuts}, we apply the same analysis
one last time to the reduction of $\Countprog~P$ itself. This will have the form
{
\begin{mathpar}
\begin{eqs}
\Countprog~P & ~\reducesto^\ast~ & \EC_0[P~Q_0[P], P] ~\reducesto^\ast \EC_0[\Return\;b_0,P]
~\reducesto^\ast~ \dots \\
& ~\reducesto^\ast~ & \EC_{r-1}[P~Q_{r-1}[P], P] ~\reducesto^\ast \EC_{r-1}[\Return\;b_{r-1},P]
~\reducesto^\ast~ \Return\;c
\end{eqs}
\end{mathpar}
}%
where, by hypothesis, each $P~Q_j[P]$ is safe. Using Lemma~\ref{lem:replacement} we may
replace each subcomputation $P~Q_j[P] \reducesto^\ast \Return\;b_j$ with
$P'~Q_j[P'] \reducesto^\ast \Return\;b_j$, and so construct a computation exhibiting that
$\Countprog~P' \reducesto^\ast \Return\;c$.
This gives our contradiction, as the values of $\Countprog~P$ and $\Countprog~P'$
are supposed to differ by 1.
\end{proof}
\begin{corollary}
Suppose $\Countprog$ and $P$ are as in Lemma~\ref{lem:no-shortcuts}. For any
semantic $n$-point $\pi$ and any natural number $k < n$, the
reduction sequence for $\Countprog~P$ contains a term $\FF[Q~k]$, where $\FF$
is an evaluation context and $\val{Q}=\pi$.
\end{corollary}
\begin{proof}
Suppose $\pi \in \B^n$. By Lemma~\ref{lem:no-shortcuts}, the
computation of $\Countprog~P$ contains some $\EC[P~Q]$ where
$\val{Q} = \pi$, and the above analysis of the computation of $P~Q$
shows that it contains a term $\EC'[Q~k]$ for each $k < n$. The
corollary follows, taking $\FF[-] \defas \EC[\EC'[-]]$.
\end{proof}
This gives our desired lower bound. Since our $n$-points $Q$ are
values, it is clearly impossible that $\FF[Q~k] = \FF'[Q'~k']$ (where
$\FF,\FF'$ are evaluation contexts) unless $Q=Q'$ and $k=k'$. We may
therefore read off $\pi$ from $\FF[Q~k]$ as $\val{Q}$. There are thus
at least $n2^n$ distinct terms in the reduction sequence for
$\Countprog~P$, so the reduction has length $\geq n 2^n$. We have
thus proved:
\begin{theorem}
If $\Countprog$ is a $\BPCF$ program that correctly counts all
$n$-standard $\BPCF$ predicates, and $P$ is any $n$-standard
$\BPCF$ predicate, then the evaluation of $\Countprog~P$ must take
time $\Omega(n2^n)$. \qed
\end{theorem}
Although we shall not go into details, it is not too hard to apply our
proof strategy with minor adjustments to certain richer languages: for
instance, an extension of $\BPCF$ with exceptions, or one containing
the memoisation primitive required for $\BergerCount$
(Appendix~\ref{sec:berger-count}). A deeper adaptation is required for
languages with state: we will return to this in
Section~\ref{sec:robustness}.
It is worth noting where the above argument breaks down if applied to
$\HPCF$. In $\BPCF$, in the course of computing $\Countprog~P$, every
$Q$ to which $P$ is applied will be a self-contained closed term
denoting some specific point $\pi$. This is intuitively why we may
only learn about one point at a time. In $\HPCF$, this is not the
case, because of the presence of operation symbols. For instance, our
$\ECount$ program from Section~\ref{sec:effectful-counting} will apply
$P$ to the `generic point' $\Superpoint$. Thus, for example, in our
treatment of Lemma~\ref{lem:replacement}(i), it need no longer be the
case that the reduction of $Q[p]~k$ either yields a value or gets
stuck at some $\EC_0[p~Q_0[p],p]$: a third possibility is that it gets
stuck at some invocation of $\ell$, so that control will then pass to
the effect handler.
%%
%% Generalising
%%
\section{Extensions and variations}
\label{sec:robustness}
Our complexity result is robust in that it continues to hold in more
general settings. We outline here how it generalises: beyond
$n$-standard predicates, from generic count to generic search, and
from pure $\BPCF$ to stateful $\BSPCF$.
\subsection{Beyond $n$-standard predicates}
\label{sec:beyond}
The $n$-standard restriction on predicates serves to make the
efficiency phenomenon stand out as clearly as possible. However, we
can relax the restriction by tweaking $\ECount$ to handle repeated
queries and missing queries.
%
The trade-off is that the analysis of $\ECount$ becomes more involved.
%
The key to relaxing the $n$-standard restriction is the use of state
to keep track of which queries have been computed.
%
We can give stateful implementations of $\ECount$ without changing its
type signature by using \emph{parameter-passing}~\citep{KammarLO13,
Pretnar15} to internalise state within a handler.
%
Parameter-passing abstracts every handler clause such that the current
state is supplied before the evaluation of a clause continues and the
state is threaded through resumptions: a resumption becomes a
two-argument curried function $r : B \to S \to D$, where the first
argument of type $B$ is the return type of the operation and the
second argument is the updated state of type $S$.
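To make the shape of parameter-passing concrete, the following is a
minimal sketch in OCaml. It uses a free-monad encoding of the
$\Branch$ effect rather than native handlers, so that resumptions are
ordinary functions and may be invoked once per branch; the names
\texttt{comp} and \texttt{handle} are illustrative and not part of the
formal development.
%
\begin{verbatim}
(* A computation that may perform Branch: either a result,
   or a query index paired with its resumption. *)
type 'a comp =
  | Return of 'a
  | Branch of int * (bool -> 'a comp)

(* Parameter-passing handler: each clause abstracts over the
   current state s, and the state-threaded resumption passed
   to the branch clause has type bool -> 's -> 'r, mirroring
   r : B -> S -> D above. *)
let rec handle ~ret ~branch (m : 'a comp) (s : 's) : 'r =
  match m with
  | Return x      -> ret x s
  | Branch (i, r) ->
      branch i (fun b s' -> handle ~ret ~branch (r b) s') s
\end{verbatim}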
\paragraph{Repeated queries} We can generalise $\ECount$ to handle
repeated queries by memoising previous answers. First, we generalise
the type of $\Branch$ so that it carries the index of a query.
%
{
\[
\Branch : \Nat \to \Bool
\]}
%
We assume a family of finite maps from natural numbers to booleans,
$\dec{Map}_n$, with the following interface.
%
{
\begin{equations}
\dec{empty}_n &:& \dec{Map}_n \\
\dec{add}_n &:& (\Nat_n \times \Bool) \to \dec{Map}_n \to \dec{Map}_n \\
\dec{lookup}_n &:& \Nat_n \to \dec{Map}_n \to (\One + \Bool) \\
\end{equations}}%
%
Invoking $\dec{lookup}_n~i~map$ returns $\Inl~\Unit$
if $i$ is not present in $map$, and $\Inr~ans$ if $i$ is
associated by $map$ with the value $ans : \Bool$.
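Such maps can be realised, for instance, as a binary trie on indices;
the following sketch (in OCaml rather than $\BPCF$ itself, with
illustrative names) conveys the idea behind the complexity remark
below.
%
\begin{verbatim}
(* A binary trie keyed on the index: index 0 lives at the root,
   odd indices descend left, nonzero even indices descend right.
   Depth is O(log i), using only halving and parity tests. *)
type 'a trie = Leaf | Node of 'a trie * 'a option * 'a trie

let empty : 'a trie = Leaf

let rec lookup (i : int) (t : 'a trie) : 'a option =
  match t with
  | Leaf -> None
  | Node (l, v, r) ->
      if i = 0 then v
      else if (i - 1) mod 2 = 0 then lookup ((i - 1) / 2) l
      else lookup ((i - 1) / 2) r

let rec add (i : int) (x : 'a) (t : 'a trie) : 'a trie =
  let l, v, r =
    match t with
    | Leaf -> Leaf, None, Leaf
    | Node (l, v, r) -> l, v, r
  in
  if i = 0 then Node (l, Some x, r)
  else if (i - 1) mod 2 = 0 then Node (add ((i - 1) / 2) x l, v, r)
  else Node (l, v, add ((i - 1) / 2) x r)
\end{verbatim}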
%
Allowing ourselves a few extra constant-time arithmetic operations, we
can realise suitable maps in $\BPCF$ such that the time complexity of
$\dec{add}_n$ and $\dec{lookup}_n$ is
$\BigO(\log n)$~\cite{Okasaki99}. We can then use parameter-passing
to support repeated queries as follows.
%
{
\[
\bl
\ECount'_n : ((\Nat_n \to \Bool) \to \Bool) \to \Nat\\
\ECount'_n~pred \defas
\bl
\Let\; h \revto \Handle\; pred\,(\lambda i. \Do\; \Branch~i)\; \With\\
\quad\ba[t]{@{}l@{}c@{}l@{}}
\Return\; x &\mapsto& \lambda s. \If\; x\; \Then\; 1 \;\Else\; 0 \\
\OpCase{\Branch}{i}{r} &\mapsto&
\ba[t]{@{}l}\lambda s.
\Case\; \dec{lookup}_n~i~s\; \{\\
\ba[t]{@{~}l@{~}c@{~}l}
\Inl\,\Unit &\mapsto&
\ba[t]{@{}l}
\Let\;x_\True \revto r~\True~(\dec{add}_n\,\Record{i, \True}\,s)\; \In\\
\Let\;x_\False \revto r~\False~(\dec{add}_n\,\Record{i, \False}\,s)\; \In\\
x_\True + x_\False; \\
\ea\\
\Inr~x &\mapsto& r~x~s\; \} \\
\ea \\
\ea \\
\ea\\
\In\;h~\dec{empty}_n \\
\el \\
\el
\]}%
%
The state parameter $s$ memoises query results, thus avoiding
double-counting and enabling $\ECount'_n$ to work correctly for
predicates performing the same query multiple times.
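Under the same free-monad encoding as the sketch above, the memoising
handler can be rendered as follows (again a sketch with illustrative
names; OCaml's \texttt{Map} stands in for the $\dec{Map}_n$ interface,
providing the $\BigO(\log n)$ lookup and insertion, and the trie
sketch above would serve equally).
%
\begin{verbatim}
module IntMap = Map.Make (Int)

(* Memoising count: the state maps query indices to previously
   computed answers, so a repeated query reuses its answer
   instead of branching again. *)
let rec count_memo (m : bool comp) (s : bool IntMap.t) : int =
  match m with
  | Return x      -> if x then 1 else 0
  | Branch (i, r) ->
      (match IntMap.find_opt i s with
       | Some b -> count_memo (r b) s       (* repeated query *)
       | None   ->
           count_memo (r true)  (IntMap.add i true  s)
         + count_memo (r false) (IntMap.add i false s))
\end{verbatim}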
\paragraph{Missing queries}
%
Similarly, we can use parameter-passing to support missing queries.
%
{
\[
\bl
\ECount''_n : ((\Nat_n \to \Bool) \to \Bool) \to \Nat\\
\ECount''_n~pred \defas
\bl
\Let\;h \revto \bl
\Handle\;pred\,(\lambda i. \Do\;\Branch~\Unit)\;\With\\
\quad
\ba[t]{@{}l@{\hspace{1.5ex}}c@{\hspace{1.5ex}}l@{}}
\Return~x &\mapsto& \lambda d.
\ba[t]{@{}l}
\Let\; result \revto \If\;x\;\Then\;1\;\Else\;0\;\In\;\\
result \times 2^{n - d}\\
\ea\\
\OpCase{\Branch}{\Unit}{r} &\mapsto& \lambda d.
\ba[t]{@{}l}
\Let\;x_\True \revto r~\True~(d+1)\;\In\\
\Let\;x_\False \revto r~\False~(d+1)\;\In\\
(x_\True + x_\False)
\ea
\ea\\
\el \\
\In\;h~0 \\
\el \\
\el
\]}%
%
The parameter $d$ tracks the depth, and the returned result is scaled
by $2^{n-d}$, accounting for the unexplored part of the current
subtree.
%
This enables $\ECount''_n$ to operate correctly on predicates that
inspect each of the $n$ points at most once.
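In the same style, the depth-scaling handler admits the following
sketch, where \texttt{1 lsl (n - d)} computes the weight $2^{n-d}$.
%
\begin{verbatim}
(* Depth-scaling count for predicates that may skip queries:
   d counts the Branch nodes on the current path, and each
   leaf is weighted by 2^(n-d) for the uninspected points. *)
let rec count_depth ~n (m : bool comp) (d : int) : int =
  match m with
  | Return x      -> (if x then 1 else 0) * (1 lsl (n - d))
  | Branch (_, r) ->
      count_depth ~n (r true) (d + 1)
    + count_depth ~n (r false) (d + 1)
\end{verbatim}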
%
We leave it as an exercise for the reader to combine $\ECount'_n$ and
$\ECount''_n$ in order to handle both repeated queries and missing
queries.
\subsection{From generic count to generic search}
\label{sec:count-vs-search}
We can generalise the problem of generic counting to generic
searching. The main operational difference is that a generic search
procedure must materialise a list of solutions; thus its type is
%
{
\[
\mathsf{search}_{n} : ((\Nat_n \to \Bool) \to \Bool) \to \List_{\Nat_n \to \Bool}
\]}%
%
where $\List_A$ is the type of cons-lists whose elements have type
$A$.
%
We modify $\ECount$ to return a list of solutions rather than the
number of solutions by lifting each result into a singleton list and
using list concatenation instead of addition to combine partial
results $xs_\True$ and $xs_\False$ as follows.
%
\newcommand{\ESearch}{\mathsf{effsearch}}
\newcommand{\Singleton}{\mathsf{singleton}}
\newcommand{\Concat}{\mathsf{concat}}
\newcommand{\HughesList}{\mathsf{HList}}
\newcommand{\ToConsList}{\mathsf{toConsList}}
{
\[
\bl
\ESearch_n : ((\Nat_n \to \Bool) \to \Bool) \to \List_{\Nat_n \to \Bool}\\
\ESearch_n~pred \defas
\bl\Let\; f \revto \bl
\Handle\; pred\,(\lambda i. \Do\; \Branch~i)\; \With\\
~~\ba[t]{@{}l@{}l}
\Return\; x &\mapsto \lambda q. \If\, x \;\Then\; \Singleton~q \;\Else\; \dec{nil} \\
\OpCase{\Branch}{i}{r} &\mapsto\\
\multicolumn{2}{l}{\quad\ba[t]{@{}l}\lambda q.
\ba[t]{@{}l}
\Let\;xs_\True \revto r~\True~(\lambda j.\If\;i=j\;\Then\;\True\;\Else\;q~j) \;\In\\
\Let\;xs_\False \revto r~\False~(\lambda j. \If\;i=j\;\Then\;\False\;\Else\;q~j) \;\In\\
\Concat~\Record{xs_\True,xs_\False} \\
\ea\\
\ea}\\
\ea \\
\el \\
\In\;\ToConsList~(f~(\lambda j. \bot))
\el \\
\el
\]}%
%
The $\Branch$ operation is now parameterised by an index $i$.
%
The handler is now parameterised by the current path, represented as a
point $q$, which is emitted at a leaf iff it satisfies the predicate.
%
A little care is required to ensure that $\ESearch_n$ has runtime
$\BigO(2^n)$; \naive use of cons-list concatenation would result in
$\BigO(n2^n)$ runtime, as cons-list concatenation is linear in its
first operand. In place of cons-lists we use Hughes
lists~\citep{Hughes86}, which admit constant time concatenation:
%
$\HughesList_A \defas \List_A \to \List_A$. The empty Hughes list
$\dec{nil} : \HughesList_A$ is defined as the identity function:
$\dec{nil} \defas \lambda xs. xs$.
%
{
\[
\ba{@{}l@{\qquad}l}
\bl
\Singleton_A : A \to \HughesList_A\\
\Singleton_A~x \defas \lambda xs. x \cons xs
\el &
\bl
\Concat_A : \HughesList_A \times \HughesList_A \to \HughesList_A\\
\Concat_A~f\,g \defas \lambda xs. g~(f~xs)
\el \smallskip\\
\bl
\ToConsList_A : \HughesList_A \to \List_A\\
\ToConsList_A~f \defas f~\nil
\el &
\ea
\]}%
We use the function $\ToConsList$ to convert the final Hughes list to
a standard cons-list at the end; this conversion has linear time
complexity (it just conses all of the elements of the list together).
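For reference, these combinators transcribe directly into OCaml (a
sketch; either composition order in \texttt{concat} is monoidal and
works here, affecting only the order in which solutions are listed).
%
\begin{verbatim}
(* Hughes lists: a list is represented by the function that
   prepends its elements, so concatenation is function
   composition and costs O(1). *)
type 'a hlist = 'a list -> 'a list

let nil : 'a hlist = fun xs -> xs
let singleton (x : 'a) : 'a hlist = fun xs -> x :: xs
let concat (f : 'a hlist) (g : 'a hlist) : 'a hlist =
  fun xs -> g (f xs)
let to_cons_list (f : 'a hlist) : 'a list = f []
\end{verbatim}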
\subsection{From pure $\BPCF$ to stateful $\BSPCF$}
\label{sec:mutable-state}
Mutable state is a staple ingredient of many practical programming
languages. We now outline how our main lower bound result can be
extended to a language with state. We will not give full details, but
merely point out the respects in which our earlier treatment needs to
be modified.
We have in mind an extension $\BSPCF$ of $\BPCF$ with ML-style
reference cells: we extend our grammar for types with a reference type
($\PCFRef~A$), and that for computation terms with forms for creating
references ($\keyw{letref}\; x = V\; \In\; N$), dereferencing ($!x$),
and destructive update ($x := V$), with the familiar typing rules. We
also add a new kind of value, namely \emph{locations} $l^A$, of type
$\PCFRef~A$. We adopt a basic Scott-Strachey~\citeyearpar{ScottS71} model
of store: a location is a natural number decorated with a type, and
the execution of a stateful program allocates locations in the order
$0,1,2,\ldots$, assigning types to them as it does so. A \emph{store}
$s$ is a type-respecting mapping from some set of locations $\{
0,\ldots,l-1 \}$ to values. For the purposes of small-step
operational semantics, a \emph{configuration} will be a triple
$(M,l,s)$, where $M$ is a computation, $l$ is a `location counter',
and $s$ is a store with domain $\{ 0,\ldots,l-1 \}$. A reduction
relation $\reducesto$ on configurations is defined in a familiar way
(again we omit the details).
Certain aspects of our setup require care in the presence of state.
For instance, there is in general no unique way to assign an (untimed)
decision tree to a closed value $P : \Predicate_n$, since the
behaviour of $P$ on a value $q : \Point_n$ may depend both on the
initial state when $P$ is invoked, and on the ways in which the
associated computations $q~V \reducesto^\ast \Return\;W$ modify the
state. In this situation, there is not even a clear specification for
what an $n$-count program ought to do.
The simplest way to circumvent this difficulty is to restrict
attention to predicates $P$ \emph{within the sublanguage $\BPCF$}.
For such predicates, the notions of decision tree, counting and
$n$-standardness are unproblematic. Our result will establish a
runtime lower bound of $\Omega(n2^n)$ for programs $\Countprog \in
\BSPCF$ that correctly count predicates $P$ of this kind.
%
On the other hand, since $\Countprog$ itself may be stateful, we
cannot exclude the possibility that $\Countprog~P$ will apply $P$ to a
term $Q$ that is itself stateful. Such a $Q$ will no longer
unambiguously denote a semantic point $\pi$, hence the proof of
Section~\ref{sec:pure-counting} must be adapted.
To adapt our proof to the setting of $\BSPCF$, some more machinery is
needed. If $\Countprog$ is an $n$-count program and $P$ an
$n$-standard predicate, we expect that the evaluation of
$\Countprog~P$ will feature terms $\EC[P~Q]$ which are then reduced to
some $\EC[\Return\;b]$, via a reduction sequence which, modulo
$\EC[-]$, has the following form:
%
{
\[
P\,Q~
\bl \reducesto^\ast \EC_0[Q~k_0] \reducesto^\ast \EC_0[\Return\,b_0] \reducesto^\ast \cdots
\reducesto^\ast \EC_{n-1}[Q~k_{n-1}]\\
\reducesto^\ast \EC_{n-1}[\Return\,b_{n-1}]
\reducesto^\ast \Return\;b
\el
\]}%
%
(For notational clarity, we suppress mention of the location and store
components here.) Informally we think of this as a dialogue in
which control passes back and forth between $P$ and $Q$. We shall
refer to the portions $\EC_j[Q~k_j] \reducesto^\ast
\EC_j[\Return\;b_j]$ of the above reduction as \emph{$Q$-sections},
and to the remaining portions (including the first and the last) as
\emph{$P$-sections}. We refer to the totality of these $P$-sections
and $Q$-sections as the \emph{thread} arising from the given
occurrence of the application $P\,Q$. An important point to note is
that since $Q$ may contain other occurrences of $P$, it is quite
possible for the $Q$-sections above to contain further threads
corresponding to other applications $P~Q'$.
Since $P$ is $n$-standard, we know that each thread will consist of
$n+1$ $P$-sections separated by $n$ $Q$-sections.
%
Indeed, it is clear that this computation traces the path
$b_0 \ldots b_{n-1}$ through the decision tree for $P$, with
$k_0,\ldots,k_{n-1}$ the corresponding internal node labels. We may
now, `with hindsight', construe this as a semantic point
$\pi : \N_n \to \B$ (where $\pi(k_j)=b_j$ for each $j$), and call it
the semantic point \emph{associated with} (the thread arising from)
the application occurrence $P~Q$.
The following lemma now serves as a surrogate for
Lemma~\ref{lem:no-shortcuts}:
\begin{lemma}
Let $P$ be an $n$-standard predicate. For any semantic point
$\pi \in \B^n$, the evaluation of $\Countprog~P$ involves an
application occurrence $P~Q$ with which $\pi$ is associated.
\end{lemma}
%
The proof of this lemma is not too different from that of
Lemma~\ref{lem:no-shortcuts}: if $\pi$ were a point with no associated
thread, there would be an unvisited leaf in the decision tree, and we
could manufacture an $n$-standard predicate $P'$ whose tree differed
from that of $P$ only at this leaf. We can then show, by induction on
the length of reductions, that any portion of the evaluation of
$\Countprog~P$ can be suitably mimicked with $P$ replaced by $P'$.
Naturally, this idea now needs to be formulated at the level of
\emph{configurations} rather than plain terms: in the course of
reducing $(\Countprog~P,0,[])$, we may encounter configurations
$(M,l,s)$ in which residual occurrences of $P$ have found their way
into $s$ as well as $M$, so in order to replace $P$ by $P'$ we must
abstract on all these occurrences via an evident notion of
\emph{configuration context}. With this adjustment, however, the
argument of Lemma~\ref{lem:no-shortcuts} goes through.
A further argument is then needed to show that any two threads are
indeed `disjoint' as regards their $P$-sections, so that there must be
at least $n2^n$ steps in the overall reduction sequence.
%% Since each thread involves at least the $n$ terms $\EC_j[Q~k_j]$, our
%% proof of the $\Omega(n2^n)$ bound is complete provided we can show
%% that no two threads overlap: more precisely, none of the above terms
%% $\EC_j[Q~k_j]$ can belong to the $P$-section of more than one thread.
%% The difficulty here is that because syntactic points no longer have
%% unambiguous denotations, the relevant point $\pi$ can no longer be
%% simply read off from $Q$. Indeed, it is entirely possible that our
%% computation may involve two instances of the same application $P~Q$
%% giving rise to entirely different threads owing to the presence of
%% state. Fortunately, however, we may reason as follows.
%% Let us suppose that $P~Q$ and $P~Q'$ are any two application
%% occurrences arising in the evaluation of $\Countprog~P$, with $P~Q$
%% appearing before $P~Q'$, and suppose these respectively give rise to
%% threads $\theta, \theta'$. We wish to show that the $P$-sections of
%% $\theta$ do not overlap with those of $\theta'$. There are three
%% cases:
%% %
%% \begin{itemize}
%% \item If $\theta'$ does not start until after $\theta$ has finished,
%% then of course $\theta,\theta'$ are disjoint.
%% \item If $\theta'$ starts within some $Q$-section
%% $\EC_j[Q~k_j] \reducesto^\ast \EC_j[\Return\;b_j]$ of $\theta$, then it
%% is not hard to see that $\theta'$ must also end within this same
%% $Q$-section, as the evaluation of $P~Q'$ will form part of the
%% evaluation of $Q~k_j$.
%% \item It is not possible for $\theta'$ to start within a $P$-section of
%% $\theta$. This follows from the fact that a `residual occurrence' of
%% $P$ (that is, one arising as a residual of the $P$ in $\Countprog~P$)
%% cannot itself contain other residual occurrences of $P$; thus,
%% for any term arising from the reduction of $P~Q$ (discounting
%% $P\,Q$ itself), every residual occurrence of $P$ occurs within
%% some $Q$.
%% \end{itemize}
%% %
%% Arguing along such lines, one can show that any two threads are indeed
%% `disjoint' as regards their $P$-sections, so that there must be at
%% least $n2^n$ steps in the overall reduction sequence.
\newcommand{\tooslow}{-}
\newcommand{\tablesmlnjintegration}
{\begin{table*}
\centering
\begin{tabular}{ @{} | l | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | @{} }
\cline{2-10}
\multicolumn{1}{l |}{} &
\multicolumn{9}{@{}c@{} |}{\textbf{Integration}}
\\\cline{2-10}
\multicolumn{1}{c |}{} &
%% Integration subheadings
\multicolumn{1}{@{}c@{} |@{\,}|}{\textbf{Id}} &
\multicolumn{3}{ @{}c@{} |@{\,}|}{\textbf{Squaring}} &
\multicolumn{5}{ @{}c@{} |}{\textbf{Logistic}}
\\\cline{2-10}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Integration parameters.
%%% Identity
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Squaring
\multicolumn{1}{@{}c@{} |}{$14$} &
\multicolumn{1}{@{}c@{} |}{$17$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Logistic (fixed precision, variable iteration)
\multicolumn{1}{@{}c@{} |}{$1$} &
\multicolumn{1}{@{}c@{} |}{$2$} &
\multicolumn{1}{@{}c@{} |}{$3$} &
\multicolumn{1}{@{}c@{} |}{$4$} &
\multicolumn{1}{@{}c@{} |}{$5$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Integration.
$\!\!12.89$ &
$\!\!45.04$ &
$\!\!57.80$ &
$\!\!69.86$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Integration.
$5.18$ &
$\!\!20.62$ &
$\!\!22.37$ &
$\!\!23.46$ &
$22.51$ &
$28.97$ &
$30.14$ &
$29.30$ &
$27.94$
\\\hline
%% Results: Modulus.
Pruned &
%%% Integration.
$2.07$ &
$3.78$ &
$4.05$ &
$4.24$ &
$4.10$ &
$5.44$ &
$6.42$ &
$7.26$ &
$7.94$
\\\cline{1-10}
\end{tabular}
\caption{SML/NJ: integration benchmark runtime relative to effectful implementation.}
\label{tbl:results-smlnj-integration}
\end{table*}}
\newcommand{\tableone}
{\begin{table*}
\centering
\begin{tabular}{@{} | l | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} |}
\cline{2-7}
\multicolumn{1}{l |}{} &
\multicolumn{6}{@{}c@{} |}{\textbf{Queens}}
\\\cline{2-7}
\multicolumn{1}{c |}{} &
%% Queens subheadings
\multicolumn{3}{| @{}c@{} |@{\,}|}{\textbf{First solution}} &
\multicolumn{3}{ @{}c@{} |}{\textbf{All solutions}}
\\\cline{2-7}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Queens parameters.
%%% first solution
\multicolumn{1}{@{}c@{} |}{$20$} &
\multicolumn{1}{@{}c@{} |}{$24$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$28$} &
%%% all solutions
\multicolumn{1}{@{}c@{} |}{$8$} &
\multicolumn{1}{@{}c@{} |}{$10$} &
\multicolumn{1}{@{}c@{} |}{$12$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Queens.
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\!\!217.74$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Queens.
$11.24$ &
$15.70$ &
$\tooslow$ &
$2.06$ &
$2.86$ &
$3.64$
\\\hline
%% Results: Modulus.
Pruned &
%%% Queens.
$2.13$ &
$2.54$ &
$2.91$ &
$1.04$ &
$1.24$ &
$1.39$
\\\cline{1-7}
%% Results: bespoke
Bespoke &
$0.12$ &
$0.12$ &
$0.12$ &
$0.13$ &
$0.13$ &
$0.12$
\\\cline{1-7}
\end{tabular}
\caption{SML/NJ: $n$-Queens benchmark runtime relative to effectful implementation.}
\label{tbl:results-smlnj-queens}
\end{table*}}
\newcommand{\tabletwo}
{\begin{table*}
\centering
\begin{tabular}{@{} | l | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} |}
\cline{2-7}
\multicolumn{1}{l |}{} &
\multicolumn{6}{@{}c@{} |}{\textbf{Queens}}
\\\cline{2-7}
\multicolumn{1}{c |}{} &
%% Queens subheadings
\multicolumn{3}{ @{}c@{} |@{\,}|}{\textbf{First solution}} &
\multicolumn{3}{ @{}c@{} |}{\textbf{All solutions}}
\\\cline{2-7}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Queens parameters.
%%% first solution
\multicolumn{1}{@{}c@{} |}{$20$} &
\multicolumn{1}{@{}c@{} |}{$24$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$28$} &
%%% all solutions
\multicolumn{1}{@{}c@{} |}{$8$} &
\multicolumn{1}{@{}c@{} |}{$10$} &
\multicolumn{1}{@{}c@{} |}{$12$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Queens.
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$17.31$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Queens.
$0.52$ &
$0.66$ &
$\tooslow$ &
$0.19$ &
$0.22$ &
$0.20$
\\\hline
%% Results: Modulus.
Pruned &
%%% Queens.
$0.11$ &
$0.11$ &
$0.13$ &
$0.10$ &
$0.10$ &
$0.08$
\\\hline
%% Results: bespoke
Bespoke &
% $0.14$ &
% $0.14$ &
$0.005$ &
$0.004$ &
$0.004$ &
$0.01$ &
$0.009$ &
$0.006$
\\\hline
\end{tabular}
\caption{MLton: $n$-Queens benchmark runtime relative to effectful implementation.}
\label{tbl:results-mlton-queens}
\end{table*}}
\newcommand{\tablemltonintegration}
{\begin{table*}
\centering
\begin{tabular}{ @{} | l | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | @{} }
\cline{2-10}
\multicolumn{1}{l |}{} &
\multicolumn{9}{@{}c@{} |}{\textbf{Integration}}
\\\cline{2-10}
\multicolumn{1}{c |}{} &
%% Integration subheadings
\multicolumn{1}{@{}c@{} |@{\,}|}{\textbf{Id}} &
\multicolumn{3}{ @{}c@{} |@{\,}|}{\textbf{Squaring}} &
\multicolumn{5}{ @{}c@{} |}{\textbf{Logistic}}
\\\cline{2-10}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Integration parameters.
%%% Identity
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Squaring
\multicolumn{1}{@{}c@{} |}{$14$} &
\multicolumn{1}{@{}c@{} |}{$17$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Logistic (fixed precision, variable iteration)
\multicolumn{1}{@{}c@{} |}{$1$} &
\multicolumn{1}{@{}c@{} |}{$2$} &
\multicolumn{1}{@{}c@{} |}{$3$} &
\multicolumn{1}{@{}c@{} |}{$4$} &
\multicolumn{1}{@{}c@{} |}{$5$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Integration.
$1.45$ &
$4.51$ &
$5.13$ &
$5.82$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Integration.
$0.43$ &
$2.02$ &
$1.95$ &
$1.92$ &
$2.17$ &
$3.59$ &
$4.24$ &
$4.34$ &
$4.28$
\\\hline
%% Results: Modulus.
Pruned &
%%% Integration.
$0.14$ &
$0.39$ &
$0.35$ &
$0.35$ &
$0.39$ &
$0.63$ &
$0.86$ &
$1.03$ &
$1.21$
\\\hline
\end{tabular}
\caption{MLton: integration benchmark runtime relative to effectful implementation.}
\label{tbl:results-mlton-integration}
\end{table*}}
%
\newcommand{\tablethree}
{\begin{table*}
\centering
\begin{tabular}{@{} | l | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} |}
\cline{2-7}
\multicolumn{1}{l |}{} &
\multicolumn{6}{@{}c@{} |@{\,}|}{\textbf{Queens}}
\\\cline{2-7}
\multicolumn{1}{c |}{} &
%% Queens subheadings
\multicolumn{3}{| @{}c@{} |@{\,}|}{\textbf{First solution}} &
\multicolumn{3}{| @{}c@{} |@{\,}|}{\textbf{All solutions}}
\\\cline{2-7}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Queens parameters.
%%% first solution
\multicolumn{1}{@{}c@{} |}{$20$} &
\multicolumn{1}{@{}c@{} |}{$24$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$28$} &
%%% all solutions
\multicolumn{1}{@{}c@{} |}{$8$} &
\multicolumn{1}{@{}c@{} |}{$10$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$12$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Queens.
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$0.49$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Queens.
$0.62$ &
$0.64$ &
$\tooslow$ &
$0.73$ &
$0.65$ &
$0.68$
\\\hline
%% Results: Modulus.
Pruned &
%%% Queens.
$0.70$ &
$0.68$ &
$0.71$ &
$0.74$ &
$0.70$ &
$0.71$
\\\hline
%% Results: Control.
Effectful &
%%% Queens.
$12.87$ &
$13.99$ &
$14.90$ &
$8.00$ &
$8.60$ &
$12.19$
\\\hline
%% Results: bespoke
Bespoke &
% $0.14$ &
% $0.14$ &
$0.56$ &
$0.56$ &
$0.56$ &
$0.69$ &
$0.63$ &
$0.59$
\\\hline
\end{tabular}
\caption{MLton: $n$-Queens benchmark runtime relative to SML/NJ.}
\label{tbl:results-mlton-vs-smlnj-queens}
\end{table*}}
\newcommand{\tablemltonvssmlnjintegration}
{\begin{table*}
\centering
\begin{tabular}{ @{} | l | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | @{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | r@{\,} | @{} }
\cline{2-10}
\multicolumn{1}{l |}{} &
\multicolumn{9}{@{}c@{} |}{\textbf{Integration}}
\\\cline{2-10}
\multicolumn{1}{c |}{} &
%% Integration subheadings
\multicolumn{1}{@{}c@{} |@{\,}|}{\textbf{Id}} &
\multicolumn{3}{ @{}c@{} |@{\,}|}{\textbf{Squaring}} &
\multicolumn{5}{ @{}c@{} |}{\textbf{Logistic}}
\\\cline{2-10}
\multicolumn{1}{c |}{\emph{Parameter\!\!}} &
%% Integration parameters.
%%% Identity
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Squaring
\multicolumn{1}{@{}c@{} |}{$14$} &
\multicolumn{1}{@{}c@{} |}{$17$} &
\multicolumn{1}{@{}c@{} |@{\,}|}{$20$} &
%%% Logistic (fixed precision, variable iteration)
\multicolumn{1}{@{}c@{} |}{$1$} &
\multicolumn{1}{@{}c@{} |}{$2$} &
\multicolumn{1}{@{}c@{} |}{$3$} &
\multicolumn{1}{@{}c@{} |}{$4$} &
\multicolumn{1}{@{}c@{} |}{$5$}
\\\hline
%% Results: \Naive.
\Naive &
%%% Integration.
$0.55$ &
$0.35$ &
$0.35$ &
$0.35$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$ &
$\tooslow$
\\\hline
%% Results: Berger.
Berger &
%%% Integration.
$0.41$ &
$0.35$ &
$0.34$ &
$0.34$ &
$0.37$ &
$0.37$ &
$0.37$ &
$0.37$ &
$0.37$
\\\hline
%% Results: Modulus.
Pruned &
%%% Integration.
$0.34$ &
$0.36$ &
$0.35$ &
$0.35$ &
$0.36$ &
$0.35$ &
$0.35$ &
$0.35$ &
$0.36$
\\\hline
%% Results: Control.
Effectful &
%%% Integration.
$4.93$ &
$3.53$ &
$3.95$ &
$4.20$ &
$3.80$ &
$3.00$ &
$2.62$ &
$2.46$ &
$2.37$
\\\hline
\end{tabular}
\caption{MLton: integration benchmark runtime relative to SML/NJ.}
\label{tbl:results-mlton-vs-smlnj-integration}
\end{table*}}
\tableone
\tablesmlnjintegration
\tabletwo
\tablemltonintegration
\tablethree
\tablemltonvssmlnjintegration
%%
%% Experiments
%%
\section{Experiments}
\label{sec:experiments}
The theoretical efficiency gap between realisations of $\BPCF$ and
$\HPCF$ manifests in practice. We observe it empirically on
instantiations of $n$-queens and exact real number integration, which
can be cast as generic search. Tables~\ref{tbl:results-smlnj-queens}
and \ref{tbl:results-smlnj-integration} show the speedup of using an
effectful implementation of generic search over various pure
implementations of the $n$-Queens and integration benchmarks,
respectively. We discuss the benchmarks and results in further detail
below.
% \setlength{\floatsep}{1.0ex}
% \setlength{\textfloatsep}{1.0ex}
\paragraph{Methodology}
We evaluated an effectful implementation of generic search against
three ``pure'' implementations which are realisable in $\BPCF$
extended with mutable state:
%
\begin{itemize}
\item \Naive: a simple, and rather \naive, functional implementation;
\item Pruned: a generic search procedure with space pruning based on
Longley's technique~\cite{Longley99} (uses local state);
\item Berger: a lazy pure functional generic search procedure based on
Berger's algorithm.
\end{itemize}
%
Each benchmark was run 11 times. The reported figure is the median
runtime ratio between the particular implementation and the baseline
effectful implementation. Benchmarks that failed to terminate within a
threshold (1 minute for a single solution, 8 minutes for enumerations)
are reported as $\tooslow$. The experiments were conducted using
SML/NJ~\cite{AppelM91} v110.97 64-bit with factory settings on a
workstation with an Intel Xeon E5-1620 v2 CPU @ 3.70GHz, running
Ubuntu 16.04. The effectful implementation uses an encoding of
delimited control, akin to effect handlers, built on top of SML/NJ's
call/cc.
%
The complete source code for the benchmarks and instructions on how to
run them are available at:
\begin{center}
\url{https://dl.acm.org/do/10.1145/3410231/abs/}
\end{center}
%
\paragraph{Queens}
We phrase the $n$-queens problem as a generic search problem. As a
control we include a bespoke implementation hand-optimised for the
problem. We perform two experiments: finding the first solution for $n
\in \{20,24,28\}$ and enumerating all solutions for $n \in
\{8,10,12\}$. The speedup over the \naive implementation is dramatic,
but less so over the Berger procedure. The pruned procedure is more
competitive, but still slower than the baseline. Unsurprisingly, the
baseline is slower than the bespoke implementation.
\paragraph{Exact real integration}
The integration benchmarks are adapted from \citet{Simpson98}. We
integrate three different functions with varying precision in the
interval $[0,1]$. For the identity function (Id) at precision $20$ the
speedup relative to Berger is $5.18\times$. For the squaring function
the speedups are larger at higher precisions: at precision $14$ the
speedup is $3.78\times$ over the pruned integrator, whilst it is
$4.24\times$ at precision $20$. The speedups are more extreme against
the \naive and Berger integrators. We also integrate the logistic map
$x \mapsto 1 - 2x^2$ at a fixed precision of $15$. We make the
function harder to compute by iterating it up to $5$ times. Between
the pruned and effectful integrator the speedup ratio increases as the
function becomes harder to compute.
\paragraph{MLton}
SML/NJ compiles programs into CPS, thus providing a particularly
efficient implementation of call/cc.
%
MLton~\cite{Fluet20}, a whole program compiler for SML, implements
call/cc by copying the stack.
%
We repeated our experiments using MLton 20180207.
%
Tables~\ref{tbl:results-mlton-queens} and
\ref{tbl:results-mlton-integration} show the results. The effectful
implementation performs much worse under MLton than SML/NJ, being
surpassed in nearly every case by the pruned search procedure and in
some cases by the Berger search procedure.
%
Tables~\ref{tbl:results-mlton-vs-smlnj-queens} and
\ref{tbl:results-mlton-vs-smlnj-integration} summarise the runtime of
MLton relative to SML/NJ for both benchmarks. Berger, Pruned, and
Bespoke run between 1 and 3 times as fast with MLton compared to
SML/NJ.
%
However, the effectful implementation runs between 2 and 14 times as
fast with SML/NJ compared with MLton.
\section{Related work}
There is relatively little work in the present literature on
expressivity that focuses on complexity differences.
%
\citet{Pippenger96} gives an example of an online operation on
infinite sequences of atomic symbols (essentially a function from
streams to streams) such that the first $n$ output symbols can be
produced within time $\BigO(n)$ if one is working in an effectful
version of Lisp (one which allows mutation of cons pairs) but with a
worst-case runtime no better than $\Omega(n \log n)$ for any
implementation in pure Lisp (without such mutation). This example was
reconsidered by \citet{BirdJdM97} who showed that the same speedup can
be achieved in a pure language by using lazy evaluation.
\citet{Jones01} explores the approach of manifesting expressivity and
efficiency differences between certain languages by artificially
restricting attention to `cons-free' programs; in this setting, the
classes of representable first-order functions for the various
languages are found to coincide with some well-known complexity
classes.
The vast majority of work in this area has focused on computability
differences. One of the best known examples is the \emph{parallel if}
operation which is computable in a language with parallel evaluation
but not in a typical sequential programming
language~\cite{Plotkin77}. It is also well known that the presence of
control features or local state enables observational distinctions
that cannot be made in a purely functional setting: for instance,
there are programs involving call/cc that detect the order in which a
(call-by-name) `$+$' operation evaluates its arguments
\citep{CartwrightF92}. Such operations are `non-functional' in the
sense that their output is not determined solely by the extension of
their input (seen as a mathematical function
$\N_\bot \times \N_\bot \rightarrow \N_\bot$);
%%
however, there are also programs with `functional' behaviour that can
be implemented with control or local state but not without them
\citep{Longley99}. More recent results have exhibited differences
lower down in the language expressivity spectrum: for instance, in a
purely functional setting \textit{\`a la} Haskell, the expressive
power of \emph{recursion} increases strictly with its type level
\citep{Longley18a}, and there are natural operations computable by
low-order recursion but not by high-order iteration
\citep{Longley19}. Much of this territory, including the mathematical
theory of some of the natural notions of higher-order computability
that arise in this way, is mapped out by \citet{LongleyN15}.
\part{Conclusions}
\label{p:conclusions}
\chapter{Conclusions and future work}
\label{ch:conclusions}
%
I will begin this chapter with a brief summary of this
dissertation. The following sections each elaborate and spell out
directions for future work.
In Part~\ref{p:background} of this dissertation I have compiled an
extensive survey of first-class control. In this survey I characterise
the various kinds of control phenomena that appear in the literature
and provide an overview of the operational characteristics of the
control operators that realise them. To the best of my knowledge this
survey is the only one of its kind in the present literature (other
studies survey a particular control phenomenon,
e.g. \citet{FelleisenS99} survey undelimited continuations as provided
by call/cc in programming practice, \citet{DybvigJS07} classify
delimited control operators according to their delimiter placement,
\citet{Brachthauser20} provides a delimited control perspective on
effect handlers, and \citet{MouraI09} survey the use of first-class
control operators to implement coroutines).
In Part~\ref{p:design} I have presented the design of an ML-like
programming language equipped with an effect-and-type system and a
structural notion of effectful operations and effect handlers. In this
language I have demonstrated how to implement the essence of a
\UNIX{}-like operating system by making almost zealous use of deep,
shallow, and parameterised effect handlers.
In Part~\ref{p:implementation} I have devised two canonical
implementation strategies for the language, one based on a
transformation into continuation passing style, and another based on
abstract machine semantics. Both strategies make key use of the notion
of generalised continuations, which provide a high-level model of
segmented runtime stacks.
In Part~\ref{p:expressiveness} I have explored how effect handlers fit
into the wider landscape of programming abstractions. I have shown
that deep, shallow, and parameterised effect handlers are mutually
macro-expressible. Furthermore, I have shown that effect handlers
endow their host language with additional computational power that
provides an asymptotic improvement in runtime performance for some
class of programs.
\section{Programming with effect handlers}
Chapters~\ref{ch:base-language} and \ref{ch:unary-handlers} present
the design of a core calculus that forms the basis for Links, which is
a practical programming language with deep, shallow, and parameterised
effect handlers. A distinguishing feature of the core calculus is that
it is based on a structural notion of data and effects, whereas other
literature predominantly considers nominal data and effects. In the
setting of structural effects the effect system plays a pivotal role in
ensuring that the standard safety and soundness properties of
statically typed programming languages hold as the effect system is
used to track type and presence information about effectful
operations. In a nominal setting an effect system is not necessary to
ensure soundness (e.g. Section~\ref{sec:handlers-calculus} presents a
sound core calculus with nominal effects, but without an effect
system). Irrespective of whether effects are nominal or structural, an
effect system is a valuable asset when programming with effect
handlers, as it enables modular reasoning about the composition of
functions. The effect system provides crucial information about the
introduction and elimination of effects. In the absence of an effect
system programmers are essentially required to reason globally about
their programs: for instance, the composition of any two functions may
introduce arbitrary effects that need to be handled accordingly.
Alternatively, a composition of any two functions may inadvertently
eliminate arbitrary effects, and as such, programming with effect
handlers without an effect system is prone to error. The \UNIX{} case
study in Chapter~\ref{ch:unary-handlers} demonstrates how the effect
system helps ensure that effectful function compositions are
meaningful.
The particular effect system that I have used throughout this
dissertation is based on a \citeauthor{Remy93}-style row polymorphism
formalism~\cite{Remy93}. Whilst \citeauthor{Remy93}-style row
polymorphism provides a suitable basis for structural records and
variants, its suitability as a basis for practical effect systems is
questionable. From a practical point of view the problem with this
form of row polymorphism is that it leads to verbose type-and-effect
signatures due to the presence and absence annotations. In many cases
annotations are redundant, e.g. in second-order functions like
$\dec{map}$ for lists, where the effect signature of the function is
the same as the signature of its functional argument. From a
theoretical point of view this verbosity is not a concern. However, in
practice verbosity may lead to `an overload of unequivocal
information', by which I mean that the programmer is presented with too many
trivial facts about the program. Too much information can hinder both
readability and writability of programs. For instance, in most
mainstream programming languages with System F-style type polymorphism
programmers normally do not have to annotate type variables with
kinds, unless they happen to be doing something special. Similarly,
programmers do not have to write type variable quantifiers, unless
they do not appear in prenex position. In practice some defaults are
implicitly understood and it is only when programmers deviate from
those defaults that they ought to supply the compiler with
explicit information. Section~\ref{sec:effect-sugar} introduces
some ad-hoc syntactic sugar for effect signatures that tames the
verbosity of an effect system based on \citeauthor{Remy93}-style row
polymorphism to the degree that second-order functions like
$\dec{map}$ do not duplicate information. Rather than back-patching
the effect system in hindsight, a possibly better approach is to
design the effect system for practical programming from the ground up
as \citet{LindleyMM17} did for the Frank programming language.
Nevertheless, the \UNIX{} case study indicates that the syntactic
sugar is adequate in practice for building larger effect-oriented
applications. The case study demonstrates how effect handlers provide
a high degree of modularity and flexibility that enable substantial
behavioural changes to be retrofitted onto programs without altering
the existing code. Thus effect handlers provide a mechanism for
building small task-oriented programs that can later be scaled to
interact with other programs in a larger context.
%
The case study also demonstrates how one might ascribe a handler
semantics to a \UNIX{}-like operating system. The resulting operating
system \OSname{} captures the essential features of a true operating
system, including support for managing multiple concurrent user
environments simultaneously, process parallelism, and file I/O.
study also shows how each feature can be implemented in terms of some
standard effect.
\subsection{Future work}
\paragraph{Operating systems via effect handlers}
In the \UNIX{} case study we explored the paradigmatic reading of
\emph{effect handlers as composable operating systems} in practice by
composing a \UNIX{}-like operating system out of several effects and
handlers. The resulting system \OSname{} has been
implemented in the combined core calculus consisting of the $\HCalc$,
$\SCalc$, and $\HPCalc$ calculi. There also exists an actual runnable
implementation of it in Links. It would be interesting to implement
the system in other programming languages with support for effect
handlers, as at the time of writing most languages with effect handlers
have some unique trait, e.g. lexical handlers, a special effect system,
etc. Ultimately, re-implementing the case study can help collect more
data points about programming with effect handlers, which can
potentially serve to inform the design of future effect
handler-oriented languages.
I have made no attempt at formally proving the correctness of
\OSname{} with respect to some specification. I have, however,
consciously opted to implement \OSname{} using standard effects with
well-known equations. Furthermore, the effect handlers have been
implemented such that they ought to respect the equations of their
effects. Thus, perhaps it is possible to devise an equational
specification for the operating system and prove the implementation
correct with respect to that specification.
One important feature that is arguably missing from \OSname{} is
external signal handling. Effect handlers as signal handlers is not a
new idea. In a previous paper we have outlined an idea for using
effect handlers to handle POSIX signals~\cite{DolanEHMSW17}. Signal
handling is a delicate matter as signals introduce a form of
preemption, thus some care needs to be taken to ensure that the
interpretation of a signal is not interrupted by another signal
instance. The essence of the idea is to have a \emph{mask} primitive,
which is a form of critical section for signals that permits some
block of code to suppress signal interruptions. A potential starting
point would be to combine \citeauthor{AhmanP21}'s calculus of
asynchronous effects with $\HCalc$ to explore this idea more
formally~\cite{AhmanP21}.
Another interesting thought is to implement an actual operating system
using effect handlers. Although it might be a daunting task, the idea
is maybe not so far-fetched. With the advent of effect handlers in
OCaml~\cite{SivaramakrishnanDWKJM21} it may be possible for the MirageOS
project~\cite{MadhavapeddyS14}, a unikernel-based operating
system written in OCaml, to take advantage of effect handlers to
implement features such as concurrency.
\paragraph{Effect-based optimisations} In this dissertation I have not
considered any effect-based optimisations. However, if effect handler
oriented programming is to succeed in practice, then runtime
performance will matter. Optimisation of program structure is one way
to improve runtime performance. At our disposal we have the effect
system and the algebraic structure of effects and handlers.
%
Taking advantage of the information provided by the effect system to
optimise programs is an old idea that has been explored previously in
the literature~\cite{KammarP12,Kammar14,Saleh19}.
%
Other work has attempted to exploit the algebraic structure of (deep)
effect handlers to fuse nested handlers~\cite{WuS15}.
%
An obvious idea is to apply these lines of work to the handler calculi
of Chapter~\ref{ch:unary-handlers}.
%
Moreover, I hypothesise that there is untapped potential in combining
effect-dependent analysis with equational theories to
optimise effectful programs. A potential starting point for testing
out this hypothesis is to take \citeauthor{LuksicP20}'s core
calculus where effects are equipped with equations~\cite{LuksicP20}
and combine it with techniques for effect-dependent
optimisations~\cite{KammarP12}.
\paragraph{Multi handlers} In this dissertation I have solely focused
on so-called \emph{unary} handlers, which handle a \emph{single}
effectful computation. A natural generalisation is \emph{n-ary}
handlers, which allow $n$ effectful computations to be handled
simultaneously. In the literature n-ary handlers are called
\emph{multi handlers}, and unary handlers are simply called
handlers. The ability to handle two or more computations
simultaneously makes for a straightforward way to implement
synchronisation between computations. For example, the
pipes example of Section~\ref{sec:pipes} can be expressed using a
single handler rather than two dual
handlers~\cite{LindleyMM17}. Shallow multi handlers are a staple
feature of the Frank programming language~\cite{LindleyMM17}. The
design space of deep and parameterised notions of multi handlers has
yet to be explored, as have their application domains. Thus an
interesting future direction of research would be to extend $\HCalc$
with multi handlers and explore their practical programming
applicability. Retrofitting the effect system of $\HCalc$ to provide a
good programmer experience for programming with multi handlers poses an
interesting design challenge, as any quirks that occur with unary
handlers only get amplified in the setting of multi handlers.
\paragraph{Handling linear resources} The implementation of effect
handlers in Links makes the language unsound, because the \naive{}
combination of effect handlers and session typing is unsound. The
combined power of being able to discard some resumptions and resume
others multiple times can make for bad interactions with sessions. For
instance, suppose some channel supplies only one value; then it is
possible to break session fidelity by twice resuming some resumption
that closes over a receive operation. Similarly, it is possible to
break type safety by using a combination of exceptions and multi-shot
resumptions: suppose some channel first expects an integer
followed by a boolean; then running the program
$\Do\;\Fork\,\Unit;\keyw{send}~42;\Absurd\;\Do\;\Fail\,\Unit$ under
the composition of the nondeterminism handler and default failure
handler from Chapter~\ref{ch:unary-handlers} will cause the primitive
$\keyw{send}$ operation to supply two integers in succession, thus
breaking the session protocol. Figuring out how to safely combine
linear resources, such as channels, and handlers with multi-shot
resumptions is an interesting unsolved problem.
\section{Canonical implementation strategies for handlers}
Chapter~\ref{ch:cps} carries out a comprehensive study of CPS
translations for deep, shallow, and parameterised notions of effect
handlers.
%
We arrive at a higher-order CPS translation through step-wise
refinement of an initial standard first-order fine-grain call-by-value
CPS translation, which we extended to support deep effect
handlers. Firstly, we refined the first-order translation by
uncurrying it in order to yield a properly tail-recursive
translation. Secondly, we adapted it to a higher-order one-pass
translation that statically eliminates administrative
redexes. Thirdly, we solidified the structure of continuations to
arrive at the notion of \emph{generalised continuation}, which
provides the basis for implementing shallow and parameterised
handlers.
%
The CPS translations have been proven correct with respect to the
contextual small-step semantics of $\HCalc$, $\SCalc$, and $\HPCalc$.
Generalised continuations are a succinct syntactic framework for
modelling low-level stack manipulations. The structure of generalised
continuations closely mimics the structure of \citeauthor{HiebDB90}
and \citeauthor{BruggemanWD96}'s segmented
stacks~\cite{HiebDB90,BruggemanWD96}, which is a state-of-art
technique for implementing first-class
control~\cite{FlattDDKMSTZ19}. Each generalised continuation frame
consists of a pure continuation and a handler definition. The pure
continuation represents an execution stack delimited by some
handler. Thus chaining together generalised continuation frames yields
a sequence of segmented stacks.
The versatility of generalised continuations is illustrated in
Chapter~\ref{ch:abstract-machine}, where we plugged the notion of
generalised continuation into \citeauthor{FelleisenF86}'s CEK machine
to obtain an adequate execution runtime with simultaneous support for
deep, shallow, and parameterised effect
handlers~\cite{FelleisenF86}. The resulting abstract machine is proven
correct with respect to the reduction semantics of the combined
calculus of $\HCalc$, $\SCalc$, and $\HPCalc$. The abstract machine
provides a blueprint for both high-level interpreter-based
implementations of effect handlers as well as low-level
implementations based on stack manipulations. The server-side
implementation of effect handlers in the Links programming language is
a testament to the former~\cite{HillerstromL16}, whilst the Multicore
OCaml implementation of effect handlers is a testament to the
latter~\cite{SivaramakrishnanDWKJM21}.
\subsection{Future work}
\paragraph{Functional correspondence} The CPS translations and
abstract machine have been developed separately. Even though the
abstract machine is presented as an application of generalised
continuations in Chapter~\ref{ch:abstract-machine}, it appeared
before the CPS translations. The idea of generalised continuations
first solidified during the design of the higher-order CPS translation
for shallow handlers~\cite{HillerstromL18}, where we adapted the
continuation structure of our initial abstract machine
design~\cite{HillerstromL16}. Thus it seems that there ought to be a
formal functional correspondence between the higher-order CPS
translation and the abstract machine; however, the existence of such a
correspondence has yet to be established.
\paragraph{Abstracting continuations} It is evident from the step-wise
refinement of the CPS translations in Chapter~\ref{ch:cps} that each
translation has a certain structure to it.
%
In fact, this is how the CPS translation for effect handlers in Links
has been implemented. Concretely, the translation is implemented as a
functor, which is parameterised by a continuation interface. The
continuation interface has a monoidal operation for continuation
extension and an application operation for applying the continuation
to a value argument. Theoretically, it would be interesting to pin
down and understand the precise algebraic nature of this interface
with respect to abstracting the notion of continuations.
Practically, it would keep the code base modular and
pave the way for rapid compilation of new control structures. Ideally
one would simply have to implement a standard CPS translation, which
keeps the notion of continuation abstract such that any conforming
continuation can be plugged in.
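To give the flavour, the following is a minimal OCaml sketch of such a
continuation interface; the names are illustrative and do not
reproduce the actual Links code. Continuations form a monoid under
composition, and a CPS translation written against the abstract
signature can be reused with different concrete representations.
%
\begin{verbatim}
(* A hypothetical continuation interface: a monoid (identity,
   compose) for continuation extension, plus application. *)
module type CONTINUATION = sig
  type ('a, 'r) t
  val identity : ('a, 'a) t
  val compose  : ('b, 'r) t -> ('a, 'b) t -> ('a, 'r) t
  val apply    : ('a, 'r) t -> 'a -> 'r
end

(* First-order instance: continuations as plain functions. *)
module Fn : CONTINUATION = struct
  type ('a, 'r) t = 'a -> 'r
  let identity x = x
  let compose k2 k1 = fun x -> k2 (k1 x)
  let apply k x = k x
end
\end{verbatim}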
\paragraph{Generalising generalised continuations} The incarnation of
generalised continuations in this dissertation has been engineered for
unary handlers. An obvious extension to investigate is support for
multi handlers. With multi handlers, handler definitions enter a
one-to-many relationship with pure continuations rather than an
one-to-one relationship with unary handlers. Thus at minimum the
structure of generalised continuation frames needs to be altered such
that each handler definition is paired with a list of pure
continuations, where each pure continuation represents a distinct
computation running under the handler.
\paragraph{Ad-hoc generalised continuations}
The literature contains plenty of ad-hoc techniques for realising
continuations. For instance, \citeauthor{PettyjohnCMKF05}'s technique
for implementing undelimited continuations via exception handlers and
state~\cite{PettyjohnCMKF05}, and \citeauthor{JamesS11}'s technique
for implementing delimited control via generators and
iterators~\cite{JamesS11}. Such techniques may be used to implement
effect handlers in control-hostile environments by simulating the
structure of generalised continuations. By using these techniques to
implement effect handlers we may be able to bring effect handler
oriented programming to programming languages that do not offer
programmers much control.
\paragraph{Typed CPS for effect handlers} The image of each
translation developed in Chapter~\ref{ch:cps} is untyped. Typing the
translations may provide additional insight into the semantic content
of the translations. Effect forwarding poses a challenge in typing the
image. In order to encode forwarding we need to be able to
parametrically specify what a default case does.
%
Appendix B of the paper by \citet{HillerstromLAS17} outlines a
possible typing for the CPS translation for deep handlers. The
extension we propose to our row type system is to allow a row type to
be given a \emph{shape} (something akin to
\citeauthor{BerthomieuS95}'s tagged types~\cite{BerthomieuS95}), which
constrains the form of the ordinary types it contains. A full
formalisation of this idea remains to be done.
\section{On the expressive power of effect handlers}
In Chapter~\ref{ch:deep-vs-shallow} we investigated the
interdefinability of deep, shallow, and parameterised handlers through
the lens of typed macro expressiveness. We establish that all three
kinds of handler are interdefinable. Although the handlers are
interdefinable, it may matter in practice which kind of handler is
being employed. For example, the encoding of shallow handlers using
deep handlers is rather inefficient. The encoding suffers from space
leaks, as demonstrated empirically in Appendix B.3 of
\citet{HillerstromL18}. Similarly, the runtime and memory performance
gap between native parameterised handlers and parameterised handlers
encoded as ordinary deep handlers may be observable in practice, as
the latter introduces a new closure per operation invocation.
Chapter~\ref{ch:handlers-efficiency} explores the relative efficiency
of a base language, $\BPCF$, and its extension with effect handlers,
$\HPCF$, through the lens of type-respecting expressivity. Concretely,
we used the example program of \emph{generic count} to show that
$\HPCF$ admits realisations of this program whose asymptotic
efficiency is better than any possible realisation in
$\BPCF$. Specifically, we established that the lower bound of generic
count on $n$-standard predicates in $\BPCF$ is $\Omega(n2^n)$, whilst
the worst case upper bound in $\HPCF$ is $\BigO(2^n)$. Hence there is
a strict efficiency gap between the two languages. We observed this
efficiency gap in practice on several benchmarks.
%
The lower runtime bound also applies to a language $\BSPCF$ which
extends $\BPCF$ with state.
%
Although I have not spelled out the details here, in
\citet{HillerstromLL20} we have verified that the lower bound also
applies to a language $\BEPCF$ with \citeauthor{BentonK01}-style
\emph{exceptions} and handlers~\cite{BentonK01}.
%
The lower bound also applies to the combined language $\BSEPCF$
with both state and exceptions --- this seems to bring us close to
the expressive power of real languages such as Standard ML, Java, and
Python, strongly suggesting that the speedup we have discussed is
unattainable in these languages.
The positive result for $\HPCF$ extends to other control operators by
appeal to existing results on interdefinability of handlers and other
control operators~\cite{ForsterKLP19,PirogPS19}.
%
% The result no longer applies directly if we add an effect type system
% to $\HPCF$, as the implementation of the counting program would
% require a change of type for predicates to reflect the ability to
% perform effectful operations.
%
% In future we plan to investigate how to account for effect type systems.
From a practical point of view one might be tempted to label the
efficiency result as merely of theoretical interest, since an
$\Omega(2^n)$ runtime is already infeasible. However, what has been
presented is an example of a much more pervasive phenomenon, and the
generic count example serves merely as a convenient way to bring this
phenomenon into sharp formal focus. For example, suppose that our
programming task was not to count all solutions to $P$, but to find
just one of them. It is informally clear that for many kinds of
predicates this would in practice be a feasible task, and also that we
could still gain our factor $n$ speedup here by working in a language
with first-class control. However, such an observation appears less
amenable to a clean mathematical formulation, as the runtimes in
question are highly sensitive to both the particular choice of
predicate and the search order employed.
\subsection{Future work}
\paragraph{Efficiency of handler encodings} Although I do not give a
formal proof for the efficiency of the shallow-as-deep encoding in
Chapter~\ref{ch:deep-vs-shallow}, it seems intuitively clear that the
encoding is rather inefficient. In fact, in Appendices B.2 and B.3 of
\citet{HillerstromL18} we show empirically that the encoding is
inefficient. An interesting question is whether there exists an
efficient encoding of shallow handlers using deep handlers. Formally
proving the absence of an efficient encoding would give a strong
indication of the relative computational expressive power between
shallow and deep handlers. Likewise discovering that an efficient
encoding does exist would tell us that it may not matter
computationally whether a language incorporates shallow or deep
handlers.
\paragraph{Effect tracking breaks asymptotic improvement} The
result of Chapter~\ref{ch:handlers-efficiency} does not immediately
carry over to a language with an effect system, as the implementation
of generic search in \HPCF{} would introduce an effectful operation,
which requires a change of types. In order to state and prove the
result in the presence of an effect system some other refined,
possibly new, notion of expressivity seems necessary.
\paragraph{Asymptotic improvement with affine handlers}
The result of Chapter~\ref{ch:handlers-efficiency} does not
immediately remain true in the presence of affine effect handlers
(handlers which invoke their resumptions at most once) as they make it
possible to encode coroutines. The present proof method does not
readily adapt to a situation with coroutines, because the proof depends
at various points on an orderly nesting of subcomputations, which
coroutines would break.
\paragraph{Efficiency hierarchy of control} The definability hierarchy
of various control constructs such as iteration, recursion, recursion
with state, and first-class control is fairly
well-understood~\cite{LongleyN15,Longley18a,Longley19}. However, the
relative asymptotic efficiency between them is less
well-understood. It would be interesting to formally establish a
hierarchy of relative asymptotic efficiency between various control
constructs in the style of Chapter~\ref{ch:handlers-efficiency}.
%%
%% Appendices
%%
\part{Appendices}
\appendix
\chapter{Continuations}
\label{ch:continuations}
A continuation represents the control state of a computation at a given
point during evaluation. The control state contains the necessary
operational information for evaluation to continue. As such,
continuations drive computation. % Continuations are a ubiquitous
% phenomenon as they exist both semantically and programmatically.
%
Continuations are one of those canonical ideas that have been
discovered multiple times and whose use predates their
definition~\cite{Reynolds93}. The term `continuation' first appeared in the
literature in 1974, when \citet{StracheyW74} used continuations to
give a denotational semantics to programming languages with
unrestricted jumps~\cite{StracheyW00}.
The inaugural use of continuations came well before
\citeauthor{StracheyW00}'s definition. About a decade earlier
continuation passing style had already been conceived, if not in name
then in spirit, as a compiler transformation for eliminating labels
and goto statements~\cite{Reynolds93}. In the mid 1960s
\citet{Landin98} introduced the J operator as a programmatic mechanism
for manipulating continuations.
\citeauthor{Landin98}'s J operator is an instance of a first-class
control operator: a mechanism that lets programmers reify
continuations as first-class objects that can be invoked, discarded,
or stored for later use. There exists a wide variety of control
operators, which expose continuations of varying extent and behaviour.
The purpose of this chapter is to examine control operators and their
continuations in
programming. Section~\ref{sec:classifying-continuations} examines
different notions of continuations by characterising their extent and
behaviour operationally. Section~\ref{sec:controlling-continuations}
contains a detailed overview of various control operators that appear
in programming languages and in the
literature. Section~\ref{sec:programming-continuations} summarises
some applications of continuations, whilst
Section~\ref{sec:constraining-continuations} contains a brief summary
of ideas for constraining the power of continuations. Lastly,
Section~\ref{sec:implementing-continuations} outlines some
implementation strategies for continuations.
% A lot of literature has been devoted to study continuations. Whilst
% the literature recognises the importance and significance of
% continuations
% continuations are widely recognised as important in the programming
% language literature,
% The concrete structure and behaviour of continuations differs
% Continuations, when exposed programmatically, imbue programmers with
% the power to take control of programs. This power enables programmers
% to implement their own control idioms as user-definable libraries.
% The significance of continuations in the programming languages
% literature is inescapable as continuations have found widespread use .
%
% A continuation is an abstract data structure that captures the
% remainder of the computation from some given point in the computation.
% %
% The exact nature of the data structure and the precise point at which
% the remainder of the computation is captured depends largely on the
% exact notion of continuation under consideration.
% %
% It can be difficult to navigate the existing literature on
% continuations as sometimes the terminologies for different notions of
% continuations are overloaded or even conflated.
% %
% As there exist several notions of continuations, there exist several
% mechanisms for programmatic manipulation of continuations. These
% mechanisms are known as control operators.
% %
% A substantial amount of existing literature has been devoted to
% understand how to program with individual control operators, and to a
% lesser extent how the various operators compare.
% The purpose of this chapter is to provide a contemporary and
% unambiguous characterisation of the notions of continuations in
% literature. This characterisation is used to classify and discuss a
% wide range of control operators from the literature.
% % Undelimited control: Landin's J~\cite{Landin98}, Reynolds'
% % escape~\cite{Reynolds98a}, Scheme75's catch~\cite{SussmanS75} ---
% % which was based the less expressive MacLisp catch~\cite{Moon74},
% % callcc is a procedural variation of catch. It was invented in
% % 1982~\cite{AbelsonHAKBOBPCRFRHSHW85}.
% A full formal comparison of the control operators is out of scope of
% this chapter. The literature contains comparisons of various control
% operators along various dimensions, e.g.
% %
% \citet{Thielecke02} studies a handful of operators via double
% barrelled continuation passing style. \citet{ForsterKLP19} compare the
% relative expressiveness of untyped and simply-typed variations of
% effect handlers, shift/reset, and monadic reflection by means of
% whether they are macro-expressible. Their work demonstrates that in an
% untyped setting each operator is macro-expressible, but in most cases
% the macro-translations do not preserve typeability, for instance the
% simple type structure is insufficient to type the image of
% macro-translation between effect handlers and shift/reset.
% %
% However, \citet{PirogPS19} show that with a polymorphic type system
% the translation preserve typeability.
% %
% \citet{Shan04} shows that dynamic delimited control and static
% delimited control is macro-expressible in an untyped setting.
\section{Classifying continuations}
\label{sec:classifying-continuations}
The term `continuation' is really an umbrella term that covers several
distinct notions of continuations. It is common in the literature to
find the word `continuation' accompanied by a qualifier such as full,
partial, abortive, escape, undelimited, delimited, composable, or
functional (in Chapter~\ref{ch:cps} I will extend this list by three
new ones). Some of these notions of continuations are synonymous,
whereas others have distinct meanings. Common to all notions of
continuations is that they represent the control state. However, the
extent and behaviour of continuations differ widely from notion to
notion. The essential notions of continuations are
undelimited/delimited and abortive/composable. To tell them apart, we
will classify them according to their operational behaviour.
The extent and behaviour of a continuation in programming are
determined by its introduction and elimination forms,
respectively. Programmatically, a continuation is introduced via a
control operator, which reifies the control state as a first-class
object, e.g.\ a function, which can then be eliminated via some form
of application.
\subsection{Introduction of continuations}
%
The extent of a continuation determines how much of the control state
is contained within the continuation.
%
The extent can be either undelimited or delimited, and it is
determined at the point of capture by the control operator.
%
We need some notation for control operators in order to examine the
introduction of continuations operationally. We will use the syntax
$\CC~k.M$ to denote a control operator, or control reifier, which
reifies the control state and binds it to $k$ in the computation
$M$. Here the control state will simply be an evaluation context. We
will denote continuations by a special value $\cont_{\EC}$, which is
indexed by the reified evaluation context $\EC$ to make it
notationally convenient to reflect the context again. To characterise
delimited continuations we also need a control delimiter. We will
write $\Delim{M}$ to denote a syntactic marker that delimits some
computation $M$.
\paragraph{Undelimited continuation} The extent of an undelimited
continuation is indefinite as it ranges over the entire remainder of
computation.
%
In functional programming languages undelimited control operators most
commonly expose the \emph{current} continuation, which is precisely
the continuation of the control operator itself. The following
is the characteristic reduction for the introduction of the current
continuation.
% The indefinite extents means that undelimited continuation capture can
% only be understood in context. The characteristic reduction is as
% follows.
%
\[
\EC[\CC~k.M] \reducesto \EC[M[\cont_{\EC}/k]]
\]
%
The evaluation context $\EC$ is the continuation of $\CC$. The
evaluation context on the left hand side gets reified as a
continuation object, which is accessible inside of $M$ via $k$. On the
right hand side the entire context remains in place after
reification. Thus, the current continuation is evaluated regardless of
whether the continuation object is invoked. This is an instance of
non-abortive undelimited control. Alternatively, the control operator
can abort the current continuation before proceeding as $M$, i.e.
%
\[
\EC[\CC~k.M] \reducesto M[\cont_{\EC}/k]
\]
%
This is the characteristic reduction rule for abortive undelimited
control. The rule is nearly the same as the previous, except that on
the right hand side the evaluation context $\EC$ is discarded after
reification. Now, the programmer has control over the whole
continuation, since it is entirely up to the programmer whether $\EC$
gets evaluated.
Imperative statement-oriented programming languages commonly expose
the \emph{caller} continuation, typically via a return statement. The
caller continuation is the continuation of the invocation context of
the control operator. Characterising undelimited caller continuations
is slightly more involved as we have to remember the continuation of
the invocation context. We will use a bold lambda $\llambda$ as a
syntactic runtime marker to remember the continuation of an
application. In addition we need three reduction rules, where the
first is purely administrative, the second is an extension of regular
application, and the third is the characteristic reduction rule for
undelimited control with caller continuations.
%
\begin{reductions}
& \llambda.V &\reducesto& V\\
& (\lambda x.N)\,V &\reducesto& \llambda.N[V/x]\\
& \EC[\llambda.\EC'[\CC~k.M]] &\reducesto& \EC[\llambda.\EC'[M[\cont_{\EC}/k]]], \quad \text{where $\EC'$ contains no $\llambda$}
\end{reductions}
%
The first rule accounts for the case where $\llambda$ marks a value,
in which case the marker is eliminated. The second rule inserts a
marker after an application such that this position can be recalled
later. The third rule is the interesting one. Here an occurrence of
$\CC$ reifies $\EC$, the continuation of some application, rather than
its current continuation $\EC'$. The side condition ensures that $\CC$
reifies the continuation of the innermost application. This rule
characterises a non-abortive control operator as both contexts, $\EC$
and $\EC'$, are left in place after reification. It is straightforward
to adapt this rule to an abortive operator, although the literature
contains no abortive undelimited control operator that captures the
caller continuation.
It is worth noting that the first two rules can be understood locally,
that is, without mentioning the enclosing context, whereas the third
rule must be understood globally.
In the literature an undelimited continuation is also known as a
`full' continuation.
\paragraph{Delimited continuation} A delimited continuation is in some
sense a refinement of an undelimited continuation as its extent is
definite. A delimited continuation ranges over some designated part of
the computation. A delimited continuation is introduced by a pair of
operators: a control delimiter and a control reifier. The control
delimiter acts as a barrier, which prevents the reifier from reaching
beyond it, e.g.
%
\begin{reductions}
& \Delim{V} &\reducesto& V\\
& \Delim{\EC[\CC~k.M]} &\reducesto& \Delim{\EC[M[\cont_{\EC}/k]]}
\end{reductions}
%
The first rule applies whenever the control delimiter delimits a
value, in which case the delimiter is eliminated. The second rule is
the characteristic reduction rule for a non-abortive delimited control
reifier. It reifies the context $\EC$ up to the control delimiter, and
then continues as $M$ under the control delimiter. Note that the
continuation of $\keyw{del}$ is invisible to $\CC$, and thus, the
behaviour of $\CC$ can be understood locally.
%
Most commonly, the control reifier is abortive, i.e.
\begin{reductions}
& \Delim{\EC[\CC~k.M]} &\reducesto& \Delim{M[\cont_{\EC}/k]}.
\end{reductions}
%
The design space of delimited control is somewhat richer than that of
undelimited control, as the control delimiter may remain in place
after reification, as above, be discarded, be included in the
continuation, or a combination. Similarly, the control reifier may
reify the continuation up to and including the delimiter or, as above,
without the delimiter.
%
\citet{DybvigJS07} use a taxonomy for delimited abortive control
reifiers, which classifies them according to how they interact with
their respective control delimiter.
% the introduction of delimited
% continuations, which classifies according to how their control reifiers interact their
% respective control delimiter.
They identify four variations.
%
\begin{description}
\item[\CCpp] The control reifier includes a copy of the control delimiter in the
reified context, and leaves the original in place, i.e.
\[
\Delim{\EC[\CC~k.M]} \reducesto \Delim{M[\cont_{\Delim{\EC}}/k]}
\]
\item[\CCpm] The control delimiter remains in place after
reification as the control reifier reifies the context up to, but
not including, the delimiter, i.e.
\[
\Delim{\EC[\CC~k.M]} \reducesto \Delim{M[\cont_{\EC}/k]}
\]
\item[\CCmp] The control reifier includes a copy of the control
delimiter in the reified context, but discards the original
instance, i.e.
\[
\Delim{\EC[\CC~k.M]} \reducesto M[\cont_{\Delim{\EC}}/k]
\]
\item[\CCmm] The control reifier reifies the context up to, but not
including, the delimiter and subsequently discards the delimiter, i.e.
\[
\Delim{\EC[\CC~k.M]} \reducesto M[\cont_{\EC}/k]
\]
\end{description}
% %
%
In the literature a delimited continuation is also known as a
`partial' continuation.
\subsection{Elimination of continuations}
The purpose of continuation application is to reinstall the captured
context.
%
However, a continuation application may affect the control state in
various ways. The literature features two distinct behaviours of
continuation application: abortive and composable. We need some
notation for application of continuations in order to characterise
abortive and composable behaviours. We will write
$\Continue~\cont_{\EC}~V$ to denote the application of some
continuation object $\cont_{\EC}$ to some value $V$.
\paragraph{Abortive continuation} Upon invocation an abortive
continuation discards the entire evaluation context before
reinstalling the captured context. In other words, an abortive
continuation replaces the current context with its captured context,
i.e.
%
\[
\EC[\Continue~\cont_{\EC'}~V] \reducesto \EC'[V]
\]
%
The current context $\EC$ is discarded in favour of the captured
context $\EC'$ (whether the two contexts coincide depends on the
control operator). Abortive continuations are a global phenomenon due
to their effect on the current context. However, in conjunction with a
control delimiter the behaviour of an abortive continuation can be
localised, i.e.
%
\[
\Delim{\EC[\Continue~\cont_{\EC'}~V]} \reducesto \EC'[V]
\]
%
Here, the behaviour of the continuation does not interfere with the
context of $\keyw{del}$, and thus the behaviour can be understood and
reasoned about locally with respect to $\keyw{del}$.
A key characteristic of an abortive continuation is that composition
is meaningless. For example, composing an abortive continuation with
itself has no additional effect.
%
\[
\EC[\Continue~\cont_{\EC'}~(\Continue~\cont_{\EC'}~V)] \reducesto \EC'[V]
\]
%
The innermost application erases the outermost application term;
consequently, only the first application of $\cont_{\EC'}$ occurs at
runtime. It is as if the first application occurred in tail position.
The continuations introduced by the early control operators were all
abortive, since they were motivated by modelling unrestricted jumps
akin to $\keyw{goto}$ in statement-oriented programming languages.
An abortive continuation is also known as an `escape' continuation in
the literature.
\paragraph{Composable continuation} A composable continuation splices
its captured context with its invocation context, i.e.
%
\[
\Continue~\cont_{\EC}~V \reducesto \EC[V]
\]
%
The application of a composable continuation can be understood
locally, because it has no effect on its invocation context. A
composable continuation behaves like a function in the sense that it
returns to its caller, and thus composition is well-defined, e.g.
%
\[
\Continue~\cont_{\EC}~(\Continue~\cont_{\EC}~V) \reducesto \Continue~\cont_{\EC}~\EC[V]
\]
%
The innermost application composes the captured context with the
outermost application. Thus, the outermost application occurs when
$\EC[V]$ has been reduced to a value.
In the literature, virtually every delimited control operator provides
composable continuations. However, the notion of composable
continuation is not intimately connected to delimited control. It is
perfectly possible to conceive of an undelimited composable
continuation, just as a delimited abortive continuation is conceivable.
A composable continuation is also known as a `functional' continuation
in the literature.
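Both elimination behaviours can be observed in an off-the-shelf
language. The following minimal sketch contrasts them in Haskell,
assuming the \texttt{callCC}, \texttt{shift}, and \texttt{reset}
combinators of the \texttt{transformers} library's
\texttt{Control.Monad.Trans.Cont} (where captured composable
continuations are represented as pure functions); the names
\texttt{abortive} and \texttt{composable} are hypothetical, and the
sketch illustrates the two behaviours rather than modelling any
particular operator.
%
\begin{verbatim}
import Control.Monad.Trans.Cont (evalCont, callCC, reset, shift)

-- Abortive elimination: invoking the captured continuation k
-- discards the remainder of the body, so the second invocation
-- below is never reached and abortive == 1 + 0 == 1.
abortive :: Int
abortive = evalCont $ do
  x <- callCC $ \k -> k 0 >> k 99
  return (1 + x)

-- Composable elimination: k behaves like the function \x -> 1 + x,
-- so it composes with itself: k 0 == 1, k 1 == 2, and the body of
-- shift returns 2 to the delimiter, so composable == 2.
composable :: Int
composable = evalCont $ reset $ do
  x <- shift $ \k -> return (k (k 0))
  return (1 + x)
\end{verbatim}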
\section{Controlling continuations}
\label{sec:controlling-continuations}
As suggested in the previous section, the design space for
continuations is rich. This richness is to an extent reflected by the
large number of control operators that appear in the literature
and in practice.
%
The purpose of this section is to survey a considerable subset of the
first-class \emph{sequential} control operators that occur in the
literature and in practice. Control operators for parallel programming
will not be considered here.
%
Tables~\ref{tbl:classify-ctrl-undelimited} and
\ref{tbl:classify-ctrl-delimited} provide classifications of some of
the undelimited control operators and delimited control operators,
respectively, that appear in the literature.
Note that a \emph{first-class} control operator is typically not
itself a first-class citizen; rather, the label `first-class' means
that the reified continuation is a first-class object. Control
operators that reify the current continuation can be made first-class
by enclosing them in a $\lambda$-abstraction. Obviously, this trick
does not work for operators that reify the caller continuation.
To study the control operators we will make use of a small base
language.
%
\begin{table}
\centering
\begin{tabular}{| l | l | l |}
\hline
\multicolumn{1}{| l |}{\textbf{Name}} & \multicolumn{1}{l |}{\textbf{Continuation behaviour}} & \multicolumn{1}{l |}{\textbf{Canonical reference}}\\
\hline
J & Abortive & \citet{Landin98}\\
\hline
escape & Abortive & \citet{Reynolds98a}\\
\hline
catch & Abortive & \citet{SussmanS75} \\
\hline
callcc & Abortive & \citet{AbelsonHAKBOBPCRFRHSHW85} \\
\hline
F & Composable & \citet{FelleisenFDM87}\\
\hline
C & Abortive & \citet{FelleisenF86} \\
\hline
\textCallcomc{} & Composable & \citet{Flatt20}\\
\hline
\end{tabular}
\caption{Classification of first-class undelimited control operators
(listed in chronological
order).}\label{tbl:classify-ctrl-undelimited}
\end{table}
%
\begin{table}
\centering
\begin{tabular}{| l | l | l | l |}
\hline
\multicolumn{1}{| l |}{\textbf{Name}} & \multicolumn{1}{l |}{\textbf{Taxonomy}} & \multicolumn{1}{l |}{\textbf{Continuation behaviour}} & \multicolumn{1}{l |}{\textbf{Canonical reference}}\\
\hline
control/prompt & \CCpm & Composable & \citet{Felleisen88}\\
\hline
shift/reset & \CCpp & Composable & \citet{DanvyF90}\\
\hline
spawn & \CCmp & Composable & \citet{HiebD90}\\
\hline
splitter & \CCmm & Abortive, composable & \citet{QueinnecS91}\\
\hline
fcontrol & \CCmm & Composable & \citet{Sitaram93} \\
\hline
cupto & \CCmm & Composable & \citet{GunterRR95}\\
\hline
catchcont & \CCmm & Composable & \citet{Longley09}\\
\hline
effect handlers & \CCmp & Composable & \citet{PlotkinP13} \\
\hline
\end{tabular}
\caption{Classification of first-class delimited control operators (listed in chronological order).}\label{tbl:classify-ctrl-delimited}
\end{table}
%
\paragraph{A small calculus for control}
%
To look at control we will use a simply typed fine-grain call-by-value
calculus, though we will sometimes have to discard the types, as many
of the control operators were invented and studied in an untyped
setting. The calculus is essentially the same as the one used in
Chapter~\ref{ch:handlers-efficiency}, except that here we will have an
explicit invocation form for continuations. In practice most
systems disguise continuations as first-class functions, but for a
theoretical examination it is convenient to treat them specially such
that continuation invocation is a separate reduction rule from
ordinary function application. Figure~\ref{fig:pcf-lang-control}
depicts the syntax of types and terms in the calculus.
%
\begin{figure}
\centering
\begin{syntax}
\slab{Types} & A,B \in \TypeCat &::=& \UnitType \mid A \to B \mid A \times B \mid \Cont\,\Record{A;B} \mid A + B \smallskip\\
\slab{Values} & V,W \in \ValCat &::=& \Unit \mid \lambda x^A.M \mid \Record{V;W} \mid \cont_\EC \mid \Inl~V \mid \Inr~W \mid x\\
\slab{Computations} & M,N \in \CompCat &::=& \Return\;V \mid \Let\;x \revto M \;\In\;N \mid \Let\;\Record{x;y} = V \;\In\; N \\
& &\mid& V\,W \mid \Continue~V~W \smallskip\\
\slab{Evaluation\textrm{ }contexts} & \EC \in \CatName{Ctx} &::=& [\,] \mid \Let\;x \revto \EC \;\In\;N
\end{syntax}
\caption{Types and term syntax}\label{fig:pcf-lang-control}
\end{figure}
%
The types are the standard simple types with the addition of the
continuation object type $\Cont\,\Record{A;B}$, which is parameterised
by an argument type and a result type, respectively. The static
semantics is standard as well, except for the continuation invocation
primitive $\Continue$.
%
\begin{mathpar}
\inferrule*
{\typ{\Gamma}{V : A} \\ \typ{\Gamma}{W : \Cont\,\Record{A;B}}}
{\typ{\Gamma}{\Continue~W~V : B}}
\end{mathpar}
%
Although it is convenient to treat continuation application specially
for operational inspection, it is rather cumbersome to do so when
studying encodings of control operators. Therefore, to obtain the best
of both worlds, the control operators will reify their continuations
as first-class functions, whose body is a $\Continue$-expression. To
save some ink, we will use the following notation.
%
\[
\qq{\cont_{\EC}} \defas \lambda x. \Continue~\cont_{\EC}~x
\]
%
We will permit ourselves various syntactic sugar to keep the examples
relatively concise, e.g.\ we write the examples in ordinary call-by-value.
\subsection{Undelimited control operators}
%
The early inventions of undelimited control operators were driven by
the desire to provide a `functional' equivalent of jumps as provided
by the infamous goto in imperative programming.
%
In 1965 Peter \citeauthor{Landin65} unveiled the \emph{first}
first-class control operator: the J
operator~\cite{Landin65,Landin65a,Landin98}. Later, in 1972, influenced
by \citeauthor{Landin65}'s J operator, John \citeauthor{Reynolds98a}
designed the escape operator~\cite{Reynolds98a}. Influenced by escape,
\citeauthor{SussmanS75} designed, implemented, and standardised the
catch operator in Scheme in 1975. A while thereafter perhaps the most
famous undelimited control operator appeared: callcc. It was initially
designed in 1982 and was standardised in 1985 as a core feature of
Scheme. Later another batch of control operators based on callcc
appeared. A common characteristic of the early control operators is
that their captured continuations were abortive, save for one, namely
\citeauthor{Felleisen88}'s F operator. Later a non-abortive and
composable variant of callcc appeared. Moreover, every operator,
except for \citeauthor{Landin98}'s J operator, captures the current
continuation.
\paragraph{\citeauthor{Reynolds98a}' escape} The escape operator was introduced by
\citeauthor{Reynolds98a} in 1972~\cite{Reynolds98a} to make
statement-oriented control mechanisms such as jumps and labels
programmable in an expression-oriented language.
%
The operator introduces a new computation form.
%
\[
M, N \in \CompCat ::= \cdots \mid \Escape\;k\;\In\;M
\]
%
The variable $k$ is called the \emph{escape variable} and it is bound
in $M$. The escape variable exposes the current continuation of the
$\Escape$-expression to the programmer. The captured continuation is
abortive, thus an invocation of the escape variable in the body $M$
has the effect of performing a non-local exit.
%
In terms of jumps and labels the $\Escape$-expression can be
understood as corresponding to a kind of label and an application of
the escape variable $k$ can be understood as corresponding to a jump
to the label.
\citeauthor{Reynolds98a}' original treatment of escape was untyped, and
as such, the escape variable could escape its captor, e.g.
%
\[
\Let\;k \revto (\Escape\;k\;\In\;k)\;\In\; N
\]
%
Here the current continuation, $N$, gets bound to $k$ in the
$\Escape$-expression, which returns $k$ as-is, and thus becomes
available for use within $N$. \citeauthor{Reynolds98a} recognised the
power of this idiom and noted that it could be used to implement
coroutines and backtracking~\cite{Reynolds98a}.
%
\citeauthor{Reynolds98a} did not develop the static semantics for
$\Escape$; however, it is worth noting that this idiom requires
recursive types to type check. Even in a language without recursive
types, the continuation may propagate outside its binding
$\Escape$-expression if the language provides an escape hatch such as
mutable references.
% In our simply-typed setting it is not possible for the continuation to
% propagate outside its binding $\Escape$-expression as it would require
% the addition of either recursive types or some other escape hatch like
% mutable reference cells.
% %
% The typing of $\Escape$ and $\Continue$ reflects that the captured
% continuation is abortive.
% %
% \begin{mathpar}
% \inferrule*
% {\typ{\Gamma,k : \Cont\,\Record{A;\Zero}}{M : A}}
% {\typ{\Gamma}{\Escape\;k\;\In\;M : A}}
% % \inferrule*
% % {\typ{\Gamma}{V : A} \\ \typ{\Gamma}{W : \Cont\,\Record{A;\Zero}}}
% % {\typ{\Gamma}{\Continue~W~V : \Zero}}
% \end{mathpar}
% %
% The return type of the continuation object can be taken as a telltale
% sign that an invocation of this object never returns, since there are
% no inhabitants of the empty type.
%
An invocation of the continuation discards the invocation context and
plugs the argument into the captured context.
%
\begin{reductions}
\slab{Capture} & \EC[\Escape\;k\;\In\;M] &\reducesto& \EC[M[\qq{\cont_{\EC}}/k]]\\
\slab{Resume} & \EC[\Continue~\cont_{\EC'}~V] &\reducesto& \EC'[V]
\end{reductions}
%
The \slab{Capture} rule leaves the context intact such that if the
body $M$ does not invoke $k$ then whatever value $M$ reduces to is
plugged into the context. The \slab{Resume} rule discards the current
context $\EC$ and installs the captured context $\EC'$ with the
argument $V$ plugged in.
\paragraph{\citeauthor{SussmanS75}'s catch}
%
In 1975 \citet{SussmanS75} designed and implemented the catch operator
in Scheme. It is a more powerful variant of the catch operator in
MacLisp~\cite{Moon74}. The MacLisp catch operator had a companion
throw operation, which would unwind the evaluation stack until it was
caught by an instance of catch. \citeauthor{SussmanS75}'s catch
operator dispenses with the throw operation and instead provides the
programmer with access to the current continuation. Their operator is
identical to \citeauthor{Reynolds98a}' escape operator, save for the
syntax.
%
\[
M,N \in \CompCat ::= \cdots \mid \Catch~k.M
\]
%
Although their syntax differs, their dynamic semantics are the same.
%
% \begin{mathpar}
% \inferrule*
% {\typ{\Gamma,k : \Cont\,\Record{A;\Zero}}{M : A}}
% {\typ{\Gamma}{\Catch~k.M : A}}
% % \inferrule*
% % {\typ{\Gamma}{V : A} \\ \typ{\Gamma}{W : \Cont\,\Record{A;B}}}
% % {\typ{\Gamma}{\Continue~W~V : B}}
% \end{mathpar}
%
\begin{reductions}
\slab{Capture} & \EC[\Catch~k.M] &\reducesto& \EC[M[\qq{\cont_{\EC}}/k]]\\
\slab{Resume} & \EC[\Continue~\cont_{\EC'}~V] &\reducesto& \EC'[V]
\end{reductions}
%
As an aside, it is worth mentioning that \citet{CartwrightF92} used a
variation of $\Catch$ to show that control operators enable programs
to observe the order of evaluation.
\paragraph{Call-with-current-continuation} In 1982 the Scheme
implementors observed that they could dispense with the special syntax
for $\Catch$ in favour of a higher-order function that applies its
argument to the current continuation, and thus callcc was born (callcc
is short for
call-with-current-continuation)~\cite{AbelsonHAKBOBPCRFRHSHW85}.
%
Unlike the previous operators, callcc augments the syntactic
categories of values.
%
\[
V,W \in \ValCat ::= \cdots \mid \Callcc
\]
%
The value $\Callcc$ is essentially a hard-wired function name. Being a
value means that the operator itself is a first-class entity, which
entails that it can be passed to functions, returned from functions,
and stored in data structures. Operationally, $\Callcc$ reifies the
current continuation and applies its argument to it, leaving the
current context in place; the captured continuation is abortive upon
invocation.
%
% The typing rule for $\Callcc$ testifies to the fact that it is a
% particular higher-order function.
%
% \begin{mathpar}
% \inferrule*
% {~}
% {\typ{\Gamma}{\Callcc : (\Cont\,\Record{A;\Zero} \to A) \to A}}
% % \inferrule*
% % {\typ{\Gamma}{V : A} \\ \typ{\Gamma}{W : \Cont\,\Record{A;B}}}
% % {\typ{\Gamma}{\Continue~W~V : B}}
% \end{mathpar}
% %
% An invocation of $\Callcc$ returns a value of type $A$. This value can
% be produced in one of two ways, either the function argument returns
% normally or it applies the provided continuation object to a value
% that then becomes the result of $\Callcc$-application.
%
\begin{reductions}
\slab{Capture} & \EC[\Callcc~V] &\reducesto& \EC[V~\qq{\cont_{\EC}}]\\
\slab{Resume} & \EC[\Continue~\cont_{\EC'}~V] &\reducesto& \EC'[V]
\end{reductions}
%
From the dynamic semantics it is evident that $\Callcc$ is a
syntax-free alternative to $\Catch$ (although it is treated as a
special value form here; in an actual implementation it suffices to
recognise the object name of $\Callcc$). The two are trivially
macro-expressible in terms of one another.
%
\begin{equations}
\sembr{\Catch~k.M} &\defas& \Callcc\,(\lambda k.\sembr{M})\\
\sembr{\Callcc} &\defas& \lambda f. \Catch~k.f\,k
\end{equations}
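$\Callcc$ survives in essentially this higher-order form in several
modern languages. As a hedged illustration (a library rendering of the
same idea, not the calculus of this chapter), the following Haskell
sketch uses \texttt{callCC} and \texttt{evalCont} from the
\texttt{transformers} library's \texttt{Control.Monad.Trans.Cont} to
implement the classic non-local exit idiom; the function name
\texttt{firstNegative} is hypothetical.
%
\begin{verbatim}
import Control.Monad (when)
import Control.Monad.Trans.Cont (evalCont, callCC)

-- Return the first negative element of a list, using the captured
-- continuation as an escape hatch out of the traversal; if no
-- negative element exists, the body completes and returns Nothing.
firstNegative :: [Int] -> Maybe Int
firstNegative xs = evalCont $ callCC $ \escape -> do
  mapM_ (\x -> when (x < 0) (escape (Just x))) xs
  return Nothing

-- firstNegative [3, 1, -4, 5] == Just (-4)
-- firstNegative [3, 1, 5]     == Nothing
\end{verbatim}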
\paragraph{Call-with-composable-continuation} A variation of callcc is
call-with-composable-continuation, abbreviated \textCallcomc{}.
%
As the name suggests the captured continuation is composable rather
than abortive. It was introduced by \citet{FlattYFF07} in 2007, and
implemented in November 2006 according to the history log of Racket
(Racket was then known as MzScheme, version 360)~\cite{Flatt20}. The
history log classifies it as a delimited control operator.
%
Truth be told, nowadays in Racket virtually all control operators
are delimited, even callcc, because they are parameterised by an
optional prompt tag. If the programmer does not supply a prompt tag at
invocation time then the optional parameter assumes the value of
the top-level prompt, effectively making the extent of the captured
continuation undelimited.
%
In other words, its default mode of operation is undelimited, hence the
justification for categorising it as such.
%
Like $\Callcc$ this operator is a value.
%
\[
V,W \in \ValCat ::= \cdots \mid \Callcomc
\]
%
% Unlike $\Callcc$, the continuation returns, which the typing rule for
% $\Callcomc$ reflects.
% %
% \begin{mathpar}
% \inferrule*
% {~}
% {\typ{\Gamma}{\Callcomc : (\Cont\,\Record{A;A} \to A) \to A}}
% \inferrule*
% {\typ{\Gamma}{V : A} \\ \typ{\Gamma}{W : \Cont\,\Record{A;B}}}
% {\typ{\Gamma}{\Continue~W~V : B}}
% \end{mathpar}
% %
% Both the domain and codomain of the continuation are the same as the
% body type of function argument.
Unlike $\Callcc$, captured continuations behave as functions.
%
\begin{reductions}
\slab{Capture} & \EC[\Callcomc~V] &\reducesto& \EC[V~\qq{\cont_{\EC}}]\\
\slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
The capture rule for $\Callcomc$ is identical to the rule for
$\Callcc$, but the resume rule is different.
%
The effect of continuation invocation can be understood locally as it
does not erase the global evaluation context, but rather composes with
it.
%
To make this more tangible consider the following example reduction
sequence.
%
\begin{derivation}
&1 + \Callcomc\,(\lambda k. \Continue~k~(\Continue~k~0))\\
\reducesto^+ & \reason{\slab{Capture} $\EC = 1 + [~]$}\\
% &1 + ((\lambda k. \Continue~k~(\Continue~k~0))\,\cont_\EC)\\
% \reducesto & \reason{$\beta$-reduction}\\
&1 + (\Continue~\cont_\EC~(\Continue~\cont_\EC~0))\\
\reducesto^+ & \reason{\slab{Resume} with $\EC[0]$}\\
&1 + (\Continue~\cont_\EC~1)\\
\reducesto^+ & \reason{\slab{Resume} with $\EC[1]$}\\
&1 + 2 \reducesto 3
\end{derivation}
%
The operator reifies the current evaluation context as a continuation
object and passes it to the function argument. The evaluation context
is left in place. As a result an invocation of the continuation object
has the effect of duplicating the context. In this particular example
the context has been duplicated twice to produce the result $3$.
%
Contrast this result with the result obtained by using $\Callcc$.
%
\begin{derivation}
&1 + \Callcc\,(\lambda k. \Absurd\;\Continue~k~(\Absurd\;\Continue~k~0))\\
\reducesto^+ & \reason{\slab{Capture} $\EC = 1 + [~]$}\\
% &1 + ((\lambda k. \Continue~k~(\Continue~k~0))\,\cont_\EC)\\
% \reducesto & \reason{$\beta$-reduction}\\
&1 + (\Absurd\;\Continue~\cont_\EC~(\Absurd\;\Continue~\cont_\EC~0))\\
\reducesto & \reason{\slab{Resume} with $\EC[0]$}\\
&1\\
\end{derivation}
%
The second invocation of $\cont_\EC$ never enters evaluation position,
because the first invocation discards the entire evaluation context.
%
Our particular choice of syntax and static semantics already makes it
immediately obvious that $\Callcc$ cannot be directly substituted for
$\Callcomc$, and vice versa, in a way that preserves operational
behaviour. % The continuations captured by the two operators behave
% differently.
An interesting question is whether $\Callcc$ and $\Callcomc$ are
interdefinable. Presently, the literature does not seem to answer
this question. I conjecture that the operators exhibit essential
differences, meaning they cannot encode each other.
%
The intuition behind this conjecture is that any encoding of
$\Callcomc$ in terms of $\Callcc$ must be able to preserve the current
evaluation context, e.g.\ using a state cell akin to how
\citet{Filinski94} encodes composable continuations using abortive
continuations and state.
%
The other way around also appears to be impossible, because neither
the base calculus nor $\Callcomc$ has the ability to discard an
evaluation context.
%
% \dhil{Remark that $\Callcomc$ was originally obtained by decomposing
% $\fcontrol$ a continuation composing primitive and an abortive
% primitive. Source: Matthew Flatt, comp.lang.scheme, May 2007}
\paragraph{\FelleisenC{} and \FelleisenF{}}
%
The C operator is a variation of callcc that provides control over the
whole continuation as it aborts the current continuation after
capture, whereas callcc implicitly invokes the current continuation on
the value of its argument. The C operator was introduced by
\citeauthor{FelleisenFKD86} in two papers during
1986~\cite{FelleisenF86,FelleisenFKD86}. The following year,
\citet{FelleisenFDM87} introduced the F operator which is a variation
of C, whose captured continuation is composable.
In our framework both operators are value forms.
%
\[
V,W \in \ValCat ::= \cdots \mid \FelleisenC \mid \FelleisenF
\]
%
% The static semantics of $\FelleisenC$ are the same as $\Callcc$,
% whilst the static semantics of $\FelleisenF$ are the same as
% $\Callcomc$.
% \begin{mathpar}
% \inferrule*
% {~}
% {\typ{\Gamma}{\FelleisenC : (\Cont\,\Record{A;\Zero} \to A) \to A}}
% \inferrule*
% {~}
% {\typ{\Gamma}{\FelleisenF : (\Cont\,\Record{A;A} \to A) \to A}}
% \end{mathpar}
%
The dynamic semantics of $\FelleisenC$ and $\FelleisenF$ are as
follows.
%
\begin{reductions}
\slab{C\textrm{-}Capture} & \EC[\FelleisenC\,V] &\reducesto& V~\qq{\cont_{\EC}}\\
\slab{C\textrm{-}Resume} & \EC[\Continue~\cont_{\EC'}~V] &\reducesto& \EC'[V] \medskip\\
\slab{F\textrm{-}Capture} & \EC[\FelleisenF\,V] &\reducesto& V~\qq{\cont_{\EC}}\\
\slab{F\textrm{-}Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
Their capture rules are identical. Both operators abort the current
continuation upon capture. This is what sets $\FelleisenF$ apart from
the other composable control operator, $\Callcomc$.
%
The resume rules of $\FelleisenC$ and $\FelleisenF$ show the
difference between the two operators. The $\FelleisenC$ operator
aborts the current continuation and reinstalls the then-current
continuation just like $\Callcc$, whereas the resumption of a
continuation captured by $\FelleisenF$ composes the current
continuation with the then-current continuation.
\citet{FelleisenFDM87} show that $\FelleisenF$ can simulate
$\FelleisenC$.
%
\[
\sembr{\FelleisenC} \defas \lambda m.\FelleisenF\,(\lambda k. m\,(\lambda v.\FelleisenF\,(\lambda\_.k~v)))
\]
%
The first application of $\FelleisenF$ has the effect of aborting the
current continuation, whilst the second application of $\FelleisenF$
aborts the invocation context.
\citet{FelleisenFDM87} also postulate that $\FelleisenC$ cannot express $\FelleisenF$.
\paragraph{\citeauthor{Landin98}'s J operator}
%
The J operator was introduced by Peter Landin in 1965 (making it the
world's \emph{first} first-class control operator) as a means for
translating jumps and labels in the statement-oriented language
\Algol{} into an expression-oriented
language~\cite{Landin65,Landin65a,Landin98}. Landin used the J
operator to account for the meaning of \Algol{} labels.
%
The following example due to \citet{DanvyM08} provides a flavour of
the correspondence between labels and J.
%
\[
\ba{@{}l@{~}l}
&\mathcal{S}\sembr{\keyw{begin}\;s_1;\;\keyw{goto}\;L;\;L:\,s_2\;\keyw{end}}\\
=& \lambda\Unit.\Let\;L \revto \J\,\mathcal{S}\sembr{s_2}\;\In\;\Let\;\Unit \revto \mathcal{S}\sembr{s_1}\,\Unit\;\In\;\Continue~L\,\Unit
\ea
\]
%
Here $\mathcal{S}\sembr{-}$ denotes the translation of statements. In the image,
the label $L$ manifests as an application of $\J$ and the
$\keyw{goto}$ manifests as an application of the continuation captured
by $\J$.
%
The operator extends the syntactic category of values with a new
form.
%
\[
V,W \in \ValCat ::= \cdots \mid \J
\]
%
The previous example hints at the fact that the J operator is quite
different from the previously considered undelimited control operators
in that the captured continuation is \emph{not} the current
continuation, but rather the continuation of the statically enclosing
$\lambda$-abstraction. In other words, $\J$ provides access to the
continuation of its caller.
%
To this effect, the continuation object produced by an application of
$\J$ may be thought of as a first-class variation of the return
statement commonly found in statement-oriented languages. Since it is
a first-class object it can be passed to another function, meaning
that any function can endow other functions with the ability to return
from it, e.g.
%
\[
\dec{f} \defas \lambda g. \Let\;return \revto \J\,(\lambda x.x) \;\In\; g~return;~\True
\]
%
If the function $g$ does not invoke its argument, then $\dec{f}$
returns $\True$, e.g.
\[
\dec{f}~(\lambda return.\False) \reducesto^+ \True
\]
%
However, if $g$ does apply its argument, then the value provided to
the application becomes the return value of $\dec{f}$, e.g.
%
\[
\dec{f}~(\lambda return.\Continue~return~\False) \reducesto^+ \False
\]
%
The function argument gets post-composed with the continuation of the
calling context.
%
The particular application $\J\,(\lambda x.x)$ is so idiomatic that it
has its own name: $\JI$, where $\keyw{I}$ is the identity function.
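To give this idiom a concrete, executable flavour, here is a hedged
Haskell sketch using \texttt{callCC} from the \texttt{transformers}
library's \texttt{Control.Monad.Trans.Cont}. Since $\Callcc$ is
expressible as $\lambda f.f\,\JI$ (an encoding we shall see below),
the continuation that \texttt{callCC} passes to its argument plays
precisely the role of $\JI$, i.e.\ a first-class return; the names
\texttt{f}, \texttt{g}, and \texttt{ret} mirror the example above and
are otherwise hypothetical.
%
\begin{verbatim}
import Control.Monad.Trans.Cont (Cont, evalCont, callCC)

-- A rendering of dec f above: grant the argument g a first-class
-- "return" for f's own caller, in the style of J (\x -> x).
f :: ((Bool -> Cont r Bool) -> Cont r Bool) -> Cont r Bool
f g = callCC $ \ret -> g ret >> return True

-- If g ignores its argument, f returns True:
--   evalCont (f (\_   -> return False)) == True
-- If g invokes it, the supplied value becomes f's result:
--   evalCont (f (\ret -> ret False))    == False
\end{verbatim}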
% Clearly, the return type of a continuation object produced by an $\J$
% application must be the same as the caller of $\J$. Thus to type $\J$
% we must track the type of calling context. Formally, we track the type
% of the context by extending the typing judgement relation with an
% additional singleton context $\Delta$. This context is modified by the
% typing rule for $\lambda$-abstraction and used by the typing rule for
% $\J$-applications. This is similar to type checking of return
% statements in statement-oriented programming languages.
% %
% \begin{mathpar}
% \inferrule*
% {\typ{\Gamma,x:A;B}{M : B}}
% {\typ{\Gamma;\Delta}{\lambda x.M : A \to B}}
% \inferrule*
% {~}
% {\typ{\Gamma;B}{\J : (A \to B) \to \Cont\,\Record{A;B}}}
% % \inferrule*
% % {\typ{\Gamma;\Delta}{V : A} \\ \typ{\Gamma;\Delta}{W : \Cont\,\Record{A;B}}}
% % {\typ{\Gamma;\Delta}{\Continue~W~V : B}}
% \end{mathpar}
%
Any meaningful application of $\J$ must appear under a
$\lambda$-abstraction, because the application captures its caller's
continuation. In order to capture the caller's continuation we
annotate the evaluation contexts of ordinary applications.
%
\begin{reductions}
\slab{Annotate} & \EC[(\lambda x.M)\,V] &\reducesto& \EC_\lambda[M[V/x]]\\
\slab{Capture} & \EC_{\lambda}[\mathcal{D}[\J\,W]] &\reducesto& \EC_{\lambda}[\mathcal{D}[\qq{\cont_{\Record{\EC_{\lambda};W}}}]]\\
\slab{Resume} & \EC[\Continue~\cont_{\Record{\EC';W}}\,V] &\reducesto& \EC'[W\,V]
\end{reductions}
%
% \dhil{The continuation object should have time $\Cont\,\Record{A;\Zero}$}
%
The $\slab{Capture}$ rule only applies if the application of $\J$
takes place inside an annotated evaluation context. The continuation
object produced by a $\J$ application encompasses the caller's
continuation $\EC_\lambda$ and the value argument $W$.
%
This continuation object may be invoked in \emph{any} context. An
invocation discards the current continuation $\EC$ and installs $\EC'$
instead with the $\J$-argument $W$ applied to the value $V$.
\citeauthor{Landin98} and \citeauthor{Thielecke02} noticed that $\J$
can be recovered from the special form
$\JI$~\cite{Thielecke02}. Taking $\JI$ to be a primitive, we can
translate $\J$ to a language with $\JI$ as follows.
%
\[
\sembr{\J} \defas (\lambda k.\lambda f.\lambda x.\Continue\;k\,(f\,x))\,(\JI)
\]
%
The term $\JI$ captures the caller continuation, which gets bound to
$k$. The shape of the residual term is as expected: when $\sembr{\J}$
is applied to a function, it returns another function, which when
applied ultimately invokes the captured continuation.
%
% Strictly speaking in our setting this encoding is not faithful,
% because we do not treat continuations as first-class functions,
% meaning the types are not going to match up. An application of the
% left hand side returns a continuation object, whereas an application
% of the right hand side returns a continuation function.
Let us end by remarking that the J operator is expressive enough to
encode a familiar control operator like $\Callcc$~\cite{Thielecke98}.
%
\[
\sembr{\Callcc} \defas \lambda f. f\,\JI
\]
%
\citet{Felleisen87b} has shown that the J operator can be
syntactically embedded using callcc.
%
\[
\sembr{\lambda x.M} \defas \lambda x.\Callcc\,(\lambda k.\sembr{M}[\J \mapsto \lambda f.\lambda y. k~(f\,y)])
\]
%
The key point here is that $\lambda$-abstractions are not translated
homomorphically. The occurrence of $\Callcc$ immediately under the
binder reifies the current continuation of the function, which is
precisely the caller continuation in the body $M$. In $M$ the symbol
$\J$ is substituted with a function that simulates $\J$ by
post-composing the captured continuation with the function argument
provided to $\J$.
\subsection{Delimited control operators}
%
The main problem with undelimited control is that it is the
programmatic embodiment of the proverb \emph{all or nothing}, in the
sense that an undelimited continuation always represents the entire
residual program from its point of capture. In its basic form
undelimited control does not offer the flexibility to reify only some
segments of the evaluation context.
%
Delimited control rectifies this problem by associating each control
operator with a control delimiter such that designated segments of the
evaluation context can be captured individually without interfering
with the context beyond the delimiter. This provides a powerful and
modular programmatic tool that enables programmers to isolate the
control flow of specific parts of their programs, and thus enables
local reasoning about the behaviour of control-infused program
segments.
%
One may argue that delimited control is, to an extent, more first-class
than undelimited control because, in contrast to undelimited control,
it provides more fine-grained control over the evaluation context.
%
% Essentially, delimited control adds the excluded middle: \emph{all,
% some, or nothing}.
In 1988 \citeauthor{Felleisen88} introduced the first control
delimiter, known as `prompt', as a companion to the composable control
operator F (alias control)~\cite{Felleisen88}.
%
\citeauthor{Felleisen88}'s line of work was driven by a dynamic
interpretation of composable continuations in terms of algebraic
manipulation of the control component of abstract machines. In the
context of abstract machines, a continuation is defined as a sequence
of frames, whose end is denoted by a prompt, and continuation
composition is concatenation of these
sequences~\cite{Felleisen87,FelleisenF86,FelleisenWFD88}.
%
The natural outcome of this interpretation is the control phenomenon
known as \emph{dynamic delimited control}, where the control operator
is dynamically bound by its delimiter. An application of a control
operator causes the machine to scour the control component to locate
the corresponding delimiter.
The following year, \citet{DanvyF89} introduced an alternative pair of
operators known as `shift' and `reset', where `shift' is the control
operator and `reset' is the control delimiter. Their line of work was
driven by a static interpretation of composable continuations in terms
of continuation passing style (CPS). In ordinary CPS a continuation is
represented as a function; however, there is no notion of composition,
because every function call must appear in tail position. The `shift'
operator enables composition of continuation functions as it provides
a means for abstracting over control contexts. Technically, this works
by iterating the CPS transform twice on the source program, where
`shift' provides access to continuations that arise from the second
transformation. The `reset' operator acts as the identity for
continuation functions, which effectively delimits the extent of
`shift', as in terms of CPS the identity function denotes the top-level
continuation.
%
This interpretation of composable continuations as functions naturally
leads to the control phenomenon known as \emph{static delimited
control}, where the control operator is statically bound by its
delimiter.
The machine interpretation and continuation passing style
interpretation of composable continuations were eventually connected
through defunctionalisation and refunctionalisation in a line of work
by \citeauthor{Danvy04a} and
collaborators~\cite{DanvyN01,AgerBDM03,Danvy04,AgerDM04,Danvy04a,AgerDM05,DanvyM09}.
% The following year, \citet{DanvyF89} introduced an alternative pair of
% operators known as `shift' and `reset', where `shift' is the control
% operator and `reset' is the control delimiter. Their line of work were
% driven by a static interpretation of composable continuations in terms
% of algebraic manipulation of continuations arising from hierarchical
% continuation passing style (CPS) transformations. In ordinary CPS a
% continuation is represented as a function, which is abortive rather
% than composable, because every function application appear in tail
% position.
% %
% The operators `shift' and `reset' were introduced as a programmatic
% way to manipulate and compose continuations. Algebraically `shift'
% corresponds to the composition operation for continuation functions,
% whereas `reset' corresponds to the identity
% element~\cite{DanvyF89,DanvyF90,DanvyF92}.
% %
% Technically, the operators operate on a meta layer, which is obtained
% by CPS transforming the image again. An indefinite amount of meta
% layers can be obtained by iterating the CPS transformation on its
% image, leading to a whole hierarchy of CPS.
% %
% %
% \dhil{Consider dropping the blurb about hierarchy/meta layers.}
Since the introduction of control/prompt and shift/reset, a whole
variety of alternative delimited control operators has appeared.
% Delimited control: Control delimiters form the basis for delimited
% control. \citeauthor{Felleisen88} introduced control delimiters in
% 1988, although allusions to control delimiters were made a year
% earlier by \citet{FelleisenFDM87} and in \citeauthor{Felleisen87}'s
% PhD dissertation~\cite{Felleisen87}. The basic idea was teased even
% earlier in \citeauthor{Talcott85}'s teased the idea of control
% delimiters in her PhD dissertation~\cite{Talcott85}.
% %
% Common Lisp resumable exceptions (condition system)~\cite{Steele90},
% F~\cite{FelleisenFDM87,Felleisen88}, control/prompt~\cite{SitaramF90},
% shift/reset~\cite{DanvyF89,DanvyF90}, splitter~\cite{QueinnecS91},
% fcontrol~\cite{Sitaram93}, catchcont~\cite{LongleyW08}, effect
% handlers~\cite{PlotkinP09}.
% Comparison of various delimited control
% operators~\cite{Shan04}. Simulation of delimited control using
% undelimited control~\cite{Filinski94}
\paragraph{\citeauthor{Felleisen88}'s control and prompt}
%
Control and prompt were introduced by \citeauthor{Felleisen88} in
1988~\cite{Felleisen88}. The control operator `control' is a
rebranding of the F operator, although the name `control' was first
introduced a little later by \citet{SitaramF90}. A prompt acts as a
control-flow barrier that delimits different parts of a program,
enabling programmers to manipulate and reason about control locally in
different parts of a program. The name `prompt' is intended to draw
connections to shell prompts, and how they act as barriers between the
user and operating system.
%
In this presentation both control and prompt appear as computation
forms.
%
\begin{syntax}
&M,N \in \CompCat &::=& \cdots \mid \Control~k.M \mid \Prompt~M
\end{syntax}
%
The $\Control~k.M$ expression reifies the context up to the nearest,
dynamically determined, enclosing prompt and binds it to $k$ inside of
$M$. A prompt is written using the sharp ($\Prompt$) symbol.
%
The prompt remains in place after the reification, and thus any
subsequent application of $\Control$ will be delimited by the same
prompt.
%
Presenting $\Control$ as a binding form may conceal the fact that it
is the same as $\FelleisenF$. However, the presentation here is close to
\citeauthor{SitaramF90}'s presentation, which in turn is close to
actual implementations of $\Control$.
The static semantics of control and prompt were absent from
\citeauthor{Felleisen88}'s original treatment. Later,
\citet{KameyamaY08} gave a polymorphic type system with answer
type modification for control and prompt (we will discuss answer type
modification when discussing shift/reset). It is also worth mentioning
that \citet{DybvigJS07} present a typed embedding of control and
prompt in Haskell (in fact, they present an entire general monadic
framework for implementing control operators based on the idea of
\emph{multi-prompts}, which are a slight generalisation of prompts;
we will revisit multi-prompts when we discuss splitter and cupto).
%
% \dhil{Mention Yonezawa and Kameyama's type system.}
% %
% \citet{DybvigJS07} gave a typed embedding of multi-prompts in
% Haskell. In the multi-prompt setting the prompts are named and an
% instance of $\Control$ is indexed by the prompt name of its designated
% delimiter.
% Typing them, particularly using a simple type system,
% affect their expressivity, because the type of the continuation object
% produced by $\Control$ must be compatible with the type of its nearest
% enclosing prompt -- this type is often called the \emph{answer} type
% (this terminology is adopted from typed continuation passing style
% transforms, where the codomain of every function is transformed to
% yield the type of whatever answer the entire program
% yields~\cite{MeyerW85}).
% %
% \dhil{Give intuition for why soundness requires the answer type to be fixed.}
% %
% In the static semantics we extend the typing judgement relation to
% contain an up front fixed answer type $A$.
% %
% \begin{mathpar}
% \inferrule*
% {\typ{\Gamma;A}{M : A}}
% {\typ{\Gamma;A}{\Prompt~M : A}}
% \inferrule*
% {~}
% {\typ{\Gamma;A}{\Control : (\Cont\,\Record{A;A} \to A) \to A}}
% \end{mathpar}
% %
% A prompt has the same type as its computation constituent, which in
% turn must have the same type as fixed answer type.
% %
% Similarly, the type of $\Control$ is governed by the fixed answer
% type. Discarding the answer type reveals that $\Control$ has the same
% typing judgement as $\FelleisenF$.
% %
The dynamic semantics for control and prompt consists of three rules:
1) returning a value through a prompt, 2) continuation capture, and 3)
continuation invocation.
%
\begin{reductions}
\slab{Value} &
\Prompt~V &\reducesto& V\\
\slab{Capture} &
\Prompt~\EC[\Control~k.M] &\reducesto& \Prompt~M[\qq{\cont_{\EC}}/k], \text{ where $\EC$ contains no \Prompt}\\
\slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
The \slab{Value} rule accounts for the case when the computation
constituent of $\Prompt$ has been reduced to a value, in which case
the prompt is removed and the value is returned.
%
The \slab{Capture} rule states that an application of $\Control$
captures the current continuation up to the nearest enclosing
prompt. The current continuation (up to the nearest prompt) is also
aborted. If we erase $\Prompt$ from the rule, then it is clear that
$\Control$ has the same dynamic behaviour as $\FelleisenF$.
%
It is evident from the \slab{Resume} rule that control and prompt are
an instance of a dynamic control operator, because resuming the
continuation object produced by $\Control$ does not install a new
prompt.
To illustrate $\Prompt$ and $\Control$ in action, let us consider a
few simple examples.
%
\begin{derivation}
& 1 + \Prompt~2 + (\Control~k.3 + k~0) + (\Control~k'.k'~4)\\
\reducesto^+& \reason{Capture $\EC = 2 + [\,] + (\Control~k'.k'~4)$}\\
& 1 + \Prompt~3+\Continue~\cont_{\EC}~0\\
\reducesto & \reason{Resume with 0}\\
& 1 + \Prompt~3 + (2 + 0) + (\Control~k'.k'~4)\\
\reducesto^+ & \reason{Capture $\EC' = 5 + [\,]$}\\
& 1 + \Prompt~\Continue~\cont_{\EC'}~4\\
\reducesto^+ & \reason{Resume with 4}\\
& 1 + \Prompt~5 + 4\\
\reducesto^+ & \reason{\slab{Value} rule}\\
& 1 + 9 \reducesto 10
\end{derivation}
%
The continuation captured by the either application of $\Control$ is
oblivious to the continuation $1 + [\,]$ of $\Prompt$. Since the
captured continuation is composable it returns to its call site. The
invocation of the captured continuation $k$ returns the value 0, but
splices the captured context into the context $3 + [\,]$. The second
application of $\Control$ captures the new context up to the
delimiter. The continuation is immediately applied to the value 4,
which causes the captured context to be reinstated with the value 4
plugged in. Ultimately the delimited context reduces to the value $9$,
after which the prompt $\Prompt$ gets eliminated, and the continuation
of the $\Prompt$ is applied to the value $9$, resulting in the final
result $10$.
Let us consider a slight variation of the previous example.
%
\begin{derivation}
& 1 + \Prompt~2 + (\Control~k.3 + k~0) + (\Control~k'.4)\\
\reducesto^+& \reason{Capture $\EC = 2 + [\,] + (\Control~k'.4)$}\\
& 1 + \Prompt~3+\Continue~\cont_{\EC}~0\\
\reducesto & \reason{Resume with 0}\\
& 1 + \Prompt~3 + (2 + 0) + (\Control~k'.4)\\
\reducesto^+ & \reason{Capture $\EC' = 5 + [\,]$}\\
& 1 + \Prompt~4\\
\reducesto^+ & \reason{\slab{Value} rule}\\
& 1 + 4 \reducesto 5
\end{derivation}
%
Here the computation constituent of the second application of
$\Control$ drops the captured continuation, which has the effect of
erasing the previous computation, ultimately resulting in the value
$5$ rather than $10$.
% \begin{derivation}
% & 1 + \Prompt~2 + (\Control~k.\Continue~k~0) + (\Control~k'. 0)\\
% \reducesto^+& \reason{Capture $\EC = 2 + [\,] + (\Control~k'.0)$}\\
% & 1 + \Prompt~\Continue~\cont_{\EC}~0\\
% \reducesto & \reason{Resume with 0}\\
% & 1 + \Prompt~2 + 0 + (\Control~k'. 0)\\
% \reducesto^+ & \reason{Capture $\EC' = 2 + [\,]$}\\
% & 1 + \Prompt~0 \\
% \reducesto & \reason{\slab{Value} rule}\\
% & 1 + 0 \reducesto 1
% \end{derivation}
%
The continuation captured by the first application of $\Control$
contains another application of $\Control$. The application of the
continuation immediately reinstates the captured context, filling the
hole left by the first instance of $\Control$ with the value $0$. The
second application of $\Control$ captures the remainder of the
computation up to $\Prompt$. However, the captured context gets
discarded, because the continuation $k'$ is never invoked.
%
A slight variation on control and prompt is $\Controlz$ and
$\Promptz$~\cite{Shan04}. The main difference is that $\Controlz$
removes its corresponding prompt, i.e.
%
\begin{reductions}
% \slab{Value} &
% \Prompt~V &\reducesto& V\\
\slab{Capture_0} &
\Promptz~\EC[\Controlz~k.M] &\reducesto& M[\qq{\cont_{\EC}}/k], \text{ where $\EC$ contains no \Promptz}\\
% \slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
Higher-order programming with control and prompt (and delimited
control in general) is fragile, because the body of a higher-order
function may inadvertently trap instances of control in its functional
arguments.
%
This observation led \citet{SitaramF90} to define an indexed family of
control and prompt pairs such that instances of control and prompt can
be layered on top of one another. The idea is that the index on each
pair denotes their level $i$ such that $\Control^i$ matches
$\Prompt^i$ and may capture any other instances of $\Prompt^j$ where
$j < i$.
% \dhil{Mention control0/prompt0 and
% the control hierarchy}
\paragraph{\citeauthor{DanvyF90}'s shift and reset} Shift and reset
first appeared in a technical report by \citeauthor{DanvyF89} in
1989. Although, perhaps the most widely known account of shift and
reset appeared in \citeauthor{DanvyF90}'s seminal work on abstracting
control the following year~\cite{DanvyF90}.
%
Shift and reset differ from control and prompt in that the contexts
abstracted by shift are statically scoped by reset.
% As with control and prompt, in our setting, shift appears as a value,
% whilst reset appear as a computation.
In our setting both shift and reset appear as computation forms.
%
\begin{syntax}
% & V, W &::=& \cdots \mid \shift\\
& M, N \in \CompCat &::=& \cdots \mid \shift\; k.M \mid \reset{M}
\end{syntax}
%
The $\shift$ construct captures the continuation delimited by an
enclosing $\reset{-}$ and binds it to $k$ in the computation $M$.
\citeauthor{DanvyF89}'s original development of shift and reset stands
out from the previous developments of control operators, as they
presented a type system for shift and reset, whereas previous control
operators were originally studied in untyped settings.
%
The standard inference-based approach to type
checking~\cite{Plotkin81,Plotkin04a} is inadequate for type checking
shift and reset, because shift may alter the \emph{answer type} of the
expression (the terminology `answer type' is adopted from typed
continuation passing style transforms, where the codomain of every
function is transformed to yield the type of whatever answer the
entire program yields~\cite{MeyerW85}).
%
To capture the potent power of shift in the type system they
introduced the notion of \emph{answer type
modification}~\cite{DanvyF89}.
%
The addition of answer type modification changes type judgement to be
a five place relation.
%
\[
\typ{\Gamma;B}{M : A; B'}
\]
%
This would be read as: in a context $\Gamma$ where the original result
type was $B$, the type of $M$ is $A$, and modifies the result type to
$B'$. In this system the typing rule for $\shift$ is as follows.
%
\begin{mathpar}
\inferrule*
{\typ{\Gamma,k : A / C \to B / C;D}{M : D;B'}}
{\typ{\Gamma;B}{\shift\;k.M : A;B'}}
\end{mathpar}
%
Here the function type constructor $-/- \to -/-$ has been endowed with
the domain and codomain of the continuation. The left hand side of
$\to$ contains the domain type of the function and the codomain of the
continuation, respectively. The right hand side contains the domain of
the continuation and the codomain of the function, respectively.
Answer type modification is a powerful feature that can be used to
type embedded languages, an illustrious application of this is
\citeauthor{Danvy98}'s typed $\dec{printf}$~\cite{Danvy98}. A
polymorphic extension of answer type modification has been
investigated by \citet{AsaiK07}, \citet{KiselyovS07} developed a
substructural type system with answer type modification, whilst
\citet{KoboriKK16} demonstrated how to translate from a source
language with answer type modification into a system without using
typed multi-prompts.
Differences between shift/reset and control/prompt manifest in the
dynamic semantics as well.
%
\begin{reductions}
\slab{Value} & \reset{V} &\reducesto& V\\
\slab{Capture} & \reset{\EC[\shift\;k.M]} &\reducesto& \reset{M[\qq{\cont_{\reset{\EC}}}/k]}, \text { where $\EC$ contains no $\reset{-}$}\\
% \slab{Resume} & \reset{\EC[\Continue~\cont_{\reset{\EC'}}~V]} &\reducesto& \reset{\EC[\reset{\EC'[V]}]}\\
\slab{Resume} & \Continue~\cont_{\reset{\EC}}~V &\reducesto& \reset{\EC[V]}\\
\end{reductions}
%
The key difference between \citeauthor{Felleisen88}'s control/prompt
and shift/reset is that the $\slab{Capture}$ rule for the latter
includes a copy of the delimiter in the reified continuation. This
delimiter gets installed along with the captured context $\EC$ when
the continuation object is resumed. The extra reset has ramifications
for the operational behaviour of subsequent occurrences of $\shift$ in
$\EC$. To put this into perspective, let us revisit the second
control/prompt example with shift/reset instead.
%
\begin{derivation}
& 1 + \reset{2 + (\shift\;k.3 + k\,0) + (\shift\;k'.4)}\\
\reducesto^+& \reason{Capture $\EC = 2 + [\,] + (\shift\;k.4)$}\\
& 1 + \reset{\Continue~\cont_{\EC}~0}\\
\reducesto & \reason{Resume with 0}\\
& 1 + \reset{3 + \reset{2 + 0 + (\shift\;k'. 4)}}\\
\reducesto^+ & \reason{Capture $\EC' = 2 + [\,]$}\\
& 1 + \reset{3 + \reset{4}} \\
\reducesto^+ & \reason{\slab{Value} rule}\\
& 1 + \reset{7} \reducesto^+ 8 \\
\end{derivation}
%
Contrast this result with the result $5$ obtained when using
control/prompt. In essence the insertion of a new reset after
resumption has the effect of remembering the local context of the
previous continuation invocation.
This difference naturally raises the question whether shift/reset and
control/prompt are interdefinable or exhibit essential expressivity
differences. \citet{Shan04} answered this question demonstrating that
shift/reset and control/prompt are macro-expressible. The translations
are too intricate to be reproduced here, however, it is worth noting
that \citeauthor{Shan04} were working in the untyped setting of Scheme
and the translation of control/prompt made use of recursive
continuations. \citet{BiernackiDS05} typed and reimplemented this
translation in Standard ML New Jersey~\cite{AppelM91}, using
\citeauthor{Filinski94}'s encoding of shift/reset in terms of callcc
and state~\cite{Filinski94}.
%
As with control and prompt there exist various variation of shift and
reset. \citet{DanvyF89} also considered $\shiftz$ and
$\resetz{-}$. The operational difference between $\shiftz$/$\resetz{-}$
and $\shift$/$\reset{-}$ manifests in the capture rule.
%
\begin{reductions}
\slab{Capture_0} & \resetz{\EC[\shiftz\,k.M]} &\reducesto& M[\qq{\cont_{\resetz{\EC}}}/k], \text { where $\EC$ contains no $\resetz{-}$}\\
\end{reductions}
%
The control reifier captures the continuation up to and including its
delimiter, however, unlike $\shift$, it removes the control delimiter
from the current evaluation context. Thus $\shiftz$/$\resetz{-}$ are
`dynamic' variations on $\shift$/$\reset{-}$. \citet{MaterzokB12}
introduced $\dollarz{-}{-}$ (pronounced ``dollar0'') as an
alternative control delimiter for $\shiftz$.
\begin{reductions}
\slab{Value_{\$_0}} & \dollarz{x.N}{V} &\reducesto& N[V/x]\\
\slab{Capture_{\$_0}} & \dollarz{x.N}{\EC[\shiftz\,k.M]} &\reducesto& M[\qq{\cont_{(\dollarzh{x.N}{\EC})}}/k],\\
&&&\quad\text{where $\EC$ contains no $\reset{-\mid-}$}\\
\slab{Resume_{\$_0}} & \Continue~\cont_{(\dollarz{x.N}{\EC})}~V &\reducesto& \dollarz{x.N}{\EC[V]}\\
\end{reductions}
%
The intuition here is that $\dollarz{x.N}{M}$ evaluates $M$ to some
value $V$ in a fresh context, and then continues as $N$ with $x$ bound
to $V$. Thus it builds in a form of ``success continuation'' that
makes it possible to post-process the result of a reset0 term. In
fact, reset0 is macro-expressible in terms of
dollar0~\cite{MaterzokB12}.
%
\[
\sembr{\resetz{M}} \defas \dollarz{x.x}{\sembr{M}}\\
\]
%
By taking the success continuation to be the identity function dollar0
becomes operationally equivalent to reset0. As it turns out reset0 and
shift0 (together) can macro-express dollar0~\cite{MaterzokB12}.
%
\[
\sembr{\dollarz{x.N}{M}} \defas (\lambda k.\resetz{(\lambda x.\shiftz~z.k~x)\,\sembr{M}})\,(\lambda x.\sembr{N})
\]
%
This translation is a little more involved. The basic idea is to first
explicit pass in the success continuation, then evaluate $M$ under a
reset to yield value which gets bound to $x$, and then subsequently
uninstall the reset by invoking $\shiftz$ and throwing away the
captured continuation, afterwards we invoke the success continuation
with the value $x$.
% Even though the two constructs are equi-expressive (in the sense of macro-expressiveness) there are good reason for preferring dollar0 over reset0 Since the two constructs are equi-expressive, the curious reader might
% wonder why \citet{MaterzokB12} were
% \dhil{Maybe mention the implication is that control/prompt has CPS semantics.}
% \dhil{Mention shift0/reset0, dollar0\dots}
% \begin{reductions}
% % \slab{Value} & \reset{V} &\reducesto& V\\
% \slab{Capture} & \reset{\EC[\shift\,k.M]} &\reducesto& M[\cont_{\reset{\EC}}/k]\\
% % \slab{Resume} & \Continue~\cont_{\reset{\EC}}~V &\reducesto& \reset{\EC[V]}\\
% \end{reductions}
%
\paragraph{\citeauthor{QueinnecS91}'s splitter} The `splitter' control
operator reconciles abortive continuations and composable
continuations. It was introduced by \citet{QueinnecS91} in 1991. The
name `splitter' is derived from it operational behaviour, as an
application of `splitter' marks evaluation context in order for it to
be split into two parts, where the context outside the mark represents
the rest of computation, and the context inside the mark may be
reified into a delimited continuation. The operator supports two
operations `abort' and `calldc' to control the splitting of evaluation
contexts. The former has the effect of escaping to the outer context,
whilst the latter reifies the inner context as a delimited
continuation (the operation name is short for ``call with delimited
continuation'').
Splitter and the two operations abort and calldc are value forms.
%
\[
V,W \in \ValCat ::= \cdots \mid \splitter \mid \abort \mid \calldc
\]
%
In their treatment of splitter, \citeauthor{QueinnecS91} gave three
different presentations of splitter. The presentation that I have
opted for here is close to their second presentation, which is in
terms of multi-prompt continuations. This variation of splitter admits
a pleasant static semantics too. Thus, we further extend the syntactic
categories with the machinery for first-class prompts.
%
\begin{syntax}
& A,B \in \TypeCat &::=& \cdots \mid \prompttype~A \smallskip\\
& V,W \in \ValCat &::=& \cdots \mid p\\
& M,N \in \CompCat &::=& \cdots \mid \Prompt_V~M
\end{syntax}
%
The type $\prompttype~A$ classifies prompts whose answer type is
$A$. Prompt names are first-class values and denoted by $p$. The
computation $\Prompt_V~M$ denotes a computation $M$ delimited by a
parameterised prompt, whose value parameter $V$ is supposed to be a
prompt name.
%
The static semantics of $\splitter$, $\abort$, and $\calldc$ are as
follows.
%
\begin{mathpar}
\inferrule*
{~}
{\typ{\Gamma}{\splitter : (\prompttype~A \to A) \to A}}
\inferrule*
{~}
{\typ{\Gamma}{\abort : \prompttype~A \times (\UnitType \to A) \to B}}
\inferrule*
{~}
{\typ{\Gamma}{\calldc : \prompttype~A \times ((B \to A) \to B) \to B}}
\end{mathpar}
%
In this presentation, the operator and the two operations all amount
to special higher-order function symbols. The argument to $\splitter$
is parameterised by a prompt name. This name is injected by
$\splitter$ upon application. The operations $\abort$ and $\calldc$
both accept as their first argument the name of the delimiting
prompt. The second argument of $\abort$ is a thunk, whilst the second
argument of $\calldc$ is a higher-order function, which accepts a
continuation as its argument.
For the sake of completeness the prompt primitives are typed as
follows.
%
\begin{mathpar}
\inferrule*
{~}
{\typ{\Gamma,p:\prompttype~A}{p : \prompttype~A}}
\inferrule*
{\typ{\Gamma}{V : \prompttype~A} \\ \typ{\Gamma}{M : A}}
{\typ{\Gamma}{\Prompt_V~M : A}}
\end{mathpar}
%
The dynamic semantics of this presentation require a bit of
generativity in order to generate fresh prompt names. Therefore the
reduction relation is extended with an additional component to keep
track of which prompt names have already been allocated.
%
\begin{reductions}
\slab{AppSplitter} & \splitter~V,\rho &\reducesto& \Prompt_p~V\,p,\rho \uplus \{p\}\\
\slab{Value} & \Prompt_p~V,\rho &\reducesto& V,\rho\\
\slab{Abort} & \Prompt_p~\EC[\abort\,\Record{p;V}],\rho &\reducesto& V\,\Unit,\rho\\%, \quad \text{where $\EC$ contains no $\Prompt_p$}\\
\slab{Capture} & \Prompt_p~\EC[\calldc\,\Record{p;V}] &\reducesto& V~\qq{\cont_{\EC}},\rho\\
\slab{Resume} & \Continue~\cont_{\EC}~V,\rho &\reducesto& \EC[V],\rho
\end{reductions}
%
We see by the $\slab{AppSplitter}$ rule that an application of
$\splitter$ generates a fresh named prompt, whose name is applied on
the function argument.
%
The $\slab{Value}$ rule is completely standard.
%
The $\slab{Abort}$ rule show that an invocation of $\abort$ causes the
current evaluation context $\EC$ up to and including the nearest
enclosing prompt.
%
The next rule $\slab{Capture}$ show that $\calldc$ captures and aborts
the context up to the nearest enclosing prompt. The captured context
is applied on the function argument of $\calldc$. As part of the
operation the prompt is removed. % Thus, $\calldc$ behaves as a
% delimited variation of $\Callcc$.
%
It is clear by the prompt semantics that an invocation of either
$\abort$ and $\calldc$ is only well-defined within the dynamic extent
of $\splitter$. Since the prompt is eliminated after use of either
operation subsequent operation invocations must be guarded by a new
instance of $\splitter$.
Let us consider an example using both $\calldc$ and $\abort$.
%
\begin{derivation}
&2 + \splitter\,(\lambda p.2 + \splitter\,(\lambda p'.3 + \calldc\,\Record{p;\lambda k. k~0 + \abort\,\Record{p';\lambda\Unit.k~1}})),\emptyset\\
\reducesto& \reason{\slab{AppSplitter}}\\
&2 + \Prompt_p~2 + \splitter\,(\lambda p'.3 + \calldc\,\Record{p;\lambda k. k~0 + \abort\,\Record{p';\lambda\Unit.k~1}}), \{p\}\\
\reducesto& \reason{\slab{AppSplitter}}\\
&2 + \Prompt_p~2 + \Prompt_{p'}~3 + \calldc\,\Record{p;\lambda k. k~0 + \abort\,\Record{p';\lambda\Unit.k~1}}, \{p,p'\}\\
\reducesto& \reason{\slab{Capture} $\EC = 2 + \Prompt_{p'}~3 + [\,]$}\\
&2 + k~0 + \abort\,\Record{p';\lambda\Unit.k~1}, \{p,p'\}\\
\reducesto& \reason{\slab{Resume} $\EC$ with $0$}\\
&2 + 2 + \Prompt_{p'}~3 + \abort\,\Record{p';\lambda\Unit.\qq{\cont_{\EC}}\,1}, \{p,p'\}\\
\reducesto^+& \reason{\slab{Abort}}\\
& 4 + \qq{\cont_{\EC}}\,1, \{p,p'\}\\
\reducesto& \reason{\slab{Resume} $\EC$ with $1$}\\
& 4 + 2 + \Prompt_{p'}~3 + 1, \{p,p'\}\\
\reducesto^+& \reason{\slab{Value}}\\
& 6 + 4, \{p,p'\} \reducesto 10, \{p,p'\}
\end{derivation}
%
The important thing to observe here is that the application of
$\calldc$ skips over the inner prompt and reifies it as part of the
continuation. This behaviour stands differ from the original
formulations of control/prompt, shift/reset. The first application of
$k$ restores the context with the prompt. The $\abort$ application
erases the evaluation context up to this prompt, however, the body of
the functional argument to $\abort$ reinvokes the continuation $k$
which restores the prompt context once again.
\citet{MoreauQ94} proposed a variation of splitter called
\emph{marker}, which is also built on top of multi-prompt
semantics. The key difference is that the control reifier strips the
reified context of all prompts.
% \begin{derivation}
% &1 + \splitter\,(\lambda p.2 + \splitter\,(\lambda p'.3 + \calldc\,\Record{p';\lambda k. k\,0 + \calldc\,\Record{p';\lambda k'. k\,(k'\,1)}}))), \emptyset\\
% \reducesto& \reason{\slab{AppSplitter}}\\
% &1 + \Prompt_p~2 + \splitter\,(\lambda p'.3 + \calldc\,\Record{p;\lambda k.k~1 + \calldc\,\Record{p';\lambda k'. k\,(k'~1)}})), \{p\}\\
% \reducesto& \reason{\slab{AppSplitter}}\\
% &1 + \Prompt_p~2 + \Prompt_{p'} 3 + \calldc\,\Record{p';\lambda k.k~0 + \calldc\,\Record{p';\lambda k'. k\,(k'~1)}}, \{p,p'\}\\
% \reducesto& \reason{\slab{Capture} $\EC = 2 + \Prompt_{p'}~3 + [\,]$}, \{p,p'\}\\
% &1 + ((\lambda k.k~0 + \calldc\,\Record{p';\lambda k'. k\,(k'~1)})\,\qq{\cont_\EC}),\{p,p'\}\\
% \reducesto^+& \reason{\slab{Resume} $\EC$ with $0$}\\
% &1 + 2 + \Prompt_{p'}~3 + 0 + \calldc\,\Record{p';\lambda k'. \qq{\cont_\EC}\,(k'~1)}\\
% \reducesto^+& \reason{\slab{Capture} $\EC' = 3 + [\,]$}\\
% &3 + (\lambda k'. \qq{\cont_\EC}\,(k'~1)\,\qq{\cont_{\EC'}})\\
% \reducesto^+& \reason{\slab{Resume} $\EC'$ with $1$}\\
% &3 + \qq{\cont_\EC}\,(3 + 1)\\
% \reducesto^+& \reason{\slab{Resume} $\EC$ with $4$}\\
% &3 + 2 + \Prompt_{p'}~3 + 4\\
% \reducesto^+& \reason{\slab{Value}}\\
% &5 + 7 \reducesto 13
% \end{derivation}
%
% \begin{reductions}
% \slab{Value} & \splitter~abort~calldc.V &\reducesto& V\\
% \slab{Throw} & \splitter~abort~calldc.\EC[\,abort~V] &\reducesto& V~\Unit\\
% \slab{Capture} &
% \splitter~abort~calldc.\EC[calldc~V] &\reducesto& V~\qq{\cont_{\EC}} \\
% \slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
% \end{reductions}
\paragraph{Spawn} The spawn control operator appeared in a paper
published by \citet{HiebD90} in 1990. It is designed for using
continuations to program tree-based concurrency. Syntactically, spawn
is just a function symbol (like callcc), whose operational behaviour
establishes the root of a process tree, and passes the
\emph{controller} for the tree to its argument. As we will see shortly
a controller is a higher-order function, which grants its argument
access to the continuation of a process.
We add $\Spawn$ as a value form.
%
\begin{syntax}
&V,W \in \ValCat &::=& \Spawn
\end{syntax}
%
\citet{HiebD90} do not give a static semantics for $\Spawn$. Their
dynamic semantics depend on an extension reminiscent of multi-prompts.
%
\begin{syntax}
&\ell \in \mathcal{L} &&\\
&M,N \in \CompCat &::=& \ell : M \mid V\,\reflect\,\ell
\end{syntax}
%
The set $\mathcal{L}$ is some countably set of labels.
%
The expression ($\ell : M$) is called a \emph{labelled} expression. It
essentially plays the role of prompt. The expression
$(V\,\reflect\,\ell)$ is called a control expression. The operator
$\reflect$ is a control reifier which captures the continuation up to
the label $\ell$ and supplies this continuation to $V$.
%
\begin{reductions}
\slab{AppSpawn} & \Spawn~V,\rho &\reducesto& \ell : V\,(\lambda f. f\,\reflect\,\ell), \{\ell\} \uplus \rho\\
\slab{Value} & \ell : V,\rho &\reducesto& V,\rho\\
\slab{Capture} & \ell : \EC[V\,\reflect\,\ell],\rho &\reducesto& V\,\qq{\cont_{\ell : \EC}},\rho\\
\slab{Resume} & \Continue~\cont_{\ell : \EC}~V,\rho &\reducesto& \ell : \EC[V],\rho
\end{reductions}
%
The $\slab{AppSpawn}$ rule generates a fresh $\ell$ and applies the
functional value $V$ the controller for process tree. By the
$\slab{Capture}$ rule, an invocation of the controller causes the
evaluation context up to the matching label $\ell$ to be reified as a
continuation. This continuation gets passed to the functional value of
the control expression. The captured continuation contains the label
$\ell$, and as specified by the $\slab{Resume}$ rule an invocation of
the continuation causes this label to be reinstalled.
The following example usage of $\Spawn$ is a slight variation on an
example due to \citet{HiebDA94}.
%
\begin{derivation}
& 1 \cons (\Spawn\,(\lambda c. 2 \cons (c\,(\lambda k. 3 \cons k\,(k\,\nil))))), \emptyset\\
\reducesto& \reason{\slab{AppSpawn}}\\
&1 \cons (\ell : (\lambda c. 2 \cons (c\,(\lambda k. 3 \cons k\,(k\,\nil))))\,(\lambda f.f \reflect \ell)), \{\ell\}\\
\reducesto& \reason{$\beta$-reduction}\\
&1 \cons (\ell : 2 \cons ((\lambda f.f \reflect \ell)\,(\lambda k. 3 \cons k\,(k\,\nil)))), \{\ell\}\\
\end{derivation}
%
\begin{derivation}
\reducesto& \reason{$\beta$-reduction}\\
&1 \cons (\ell : 2 \cons ((\lambda k. 3 \cons k\,(k\,\nil)) \reflect \ell)), \{\ell\}\\
\reducesto& \reason{\slab{Capture} $\EC = 2 \cons [\,]$}\\
& 1 \cons 3 \cons \qq{\cont_{\EC}}\,(\qq{\cont_{\EC}}\,\nil), \{\ell\}\\
\reducesto& \reason{\slab{Resume} $\EC$ with $\nil$}\\
&1 \cons 3 \cons \qq{\cont_{\EC}}\,(\ell : 2 \cons \nil), \{\ell\}\\
\reducesto^+& \reason{\slab{Value}}\\
&1 \cons 3 \cons \qq{\cont_{\EC}}\,[2], \{\ell\}\\
\reducesto^+& \reason{\slab{Resume} $\EC$ with $[2]$}\\
&1 \cons 3 \cons (\ell : 2 \cons [2]), \{\ell\}\\
\reducesto^+& \reason{\slab{Value}}\\
&1 \cons 3 \cons [2,2], \{\ell\} \reducesto^+ [1,3,2,2], \{\ell\}
\end{derivation}
%
When the controller $c$ is invoked the current continuation is
$1 \cons (\ell : 2 \cons [\,])$. The control expression reifies the
$\ell : 2 \cons [\,]$ portion of the continuation and binds it to
$k$. The first invocation of $k$ reinstates the reified portion and
computes the singleton list $[2]$ which is used as argument to the
second invocation of $k$.
Both \citet{HiebD90} and \citet{HiebDA94} give several concurrent
programming examples with spawn. They show how
parallel-or~\cite{Plotkin77} can be codified as a macro using spawn
(and a parallel invocation primitive \emph{pcall}).
\paragraph{\citeauthor{Sitaram93}'s fcontrol} The control operator
`fcontrol' was introduced by \citet{Sitaram93} in 1993. It is a
refinement of control0/prompt0, and thus, it is a dynamic delimited
control operator. The main novelty of fcontrol is that it shifts the
handling of continuations from control capture operator to the control
delimiter. The prompt interface for fcontrol lets the programmer
attach a handler to it. This handler is activated whenever a
continuation captured.
%
\citeauthor{Sitaram93}'s observation was that with previous control
operators the handling of control happens at continuation capture
point, meaning that the control handling logic gets intertwined with
application logic. The inspiration for the interface of fcontrol and
its associated prompt came from exception handlers, where the handling
of exceptions is separate from the invocation site of
exceptions~\cite{Sitaram93}.
The operator fcontrol is a value and prompt with handler is a
computation.
%
\begin{syntax}
& V, W \in \ValCat &::=& \cdots \mid \fcontrol\\
& M, N \in \CompCat &::=& \cdots \mid \fprompt~V.M
\end{syntax}
%
As with $\Callcc$, the value $\fcontrol$ may be regarded as a special
unary function symbol. The syntax $\fprompt$ denotes a prompt (in
\citeauthor{Sitaram93}'s terminology it is called run). The value
constituent of $\fprompt$ is the control handler. It is a binary
function, that gets applied to the argument of $\fcontrol$ and the
continuation up to the prompt.
%
The dynamic semantics elucidate this behaviour formally.
%
\begin{reductions}
\slab{Value} &
\fprompt~V.W &\reducesto& W\\
\slab{Capture} &
\fprompt~V.\EC[\fcontrol~W] &\reducesto& V~W~\qq{\cont_{\EC}}, \text{ where $\EC$ contains no \fprompt}\\
\slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
The $\slab{Value}$ is similar to the previous $\slab{Value}$
rules. The interesting rule is the $\slab{Capture}$. When $\fcontrol$
is applied to some value $W$ the enclosing context $\EC$ gets reified
and aborted up to the nearest enclosing prompt, which invokes the
handler $V$ with the argument $W$ and the continuation.
Consider the following, slightly involved, example.
%
\begin{derivation}
&2 + \fprompt\,(\lambda y~k'.1 + k'~y).\fprompt\,(\lambda x~k. x + k\,(\fcontrol~k)).3 + \fcontrol~1\\
\reducesto& \reason{\slab{Capture} $\EC = 3 + [\,]$}\\
&2 + \fprompt\,(\lambda y~k'.1 + k'~y).(\lambda x~k. x + k\,(\fcontrol~k))\,1\,\qq{\cont_{\EC}}\\
\reducesto^+& \reason{\slab{Capture} $\EC' = 1 + \qq{\cont_{\EC}}\,[\,]$}\\
&2 + (\lambda k~k'.k'\,(k~1))\,\qq{\cont_{\EC}}\,\qq{\cont_{\EC'}}\\
\reducesto^+& \reason{\slab{Resume} $\EC$ with $1$}\\
&2 + \qq{\cont{\EC'}}\,(3 + 1)\\
\reducesto^+& \reason{\slab{Resume} $\EC'$ with $4$}\\
&2 + 1 + \qq{\cont_{\EC}}\,4\\
\reducesto^+& \reason{\slab{Resume} $\EC$ with 4}\\
&3 + 3 + 4 \reducesto^+ 10
\end{derivation}
%
This example makes use of nontrivial control manipulation as it passes
a captured continuation around. However, the point is that the
separation of the handling of continuations from their capture makes
it considerably easier to implement complicated control idioms,
because the handling code is compartmentalised.
\paragraph{Cupto} The control operator cupto is a variation of
control0/prompt0 designed to fit into the typed ML-family of
languages. It was introduced by \citet{GunterRR95} in 1995. The name
cupto is an abbreviation for ``control up to''~\cite{GunterRR95}.
%
The control operator comes with a set of companion constructs, and
thus, augments the syntactic categories of types, values, and
computations.
%
\begin{syntax}
& A,B \in \TypeCat &::=& \cdots \mid \prompttype~A \smallskip\\
& V,W \in \ValCat &::=& \cdots \mid p \mid \newPrompt\\
& M,N \in \CompCat &::=& \cdots \mid \Set\;V\;\In\;N \mid \Cupto~V~k.M
\end{syntax}
%
The type $\prompttype~A$ is the type of prompts. It is parameterised
by an answer type $A$ for the prompt context. Prompts are first-class
values, which we denote by $p$. The construct $\newPrompt$ is a
special function symbol, which returns a fresh prompt. The computation
form $\Set\;V\;\In\;N$ activates the prompt $V$ to delimit the dynamic
extent of continuations captured inside $N$. The $\Cupto~V~k.M$
computation binds $k$ to the continuation up to (the first instance
of) the active prompt $V$ in the computation $M$.
\citet{GunterRR95} gave a Hindley-Milner type
system~\cite{Hindley69,Milner78} for $\Cupto$, since they were working
in the context of ML languages. I do not reproduce the full system
here, only the essential rules for the $\Cupto$ constructs.
%
\begin{mathpar}
\inferrule*
{~}
{\typ{\Gamma,p:\prompttype~A}{p : \prompttype~A}}
\inferrule*
{~}
{\typ{\Gamma}{\newPrompt} : \UnitType \to \prompttype~A}
\inferrule*
{\typ{\Gamma}{V : \prompttype~A} \\ \typ{\Gamma}{N : A}}
{\typ{\Gamma}{\Set\;V\;\In\;N : A}}
\inferrule*
{\typ{\Gamma}{V : \prompttype~B} \\ \typ{\Gamma,k : A \to B}{M : B}}
{\typ{\Gamma}{\Cupto\;V\;k.M : A}}
\end{mathpar}
%
%val cupto : 'b prompt -> (('a -> 'b) -> 'b) -> 'a
%
The typing rule for $\Set$ uses the type embedded in the prompt to fix
the type of the whole computation $N$. Similarly, the typing rule for
$\Cupto$ uses the prompt type of its value argument to fix the answer
type for the continuation $k$. % The type of the $\Cupto$ expression is
% the same as the domain of the continuation, which at first glance may
% seem strange. The intuition is that $\Cupto$ behaves as a let binding
% for the continuation in the context of a $\Set$ expression, i.e.
% %
% \[
% \bl
% \Set\;p^{\prompttype~B}\;\In\;\EC[\Cupto\;p\;k^{A \to B}.M^B]^B\\
% \reducesto \Let\;k \revto \lambda x^{A}.\EC[x]^B \;\In\;M^B
% \el
% \]
% %
The dynamic semantics is generative to accommodate generation of fresh
prompts. Formally, the reduction relation is augmented with a store
$\rho$ that tracks which prompt names have already been allocated.
%
\begin{reductions}
\slab{Value} &
\Set\; p \;\In\; V, \rho &\reducesto& V, \rho\\
\slab{NewPrompt} &
\newPrompt~\Unit, \rho &\reducesto& p, \rho \uplus \{p\}\\
\slab{Capture} &
\Set\; p \;\In\; \EC[\Cupto~p~k.M], \rho &\reducesto& M[\qq{\cont_{\EC}}/k], \rho,\\
\multicolumn{4}{l}{\hfill\text{where $p$ is not active in $\EC$}}\\
\slab{Resume} & \Continue~\cont_{\EC}~V, \rho &\reducesto& \EC[V], \rho
\end{reductions}
%
The $\slab{Value}$ rule is akin to value rules of shift/reset and
control/prompt. The rule $\slab{NewPrompt}$ allocates a fresh prompt
name $p$ and adds it to the store $\rho$. The $\slab{Capture}$ rule
reifies and aborts the evaluation context up to the nearest enclosing
active prompt $p$. After reification the prompt is removed and
evaluation continues as $M$. The $\slab{Resume}$ rule reinstalls the
captured context $\EC$ with the argument $V$ plugged in.
%
\citeauthor{GunterRR95}'s cupto provides similar behaviour to
\citeauthor{QueinnecS91}'s splitter in regards to being able to `jump
over prompt'. However, the separation of prompt creation from the
control reifier coupled with the ability to set prompts manually
provide a considerable amount of flexibility. For instance, consider
the following example which illustrates how control reifier $\Cupto$
may escape a matching control delimiter. Let us assume that two
distinct prompts $p$ and $p'$ have already been created.
%
\begin{derivation}
&2 + \Set\; p \;\In\; 3 + \Set\;p'\;\In\;(\Set\;p\;\In\;\lambda\Unit.\Cupto~p~k.k\,(k~1))\,\Unit,\{p,p'\}\\
\reducesto& \reason{\slab{Value}}\\
&2 + \Set\; p \;\In\; 3 + \Set\;p'\;\In\;(\lambda\Unit.\Cupto~p~k.k\,(k~1))\,\Unit,\{p,p'\}\\
\reducesto^+& \reason{\slab{Capture} $\EC = 3 + \Set\;p'\;\In\;[\,]$}\\
&2 + \qq{\cont_{\EC}}\,(\qq{\cont_{\EC}}\,1),\{p,p'\}\\
\reducesto& \reason{\slab{Resume} $\EC$ with $1$}\\
&2 + \qq{\cont_{\EC}}\,(3 + \Set\;p'\;\In\;1),\{p,p'\}\\
\reducesto^+& \reason{\slab{Value}}\\
&2 + \qq{\cont_{\EC}}\,4,\{p,p'\}\\
\reducesto& \reason{\slab{Resume} $\EC$ with $4$}\\
&2 + \Set\;p'\;In\;4,\{p,p'\}\\
\reducesto& \reason{\slab{Value}}\\
&2 + 4,\{p,p'\} \reducesto 6,\{p,p'\}
\end{derivation}
%
The prompt $p$ is used twice, and the dynamic scoping of $\Cupto$
means when it is evaluated it reifies the continuation up to the
nearest enclosing usage of the prompt $p$. Contrast this with the
morally equivalent example using splitter, which would get stuck on
the application of the control reifier, because it has escaped the
dynamic extent of its matching delimiter.
%
\paragraph{\citeauthor{PlotkinP09}'s effect handlers} In 2009,
\citet{PlotkinP09} introduced handlers for \citeauthor{PlotkinP01}'s
algebraic effects~\cite{PlotkinP01,PlotkinP03,PlotkinP13}. In contrast
to the previous control operators, the mathematical foundations of
handlers were not an afterthought, rather, their origin is deeply
rooted in mathematics. Nevertheless, they turn out to provide a
pragmatic interface for programming with control. Operationally,
effect handlers can be viewed as a small extension to exception
handlers, where exceptions are resumable. Effect handlers are similar
to fcontrol in that handling of control happens at the delimiter and
not at the point of control capture. Unlike fcontrol, the interface of
effect handlers provide a mechanism for handling the return value of a
computation similar to \citeauthor{BentonK01}'s exception handlers
with success continuations~\cite{BentonK01}.
Effect handler definitions occupy their own syntactic category.
%
\begin{syntax}
&A,B \in \ValTypeCat &::=& \cdots \mid A \Harrow B \smallskip\\
&H \in \HandlerCat &::=& \{ \Return \; x \mapsto M \}
\mid \{ \OpCase{\ell}{p}{k} \mapsto N \} \uplus H\\
\end{syntax}
%
An effect handler consists of a $\Return$-clause and zero or more
operation clauses. Each operation clause binds the payload of the
matching operation $\ell$ to $p$ and the continuation of the operation
invocation to $k$ in $N$.
Effect handlers introduces a new syntactic category of signatures, and
extends the value types with operation types. Operation and handler
application both appear as computation forms.
%
\begin{syntax}
&\Sigma \in \mathsf{Sig} &::=& \emptyset \mid \{ \ell : A \opto B \} \uplus \Sigma\\
&A,B,C,D \in \ValTypeCat &::=& \cdots \mid A \opto B \smallskip\\
&M,N \in \CompCat &::=& \cdots \mid \Do\;\ell\,V \mid \Handle \; M \; \With \; H\\[1ex]
\end{syntax}
%
A signature is a collection of labels with operation types. An
operation type $A \opto B$ is similar to the function type in that $A$
denotes the domain (type of the argument) of the operation, and $B$
denotes the codomain (return type). For simplicity, we will just
assume a global fixed signature. The form $\Do\;\ell\,V$ is the
application form for operations. It applies an operation $\ell$ with
payload $V$. The construct $\Handle\;M\;\With\;H$ handles a
computation $M$ with handler $H$.
%
\begin{mathpar}
\inferrule*
{{\bl
% C = A \eff \{(\ell_i : A_i \opto B_i)_i; R\} \\
% D = B \eff \{(\ell_i : P_i)_i; R\}\\
\{\ell_i : A_i \opto B_i\}_i \in \Sigma \\
H = \{\Return\;x \mapsto M\} \uplus \{ \OpCase{\ell_i}{p_i}{k_i} \mapsto N_i \}_i \\
\el}\\\\
\typ{\Gamma, x : A;\Sigma}{M : D}\\\\
[\typ{\Gamma,p_i : A_i, k_i : B_i \to D;\Sigma}{N_i : D}]_i
}
{\typ{\Gamma;\Sigma}{H : C \Harrow D}}
\end{mathpar}
\begin{mathpar}
\inferrule*
{\{ \ell : A \opto B \} \in \Sigma \\ \typ{\Gamma;\Sigma}{V : A}}
{\typ{\Gamma;\Sigma}{\Do\;\ell\,V : B}}
\inferrule*
{
\typ{\Gamma}{M : C} \\
\typ{\Gamma}{H : C \Harrow D}
}
{\typ{\Gamma;\Sigma}{\Handle \; M \; \With\; H : D}}
\end{mathpar}
%
The first typing rule checks that the operation label of each
operation clause is declared in the signature $\Sigma$. The signature
provides the necessary information to construct the type of the
payload parameters $p_i$ and the continuations $k_i$. Note that the
domain of each continuation $k_i$ is compatible with the codomain of
$\ell_i$, and the codomain of $k_i$ is compatible with the codomain of
the handler.
%
The second and third typing rules are application of operations and
handlers, respectively. The rule for operation application simply
inspects the signature to check that the operation is declared, and
that the type of the payload is compatible with the declared type.
This particular presentation is nominal, because operations are
declared up front. Nominal typing is the only sound option in the
absence of an effect system (unless we restrict operations to work
over a fixed type, say, an integer). In
Chapter~\ref{ch:unary-handlers} we see a different presentation based
on structural typing.
The dynamic semantics of effect handlers are similar to that of
$\fcontrol$, though, the $\slab{Value}$ rule is more interesting.
%
\begin{reductions}
\slab{Value} & \Handle\; V \;\With\;H &\reducesto& M[V/x], \text{ where } \{\Return\;x \mapsto M\} \in H\\
\slab{Capture} & \Handle\;\EC[\Do\;\ell~V] \;\With\; H &\reducesto& M[V/p,\qq{\cont_{\Record{\EC;H}}}/k],\\
\multicolumn{4}{l}{\hfill\bl\text{where $\ell$ is not handled in $\EC$}\\\text{and }\{\OpCase{\ell}{p}{k} \mapsto M\} \in H\el}\\
\slab{Resume} & \Continue~\cont_{\Record{\EC;H}}~V &\reducesto& \Handle\;\EC[V]\;\With\;H\\
\end{reductions}
%
The \slab{Value} rule differs from previous operators as it is not
just the identity. Instead the $\Return$-clause of the handler
definition is applied to the return value of the computation.
%
The \slab{Capture} rule handles operation invocation by checking
whether the handler $H$ handles the operation $\ell$, otherwise the
operation implicitly passes through the term to the context outside
the handler. This behaviour is similar to how exceptions pass through
the context until a suitable handler has been found.
%
If $H$ handles $\ell$, then the context $\EC$ from the operation
invocation up to and including the handler $H$ are reified as a
continuation object, which gets bound in the corresponding clause for
$\ell$ in $H$ along with the payload of $\ell$.
%
This form of effect handlers is known as \emph{deep} handlers. They
are deep in the sense that they embody a structural recursion scheme
akin to fold over computation trees induced by effectful
operations. The recursion is evident from $\slab{Resume}$ rule, as
continuation invocation causes the same handler to be reinstalled
along with the captured context.
A classic example of handlers in action is handling of
nondeterminism. Let us fix a signature with two operations.
%
\[
\Sigma \defas \{\Fail : \UnitType \opto \ZeroType; \Choose : \UnitType \opto \Bool\}
\]
%
The $\Fail$ operation is essentially an exception as its codomain is
the empty type, meaning that its continuation can never be
invoked. The $\Choose$ operation returns a boolean.
We will define a handler for each operation.
%
\[
\ba{@{~}l@{~}l}
H^{A}_{f} : A \Harrow \Option~A\\
H_{f} \defas \{ \Return\; x \mapsto \Some~x; &\OpCase{\Fail}{\Unit}{k} \mapsto \None \}\\
H^B_{c} : B \Harrow \List~B\\
H_{c} \defas \{ \Return\; x \mapsto [x]; &\OpCase{\Choose}{\Unit}{k} \mapsto k~\True \concat k~\False \}
\ea
\]
%
The handler $H_f$ handles an invocation of $\Fail$ by dropping the
continuation and simply returning $\None$ (due to the lack
polymorphism, the definitions are parameterised by types $A$ and $B$
respectively. We may consider them as universal type variables). The
$\Return$-case of $H_f$ tags its argument with $\Some$.
%
The $H_c$ definition handles an invocation of $\Choose$ by first
invoking the continuation $k$ with $\True$ and subsequently with
$\False$. The two results are ultimately concatenated. The
$\Return$-case lifts its argument into a singleton list.
%
Now, let us define a simple nondeterministic coin tossing computation
with failure (by convention let us interpret $\True$ as heads and
$\False$ as tails).
%
\[
\bl
\toss : \UnitType \to \Bool\\
\toss~\Unit \defas
\ba[t]{@{~}l}
\If\;\Do\;\Choose~\Unit\\
\Then\;\Do\;\Choose~\Unit\\
\Else\;\Absurd\;\Do\;\Fail~\Unit
\ea
\el
\]
%
The computation $\toss$ first performs $\Choose$ in order to
branch. If it returns $\True$ then a second instance of $\Choose$ is
performed. Otherwise, it raises the $\Fail$ exception.
%
If we apply $\toss$ outside of $H_c$ and $H_f$ then the computation
gets stuck as either $\Choose$ or $\Fail$, or both, would be
unhandled. Thus, we have to run the computation in the context of both
handlers. However, we have a choice to make as we can compose the
handlers in either order. Let us first explore the composition, where
$H_c$ is the outermost handler. Thus we instantiate $H_c$ at type
$\Option~\Bool$ and $H_f$ at type $\Bool$.
%
\begin{derivation}
& \Handle\;(\Handle\;\toss~\Unit\;\With\; H_f)\;\With\;H_c\\
\reducesto & \reason{$\beta$-reduction, $\EC = \If\;[\,]\;\Then \cdots$}\\
& \Handle\;(\Handle\; \EC[\Do\;\Choose~\Unit] \;\With\; H_f)\;\With\;H_c\\
\reducesto & \reason{\slab{Capture}, $\{\OpCase{\Choose}{\Unit}{k} \mapsto \cdots\} \in H_c$, $\EC' = (\Handle\;\EC\;\cdots)$}\\
& k~\True \concat k~\False, \qquad \text{where $k = \qq{\cont_{\Record{\EC';H_c}}}$}\\
\reducesto^+ & \reason{\slab{Resume} with $\True$}\\
& (\Handle\;(\Handle\;\EC[\True] \;\With\;H_f)\;\With\;H_c) \concat k~\False\\
\reducesto & \reason{$\beta$-reduction}\\
& (\Handle\;(\Handle\; \Do\;\Choose~\Unit \;\With\; H_f)\;\With\;H_c) \concat k~\False\\
\end{derivation}
\begin{derivation}
\reducesto & \reason{\slab{Capture}, $\{\OpCase{\Choose}{\Unit}{k'} \mapsto \cdots\} \in H_c$, $\EC'' = (\Handle\;[\,]\;\cdots)$}\\
& (k'~\True \concat k'~\False) \concat k~\False, \qquad \text{where $k' = \qq{\cont_{\Record{\EC'';H_c}}}$}\\
\reducesto& \reason{\slab{Resume} with $\True$}\\
&((\Handle\;(\Handle\; \True \;\With\; H_f)\;\With\;H_c) \concat k'~\False) \concat k~\False\\
\reducesto& \reason{\slab{Value}, $\{\Return\;x \mapsto \cdots\} \in H_f$}\\
&((\Handle\;\Some~\True\;\With\;H_c) \concat k'~\False) \concat k~\False\\
\reducesto& \reason{\slab{Value}, $\{\Return\;x \mapsto \cdots\} \in H_c$}\\
& ([\Some~\True] \concat k'~\False) \concat k~\False\\
\reducesto^+& \reason{\slab{Resume} with $\False$, \slab{Value}, \slab{Value}}\\
& [\Some~\True] \concat [\Some~\False] \concat k~\False\\
\reducesto^+& \reason{\slab{Resume} with $\False$}\\
& [\Some~\True, \Some~\False] \concat (\Handle\;(\Handle\; \Absurd\;\Do\;\Fail\,\Unit \;\With\; H_f)\;\With\;H_c)\\
\reducesto& \reason{\slab{Capture}, $\{\OpCase{\Fail}{\Unit}{k} \mapsto \cdots\} \in H_f$}\\
& [\Some~\True, \Some~\False] \concat (\Handle\; \None\; \With\; H_c)\\
\reducesto& \reason{\slab{Value}, $\{\Return\;x \mapsto \cdots\} \in H_c$}\\
& [\Some~\True, \Some~\False] \concat [\None] \reducesto [\Some~\True,\Some~\False,\None]
\end{derivation}
%
Note how the invocation of $\Choose$ passes through $H_f$, because
$H_f$ does not handle the operation. This is a key characteristic of
handlers, and it is called \emph{effect forwarding}. Any handler will
implicitly forward every operation that it does not handle.
Suppose we were to swap the order of $H_c$ and $H_f$, then the
computation would yield $\None$, because the invocation of $\Fail$
would transfer control to $H_f$, which is the now the outermost
handler, and it would drop the continuation and simply return $\None$.
The alternative to deep handlers is known as \emph{shallow}
handlers. They do not embody a particular recursion scheme, rather,
they correspond to case splits to over computation trees.
%
To distinguish between applications of deep and shallow handlers, we
will mark the latter with a dagger superscript, i.e.
$\ShallowHandle\; - \;\With\;-$. Syntactically deep and shallow
handler definitions are identical, however, their typing differ.
%
\begin{mathpar}
%\mprset{flushleft}
\inferrule*
{{\bl
% C = A \eff \{(\ell_i : A_i \opto B_i)_i; R\} \\
% D = B \eff \{(\ell_i : P_i)_i; R\}\\
\{\ell_i : A_i \opto B_i\}_i \in \Sigma \\
H = \{\Return\;x \mapsto M\} \uplus \{ \OpCase{\ell_i}{p_i}{k_i} \mapsto N_i \}_i \\
\el}\\\\
\typ{\Gamma, x : A;\Sigma}{M : D}\\\\
[\typ{\Gamma,p_i : A_i, k_i : B_i \to C;\Sigma}{N_i : D}]_i
}
{\typ{\Gamma;\Sigma}{H : C \Harrow D}}
\end{mathpar}
%
The difference is in the typing of the continuation $k_i$. The
codomains of continuations must now be compatible with the return type
$C$ of the handled computation. The typing suggests that an invocation
of $k_i$ does not reinstall the handler. The dynamic semantics reveal
that a shallow handler does not reify its own definition.
%
\begin{reductions}
\slab{Capture} & \ShallowHandle\;\EC[\Do\;\ell~V] \;\With\; H &\reducesto& M[V/p,\qq{\cont_{\EC}}/k],\\
\multicolumn{4}{l}{\hfill\bl\text{where $\ell$ is not handled in $\EC$}\\\text{and }\{\ell~p~k \mapsto M\} \in H\el}\\
\slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]\\
\end{reductions}
%
The $\slab{Capture}$ reifies the continuation up to the handler, and
thus the $\slab{Resume}$ rule can only reinstate the captured
continuation without the handler.
%
%\dhil{Revisit the toss example with shallow handlers}
% \begin{reductions}
% \slab{Capture} & \Handle\;\EC[\Do\;\ell~V] \;\With\; H &\reducesto& M[V/p,\qq{\cont_{\EC}}/k],\\
% \multicolumn{4}{l}{\hfill\bl\text{where $\ell$ is not handled in $\EC$}\\\text{and }\{\ell~p~k \mapsto M\} \in H\el}\\
% \slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]\\
% \end{reductions}
%
Chapter~\ref{ch:unary-handlers} contains further examples of deep and
shallow handlers in action.
%
% \dhil{Consider whether to present the below encodings\dots}
% %
% Deep handlers can be used to simulate shift0 and
% reset0~\cite{KammarLO13}.
% %
% \begin{equations}
% \sembr{\shiftz~k.M} &\defas& \Do\;\dec{Shift0}~(\lambda k.M)\\
% \sembr{\resetz{M}} &\defas&
% \ba[t]{@{~}l}\Handle\;m\,\Unit\;\With\\
% ~\ba{@{~}l@{~}c@{~}l}
% \Return\;x &\mapsto& x\\
% \OpCase{\dec{Shift0}}{f}{k} &\mapsto& f\,k
% \ea
% \ea
% \end{equations}
% %
% Shallow handlers can be used to simulate control0 and
% prompt0~\cite{KammarLO13}.
% %
% \begin{equations}
% \sembr{\Controlz~k.M} &\defas& \Do\;\dec{Control0}~(\lambda k.M)\\
% \sembr{\Promptz~M} &\defas&
% \bl
% prompt0\,(\lambda\Unit.M)\\
% \textbf{where}\;
% \bl
% prompt0~m \defas
% \ba[t]{@{~}l}\ShallowHandle\;m\,\Unit\;\With\\
% ~\ba{@{~}l@{~}c@{~}l}
% \Return\;x &\mapsto& x\\
% \OpCase{\dec{Control0}}{f}{k} &\mapsto& prompt0\,(\lambda\Unit.f\,k)
% \ea
% \ea
% \el
% \el
% \end{equations}
%
%Recursive types are required to type the image of this translation
\paragraph{\citeauthor{Longley09}'s catch-with-continue}
%
The control operator \emph{catch-with-continue} (abbreviated
catchcont) is a delimited extension of the $\Catch$ operator. It was
designed by \citet{Longley09} in 2008~\cite{LongleyW08}. Its origin is
in game semantics, in which program evaluation is viewed as an
interactive dialogue with the ambient environment~\cite{Hyland97} ---
this view aligns neatly with the view of effect handler oriented
programming. Curiously, we can view catchcont and effect handlers as
``siblings'' in the sense that \citeauthor{Longley09} and
\citeauthor{PlotkinP09} them respectively, during the same time,
whilst working in the same department. However, the relationship is
presently just `spiritual' as no formal connections have been drawn
between the two operators.
The catchcont operator appears as a computation form in our calculus.
%
\begin{syntax}
&M,N \in \CompCat &::=& \cdots \mid \Catchcont~f.M
\end{syntax}
%
Unlike other delimited control operators, $\Catchcont$ does not
introduce separate explicit syntactic constructs for the control
delimiter and control reifier. Instead it leverages the higher-order
facilities of $\lambda$-calculus: the syntactic construct $\Catchcont$
play the role of control delimiter and the name $f$ of function type
is the name of the control reifier. \citet{LongleyW08} describe $f$ as
a `dummy variable'.
The typing rule for $\Catchcont$ is as follows.
%
\begin{mathpar}
\inferrule*
{\typ{\Gamma, f : A \to B}{M : C \times D} \\ \Ground\;C}
{\typ{\Gamma}{\Catchcont~f.M : C \times ((A \to B) \to D) + (A \times (B \to (A \to B) \to C \times D))}}
\end{mathpar}
%
The computation handled by $\Catchcont$ must return a pair, where the
first component must be a ground value. This restriction ensures that
the value is not a $\lambda$-abstraction, which means that the value
cannot contain any further occurrence of the control reifier $f$. The
second component is unrestricted, and thus, it may contain further
occurrences of $f$. If $M$ fully reduces then $\Catchcont$ returns a
pair consisting of a ground value (i.e. an answer from $M$) and a
continuation function which allow $M$ to yield further
`answers'. Alternatively, if $M$ invokes the control reifier $f$, then
$\Catchcont$ returns a pair consisting of the argument supplied to $f$
and the current continuation of the invocation of $f$.
The operational rules for $\Catchcont$ are as follows.
%
\begin{reductions}
\slab{Value} &
\Catchcont~f . \Record{V;W} &\reducesto& \Inl\; \Record{V;\lambda\,f. W}\\
\slab{Capture} &
\Catchcont~f .\EC[\,f\,V] &\reducesto& \Inr\; \Record{V; \lambda x. \lambda f. \Continue~\cont_{\EC}~x}\\
\slab{Resume} & \Continue~\cont_{\EC}~V &\reducesto& \EC[V]
\end{reductions}
%
The \slab{Value} makes sure to bind any lingering instances of $f$ in
$W$ before escaping the delimiter. The \slab{Capture} rule reifies and
aborts the current evaluation up to, but no including, the delimiter,
which gets uninstalled. The reified evaluation context gets stored in
the second component of the returned pair. Importantly, the second
$\lambda$-abstraction makes sure to bind any instances of $f$ in the
captured evaluation context once it has been reinstated by the
\slab{Resume} rule.
Let us consider an example use of catchcont to compute a tree
representing the interaction between a second-order function and its
first-order parameter.
%
\[
\bl
\dec{odd} : (\Int \to \Bool) \to \Bool \times \UnitType\\
\dec{odd}~f \defas \Record{\dec{xor}\,(f~0)\,(f~1); \Unit}
\el
\]
%
The function $\dec{odd}$ expects its environment to provide it with an
implementation of a single operation of type $\Int \to \Bool$. The
body of $\dec{odd}$ invokes, or queries, this operation twice with
arguments $0$ and $1$, respectively. The results are tested using
exclusive-or.
Now, let us implement the environment for $\dec{odd}$.
%
\[
\bl
\dec{Dialogue} \defas [!:\Int;?:\Record{\Bool,\dec{Dialogue},\dec{Dialogue}}] \smallskip\\
\dec{env} : ((\Int \to \Bool) \to \Bool \times \UnitType) \to \dec{Dialogue}\\
\dec{env}~m \defas
\bl
\Case\;\Catchcont~f.m~f\;\{
\ba[t]{@{}l@{~}c@{~}l}
\Inl~\Record{ans;\Unit} &\mapsto& \dec{!}ans;\\
\Inr~\Record{q;k} &\mapsto& \dec{?}q\,(\dec{env}~k~\True)\,(\dec{env}~k~\False)\}
\ea
\el
\el
\]
%
Type $\dec{Dialogue}$ represents the dialogue between $\dec{odd}$ and
its parameter. The data structure is a standard binary tree with two
constructors: $!$ constructs a leaf holding a value of type $\Int$ and
$?$ constructs an interior node holding a value of type $\Bool$ and
two subtrees. The function $\dec{env}$ implements the environment that
$\dec{odd}$ will be run in. This function evaluates its parameter $m$
under $\Catchcont$ which injects the operation $f$. If $m$ returns,
then the left component gets tagged with $!$, otherwise the argument
to the operation $q$ gets tagged with a $?$ along with the subtrees
constructed by the two recursive applications of $\dec{env}$.
%
% The primitive structure of catchcont makes it somewhat fiddly to
% program with it compared to other control operators as we have to
% manually unpack the data.
The following derivation gives the high-level details of how
evaluation proceeds.
%
\begin{figure}
\begin{center}
\scalebox{1.3}{\SXORTwoModel}
\end{center}
\caption{Visualisation of the result obtained by
$\dec{env}~\dec{odd}$.}\label{fig:decision-tree-cc}
\end{figure}
%
\begin{derivation}
&\dec{env}~\dec{odd}\\
\reducesto^+ & \reason{$\beta$-reduction}\\
&\bl
\Case\;\Catchcont~f.\;\Record{\dec{xor}\,(f~0)\,(f~1);\Unit}\{\cdots\}
% \ba[t]{@{}l@{~}c@{~}l}
% \Inl~\Record{ans;\Unit} &\mapsto& \dec{!}ans;\\
% \Inr~\Record{q;k} &\mapsto& \dec{?}q\,(\dec{env}~k~\True)\,(\dec{env}~k~\False)\}
% \ea
\el\\
\reducesto& \reason{\slab{Capture} $\EC = \Record{\dec{xor}\,[\,]\,(f~1),\Unit}$}\\
&\bl
\Case\;\Inr\,\Record{0;\lambda x.\lambda f.\qq{\cont_{\EC}}\,x}\;\{\cdots\}
% \ba[t]{@{}l@{~}c@{~}l}
% \Inl~\Record{ans;\Unit} &\mapsto& \dec{!}ans;\\
% \Inr~\Record{q;k} &\mapsto& \dec{?}q\,\bl (\dec{env}~\qq{\cont_{\EC}}~\True)\\(\dec{env}~\qq{\cont_{\EC}}~\False)\}\el
% \ea
\el\\
\reducesto^+& \reason{\slab{Resume} $\EC$ with $\True$}\\
&\dec{?}0\,(\Case\;\Catchcont~f.\;\Record{\dec{xor}~\True\,(f~1);\Unit}\{\cdots\})\,(\dec{env}~\qq{\cont_{\EC}}~\False)\\
\reducesto^+& \reason{\slab{Capture} $\EC' = \Record{\dec{xor}~\True\,[\,], \Unit}$}\\
&\dec{?}0\,(\dec{?}1\,(\dec{env}~\qq{\cont_{\EC'}}~\True)\,(\dec{env}~\qq{\cont_{\EC'}}~\False))\,(\dec{env}~\qq{\cont_{\EC}}~\False)\\
\reducesto^+& \reason{\slab{Resume} $\EC'$ with $\True$}\\
&\dec{?}0\,\bl(\dec{?}1\,(\Case\;\Catchcont~f.\Record{\dec{xor}~\True~\True;\Unit}\;\{\cdots\})\,(\dec{env}~\qq{\cont_{\EC'}}~\False))\\(\dec{env}~\qq{\cont_{\EC}}~\False)\el\\
\reducesto^+& \reason{\slab{Value}}\\
&\dec{?}0\,\bl(\dec{?}1\,(\Case\;\Inl~\Record{\False;\Unit}\;\{\cdots\})\,(\dec{env}~\qq{\cont_{\EC'}}~\False))\\(\dec{env}~\qq{\cont_{\EC}}~\False)\el\\
\reducesto& \reason{$\beta$-reduction}\\
&\dec{?}0\,\bl(\dec{?}1\,\dec{!}\False\,(\dec{env}~\qq{\cont_{\EC'}}~\False))\,(\dec{env}~\qq{\cont_{\EC}}~\False)\el\\
\reducesto^+&\reason{Same reasoning}\\
&?0\,(?1\,!\False\,!\True)\,(?1\,!\True\,!\False)
\end{derivation}
%
Figure~\ref{fig:decision-tree-cc} visualises this result as a binary
tree. The example here does not make use of the `continuation
component', the interested reader may consult \citet{LongleyW08} for
an example usage.
% \subsection{Second-class control operators}
% Coroutines, async/await, generators/iterators, amb.
% Backtracking: Amb~\cite{McCarthy63}.
% Coroutines~\cite{DahlDH72} as introduced by Simula
% 67~\cite{DahlMN68}. The notion of coroutines was coined by Melvin
% Conway, who used coroutines as a code idiom in assembly
% programs~\cite{Knuth97}. Canonical reference for implementing
% coroutines with call/cc~\cite{HaynesFW86}.
\section{Programming continuations}
\label{sec:programming-continuations}
%Blind vs non-blind backtracking. Engines. Web
% programming. Asynchronous
% programming. Coroutines.
Amongst the first uses of continuations were modelling of unrestricted
jumps, such as \citeauthor{Landin98}'s modelling of \Algol{} labels
and gotos using the J
operator~\cite{Landin65,Landin65a,Landin98,Reynolds93}.
Backtracking is another early and prominent use of continuations. For
example, \citet{Burstall69} used the J operator to implement a
heuristic-driven search procedure with continuation-backed
backtracking for tree-based search.
%
Somewhat related to backtracking, \citet{FriedmanHK84} posed the
\emph{devils and angels problem} as an example that has no direct
solution in a programming language without first-class control. Any
solution to the devils and angels problem involves extensive
manipulation of control to jump both backwards and forwards to resume
computation.
If the reader ever find themselves in a quiz show asked to single out
a canonical example of continuation use, then implementation of
concurrency would be a qualified guess. Cooperative concurrency in
terms of various forms of coroutines as continuations occur so
frequently in the literature and in the wild that they have become
routine.
%
\citet{HaynesFW86} published one of the first implementations of
coroutines using first-class control.
%
Preemptive concurrency in the form of engines were implemented by
\citet{DybvigH89}. An engine is a control abstraction that runs
computations with an allotted time budget~\cite{HaynesF84}. They used
continuations to represent strands of computation and timer interrupts
to suspend continuations.
%
\citet{KiselyovS07a} used delimited continuations to explain various
phenomena of operating systems, including multi-tasking and file
systems.
%
On the web, \citet{Queinnec04} used continuations to model the
client-server interactions. This model was adapted by
\citet{CooperLWY06} in Links with support for an Erlang-style
concurrency model~\cite{ArmstrongVW93}.
%
\citet{Leijen17a} and \citet{DolanEHMSW17} gave two different ways of
implementing the asynchronous programming operator async/await as a
user-definable library.
%
In the setting of distributed programming, \citet{BracevacASEEM18}
describe a modular event correlation system that makes crucial use of
effect handlers. \citeauthor{Bracevec19}'s PhD dissertation
explicates the theory, design, and implementation of event correlation
by way of effect handlers~\cite{Bracevec19}.
Continuations have also been used in meta-programming to speed up
partial evaluation and
multi-staging~\cite{LawallD94,KameyamaKS11,OishiK17,Yallop17,WeiBTR20}. Let
insertion is a canonical example of use of continuations in
multi-staging~\cite{Yallop17}.
Probabilistic programming is yet another application domain of
continuations. \citet{KiselyovS09} used delimited continuations to
speed up probabilistic programs. \citet{GorinovaMH20} used
continuations to achieve modularise probabilistic programs and to
provide a simple and efficient mechanism for reparameterisation of
inference algorithms.
%
In the subject of differentiable programming \citet{WangZDWER19}
explained reverse-mode automatic differentiation operators in terms of
delimited continuations.
The aforementioned applications of continuations are by no means
exhaustive, though, the diverse application spectrum underlines the
versatility of continuations.
\section{Constraining continuations}
\label{sec:constraining-continuations}
\citet{FriedmanH85} advocated for constraining the power of
(undelimited) continuations~\cite{HaynesF87}.
%
Even though, they were concerned with callcc and undelimited
continuations some of their arguments are applicable to other control
operators and delimited continuations.
%
For example, they argued in favour of restricting continuations to be
one-shot, which means continuations may only be invoked once. Firstly,
because one-shot continuations admit particularly efficient
implementations. Secondly, many applications involve only single use
of continuations. Thirdly, one-shot continuations interact more
robustly with resources, such as file handles, than general multi-shot
continuations, because multiple use of a continuation may accidentally
interact with a resource after it has been released.
One-shot continuations by themselves are no saving grace for avoiding
resource leakage as they may be dropped or used to perform premature
exits from a block with resources. For example, Racket provides the
programmer with a facility known as \emph{dynamic-wind} to protect a
context with resources such that non-local exits properly release
whatever resources the context has acquired~\cite{Flatt20}.
%
An alternative approach is taken by Multicore OCaml, whose
implementation of effect handlers with one-shot continuations provides
both a \emph{continue} primitive for continuing a given continuation
and a \emph{discontinue} primitive for aborting a given
continuation~\cite{DolanWSYM15,DolanEHMSW17}. The latter throws an
exception at the operation invocation site, which can be caught by
local exception handlers to release resources properly.
%
This approach is also used by \citet{Fowler19}, who uses a
substructural type system to statically enforce that each continuation
is used, either by means of a continue or a discontinue.
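To make the continue/discontinue idiom concrete, the following sketch
is written against the \texttt{Effect} module of OCaml~5, the
descendant of Multicore OCaml; the \texttt{Ask} operation, the
\texttt{Too\_late} exception, and the \texttt{deadline\_passed} flag
are hypothetical names of my own choosing.
\begin{verbatim}
(* A minimal sketch of continue/discontinue with OCaml 5 effect
   handlers. Ask, Too_late, and deadline_passed are hypothetical. *)
open Effect.Deep

type _ Effect.t += Ask : int Effect.t

exception Too_late

(* Fun.protect closes the handle on both normal and exceptional
   exit, so a discontinued continuation cannot leak it. *)
let with_file path f =
  let ic = open_in path in
  Fun.protect ~finally:(fun () -> close_in ic) (fun () -> f ic)

let run ~deadline_passed task =
  match_with task ()
    { retc = (fun x -> x)
    ; exnc = raise
    ; effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) ->
            if deadline_passed
            (* Abort: Too_late is raised at the perform site,
               unwinding through with_file. *)
            then discontinue k Too_late
            else continue k 42)
        | _ -> None) }
\end{verbatim}
A task that performs \texttt{Ask} inside \texttt{with\_file} is thus
either resumed with a value or unwound past the open file handle; in
both cases the handle is released.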
% For example callec is a variation of callcc where continuation
% invocation is only defined during the dynamic extent of
% callec~\cite{Flatt20}.
\section{Implementing continuations}
\label{sec:implementing-continuations}
There are numerous strategies for implementing continuations. Each
strategy has different trade-offs, and as such, there is no ``best''
strategy. In this section, I will briefly outline the gist of some
implementation strategies and their trade-offs. For an in-depth
analysis the interested reader may consult the respective works of
\citet{ClingerHO88} and \citet{FarvardinR20}, which contain thorough
studies of implementation strategies for first-class continuations.
%
Table~\ref{tbl:ctrl-operators-impls} lists some programming languages
with support for first-class control operators and their
implementation strategies.
\begin{table}
\centering
\begin{tabular}{| l | >{\raggedright}p{4.3cm} | l |}
\hline
\multicolumn{1}{|c|}{\textbf{Language}} & \multicolumn{1}{c |}{\textbf{Control operators}} & \multicolumn{1}{c|}{\textbf{Implementation strategies}}\\
\hline
Eff & Effect handlers & Virtual machine, interpreter \\
\hline
Effekt & Lexical effect handlers & CPS\\
\hline
Frank & N-ary effect handlers & CEK machine \\
% \hline
% Gauche & callcc, shift/reset & Virtual machine \\
\hline
Helium & Effect handlers & CEK machine \\
\hline
Koka & Effect handlers & Continuation monad\\
\hline
Links & Effect handlers, escape & CEK machine, CPS\\
\hline
MLton & callcc & Stack copying\\
\hline
Multicore OCaml & Affine effect handlers & Segmented stacks\\
\hline
OchaCaml & shift/reset & Virtual machine\\
\hline
Racket & callcc, \textCallcomc{}, cupto, fcontrol, control/prompt, shift/reset, splitter, spawn & Segmented stacks\\
\hline
% Rhino JavaScript & JI & Interpreter \\
% \hline
Scala & shift/reset & CPS\\
\hline
SML/NJ & callcc & CPS\\
\hline
Wasm/k & control/prompt & Virtual machine \\
\hline
\end{tabular}
\caption{Some languages and their implementation strategies for continuations.}\label{tbl:ctrl-operators-impls}
\end{table}
%
The control stack provides an adequate runtime representation of
continuations, as the contiguous sequence of activation records quite
literally represents what to do next.
%
Thus continuation capture can be implemented by making a copy of the
current stack (possibly up to some delimiter), and continuation
invocation by reinstating the copied stack. This implementation
strategy works well if continuations are captured infrequently. The
MLton implementation of Standard ML utilises this
strategy~\cite{Fluet20}.
A slight variation is to defer the first copy action until the
continuation is invoked, which requires marking the stack to remember
which sequence of activation records to copy.
Obviously, frequent continuation use on top of a stack copying
implementation can be expensive time-wise as well as space-wise,
because with undelimited continuations multiple copies of the stack
may be alive simultaneously.
%
Typically the prefixes of the copies will be identical, which suggests
they ought to be shared. One way to achieve optimal sharing is to move
from a contiguous stack to a non-contiguous stack representation,
e.g. representing the stack as a heap-allocated linked list of
activation records~\cite{Danvy87}. With such a representation copying
is a constant-time and constant-space operation, because there is no
need to actually copy anything: the continuation is just a pointer
into the stack.
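To illustrate, the following OCaml sketch (all names are my own)
models the control stack as an immutable linked list of activation
records, so that capture is a single pointer copy and the common
prefixes of captured continuations are shared automatically.
\begin{verbatim}
(* A minimal sketch of a heap-allocated, non-contiguous control
   stack: frames form an immutable linked list, so capture is a
   constant-time, constant-space pointer copy. *)
type frame = int -> int   (* an activation record *)
type stack = frame list   (* the control stack *)

(* Capture: the continuation simply is the stack pointer. *)
let capture (s : stack) : stack = s

(* Invocation: reinstate the captured frames and feed in a value. *)
let rec resume (k : stack) (v : int) : int =
  match k with
  | [] -> v
  | f :: rest -> resume rest (f v)

let () =
  let k = capture [ (fun x -> x + 1); (fun x -> x * 2) ] in
  (* Multi-shot: the same continuation resumed twice. *)
  assert (resume k 3 = 8);
  assert (resume k 10 = 22)
\end{verbatim}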
%
The disadvantage of this strategy is that it turns every stack
operation into a pointer indirection.
Segmented stacks provide a middle ground between contiguous and
non-contiguous stack representations. With this representation the
control stack is represented as a linked list of contiguous stack
segments, which makes it possible to copy only a segment of the
stack. The segments grow and shrink dynamically as needed. This
representation is due to \citet{HiebDB90}. It is used by Chez Scheme,
which is the runtime that powers Racket~\cite{FlattD20}.
%
For undelimited continuations the basic idea is to create a pointer to
the current stack upon continuation capture, and then allocate a new
stack where subsequent computation happens.
%
For delimited continuations the control delimiters identify when a new
stack should be allocated.
%
A potential problem with this representation is \emph{stack
thrashing}, which is a phenomenon that occurs when a stack is being
continuously resized.
%
This problem was addressed by \citet{BruggemanWD96}, who designed a
slight variation of segmented stacks optimised for one-shot
continuations, which has been adapted by Multicore
OCaml~\cite{DolanEHMSW17}.
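The following OCaml sketch (all names are my own) conveys the idea:
the control stack is a list of fixed-capacity contiguous segments,
pushing a frame either writes into the current segment or allocates a
fresh one, and capturing a continuation copies only the topmost,
partially filled segment while sharing the rest by reference.
\begin{verbatim}
(* A minimal sketch of a segmented control stack: a linked list of
   fixed-capacity contiguous segments, most recent first. *)
type frame = int
type segment = { frames : frame array; mutable used : int }
type stack = segment list

let capacity = 64

(* Push a frame, allocating a fresh segment when the current one
   is full. *)
let push (f : frame) (s : stack) : stack =
  match s with
  | seg :: _ when seg.used < capacity ->
      seg.frames.(seg.used) <- f;
      seg.used <- seg.used + 1;
      s
  | _ ->
      let seg = { frames = Array.make capacity 0; used = 1 } in
      seg.frames.(0) <- f;
      seg :: s

(* Capture: full segments are shared by reference (and treated as
   immutable); only the partially filled top segment is copied. *)
let capture (s : stack) : stack =
  match s with
  | [] -> []
  | seg :: rest ->
      { frames = Array.copy seg.frames; used = seg.used } :: rest

let () = assert (List.length (capture (push 1 (push 2 []))) = 1)
\end{verbatim}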
Full stack copying and segmented stacks both depend on being able to
manipulate the stack directly. This is seldom possible if the language
implementer does not have control over the target runtime, e.g. when
compiling to JavaScript. However, it is possible to emulate stack
copying and segmented stacks in the absence of direct stack access. For
example, \citet{PettyjohnCMKF05} describe a technique that emulates
stack copying by piggybacking on the stack inspection facility
provided by exception handlers in order to lazily reify the control
stack.
%
\citet{KumarBD98} emulated segmented stacks via threads. Each thread
has its own local stack, and as such, a collection of threads
effectively models segmented stacks. To actually implement
continuations as threads, \citeauthor{KumarBD98} also made use of
standard synchronisation primitives.
%
The advantage of these techniques is that they are generally
applicable and portable. The disadvantage is the performance overhead
induced by emulation.
Abstract and virtual machines are a form of full machine emulation. An
abstract machine is an idealised machine. Abstract machines, such as
the CEK machine~\cite{FelleisenF86}, are attractive because they
provide a suitably high-level framework for defining language
semantics in terms of control string manipulations, whilst admitting a
direct implementation.
%
We will discuss abstract machines in more detail in
Chapter~\ref{ch:abstract-machine}.
%
The term virtual machine typically connotes an abstract machine that
works on a bytecode representation of programs, whereas the default
connotation of abstract machine is a machine that works on a rich
abstract syntax tree representation of programs.
% \citeauthor{Landin64}'s SECD machine was the
% first abstract machine for evaluating $\lambda$-calculus
% terms~\cite{Landin64,Danvy04}.
%
% Either machine model has an explicit representation of the control
% state in terms of an environment and a control string. Thus either machine can to the
% interpretative overhead.
The disadvantage of abstract machines is their interpretative
overhead, although techniques such as just-in-time compilation can be
utilised to reduce this overhead.
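To convey the flavour, here is a minimal CEK-style machine for the
call-by-value $\lambda$-calculus in OCaml; the datatype and function
names are my own, and the machine of
Chapter~\ref{ch:abstract-machine} is considerably richer.
\begin{verbatim}
(* A minimal CEK-style machine: a control string, an environment,
   and an explicit continuation. *)
type term = Var of string | Lam of string * term | App of term * term

type value = Clo of string * term * env
and env = (string * value) list

type cont =
  | Halt
  | Arg of term * env * cont   (* evaluate the argument next *)
  | Fun of value * cont        (* then apply this closure *)

(* The machine alternates between decomposing the control string
   and feeding a value to the continuation. *)
let rec eval (c : term) (e : env) (k : cont) : value =
  match c with
  | Var x -> continue k (List.assoc x e)
  | Lam (x, b) -> continue k (Clo (x, b, e))
  | App (m, n) -> eval m e (Arg (n, e, k))

and continue (k : cont) (v : value) : value =
  match k with
  | Halt -> v
  | Arg (n, e, k') -> eval n e (Fun (v, k'))
  | Fun (Clo (x, b, e'), k') -> eval b ((x, v) :: e') k'

(* (fun x -> x) (fun y -> y) evaluates to the identity closure. *)
let () =
  match eval (App (Lam ("x", Var "x"), Lam ("y", Var "y"))) [] Halt with
  | Clo (y, _, _) -> assert (y = "y")
\end{verbatim}
Because the continuation is an explicit, first-class data structure,
capturing it amounts to returning the current \texttt{cont} value;
this explicitness is what makes abstract machines a convenient
setting for first-class control.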
Continuation passing style (CPS) is a canonical implementation
strategy for continuations --- the word `continuation' even features
in its name.
%
CPS is a particular idiomatic notation for programs, where every
function takes an additional argument, the current continuation, as
input and every function call appears in tail position. Consequently,
every aspect of control flow is made explicit, which makes CPS a good
fit for implementing control abstractions. In classic CPS the
continuation argument is typically represented as a heap-allocated
closure~\cite{Appel92}; however, as we shall see in
Chapter~\ref{ch:cps}, richer representations of continuations are
possible.
%
At first glance it may seem that CPS will not work well in
environments that lack proper tail calls, such as JavaScript. However,
the contrary is true, because the stackless nature of CPS means it can
readily be implemented with a trampoline~\cite{GanzFW99}, albeit at
the cost of the indirection induced by the trampoline.
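As a small illustration of both points, the following OCaml sketch
(all names are my own) writes factorial in CPS and runs it on a
trampoline, so that no native tail calls are required.
\begin{verbatim}
(* A minimal sketch of CPS on a trampoline: rather than calling the
   continuation directly in tail position, each step returns a
   thunk, which a driver loop forces. *)
type 'a bounce = Done of 'a | More of (unit -> 'a bounce)

(* Factorial in CPS: the current continuation k is an explicit
   argument, and every call is in tail position. *)
let rec fact (n : int) (k : int -> int bounce) : int bounce =
  if n = 0 then More (fun () -> k 1)
  else More (fun () -> fact (n - 1) (fun r -> k (n * r)))

(* The trampoline: a driver loop that runs in constant stack space
   even on targets without proper tail calls. *)
let rec run (b : 'a bounce) : 'a =
  match b with
  | Done v -> v
  | More thunk -> run (thunk ())

let () = assert (run (fact 10 (fun r -> Done r)) = 3628800)
\end{verbatim}
Every \texttt{More} constructor is an instance of the indirection
mentioned above: the trampoline trades native tail calls for a heap
allocation per step.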
\chapter{Get get is redundant}
\label{ch:get-get}
The global state effect is often presented with the following four
equations.
%
\begin{reductions}
\slab{Get\textrm{-}get} & x \revto \getF; y \revto \getF; M &=& x \revto \getF; M[x/y]\\
\slab{Get\textrm{-}put} & x \revto \getF; \putF~x; M &=& M\\
\slab{Put\textrm{-}get} & \putF~V; x \revto \getF; M &=& \putF~V; M[V/x]\\
\slab{Put\textrm{-}put} & \putF~V; \putF~W; M &=& \putF~W;M
\end{reductions}
%
However, the first equation is derivable from the second and third
equations. I first learned this from Paul{-}Andr{\'{e}} Melli{\`{e}}s
during Shonan Seminar No.103 \emph{Semantics of Effects, Resources,
  and Applications}. I have been unable to find a proof of this fact
in the literature, though \citeauthor{Mellies14} has published a
paper which states only the three necessary
equations~\cite{Mellies14}.
%
Therefore I include a proof of this fact here (thanks to Sam Lindley
for helping me relearn this fact from first principles).
\begin{theorem}
\slab{Get\textrm{-}put} and \slab{Put\textrm{-}get} imply \slab{Get\textrm{-}get}
\end{theorem}
\begin{proof}
\begin{derivation}
&x \revto \getF; y \revto \getF; M\\
=& \reason{\slab{Get\textrm{-}put} right-to-left; $z \notin \FV(M)$}\\
&z \revto \getF; \putF~z; x \revto \getF; y \revto \getF; M\\
=& \reason{\slab{Put\textrm{-}get}}\\
&z \revto \getF; \putF~z; y \revto \getF; M[z/x]\\
=& \reason{\slab{Put\textrm{-}get}}\\
&z \revto \getF; \putF~z; M[z/x, z/y]\\
=& \reason{composition of substitution}\\
&z \revto \getF; \putF~z; (M[x/y])[z/x]\\
\end{derivation}
\begin{derivation}
=& \reason{\slab{Put\textrm{-}get} right-to-left}\\
&z \revto \getF; \putF~z; x \revto \getF; M[x/y]\\
=& \reason{\slab{Get\textrm{-}put}}\\
&x \revto \getF; M[x/y]
\end{derivation}
\end{proof}
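As an executable sanity check (not part of the formal development),
the following OCaml sketch interprets $\getF$ in the standard state
monad and tests that both sides of \slab{Get\textrm{-}get} agree on
sample states; all names are my own.
\begin{verbatim}
(* A sanity check of Get-get in the standard state monad. *)
type ('a, 's) st = 's -> 'a * 's

let return (x : 'a) : ('a, 's) st = fun s -> (x, s)

let ( >>= ) (m : ('a, 's) st) (f : 'a -> ('b, 's) st) : ('b, 's) st =
  fun s -> let (x, s') = m s in f x s'

let get : ('s, 's) st = fun s -> (s, s)

(* An arbitrary computation M mentioning both x and y. *)
let m x y = return (x + 2 * y)

(* Left-hand side: x <- get; y <- get; M. *)
let lhs = get >>= fun x -> get >>= fun y -> m x y

(* Right-hand side: x <- get; M[x/y]. *)
let rhs = get >>= fun x -> m x x

let () = List.iter (fun s -> assert (lhs s = rhs s)) [0; 1; 5; 42]
\end{verbatim}
Of course, this only confirms that the state monad validates the
equation; the derivation above establishes that it follows from
\slab{Get\textrm{-}put} and \slab{Put\textrm{-}get} in any model.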
\chapter{Proofs of correctness of the higher-order uncurried CPS
translation}
\label{sec:proofs-cps-gen-cont}
This appendix contains the proof details for the higher-order
uncurried CPS translation.
\paragraph{Relation to prior work} This appendix is imported from
Appendix A of \citet{HillerstromLA20}.\medskip
\begin{lemma}[Substitution]\label{lem:subst-gen-cont-proof}
%
The CPS translation commutes with substitution in value terms
%
\[
\cps{W}[\cps{V}/x] = \cps{W[V/x]},
\]
%
and with substitution in computation terms
\[
\ba{@{}l@{~}l}
&(\cps{M} \sapp (\sV_f \scons \sW))[\cps{V}/x]\\
= &\cps{M[V/x]} \sapp (\sV_f \scons \sW)[\cps{V}/x],
\ea
\]
%
and with substitution in handler definitions
%
\begin{equations}
\cps{\hret}[\cps{V}/x]
&=& \cps{\hret[V/x]},\\
\cps{\hops}[\cps{V}/x]
&=& \cps{\hops[V/x]}.
\end{equations}
\end{lemma}
%
\begin{proof}
The proof is by mutual induction on the structure of the computation
term $M$ and the value term $W$. For most of the cases, the
existence of the top-level frame on the stack is not important, so
we just refer to the whole static continuation stack as $\sW$. Note
that we make implicit use of the fact that the parts of the
continuation stack that are statically known are all of the form of
right nested triples of reflected dynamic terms.
\begin{description}
\item[Case] $M = V' \, W$.
\begin{derivation}
& (\cps{V' \, W} \sapp \sW)[\cps{V}/x]\\
=& \reason{definition of $\cps{-}$} \\
& ((\slam \sk . \cps{V'} \dapp \cps{W} \dapp \reify \sk) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\cps{V'} \dapp \cps{W} \dapp \reify \sW)[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& (\cps{V'}[\cps{V}/x]) \dapp (\cps{W}[\cps{V}/x]) \dapp \reify \sW[\cps{V}/x] \\
=& \reason{IH 2, twice} \\
& \cps{V'[V/x]} \dapp \cps{W[V/x]} \dapp \reify \sW[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk . \cps{V'[V/x]} \dapp \cps{W[V/x]} \dapp \reify \sk) \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$} \\
& \cps{(V'[V/x])\, (W[V/x])} \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& \cps{(V'\, W)[V/x]} \sapp \sW[\cps{V}/x]
\end{derivation}
\item[Case] $M = W\, T$.
\begin{derivation}
& (\cps{W \, T} \sapp \sW)[\cps{V}/x]\\
=& \reason{definition of $\cps{-}$} \\
& ((\slam \sk . \cps{W} \dapp \Record{} \dapp \reify \sk) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\cps{W} \dapp \Record{} \dapp \reify \sW)[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& \cps{W}[\cps{V}/x] \dapp \Record{} \dapp \reify \sW[\cps{V}/x] \\
=& \reason{IH 2} \\
& \cps{W[V/x]} \dapp \Record{} \dapp \reify \sW[\cps{V}/x] \\
\end{derivation}
\begin{derivation}
=& \reason{static $\beta$-conversion} \\
& (\slam \sk . \cps{W[V/x]} \dapp \Record{} \dapp \reify \sk) \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$} \\
& \cps{W[V/x]\,T} \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& \cps{(W\,T)[V/x]} \sapp \sW[\cps{V}/x]
\end{derivation}
\item[Case] $M = \Let \; \Record{\ell = x'; y} = W \; \In \; N$.
\begin{derivation}
& (\cps{\Let \; \Record{\ell = x'; y} = W \; \In \; N} \sapp \sW)[\cps{V}/x]\\
=& \reason{definition of $\cps{-}$} \\
& ((\slam \sk . \Let \; \Record{\ell = x'; y} = \cps{W} \; \In \; \cps{N} \sapp \sk) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\Let \; \Record{\ell = x'; y} = \cps{W} \; \In \; \cps{N} \sapp \sW)[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& \Let \; \Record{\ell = x'; y} = \cps{W}[\cps{V}/x] \; \In \; (\cps{N} \sapp \sW)[\cps{V}/x] \\
=& \reason{IH 1 and IH 2} \\
& \Let \; \Record{\ell = x'; y} = \cps{W[V/x]} \; \In \; \cps{N[V/x]} \sapp \sW[\cps{V}/x] \\
%% \end{derivation}
%% \begin{derivation}
=& \reason{static $\beta$-conversion} \\
& (\slam \sk . \Let \; \Record{\ell = x'; y} = \cps{W[V/x]} \; \In \; \cps{N[V/x]} \sapp \sk) \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Let \; \Record{\ell = x'; y} = W[V/x] \; \In \; N[V/x]} \sapp \sW[\cps{V}/x]\\
=& \reason{definition of $[-]$} \\
& \cps{(\Let \; \Record{\ell = x'; y} = W \; \In \; N)[V/x]} \sapp \sW[\cps{V}/x]
\end{derivation}
\item[Case] $M = \Case\;V\;\{\ell\;x \mapsto M; y \mapsto
N\}$. Similar to the $M = \Let\;\Record{\ell=x;y} = V\;\In\;N$
case.
\item[Case] $M = \Absurd \; W$.
\begin{derivation}
& (\cps{\Absurd \; W} \sapp \sW)[\cps{V}/x]\\
=& \reason{definition of $\cps{-}$} \\
& ((\slam \sk. \Absurd \; \cps{W}) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\Absurd \; \cps{W})[\cps{V}/x] \\
=& \reason{definition of $[-]$} \\
& \Absurd \; \cps{W}[\cps{V}/x] \\
=& \reason{IH 2} \\
& \Absurd \; \cps{W[V/x]} \\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk . \Absurd\; \cps{W[V/x]}) \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Absurd\; W[V/x]} \sapp \sW[\cps{V}/x]\\
=& \reason{definition of $[-]$} \\
& \cps{(\Absurd\; W)[V/x]} \sapp \sW[\cps{V}/x]\\
\end{derivation}
\item[Case] $M = \Return \; W$.
\begin{derivation}
& (\cps{\Return\; W} \sapp \sW)[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$}\\
& ((\slam \sk. \kapp\;\reify\sk\;\cps{W}) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion}\\
& (\kapp\;\reify \sW\;\cps{W})[\cps{V}/x]\\
=& \reason{definition of $[-/-]$} \\
& \kapp\;\reify (\sW[\cps{V}/x])\;(\cps{W}[\cps{V}/x])\\
=& \reason{IH 2} \\
& \kapp\;\reify (\sW[\cps{V}/x])\;\cps{W[V/x]}\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk.\,\kapp\;\reify \sk\;\cps{W[V/x]}) \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$}\\
& \cps{\Return\; (W[V/x])} \sapp \sW[\cps{V}/x]\\
=& \reason{definition of $[-/-]$}\\
& \cps{(\Return\; W)[V/x]} \sapp \sW[\cps{V}/x]
\end{derivation}
\item[Case] $M = \Let \; y \revto M' \; \In \; N$. We have:
\begin{derivation}
&(\cps{\Let \; y \revto M' \; \In \; N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))[\cps{V}/x]\\
=& \reason{definition of $\cps{-}$} \\
&(\bl(\slam \sRecord{\theta, \sRecord{\svhret,\svhops}}\scons \sk.\\
\qquad \cps{M'}\sapp(\sRecord{\reflect((\dlam y\,k.\,\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\dcons \reify\theta),\sRecord{\svhret,\svhops}} \scons \sk)) \\
\el \\
\quad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))[\cps{V}/x]\el\\
=& \reason{static $\beta$-conversion}\\
&(\cps{M'}\sapp(\sRecord{\reflect((\dlam y\,k.\,\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\dcons \reify \sV_{fs}),\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))[\cps{V}/x] \\
\el \\
=& \reason{IH 1 on $M'$}\\
&\cps{M'[V/x]}\sapp
((\sRecord{\reflect((\dlam y\,k.\,\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\dcons \reify \sV_{fs}),\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x])
\el \\
=& \reason{definition of $[-/-]$}\\
&\cps{M'[V/x]} \sapp
(\sRecord{\bl
\reflect((\dlam y\,k.\,\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
(\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret,\reflect \vhops}} \scons \reflect k'))[\cps{V}/x])\dcons \reify(\sV_{fs}[\cps{V}/x])), \\
\el \\
\sRecord{\sV_{ret},\sV_{ops}}[\cps{V}/x]} \scons (\sW[\cps{V}/x])) \\
\el \\
=& \reason{IH 1 on $N$} \\
&\bl\cps{M'[V/x]}\sapp
(\sRecord{\bl
\reflect((\dlam y\,k.\,\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In\\
\cps{N[V/x]} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret,\reflect \vhops}} \scons \reflect k'))\dcons (\reify \sV_{fs}[\cps{V}/x])),
\el \\
\sRecord{\sV_{ret},\sV_{ops}}[\cps{V}/x]} \scons (\sW[\cps{V}/x])) \\
\el \\
\el \\
=& \reason{static $\beta$-conversion and definition of $\cps{-}$} \\
&\cps{\Let\;y\revto M'[V/x]\;\In\;N[V/x]} \sapp ((\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x]) \\
=& \reason{definition of $[-/-]$} \\
&\cps{(\Let\;y\revto M'\;\In\;N)[V/x]} \sapp ((\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x]) \\
\end{derivation}
\item[Case] $M = \Do \; (\ell\,W)^E$. We have:
\begin{derivation}
& (\cps{\Do \; (\ell\,W)^E} \sapp \sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$}\\
& (\bl
(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}}\scons \sk.\reify\svhops \dapp \dRecord{\ell, \dRecord{\cps{W}, \dRecord{\reify \shf, \dRecord{\reify \svhret, \reify \svhops}} \dcons \dnil}} \dapp \reify \sk) \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))[\cps{V}/x] \\
\el \\
=& \reason{static $\beta$-conversion} \\
& (\reify \sV_{ops} \dapp \dRecord{\ell, \dRecord{\cps{W}, \dRecord{\reify \sV_{fs}, \dRecord{\reify \sV_{ret}, \reify \sV_{ops}}} \dcons \dnil}} \dapp \reify \sW)[\cps{V}/x] \\
=& \reason{definition of $[-/-]$} \\
& \reify \sV_{ops}[\cps{V}/x] \bl
\dapp \dRecord{\ell, \dRecord{\cps{W}[\cps{V}/x], \dRecord{\reify \sV_{fs}[\cps{V}/x], \dRecord{\reify \sV_{ret}[\cps{V}/x], \reify \sV_{ops}[\cps{V}/x]}} \dcons \dnil}} \\
\dapp \reify \sW[\cps{V}/x] \\
\el \\
=& \reason{IH 2 on $W$} \\
& \reify \sV_{ops}[\cps{V}/x] \bl
\dapp \dRecord{\ell, \dRecord{\cps{W[V/x]}, \dRecord{\reify \sV_{fs}[\cps{V}/x], \dRecord{\reify \sV_{ret}[\cps{V}/x], \reify \sV_{ops}[\cps{V}/x]}} \dcons \dnil}} \\
\dapp \reify \sW[\cps{V}/x] \\
\el \\
=& \reason{static $\beta$-conversion} \\
&\bl
(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}}\scons \sk.\reify \svhops \dapp \dRecord{\ell, \dRecord{\cps{W[V/x]}, \dRecord{\reify \shf, \dRecord{\reify \svhret, \reify \svhops}} \dcons \dnil}} \dapp \reify \sk) \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x] \\
\el \\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Do \; (\ell\,W[V/x])^E} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x] \\
=& \reason{definition of $[-/-]$} \\
& \cps{(\Do \; (\ell\,W)^E)[V/x]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)[\cps{V}/x] \\
\end{derivation}
\item[Case] $M = \Handle^\depth \; M' \; \With \; H$.
We make use of two auxiliary results.
\begin{enumerate}
\item $\cps{\hret}[\cps{V}/x] = \cps{\hret[V/x]}$\label{eq:hret-subst-proof}
\item $\cps{\hops}^\depth[\cps{V}/x] = \cps{\hops[V/x]}^\depth$\label{eq:hops-subst-proof}
\end{enumerate}
\begin{proof}
Suppose $\hret = \{ \Return \; y \mapsto N \}$.
\begin{derivation}
& \cps{\hret}[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$}\\
& (\dlam y\,k. \Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In\;\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))[\cps{V}/x] \\
=& \reason{definition of $[-/-]$} \\
& \dlam y\,k. \Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In\;(\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))[\cps{V}/x] \\
=& \reason{IH 1 for $N$}\\
& \dlam y\,k. \Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In\;\cps{N[V/x]} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k') \\
=& \reason{definition of $\cps{-}$}\\
& \cps{\hret[V/x]}
\end{derivation}
The
$\hops = \{(\ell\,p\,r \mapsto N_\ell)_{\ell \in \mathcal{L}}\}$
case goes through similarly.
\end{proof}
We can now prove that substitution commutes with the translation
of handlers:
\begin{derivation}
& (\cps{\Handle^\depth \; M' \; \With \; H} \sapp \sW)[\cps{V}/x] \\
=& \reason{definition of $\cps{-}$} \\
& ((\slam \sk . \cps{M'} \sapp \sRecord{\snil, \sRecord{\cps{\hret}, \cps{\hops}^\depth}} \scons \sk) \sapp \sW)[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& (\cps{M'} \sapp \sRecord{\snil, \sRecord{\cps{\hret}, \cps{\hops}^\depth}} \scons \sW)[\cps{V}/x] \\
=& \reason{IH 1 for $M'$}\\
& \cps{M'[V/x]} \sapp \sRecord{\snil, \sRecord{\cps{\hret}[\cps{V}/x], \cps{\hops}^\depth[\cps{V}/x]}} \scons \sW[\cps{V}/x] \\
=& \reason{\eqref{eq:hret-subst-proof} and \eqref{eq:hops-subst-proof}} \\
& \cps{M'[V/x]} \sapp \sRecord{\snil, \sRecord{\cps{\hret[V/x]}, \cps{\hops[V/x]}^\depth}} \scons \sW[\cps{V}/x] \\
=& \reason{static $\beta$-conversion} \\
& ((\slam \sk . \cps{M'[V/x]} \sapp \sRecord{\snil, \sRecord{\cps{\hret[V/x]}, \cps{\hops[V/x]}^\depth}} \scons \sk) \sapp \sW[\cps{V}/x]) \\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Handle^\depth\; M'[V/x] \;\With\; H[V/x]} \sapp \sW[\cps{V}/x] \\
=& \reason{definition of $[-/-]$} \\
& \cps{(\Handle^\depth \; M' \; \With \; H)[V/x]} \sapp \sW[\cps{V}/x]
\end{derivation}
\end{description}
\end{proof}
% type erasure lemma
\begin{lemma}[Type erasure]\label{lem:erasure-proof}
~
\begin{enumerate}
\item $\cps{M} \sapp \sW = \cps{M[T/\alpha]} \sapp \sW$
\item $\cps{W} = \cps{W[T/\alpha]}$
\end{enumerate}
\end{lemma}
\begin{proof}
Follows from the observation that the translation is oblivious to
types.
\end{proof}
%\addtocounter{theorem}{-7}
\begin{lemma}[Evaluation context decomposition]
\label{lem:decomposition-gen-cont-proof}
\[
% \cps{\EC[M]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW)
% =
% \cps{M} \sapp (\cps{\EC} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW))
\cps{\EC[M]} \sapp (\sV_f \scons \sW)
=
\cps{M} \sapp (\cps{\EC} \sapp (\sV_f \scons \sW))
\]
\end{lemma}
%
\begin{proof}
% For reference, we repeat the translation of evaluations contexts
% here:
% \TranslateEC{}
The proof proceeds by structural induction on the evaluation context
$\EC$.
\begin{description}
% Empty context
\item[Case] $\EC = [\,]$.
\begin{derivation}
& \cps{\EC[M]} \sapp (\sV \scons \sW) \\
=& \reason{assumption} \\
& \cps{M} \sapp (\sV \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{M} \sapp ((\slam \sk . \sk) \sapp (\sV \scons \sW)) \\
=& \reason{definition of $\cps{-}$} \\
& \cps{M} \sapp (\cps{\EC} \sapp (\sV \scons \sW))
\end{derivation}
% Let binding
\item[Case] $\EC = \Let \; x \revto \EC'[-] \; \In \; N$.
% Induction hypothesis:
% $\cps{\EC'[M]} \sapp (V \scons VS) = \cps{M} \sapp (\cps{\EC'} \sapp (V \scons VS))$.
%
\begin{derivation}
& \cps{\EC[M]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW) \\
=& \reason{assumption} \\
& \cps{\Let \; x \revto \EC'[M] \; \In \; N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}} \scons \shk.\\
\qquad \cps{\EC'[M]} \sapp (\sRecord{\reflect((\dlam x\,\dhk.\bl
\Let\;\dRecord{fs,\dRecord{\vhret, \vhops}}\dcons \dhk'=\dhk\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect \dhk')) \dcons \reify \shf), \sRecord{\svhret, \svhops}} \scons \shk)) \\
\el \\
\quad\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)\el \\
=& \reason{static $\beta$-conversion} \\
& \cps{\EC'[M]} \sapp (\sRecord{\reflect((\dlam x\,\dhk.\bl
\Let\;\dRecord{fs,\dRecord{\vhret, \vhops}}\dcons \dhk'=\dhk\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect \dhk')) \dcons \reify \sV_{fs}), \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW) \\
\el \\
=& \reason{IH for $\EC'[-]$}\\
& \cps{M} \sapp (\bl\cps{\EC'} \sapp \\\quad(\sRecord{\reflect((\dlam x\,\dhk.\bl
\Let\;\dRecord{fs,\dRecord{\vhret, \vhops}}\dcons \dhk'=\dhk\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect \dhk')) \dcons \reify\sV_{fs}), \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))
\el \\
\el \\
=& \reason{static $\beta$-conversion} \\
& \cps{M} \sapp (\bl
(\slam \sRecord{\shf, \sRecord{\svhret,\svhops}} \scons \shk.\\
\quad\cps{\EC'} \sapp (\sRecord{\bl
\reflect((\dlam x\,\dhk.\bl
\Let\;\dRecord{fs,\dRecord{\vhret, \vhops}}\dcons \dhk'=\dhk\;\In\\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect \dhk')) \dcons \reify\shf), \\
\el \\
\sRecord{\svhret,\svhops}} \scons \shk))
\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)) \\
\el \\
\el \\
=& \reason{definition of $\cps{-}$} \\
& \cps{M} \sapp (\cps{\Let \; x \revto \EC'[-] \; \In \; N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)) \\
=& \reason{assumption} \\
& \cps{M} \sapp (\cps{\EC} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))
\end{derivation}
% handler
\item[Case] $\EC = \Handle^\depth \; \EC' \; \With \; H$.
% Induction hypothesis:
% $\cps{\EC'[M]} \sapp (V \scons VS) = \cps{M} \sapp (\cps{\EC'} \sapp (V \scons VS))$.
%
\begin{derivation}
& \cps{\EC[M]} \sapp (\sV \scons \sW) \\
=& \reason{assumption} \\
& \cps{\Handle^\depth \; \EC'[M] \; \With \; H} \sapp (\sV \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \cps{\EC'[M]} \sapp (\sRecord{\snil, \cps{H}^\depth}\scons \sk)) \sapp (\sV \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\EC'[M]} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons (\sV \scons \sW)) \\
=& \reason{IH} \\
& \cps{M} \sapp (\cps{\EC'} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons (\sV \scons \sW))) \\
=& \reason{static $\beta$-conversion} \\
& \cps{M} \sapp ((\slam \sk . \cps{\EC'} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sk)) \sapp (\sV \scons \sW)) \\
=& \reason{definition of $\cps{-}$} \\
& \cps{M} \sapp (\cps{\Handle^\depth \; \EC' \; \With \; H} \sapp (\sV \scons \sW)) \\
=& \reason{assumption} \\
& \cps{M} \sapp (\cps{\EC} \sapp (\sV \scons \sW))
\end{derivation}
\end{description}
\end{proof}
% reflect after reify
\begin{lemma}[Reflect after reify]
\label{lem:reflect-after-reify-proof}
%
\[
% \cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \reflect \reify \sW)
% =
% \cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW).
\cps{M} \sapp (\sV_f \scons \reflect \reify \sW)
=
\cps{M} \sapp (\sV_f \scons \sW).
\]
\end{lemma}
\begin{proof}
For an inductive proof to go through in the presence of $\Let$ and
$\Handle$, which alter or extend the continuation stack, we
generalise the lemma statement to include an arbitrary list of
handler frames:
\begin{displaymath}
\cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)
=
\cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW)
\end{displaymath}
This is the lemma statement when $n = 0$. The proof now proceeds by
induction on the structure of $M$. Most of the translated terms do
not examine the top of the continuation stack, so we will write
$\sV_0$ for $\sRecord{\sV_{fs},\sRecord{\sV_{ret},\sV_{ops}}}$ to
save space.
\begin{description}
\item[Case] $M = V\,W$.
\begin{derivation}
& \cps{V\,W} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk.\cps{V} \dapp \cps{W} \dapp \reify \sk) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{static $\beta$-conversion} \\
& \cps{V} \dapp \cps{W} \dapp \reify (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\reify$} \\
& \cps{V} \dapp \cps{W} \dapp (\reify \sV_0 \dcons \dots \dcons \reify \sV_n \dcons \reify \sW)\\
=& \reason{definition of $\reify$} \\
& \cps{V} \dapp \cps{W} \dapp \reify (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sks.\cps{V} \dapp \cps{W} \dapp \reify \sks) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{definition of $\cps{-}$} \\
& \cps{V\,W} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)
\end{derivation}
\item[Case] $M = V\,T$. Similar to the $M = V\,W$ case.
\item[Case] $M = \Let\; \Record{\ell=x; y} = V \;\In\; N$.
\begin{derivation}
& \cps{\Let\; \Record{\ell=x; y} = V \;\In\; N} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk.\Let\; \Record{\ell=x; y} = \cps{V} \;\In\; \cps{N} \sapp \sk) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{static $\beta$-conversion} \\
& \Let\; \Record{\ell=x; y} = \cps{V} \;\In\; \cps{N} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{IH} \\
& \Let\; \Record{\ell=x; y} = \cps{V} \;\In\; \cps{N} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk.\Let\; \Record{\ell=x; y} = \cps{V} \;\In\; \cps{N} \sapp \sk) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Let\; \Record{\ell=x; y} = V \;\In\; N} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
\end{derivation}
\item[Case] $M = \Case\; V \{\ell\; x \mapsto M; y \mapsto
N\}$. Similar to the $M = \Let\; \Record{\ell=x; y} = V \;\In\; N$
case.
\item[Case] $M = \Absurd\; V$.
\begin{derivation}
& \cps{\Absurd\; V} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk.\Absurd\; \cps{V}) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{static $\beta$-conversion} \\
& \Absurd\; \cps{V}\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sks.\Absurd\; \cps{V}) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Absurd\; V} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
\end{derivation}
\item[Case] $M = \Return\;V$.
\begin{derivation}
& \cps{\Return\;V} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \kapp\;(\reify \sk)\;\cps{V}) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW) \\
=& \reason{static $\beta$-conversion} \\
& \kapp\; (\reify (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW))\; \cps{V}\\
=& \reason{definition of $\reify$} \\
& \kapp\; (\reify \sV_0 \dcons \dots \dcons \reify \sV_n \dcons \reify \sW)\; \cps{V}\\
=& \reason{definition of $\reify$} \\
& \kapp\; (\reify (\sV_0 \scons \dots \scons \sV_n \scons \sW))\; \cps{V}\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk. \kapp\;(\reify \sk)\;\cps{V}) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Return\;V} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW) \\
\end{derivation}
\item[Case] $M = \Let\; x \revto M'\;\In\; N$.
\begin{derivation}
& \cps{\Let\;x \revto M'\; \In\; N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl
(\slam \sRecord{\shf, \sRecord{\svhret,\svhops}} \scons \sk. \cps{M'} \sapp (
\bl\sRecord{\reflect((\dlam x\,k.
\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \shf),\el\\
\sRecord{\svhret,\svhops}} \scons \sk)))\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)
\el
\\
=& \reason{static $\beta$-conversion} \\
& \cps{M'} \sapp (
\bl\sRecord{\reflect((\dlam x\,k.
\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}),\el\\
\sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\el
\\
=& \reason{IH on $M'$} \\
& \cps{M'} \sapp (
\bl\sRecord{\reflect((\dlam x\,k.
\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}),\el\\
\sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW)\el
\\
\end{derivation}
\begin{derivation}
=& \reason{static $\beta$-conversion} \\
& \bl
(\slam \sRecord{\shf, \sRecord{\svhret,\svhops}} \scons \sk. \cps{M'} \sapp (
\bl\sRecord{\reflect((\dlam x\,k.
\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \shf),\el\\
\sRecord{\svhret,\svhops}} \scons \sk)))\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW)
\el
\\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Let\;x \revto M'\; \In\; N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW) \\
\end{derivation}
\item[Case] $M = \Do\;\ell\;V$.
\begin{derivation}
& \cps{\Do\;\ell\;V} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\cps{-}$} \\
& \bl
(\slam \sRecord{\shf, \sRecord{\svhret,\svhops}} \scons \sk. \reify \svhops \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify\shf,\dRecord{\reify\svhret,\reify\svhops}}\dcons\dnil}} \dapp \reify \sk) \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW)
\el\\
=& \reason{static $\beta$-conversion} \\
& \reify \sV_{ops} \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}}\dcons\dnil}} \dapp \reify (\sV_1 \scons \dots \scons \sV_n \scons \reflect \reify \sW) \\
=& \reason{definition of $\reify$} \\
& \reify \sV_{ops} \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}}\dcons\dnil}} \dapp (\reify\sV_1 \dcons \dots \dcons \reify\sV_n \dcons \reify \sW) \\
=& \reason{definition of $\reify$} \\
& \reify \sV_{ops} \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}}\dcons\dnil}} \dapp \reify(\sV_1 \scons \dots \scons \sV_n \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \bl
(\slam \sRecord{\shf, \sRecord{\svhret,\svhops}} \scons \sk. \reify \svhops \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify\shf,\dRecord{\reify\svhret,\reify\svhops}}\dcons\dnil}} \dapp \reify \sk) \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW)
\el\\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Do\;\ell\;V} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret},\sV_{ops}}} \scons \sV_1 \scons \dots \scons \sV_n \scons \sW)\\
\end{derivation}
\item[Case] $\Handle^\depth\; M \;\With\; H$.
\begin{derivation}
& \cps{\Handle^\depth\; M \;\With\; H} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk.\cps{M} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sk)) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{static $\beta$-conversion} \\
& \cps{M} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sV_0 \scons \dots \scons \sV_n \scons \reflect \reify \sW)\\
=& \reason{IH} \\
\end{derivation}
\begin{derivation}
& \cps{M} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{static $\beta$-conversion} \\
& (\slam \sk.\cps{M} \sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sk)) \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
=& \reason{definition of $\cps{-}$} \\
& \cps{\Handle^\depth\; M \;\With\; H} \sapp (\sV_0 \scons \dots \scons \sV_n \scons \sW)\\
\end{derivation}
\end{description}
\end{proof}
%
\begin{lemma}[Forwarding]\label{lem:forwarding-proof}
If $\ell \notin dom(H_1)$ then:
%
\[
\bl
\cps{\hops_1}^\delta \dapp \dRecord{\ell, \dRecord{V_p, V_{\dhkr}}} \dapp (\dRecord{V_{fs}, \cps{H_2}^\delta} \dcons W)
\reducesto^+ \qquad \\
\hfill
\cps{\hops_2}^\delta \dapp \dRecord{\ell, \dRecord{V_p, \dRecord{V_{fs}, \cps{H_2}^\delta} \dcons V_{\dhkr}}} \dapp W. \\
\el
\]
%
\end{lemma}
\begin{proof}
\begin{derivation}
& \cps{\hops_1}^\delta \dapp \dRecord{\ell, \dRecord{V_p, V_{rk}}} \dapp (\dRecord{V_{fs}, \cps{H_2}^\delta} \dcons W) \\
\reducesto^+ & \\
& M_{forward}((\ell, V_p, V_{rk}), \dRecord{V_{fs}, \cps{H_2}^\delta} \dcons W) \\
= & \\
& \bl
\Let\; \dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dRecord{V_{fs}, \cps{H_2}^\delta} \dcons W \;\In \\
\Let\; rk' = \dRecord{fs, \dRecord{\vhret, \vhops}} \dcons V_{rk}\;\In\\
\vhops \dapp \dRecord{\ell,\dRecord{V_p, rk'}} \dapp \dhk' \\
\el\\
\reducesto^+ & \\
& \cps{\hops_2}^\delta \dapp \dRecord{\ell,\dRecord{V_p, \dRecord{V_{fs}, \cps{H_2}^\delta} \dcons V_{rk}}} \dapp W
\end{derivation}
\end{proof}
\newcommand{\Append}{\mathop{+\kern-4pt+}}
The following lemma is central to our simulation theorem. It
characterises the sense in which the translation respects the handling
of operations. Note how the values substituted for the resumption
variable $r$ in both cases are in the image of the translation of
$\lambda$-terms in the CPS translation. This is thanks to the precise
way that the reduction rules for resumption construction work in our
dynamic language, as described above.
%
\begin{lemma}[Handling]\label{lem:handle-op-gen-cont-proof}
Suppose $\ell \notin BL(\EC)$ and
$\hell = \{\ell\,p\,r \mapsto N_\ell\}$. If $H$ is deep then
\[
\bl
\cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sV_f \scons \sW)) \reducesto^+ \\
\quad (\cps{N_\ell} \sapp (\sV_f \scons \sW))
[\cps{V}/p,
\dlam x\,\dhk.\bl
\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk\;\In\;
\cps{\Return\;x}\\
\sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
\el\\
\el
\]
%
Otherwise if $H$ is shallow then
\[
\bl
\cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}^\dagger} \scons \sV_f \scons \sW)) \reducesto^+ \\
\quad (\cps{N_\ell} \sapp (\sV_f \scons \sW))
[\cps{V}/p, \dlam x\,\dhk. \bl
\Let\;\dRecord{\dlk, \dRecord{\vhret, \vhops}} \dcons \dhk' = \dhk \;\In\;\cps{\Return\;x}\\
\sapp (\cps{\EC} \sapp (\sRecord{\reflect \dlk, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect\dhk'))/r]. \\
\el \\
\el
\]
%
\end{lemma}
\begin{proof}
By the definition of $\cps{-}$ on evaluation contexts we can deduce
that
\begin{equation}
\label{eq:evalcontext-eq-proof}
\cps{\EC} \sapp (\sRecord{\reflect V, \cps{H}^\depth} \scons \sW)
=
\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect (V_n \Append V), \cps{H_n}^{\depth_n}} \scons \sW
\end{equation}
for some dynamic value terms $V_1, \dots, V_n$, depths
$\depth_1, \dots, \depth_n$, and handlers $H_1, \dots, H_n$, where
$n \geq 1$, $H_n = H$, and $\Append$ is (dynamic) list
concatenation.
\begin{derivation}
& \cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\sV_{fs},\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)) \\
= & \reason{definition of $\cps{-}$} \\
& \bl(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}}\scons \sk. \reify \svhops \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify \shf, \dRecord{\reify \svhret, \reify \svhops}} \dcons \dnil}} \dapp \reify \sk)\\
\qquad \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\sV_{fs},\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW))\el\\
= & \reason{Equation \ref{eq:evalcontext-eq-proof}, above} \\
& \bl(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}}\scons \sk. \reify \svhops \dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{\reify \shf, \dRecord{\reify \svhret, \reify \svhops}} \dcons \dnil}} \dapp \reify \sk)\\
\qquad \sapp (\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect V_n, \cps{H_n}^{\depth_n}} \scons \sRecord{\sV_{fs},\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW)\el\\
= & \reason{static $\beta$-conversion} \\
& \cps{\hops_1}^{\depth_1} \bl
\dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dnil}} \\
\dapp \reify (\dots \scons \sRecord{\reflect V_n, \cps{H_n}^{\depth_n}} \scons \sRecord{\sV_{fs},\sRecord{\sV_{ret},\sV_{ops}}} \scons \sW) \\
\el \\
= & \reason{definition of $\reify$} \\
& \cps{\hops_1}^{\depth_1} \bl
\dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dnil}} \\
\dapp (\dots \dcons \dRecord{V_n, \cps{H_n}^{\depth_n}} \dcons \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}} \dcons \reify \sW)\\
\el \\
\reducesto^+ & \reason{$\ell \notin BL(\EC)$ and repeated application of Lemma~\ref{lem:forwarding-proof}} \\
\end{derivation}
\begin{derivation}
& \cps{\hops_n}^{\depth_n} \bl
\dapp \dRecord{\ell, \dRecord{\cps{V}, \dRecord{V_n, \cps{H_n}^{\depth_n}} \dcons \dots \dcons \dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dnil}} \\
\dapp (\dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}} \dcons \reify \sW) \\
\el \\
\reducesto^+ & \reason{$H^\ell = \{\ell\;p\;r \mapsto N_\ell\}$} \\
& \bl \Let\;r=\Res^\depth\; (\dRecord{V_n, \cps{H_n}^{\depth_n}} \dcons \dots \dcons \dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dnil)\;\In\\
\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}} \dcons \reify \sW\;\In\\
(\cps{N_\ell}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons\reflect k'))[\cps{V}/p]\el\\
\reducesto& \reason{$\usemlab{Res^\depth}$: there are two cases yielding different $\mathcal{R}$, see below} \\
& \bl \Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = \dRecord{\reify\sV_{fs},\dRecord{\reify\sV_{ret},\reify\sV_{ops}}} \dcons \reify \sW\;\In\\
(\cps{N_\ell}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons\reflect k'))[\cps{V}/p,\mathcal{R}/r]\el\\
\reducesto^+& \reason{$\usemlab{Split}$} \\
& (\cps{N_\ell}\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}}\scons\reflect \reify\sW))[\cps{V}/p,\mathcal{R}/r]\\
= & \reason{Lemma~\ref{lem:reflect-after-reify-proof} (Reflect after reify)} \\
& (\cps{N_\ell}\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}}\scons\sW))[\cps{V}/p,\mathcal{R}/r]\\
\end{derivation}
To complete the proof, we examine the resumption term $\mathcal{R}$
generated by the reduction of the $\Let\;r=\Res^\depth\;rk\;\In\;N$
construct. There are two cases, depending on whether the handler is
deep or shallow. When the handler is deep, we have:
\begin{equations}
\mathcal{R}
&=& \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\kapp\;(\dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dots \dcons \dRecord{V_n, \cps{H_n}^{\depth_n}} \dcons \dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k')\;y
\el
\\
&=& \reason{static $\beta$-conversion, and definition of $\reify$} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
(\slam \sk. \kapp\;(\reify \sk)\;y) \\
\quad \sapp (\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect V_n, \cps{H_n}^{\depth_n}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k')
\el\\
&=& \reason{definition of $\cps{-}$} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\cps{\Return\;y} \sapp (\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect V_n, \cps{H_n}^{\depth_n}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k')
\el\\
&=& \reason{Equation~\ref{eq:evalcontext-eq-proof}} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\cps{\Return\;y} \sapp (\cps{\EC} \sapp (\sRecord{\reflect \dnil, \cps{H_n}^{\depth_n}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k'))
\el\\
\end{equations}
When the handler is shallow, we have:
\begin{equations}
\mathcal{R}
&=& \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\kapp\;(\dRecord{V_1, \cps{H_1}^{\depth_1}} \dcons \dots \dcons \dRecord{V_n \Append fs, \dRecord{\vhret, \vhops}}\dcons k')\;y
\el
\\
&=& \reason{static $\beta$-conversion, and definition of $\reify$} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
(\slam \sk. \kapp\;(\reify \sk)\;y) \sapp (\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect (V_n \Append fs), \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k')
\el\\
&=& \reason{definition of $\cps{-}$} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\cps{\Return\;y} \sapp (\sRecord{\reflect V_1, \cps{H_1}^{\depth_1}} \scons \dots \scons \sRecord{\reflect (V_n \Append fs), \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k')
\el\\
&=& \reason{Equation~\ref{eq:evalcontext-eq-proof}} \\
& & \dlam y\,k. \bl\Let\;\dRecord{fs, \dRecord{\vhret, \vhops}}\dcons k' = k\;\In\;\\
\cps{\Return\;y} \sapp (\cps{\EC} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}}\scons \reflect k'))
\el\\
\end{equations}
\end{proof}
\clearpage
\medskip
%
\begin{theorem}[Simulation]
\label{thm:ho-simulation-gen-cont-proof}
If $M \reducesto N$ then
\[
\cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{\mret},\sV_{\mops}}}
\scons \sW) \reducesto^+ \cps{N} \sapp (\sRecord{\sV_{fs},
\sRecord{\sV_{\mret},\sV_{\mops}}} \scons \sW).
\]
\end{theorem}
\begin{proof}
The proof is by induction on the derivation of the reduction
relation ($\reducesto$).
\begin{description}
\item[Case] $\semlab{App}$: $(\lambda x^A.M)\,V \reducesto M[V/x]$.
\begin{derivation}
& \cps{(\lambda x^A.M)\,V} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk.\,\cps{\lambda x^A.M} \dapp \cps{V} \dapp \reify \sk) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\lambda x^A.M} \dapp \cps{V} \dapp \reify (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\reify$} \\
& \cps{\lambda x^A.M} \dapp \cps{V} \dapp (\dRecord{\reify \sV_{fs}, \dRecord{\reify \sV_{ret}, \reify \sV_{ops}}} \dcons \reify \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl(\dlam x\,k. \Let\;\dRecord{fs,\dRecord{\vhret, \vhops}} \dcons k' = k\;\In\;\cps{M} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\\
\qquad \dapp \cps{V} \dapp (\dRecord{\reify \sV_{fs}, \dRecord{\reify \sV_{ret}, \reify \sV_{ops}}} \dcons \reify \sW)\el \\
\reducesto^+& \reason{dynamic $\beta$-reduction and pattern matching, and structure of continuations}\\
& \cps{M}[\cps{V}/x] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW) \\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (Substitution)} \\
& \cps{M[V/x]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW) \\
=& \reason{Lemma~\ref{lem:reflect-after-reify-proof} (reflect after reify)} \\
& \cps{M[V/x]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
\end{derivation}
\item[Case] $\semlab{TyApp}$: $(\Lambda \alpha^K.M)\,T \reducesto M[T/\alpha]$.
\begin{derivation}
& \cps{(\Lambda \alpha^K.M)\,T} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \cps{\Lambda \alpha^K.M} \dapp \dRecord{} \dapp \reify \sk) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\Lambda \alpha^K.M} \dapp \dRecord{} \dapp \reify (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
%% \end{derivation}
%% \begin{derivation}
=& \reason{definition of $\reify$} \\
& \cps{\Lambda \alpha^K.M} \dapp \dRecord{} \dapp (\dRecord{\reify \sV_{fs}, \dRecord{\reify \sV_{ret}, \reify \sV_{ops}}} \dcons \reify \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl(\dlam x\,k. \Let\;\dRecord{fs,\dRecord{\vhret, \vhops}} \dcons k' = k\;\In\;\cps{M} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\\
\qquad \dapp \dRecord{} \dapp (\dRecord{\reify \sV_{fs}, \dRecord{\reify \sV_{ret}, \reify \sV_{ops}}} \dcons \reify \sW)\el \\
\reducesto^+& \reason{dynamic $\beta$-reduction and pattern matching, and structure of continuations}\\
& \cps{M} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW) \\
=& \reason{Lemma~\ref{lem:erasure-proof} (Type erasure)} \\
& \cps{M[T/\alpha]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW) \\
=& \reason{Lemma~\ref{lem:reflect-after-reify-proof} (reflect after reify)} \\
& \cps{M[T/\alpha]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
\end{derivation}
\item[Case] $\semlab{Rec}$:
$(\Rec\,g\,x.M)\,V \reducesto M[(\Rec\,g\,x.M)/g,V/x]$. Similar to
the previous two cases.
\item[Case] $\semlab{Split}$: $\Let\;\Record{\ell=x;y} = \Record{\ell=V;W}\;\In\;N \reducesto N[V/x,W/y]$.
\begin{derivation}
& \cps{\Let\;\Record{\ell=x;y} = \Record{\ell=V;W}\;\In\;N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
=& \reason{definition of $\cps{-}$}\\
& (\slam \sk. \Let\;\dRecord{\ell,\dRecord{x,y}} = \dRecord{\ell,\dRecord{\cps{V},\cps{W}}}\;\In\;\cps{N} \sapp \sk) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
=& \reason{static $\beta$-conversion}\\
& \Let\;\dRecord{\ell,\dRecord{x,y}} = \dRecord{\ell,\dRecord{\cps{V},\cps{W}}}\;\In\;\cps{N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
\reducesto^+& \reason{\usemlab{Split}}\\
& (\cps{N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW))[\cps{V}/x,\cps{W}/y] \\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (Substitution)}\\
& \cps{N[V/x,W/y]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
\end{derivation}
\item[Case] $\semlab{Case_1}$ and $\semlab{Case_2}$: Similar to the previous case.
\item[Case] $\semlab{Let}$:
$\Let\;x\revto \Return\;V\;\In\;N \reducesto N[V/x]$.
\begin{derivation}
& \cps{\Let\;x\revto \Return\;V\;\In\;N} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl(\slam \sRecord{\shf, \sRecord{\svhret, \svhops}}\scons \sk. \\\cps{\Return\;V} \sapp (\sRecord{\reflect((\dlam x\,k.
\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \shf), \sRecord{\svhret, \svhops}} \scons \sk)) \\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)
\el \\
=& \reason{static $\beta$-conversion} \\
& \cps{\Return\;V} \sapp (\sRecord{\bl
\reflect((\dlam x\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In \\
\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}), \\
\el \\
\sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
\el \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \kapp\; (\reify \sk)\;\cps{V}) \sapp (\sRecord{\reflect((\dlam x\,k.
\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\
\cps{N} \sapp (\bl
\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}), \\
\sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)
\el \\
\el \\
=& \reason{static $\beta$-conversion} \\
& \kapp\; (\reify(\sRecord{\reflect((\dlam x\,k.\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}), \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW))\;\cps{V}\el \\
=& \reason{definition of $\reify$} \\
& \kapp\; (\dRecord{(\dlam x\,k.\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dcons \reify \sV_{fs}, \dRecord{\reify\sV_{ret}, \reify\sV_{ops}}} \dcons \reify\sW)\;\cps{V}\el \\
\reducesto& \reason{$\usemlab{KAppCons}$} \\
& (\dlam x\,k.\bl\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\\\cps{N} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k')) \dapp \cps{V} \dapp (\dRecord{\reify \sV_{fs}, \dRecord{\reify\sV_{ret}, \reify\sV_{ops}}} \dcons \reify \sW)\el \\
\reducesto^+& \reason{$\usemlab{App}$, $\usemlab{Split}$} \\
& \cps{N}[\cps{V}/x] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW)\\
=& \reason{Lemma~\ref{lem:reflect-after-reify-proof} (reflect after reify)} \\
& \cps{N}[\cps{V}/x] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (substitution)} \\
& \cps{N[V/x]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
\end{derivation}
\item[Case] $\semlab{Ret}$:
$\Handle^\depth\;(\Return\;V)\;\With\;H \reducesto N[V/x]$, where
$\hret = \{\Return\;x\mapsto N\}$.
\begin{derivation}
& \cps{\Handle^\depth\;(\Return\;V)\;\With\;H} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \cps{\Return\;V}\sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sk)) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\Return\;V}\sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \kapp\;(\reify \sk)\;\cps{V})\sapp (\sRecord{\snil, \cps{H}^\depth} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \kapp\;(\reify (\sRecord{\snil, \cps{H}^\depth} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW))\;\cps{V} \\
=& \reason{definition of $\cps{H}^\depth$ and $\reify$} \\
& \kapp\;(\dRecord{\dnil, \dRecord{\cps{\hret},\cps{\hops}^\depth}} \dcons \dRecord{\reify\sV_{fs}, \dRecord{\reify\sV_{ret}, \reify\sV_{ops}}} \dcons \reify\sW)\;\cps{V}\\
\reducesto& \reason{\usemlab{KAppNil}}\\
& \cps{\hret} \dapp \cps{V} \dapp (\dRecord{\reify\sV_{fs}, \dRecord{\reify\sV_{ret}, \reify\sV_{ops}}} \dcons \reify\sW) \\
=& \reason{definition of $\cps{-}$} \\
& \bl(\dlam x\,k.\Let\,\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k' = k\;\In\;\cps{N}\sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect \vhops}} \scons \reflect k'))\\
\qquad\dapp \cps{V} \dapp (\dRecord{\reify\sV_{fs}, \dRecord{\reify\sV_{ret}, \reify\sV_{ops}}} \dcons \reify\sW)\el \\
\reducesto^+& \reason{\usemlab{App}, \usemlab{Split}} \\
& \cps{N}[\cps{V}/x] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \reflect \reify \sW) \\
=& \reason{Lemma~\ref{lem:reflect-after-reify-proof} (reflect after reify)} \\
& \cps{N}[\cps{V}/x] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (substitution)} \\
& \cps{N[V/x]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
\end{derivation}
\item[Case] $\semlab{Op}$:
$\Handle\;\EC[\Do\;\ell\;V]\;\With\;H \reducesto N_\ell[V/p,\lambda
y.\Handle\;\EC[\Return\; y]\;\With\;H/r]$, where $\ell \not\in BL(\EC)$
and $H^\ell = \{\ell\;p\;r \mapsto N_\ell\}$.
\begin{derivation}
& \cps{\Handle\;\EC[\Do\;\ell\;V]\;\With\;H}\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \cps{\EC[\Do\;\ell\;V]} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}}} \scons \sk)) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\EC[\Do\;\ell\;V]} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}}} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{Lemma~\ref{lem:decomposition-gen-cont-proof} (Decomposition)} \\
& \cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}}} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW))\\
\reducesto^+& \reason{Lemma~\ref{lem:handle-op-gen-cont-proof} (Handling)} \\
& \bl\cps{N_\ell}[\cps{V}/p,\dlam y\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{\Return\;y} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect\vhops}} \scons \reflect k'))/r]\\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\el\\
=& \reason{Lemma~\ref{lem:decomposition-gen-cont-proof} (Decomposition)} \\
& \bl\cps{N_\ell}[\cps{V}/p, \dlam y\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{\EC[\Return\;y]} \sapp (\sRecord{\snil, \cps{H}} \scons \sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect\vhops}} \scons \reflect k')/r] \\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\el\\
=& \reason{static $\beta$-conversion and definition of $\cps{-}$} \\
\end{derivation}
\begin{derivation}
& \bl\cps{N_\ell}[\cps{V}/p, \dlam y\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{\Handle\;\EC[\Return\;y]\;\With\;H} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect\vhops}} \scons \reflect k')/r]\\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\el\\
=& \reason{definition of $\cps{-}$} \\
& \cps{N_\ell}[\cps{V}/p, \cps{\lambda y. \Handle\;\EC[\Return\;y]\;\With\;H}/r] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (Substitution)} \\
& \cps{N_\ell[V/p, \lambda y. \Handle\;\EC[\Return\;y]\;\With\;H/r]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
\end{derivation}
\item[Case] $\semlab{Op^\dagger}$:
$\Handle^\dagger\;\EC[\Do\;\ell\;V]\;\With\;H \reducesto N_\ell[V/p,\lambda
y.\EC[\Return\; y]/r]$, where $\ell \not\in BL(\EC)$
and $H^\ell = \{\ell\;p\;r \mapsto N_\ell\}$.
\begin{derivation}
& \cps{\Handle^\dagger\;\EC[\Do\;\ell\;V]\;\With\;H}\sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{definition of $\cps{-}$} \\
& (\slam \sk. \cps{\EC[\Do\;\ell\;V]} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}^\dagger}} \scons \sk)) \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{static $\beta$-conversion} \\
& \cps{\EC[\Do\;\ell\;V]} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}^\dagger}} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW) \\
=& \reason{Lemma~\ref{lem:decomposition-gen-cont-proof} (Decomposition)} \\
& \cps{\Do\;\ell\;V} \sapp (\cps{\EC} \sapp (\sRecord{\snil, \sRecord{\reflect\cps{\hret}, \reflect\cps{\hops}^\dagger}} \scons \sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW))\\
\reducesto^+& \reason{Lemma~\ref{lem:handle-op-gen-cont-proof} (Handling)} \\
& \bl\cps{N_\ell}[\cps{V}/p,\dlam y\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{\Return\;y} \sapp (\cps{\EC} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect\vhops}} \scons \reflect k'))/r]\\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\el\\
=& \reason{Lemma~\ref{lem:decomposition-gen-cont-proof} (Decomposition)} \\
& \bl\cps{N_\ell}[\cps{V}/p, \dlam y\,k.\bl
\Let\;\dRecord{fs,\dRecord{\vhret,\vhops}}\dcons k'=k\;\In \\
\cps{\EC[\Return\;y]} \sapp (\sRecord{\reflect fs, \sRecord{\reflect \vhret, \reflect\vhops}} \scons \reflect k')/r]\\
\el \\
\qquad \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\el\\
=& \reason{definition of $\cps{-}$} \\
& \cps{N_\ell}[\cps{V}/p, \cps{\lambda y. \EC[\Return\;y]}/r] \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
=& \reason{Lemma~\ref{lem:subst-gen-cont-proof} (Substitution)} \\
& \cps{N_\ell[V/p, \lambda y. \EC[\Return\;y]/r]} \sapp (\sRecord{\sV_{fs}, \sRecord{\sV_{ret}, \sV_{ops}}} \scons \sW)\\
\end{derivation}
\end{description}
\end{proof}
\chapter{Proof details for the complexity of effectful generic count}
\label{sec:positive-theorem}
\newcommand{\HCount}{H_{\Count}}
\newcommand{\Henv}{\env_{\HCount}}
\newcommand{\Pure}{\dec{pure}}
\newcommand{\envt}{\dec{env}}
\newcommand{\hclo}{\chi_{\Count}}
\newcommand{\DTDT}{\dec{DT}}
\newcommand{\CF}{\dec{CF}}
\newcommand{\residual}{\dec{residual}}
%\newcommand{\cont}{\dec{cont}}
\newcommand{\comp}{\dec{comp}}
\newcommand{\purecont}{\dec{purecont}}
\newcommand{\descend}[1]{\dec{env}^{\downarrow}_{#1}}
\newcommand{\ascend}[1]{\dec{env}^{\uparrow}_{#1}}
\newcommand{\initial}{\dec{env}^{\bot}}
\newcommand{\final}{\dec{env}^{\top}}
\newcommand{\ctrl}{\dec{control}}
\newcommand{\dt}{\mathcal{D}}
%
\newcommand{\arrive}{\dec{arrive}}
\newcommand{\depart}{\dec{depart}}
\newcommand{\whereX}[1]{%
\multicolumn{2}{l}%
{\text{where }\bl #1\el}%
}
%
In this appendix I give the proof details and artefacts for
Theorem~\ref{thm:complexity-effectful-counting}.
\paragraph{Relation to prior work} This appendix is imported from
Appendix C of \citet{HillerstromLL20a}.\medskip
Throughout this section we let $\HCount$ denote the handler definition
of $\Count$, that is
%
\[
\HCount \defas
\left\{
\ba[m]{@{~}l@{~}c@{~}l}
\Return~x &\mapsto& \If\;x\;\Then\; \Return~1 \;\Else\; \Return~0\\
\Branch~\Unit~r &\mapsto&
\ba[t]{@{}l}
\Let\; x_\True \revto r~\True \; \In\\
\Let\; x_\False \revto r~\False \; \In\\
x_\True + x_\False
\ea
\ea
\right\}
\]
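To make the operational reading of $\HCount$ concrete, here is a
minimal executable sketch in OCaml (all names are ours; the single
$\Branch$ operation is encoded as a hand-rolled free monad rather than
a native effect, since the handler resumes each captured continuation
twice):

\begin{verbatim}
(* A computation either returns a value or performs Branch and
   continues with the boolean chosen by the handler. *)
type 'a comp =
  | Return of 'a
  | Branch of (bool -> 'a comp)

(* The Count handler: returned booleans map to 1/0, and Branch
   sums the results of resuming with true and with false. *)
let rec count : bool comp -> int = function
  | Return true  -> 1
  | Return false -> 0
  | Branch r     -> count (r true) + count (r false)
\end{verbatim}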
%
The timed decision tree model embeds timing information. For the proof
we must also know the abstract machine environment and the pure
continuation. Thus we decorate timed decision trees with this
information.
%
\begin{definition}[decorated timed decision trees]
A decorated timed decision tree is a partial function $\tree :
\Addr \pto (\Lab \times \Nat) \times \Conf_q$ such
that its first projection $bs \mapsto \tree(bs).1$ is a timed
decision tree.
\end{definition}
%
We extend the projections $\labs$ and $\steps$ in the obvious way to
work over decorated timed decision trees. We define three further
projections. The first $\comp(\tree) \defas bs \mapsto \tree(bs).2.1$
projects the computation component of the configuration, the second
$\envt(\tree) \defas bs \mapsto \tree(bs).2.2$ projects the
environment, and finally the third
$\Pure(\tree) \defas bs \mapsto \mathsf{head}(\tree(bs).2.3).1$ projects
the pure continuation.
The following definition gives a procedure for constructing a
decorated timed decision tree. The construction is analogous to that
of Definition~\ref{def:model-construction}.
%
\begin{definition}\label{def:model-construction-extended}
(i) Define $\dt : \Conf_q \pto \Addr \pto (\Lab \times \Nat) \times \Conf_q$ to be the minimal family of
partial functions satisfying the following equations:
%
{\small
\begin{mathpar}
\ba{@{}r@{~}c@{~}l@{\qquad}l@{}}
\dt(\cek{\Return\;W \mid \env \mid \nil})\, \nil &~=~& ((!b, 0), \cek{\Return\;W \mid \env \mid \nil}),
&\text{if }\val{W}\env = b \smallskip\\
\dt(\cek{z\,V \mid \env \mid \kappa})\, \nil &~=~& ((?\val{V}{\env}, 0), \cek{z\,V \mid \env \mid \kappa}),
& \text{if } \env(z) = q \smallskip\\
\dt(\cek{z\,V \mid \env \mid \kappa})\, (b \cons bs) &~\simeq~& \dt(\cek{\Return\;b \mid \env \mid \kappa})\,bs,
& \text{if } \env(z) = q \smallskip\\
\dt(\cek{M \mid \env \mid \kappa})\, bs &~\simeq~& \mathsf{inc}\,(\dt(\cek{M' \mid \env' \mid \kappa'})\, bs),
&\text{if } \cek{M \mid \env \mid \kappa} \stepsto \cek{M' \mid \env' \mid \kappa'}
\ea
\end{mathpar}}%
Here
$\mathsf{inc}((\ell, s), \mathcal{C}) = ((\ell, s + 1), \mathcal{C})$,
and in all of the above equations $\env(q) = \env'(q) =
q$. Clearly $\dt(\conf)$ is a decorated timed decision tree for any
$\conf \in \Conf_q$.
%
(ii) The decorated timed decision tree of a computation term is
obtained by placing it in the initial configuration:
%
$\dt(M) \defas \dt(\cek{M \mid \emptyset[q \mapsto q] \mid \kappa_0})$.
%
(iii) The decorated timed decision tree of a closed value
$P:\Predicate$ is $\dt(P\,q)$. Since $q$ plays the role of a dummy
argument, we will usually omit it and write $\dt(P)$ for $\dt(P\,q)$.
\end{definition}
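For intuition, here is a toy analogue of part (i) over the free-monad
\texttt{comp} type from the sketch above (names again ours). Machine
configurations and step counts are elided, since \texttt{comp} has no
small-step structure, and our \texttt{Branch} carries no component
index, so a query label is unindexed:

\begin{verbatim}
type label = Query | Answer of bool

(* An address bs selects a path through the tree; the result is
   the label at that address, when the address is meaningful. *)
let rec label_at (m : bool comp) (bs : bool list) : label option =
  match m, bs with
  | Return b, []       -> Some (Answer b)
  | Branch _, []       -> Some Query
  | Branch r, b :: bs' -> label_at (r b) bs'
  | Return _, _ :: _   -> None
\end{verbatim}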
We define some functions that, given a list of booleans and an
$n$-standard predicate, compute configurations of the effectful
abstract machine at particular points of interest during evaluation of
the given predicate. Let
$\hclo(V) \defas (\emptyset[pred \mapsto \val{V}\emptyset], \HCount)$
denote the handler closure of $\HCount$.
\paragraph{Notation.} For an $n$-standard predicate $P$ we write
$|P| = n$ for the size of the predicate. Furthermore, we define
$\chi_{\text{id}}$ for the identity handler closure
$(\emptyset, \{ \Return~x \mapsto x \})$.
%
\begin{definition}[computing machine configurations]
For any $n$-standard predicate $P$ and a list of booleans $bs$, such
that $|bs| \leq n$, we can compute machine configurations at points
of interest during evaluation of $\Count~P$.
To make the notation slightly simpler we use the following
conventions whenever $n$, $\tree$, and $c$ appear free: $n = |P|$,
$\tree = \dt(P)$, and $c(bs) = \sharp(bs' \mapsto \val{P}~(bs \concat bs'))$.
%
The definitions are presented in a top-down manner.
%
\begin{itemize}
\item The function $\arrive$ either computes the configuration at a
query node, if $|bs| < n$, or the configuration at an answer node.
%
\begin{equations}
\arrive &:& \Addr \times \ValCat \pto \Conf \\
\arrive(bs, P) &\defas& \cek{z~V \mid \env \mid (\sigma, \hclo(P)) \cons \residual(bs, P)}, \quad \text{if } |bs| < n\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where } \ba[t]{@{~}l}
z~V = \comp(\tree)(bs), \env = \envt(\tree)(bs)[z \mapsto (\initial(P), \Superpoint)], \\
?k = \labs(\tree)(bs), \val{V}\env = k, \text{ and } \sigma = \Pure(\tree)(bs)
\ea}\\
\arrive(bs, P) &\defas& \cek{\Return\;W \mid \env \mid (\nil, \hclo(P)) \cons \residual(bs, P)}, \quad \text{if } |bs| = n\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where } \Return\;W = \comp(\tree)(bs), \env = \envt(\tree)(bs), !b = \labs(\tree)(bs), \text{ and } \val{W}\env = b}
\end{equations}
%
\item Correspondingly, the $\depart$ function computes the
configuration either after the completion of a query or handling
of an answer.
%
\begin{equations}
\depart &:& \Addr \times \ValCat \pto \Conf \\
\depart(bs, P) &\defas& \cek{\Return\; m \mid \env \mid \residual(bs, P)}, \quad \text{if } |bs| < n\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where } \env = \ascend{\False}(bs, P) \text{ and } m = c(bs)}\\
\depart(bs, P) &\defas& \cek{\Return\;m \mid \env \mid \residual(bs, P)}, \quad \text{if } |bs| = n\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where }\ba[t]{@{}l}
m = c(bs),
b = \begin{cases} \True & \text{if } m = 1\\
\False & \text{if } m = 0
\end{cases}, \text{ and } \env = \initial(P)[x \mapsto b]
\ea}
\end{equations}
%
The two clauses of $\depart$ yield slightly different
configurations. The first clause computes a configuration inside
the operation clause of $\HCount$: it is exactly the
tail-configuration reached after summing the two values returned
by the two invocations of the resumption. The second clause
computes the tail-configuration inside the success clause of
$\HCount$ after handling a return value of the predicate.
%
\item The $\residual$ function computes the residual continuation
structure, which contains the remaining computation to perform after
handling a complete path in the decision tree.
\begin{equations}
\residual &:& \Addr \times \ValCat \pto \Cont\\
\residual(bs, P) &\defas& [(\purecont(bs, P), \chi_{id})]
\end{equations}
%
%
\item The function $\purecont$ computes the pure continuation.
%
\begin{equations}
\purecont &:& \Addr \times \ValCat \pto \PureCont\\
\purecont (\nil, P) &\defas& \nil\\
%
\purecont (\snoc{bs}{\True}, P) &\defas& \bl (\env, x_\True, \Let\;x_\False\revto r~\False\;\In\;x_\True+x_\False)\\
\cons \purecont(bs, P),\el\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where } \env = \descend{\True}(\snoc{bs}{\True}, P)}\\
%
\purecont (\snoc{bs}{\False}, P) &\defas& \bl(\env, x_\False, x_\True+x_\False)\\
\cons \purecont (bs, P),\el\\
\multicolumn{3}{@{}l@{}}
{\hfill
\text{where } \env = \descend{\False}(\snoc{bs}{\False}, P)}\\
\end{equations}
%
%
\item The function $\initial$ computes the initial environment of
the handler. The family of functions $\descend{b\in\mathbb{B}}$
contains two functions, one for each instantiation of $b$, which
describe how to compute the environment prior to \emph{descending}
down a branch as the result of invoking a resumption with
$b$. Analogously, the functions in the family
$\ascend{b \in \mathbb{B}}$ describe how to compute the
environment after \emph{ascending} from the resumptive exploration
of a branch.
%
\begin{equations}
\initial &:& \ValCat \to \Env\\
\initial(P) &\defas& \emptyset[pred \mapsto \val{P}\emptyset]\\[1.5ex]
\end{equations}
\begin{minipage}{.5\linewidth}
\begin{equations}
%
\descend{\True} &:& \Addr \times \ValCat \pto \Env\\
\descend{\True}(bs, P) &\defas& \initial(P)[r \mapsto (\sigma, \hclo(P))],\\
\multicolumn{3}{@{}l@{}}
{\qquad
\text{where } \sigma = \Pure(\tree)(bs)}\\[1.5ex]
%
\ascend{\True} &:& \Addr \times \ValCat \pto \Env\\
\ascend{\True}(bs, P) &\defas& \env[x_\True \mapsto i],\\
\multicolumn{3}{@{}l@{}}
{ \qquad\bl
\text{where } \env = \descend{\True}(bs, P)\\
\text{and } i = c(\snoc{bs}{\True})
\el}\\[1.5ex]
%
\end{equations}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{equations}
\descend{\False} &:& \Addr \times \ValCat \pto \Env\\
\descend{\False}(bs, P) &\defas& \ascend{\True}(bs, P)\\
%
\ascend{\False} &:& \Addr \times \ValCat \pto \Env\\
\ascend{\False}(bs, P) &\defas& \env[x_\False \mapsto j],\\
\multicolumn{3}{@{}l@{}}
{\qquad\bl
\text{where } \env = \descend{\False}(bs, P)\\
\text{ and } j = c(\snoc{bs}{\False})
\el}\\
%
\end{equations}
\end{minipage}
\end{itemize}
%
\end{definition}
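To illustrate the convention
$c(bs) = \sharp(bs' \mapsto \val{P}~(bs \concat bs'))$, here is a toy
version over the \texttt{comp} type from the earlier sketch (names
ours): it follows the address $bs$ down the decision tree and then
counts the remaining subtree, making the identity
$c(bs) = c(\snoc{bs}{\True}) + c(\snoc{bs}{\False})$ at query nodes
(used repeatedly below) immediate.

\begin{verbatim}
(* Follow the address bs into the decision tree. *)
let rec subtree (m : bool comp) (bs : bool list) : bool comp =
  match bs with
  | [] -> m
  | b :: bs' ->
     (match m with
      | Branch r -> subtree (r b) bs'
      | Return _ -> m)

(* c(bs): the number of solutions extending the address bs. *)
let c (m : bool comp) (bs : bool list) : int = count (subtree m bs)
\end{verbatim}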
%
The proof of Theorem~\ref{thm:complexity-effectful-counting} works by
alternating between two different modes of reasoning: intensional and
extensional. The former is used to reason directly about the steps
taken by the $\ECount$ program and the latter is used to reason about
steps taken by the provided predicate. The number of steps taken by an
$n$-standard predicate is readily available by constructing its
corresponding decorated timed decision tree model. The model is
constructed using a distinguished free variable $q$ to denote a
point. The following lemma lets us reason about the number of steps
taken by a predicate between its initial application and its first
query, between subsequent queries, and between the final query and the answer
when $q$ is instantiated to $\Superpoint$.
%
\begin{lemma}\label{lem:inductive-lem-aux}
Suppose $P$ is an $n$-standard predicate, $bs \in \Addr$ is a list
of booleans, $\chi \in \HClosure$ is a handler closure, and
$\kappa \in \Cont$ is a continuation. Let $q$ denote the distinguished free variable
used to construct the decorated timed decision tree $\tree$ of $P$.
%
\begin{enumerate}
\item If $|bs| = 0$ then
%
\begin{derivation}
&\cek{pred~q \mid \initial(P)[q \mapsto q] \mid (\nil, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\nil)}\\
&\cek{z~V \mid \env[q \mapsto q] \mid (\sigma, \chi) \cons \kappa}
\end{derivation}
%
where $z~V = \comp(\tree)(\nil)$, $\env = \envt(\tree)(\nil)$,
$?k = \labs(\tree)(\nil)$, $\val{V}\env = k$, $\env(z) = q$, and
$\sigma = \Pure(\tree)(\nil)$; implies
%
\begin{derivation}
&\cek{pred~(\Superpoint) \mid \initial(P) \mid (\nil, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\nil)}\\
&\cek{z~V \mid \env[z \mapsto (\initial(P), \Superpoint)] \mid (\sigma, \chi) \cons \kappa}
\end{derivation}
%
\item If $|bs| < n - 1$ then for all $b \in \mathbb{B}$ and $W \in \ValCat$
%
\begin{derivation}
&\cek{\Return\;W \mid \descend{b}(bs,P) \mid (\sigma, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{b})}\\
&\cek{z~V \mid \env[q \mapsto q] \mid (\sigma', \chi) \cons \kappa}
\end{derivation}
%
where $\val{W}(\descend{b}(bs,P)) = b$, $\sigma = \Pure(\tree)(bs)$, $z~V = \comp(\tree)(\snoc{bs}{b})$, $\env = \envt(\tree)(\snoc{bs}{b})$, $\env(z) = q$, $?k = \labs(\tree)(\snoc{bs}{b})$, $\val{V}\env = k$, and $\sigma' = \Pure(\tree)(\snoc{bs}{b})$; implies
%
\begin{derivation}
&\cek{\Return\;W \mid \descend{b}(bs,P) \mid (\sigma, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{b})}\\
&\cek{z~V \mid \env[z \mapsto (\initial(P), \Superpoint)] \mid (\sigma', \chi) \cons \kappa}
\end{derivation}
\item If $|bs| = n - 1$ then for all $b \in \mathbb{B}$ and $W \in \ValCat$
%
\begin{derivation}
&\cek{\Return\;W \mid \descend{b}(bs,P) \mid (\sigma, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{b})}\\
&\cek{\Return\;W' \mid \env[q \mapsto q] \mid (\nil, \chi) \cons \kappa}
\end{derivation}
%
where $\val{W}(\descend{b}(bs,P)) = b$, $\sigma = \Pure(\tree)(bs)$, $\Return\;W' = \comp(\tree)(\snoc{bs}{b})$, $\env = \envt(\tree)(\snoc{bs}{b})$, $\ans b' = \labs(\tree)(\snoc{bs}{b})$, and $\val{W'}\env = b'$; implies
%
\begin{derivation}
&\cek{\Return\;W \mid \descend{b}(bs,P) \mid (\sigma, \chi) \cons \kappa}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{b})}\\
&\cek{\Return\;W' \mid \env \mid (\nil, \chi) \cons \kappa}
\end{derivation}
\end{enumerate}
\end{lemma}
%
\begin{proof}
By unfolding Definition~\ref{def:model-construction-extended}.
\end{proof}
%
Let $\ctrl : \Conf \pto \ValCat$ denote a partial function that hoists
a value out of a given machine configuration, that is
%
\[
\ctrl(\cek{M \mid \env \mid \kappa})
\defas
\begin{cases}
\val{V}{\env} & \text{if } M = \Return\; V \\
\bot & \text{otherwise}
\end{cases}
\]
\paragraph{Notation.}
For a given predicate $P$ we write $\hclo(P)^\Return$ to mean
$\hclo(P)^\Return = (\emptyset[pred \mapsto \val{P}\emptyset],
\HCount)^\Return = \HCount^\Return$, that is the projection of the success
clause of $\HCount$.
The following lemma performs most of the heavy lifting for the proof
of Theorem~\ref{thm:complexity-effectful-counting}.
%
\begin{lemma}\label{lem:inductive-bit-of-thm1}
Suppose $P$ is an $n$-standard predicate. Then for any list of
booleans $bs \in \Addr$ such that $|bs| \leq n$,
\[
\ba{@{}l}
\arrive(bs, P) \reducesto^{T(bs, n)} \depart(bs, P),
\ea
\]
and $\ctrl(\depart(bs, P)) \leq 2^{n - |bs|}$ with the
function $T$ defined as
\[
T(bs, n) =
\begin{cases}
9*(2^{n - |bs|} - 1) + 2^{n - |bs| + 1} + \sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs') & \text{if } |bs| < n\\
2 & \text{if } |bs| = n
\end{cases}
\]
\end{lemma}
%
%
\begin{proof}
By downward induction on $bs$.
%
\begin{description}
\item[Base step] We have that $|bs| = n$. Since the predicate is
$n$-standard we further have that $n \geq 1$. We proceed by direct
calculation.
%
\begin{derivation}
&\arrive(bs, P)\\
=& \reason{definition of $\arrive$ when $n = |bs|$}\\
&\cek{\Return\;W \mid \env \mid (\nil, \hclo(P)) \cons \residual(bs, P)}\\
\whereX{\ba[t]{@{~}l}
\Return\;W = \comp(\tree)(bs), \env = \envt(\tree)(bs), !b = \labs(\tree)(bs), \text{ and } \val{W}\env = b
\ea}\\
\stepsto& \reason{\mlab{RetHandler}, $\hclo(P)^\Return = \{\Return~x \mapsto \cdots\}$}\\
&\cek{\If\;x\;\Then\;\Return\;1\;\Else\;\Return\;0 \mid \env'[x \mapsto \val{b}\env'] \mid \residual(bs, P)}\\
\whereX{\env' = \hclo(P).1}
\end{derivation}
%
The value $b$ can assume either of two values. We consider first
the case $b = \True$.
%
\begin{derivation}
=& \reason{assumption $b = \True$, definition of $\val{-}$ (2 value steps)}\\
&\cek{\If\;x\;\Then\;\Return\;1\;\Else\;\Return\;0 \mid \env'[x \mapsto \True] \mid \residual(bs, P)}\\
\stepsto& \reason{\mlab{Case-Inl} (and $\log|\env'[x \mapsto \True]| = 1$ environment operations)}\\
&\cek{\Return\;1 \mid \env'[x \mapsto \True] \mid \residual(bs, P)}\\
=& \reason{definition of $\depart$ when $n = |bs|$}\\
&\depart(bs, P)
\end{derivation}
%
We have that $\ctrl(\depart(bs, P)) = 1 \leq 2^0 = 2^{n - |bs|}$.
%
Next, we consider the case when $b = \False$.
%
\begin{derivation}
=& \reason{assumption $b = \False$, definition of $\val{-}$ (2 value steps)}\\
&\cek{\If\;x\;\Then\;\Return\;1\;\Else\;\Return\;0 \mid \env'[x \mapsto \False] \mid \residual(bs, P)}\\
\stepsto& \reason{\mlab{Case-Inl} (and $\log|\env'[x \mapsto \False]| = 1$ environment operations)}\\
&\cek{\Return\;0 \mid \env'[x \mapsto \False] \mid \residual(bs, P)}\\
=& \reason{definition of $\depart$ when $n = |bs|$}\\
&\depart(bs, P)
\end{derivation}
%
Again, we have that $\ctrl(\depart(bs, P)) = 0 \leq 2^0 = 2^{n - |bs|}$.
%
\paragraph{Step analysis}
In either case, the machine uses exactly 2 transitions. Thus we get that
\[
2 = T(bs, n), \quad \text{when } |bs| = n
\]
%
\item[Inductive step] The induction hypothesis states that for all
$b \in \mathbb{B}$ and $|bs| < n$
%
\[
\arrive(\snoc{bs}{b}, P) \reducesto^{T(\snoc{bs}{b}, n)} \depart(\snoc{bs}{b}, P),
\]
%
such that $\ctrl(\depart(\snoc{bs}{b}, P)) \leq 2^{n - |\snoc{bs}{b}|}$.
%
We proceed by direct calculation.
%
\begin{derivation}
&\arrive(bs, P)\\
=& \reason{definition of $\arrive$ when $|bs| < n$}\\
&\cek{z~V \mid \env \mid (\sigma, \hclo(P)) \cons \residual(bs, P)}\\
\whereX{\ba[t]{@{~}l}
z~V = \comp(\tree)(bs), \env = \dec{env}(\tree)(bs)[z \mapsto (\initial(P), \Superpoint)],\\
?k = \labs(\tree)(bs), \val{V}\env = k, \text{ and } \sigma = \Pure(\tree)(bs)
\ea}\\
\stepsto& \reason{\mlab{App}}\\
&\cek{\Do\;\Branch~\Unit \mid \env'[\_ \mapsto k] \mid (\sigma, \hclo(P)) \cons \residual(bs, P)}\\
\whereX{\env' = \initial(P)}\\
\end{derivation}
\begin{derivation}
\stepsto& \reason{\mlab{Handle-Op}, $\hclo(P)^{\Branch} = \{\Branch~\Unit~r \mapsto \cdots\}$}\\
&\left\langle
\ba[m]{@{}l}
\Let\;x_{\True} \revto r~\True \;\In\\
\Let\;x_{\False} \revto r~\False \;\In\\
x_\True + x_\False
\ea
\mid \env[r \mapsto \val{(\sigma, \hclo(P))}\env] \mid \residual(bs, P)
\right\rangle\\
\whereX{\env = \initial(P)}\\
=& \reason{definition of $\val{-}$ (1 value step)}\\
&\left\langle
\ba[m]{@{}l}
\Let\;x_{\True} \revto r~\True \;\In\\
\Let\;x_{\False} \revto r~\False \;\In\\
x_\True + x_\False
\ea
\mid \env' \mid \residual(bs, P)
\right\rangle\\
\whereX{\env' = \env[r \mapsto (\sigma, \hclo(P))]}\\
\stepsto& \reason{\mlab{Let}, definition of $\residual$}\\
&\cek{r~\True \mid \env' \mid \residual(\snoc{bs}{\True}, P)}\\
\stepsto& \reason{\mlab{Resume}, $\val{r}\env' = (\sigma, \hclo(P))$}\\
&\cek{\Return\;\True \mid \env' \mid (\sigma, \hclo(P)) \cons \residual(\snoc{bs}{\True}, P)}
\end{derivation}
%
We now use Lemma~\ref{lem:inductive-lem-aux} to reason about the
progress of the predicate computation $\sigma$. There are two
cases to consider: either $1 + |bs| < n$ or
$1 + |bs| = n$.
%
\begin{description}
\item[Case] $1 + |bs| < n$. We obtain the following internal node
configuration.
%
\begin{derivation}
\stepsto&^{\steps(\tree)(\snoc{bs}{\True})} \reason{by Lemma~\ref{lem:inductive-lem-aux}}\\
&\cek{z~V \mid \env'' \mid (\sigma', \hclo(P)) \cons \residual(\snoc{bs}{\True}, P)}\\
\whereX{\ba[t]{@{~}l}
z~V = \comp(\tree)(\snoc{bs}{\True}), \\
\env'' = \dec{env}(\tree)(\snoc{bs}{\True})[z \mapsto (\initial(P), \Superpoint)],\\
?k = \labs(\tree)(\snoc{bs}{\True}), \val{V}\env'' = k, \text{ and } \sigma' = \Pure(\tree)(\snoc{bs}{\True})
\ea}\\
=& \reason{definition of $\arrive$ when $1 + |bs| < n$}\\
&\arrive(\snoc{bs}{\True}, P)\\
\stepsto&^{T(\snoc{bs}{\True}, n)} \reason{induction hypothesis}\\
&\depart(\snoc{bs}{\True}, P)\\
=& \reason{definition of $\depart$ when $1 + |bs| < n$}\\
&\cek{\Return\;i \mid \env \mid \residual(\snoc{bs}{\True}, P)}\\
\whereX{i = c(\snoc{\snoc{bs}{\True}}{\True}) + c(\snoc{\snoc{bs}{\True}}{\False})\\
\text{ and } \env = \ascend{\False}(\snoc{bs}{\True}, P)}\\
=& \reason{definition of $\residual$ and $\purecont$}\\
&\langle\Return\;i \mid \env \mid [(\bl(\env', x_\True, \Let\;x_\False\revto r~\False\;\In\;x_\True+x_\False)\\ \cons \purecont(bs, P), \chi_{id})]\rangle\el\\
\whereX{\env' = \descend{\True}(bs, P)}\\
\end{derivation}
\begin{derivation}
\stepsto& \reason{\mlab{RetCont}}\\
&\cek{\Let\;x_\False\revto r~\False\;\In\;x_\True+x_\False \mid \env'' \mid [(\purecont(bs, P), \chi_{id})]}\\
\whereX{\env'' = \env'[x_\True \mapsto \val{i}\env']}\\
\stepsto& \reason{\mlab{Let}}\\
&\cek{r~\False \mid \env'' \mid [((\env'', x_\False, x_\True + x_\False) \cons \purecont(bs, P), \chi_{id})]}\\
=& \reason{definition of $\purecont$ and $\residual$}\\
&\cek{r~\False \mid \env'' \mid \residual(\snoc{bs}{\False}, P)}\\
\stepsto& \reason{\mlab{Resume}}\\
&\cek{\Return\;\False \mid \env'' \mid (\sigma, \hclo(P)) \cons \residual(\snoc{bs}{\False}, P)}\\
\whereX{\sigma = \Pure(\tree)(bs)}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{\False})} \reason{by Lemma~\ref{lem:inductive-lem-aux}}\\
&\cek{z~V \mid \env \mid (\sigma, \hclo(P)) \cons \residual(\snoc{bs}{\False}, P)}\\
\whereX{\ba[t]{@{~}l}
z~V = \comp(\tree)(\snoc{bs}{\False}),\\
\env = \envt(\tree)(\snoc{bs}{\False})[z \mapsto (\initial(P), \Superpoint)],\\
?k = \labs(\tree)(\snoc{bs}{\False}), \val{V}\env = k, \text{ and } \sigma = \Pure(\tree)(\snoc{bs}{\False})
\ea}\\
=& \reason{definition of $\arrive$ when $1 + |bs| < n$}\\
&\arrive(\snoc{bs}{\False}, P)\\
\stepsto&^{T(\snoc{bs}{\False}, n)} \reason{induction hypothesis}\\
&\depart(\snoc{bs}{\False}, P)\\
=& \reason{definition of $\depart$ when $1 + |bs| < n$}\\
&\cek{\Return\;j \mid \env \mid \residual(\snoc{bs}{\False}, P)}\\
\whereX{j = c(\snoc{\snoc{bs}{\False}}{\True}) + c(\snoc{\snoc{bs}{\False}}{\False})\\
\text{and } \env = \ascend{\False}(\snoc{bs}{\False}, P)}\\
=& \reason{definition of $\residual$ and $\purecont$}\\
&\cek{\Return\;j \mid \env \mid [((\env'', x_\False, x_\True + x_\False) \cons \purecont(bs, P), \chi_{id})]}\\
\stepsto& \reason{\mlab{RetCont}}\\
&\cek{x_\True + x_\False \mid \env''[x_\False \mapsto \val{j}\env''] \mid \residual(bs, P)}\\
\stepsto& \reason{\mlab{Plus}}\\
&\cek{\Return\;m \mid \env''[x_\False \mapsto \val{j}\env''] \mid \residual(bs, P)}\\
&\text{where}
\ba[t]{@{~}l@{~}l}
m &= c(\snoc{\snoc{bs}{\True}}{\True}) + c(\snoc{\snoc{bs}{\True}}{\False}) \\
&+ c(\snoc{\snoc{bs}{\False}}{\True}) + c(\snoc{\snoc{bs}{\False}}{\False})\\
&= c(\snoc{bs}{\True}) + c(\snoc{bs}{\False}) = c(bs) \leq 2^{n - |bs|}
\ea
\\
=& \reason{definition of $\depart$ when $|bs| < n$}\\
&\depart(bs, P)
\end{derivation}
%
\paragraph{Step analysis} The total number of machine steps is
given by
\begin{derivation}
&9 \bl+ \steps(\tree)(\snoc{bs}{\True}) + T(\snoc{bs}{\True}, n)\\
+ \steps(\tree)(\snoc{bs}{\False}) + T(\snoc{bs}{\False}, n)\el\\
=& \reason{reorder}\\
&9 \bl+ T(\snoc{bs}{\True}, n) + T(\snoc{bs}{\False}, n)\\
+ \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\el\\
=& \reason{definition of $T$}\\
& 9 \bl+ 9*(2^{n - |\snoc{bs}{\True}|} - 1) + 9*(2^{n - |\snoc{bs}{\False}|} - 1)\\
+ 2^{n - |\snoc{bs}{\True}| + 1} + 2^{n - |\snoc{bs}{\False}| + 1}\\
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |\snoc{bs}{\True}|}\steps(\tree)(\snoc{bs}{\True} \concat bs')\\
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |\snoc{bs}{\False}|}\steps(\tree)(\snoc{bs}{\False} \concat bs')\el\\
&+\steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\\
=& \reason{simplify}\\
& 9 \bl+ 9*(2^{n - |\snoc{bs}{\True}|} - 1) + 9*(2^{n - |\snoc{bs}{\False}|} - 1) + 2^{n - |bs| + 1}\\
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |\snoc{bs}{\True}|}\steps(\tree)(\snoc{bs}{\True} \concat bs')\\
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |\snoc{bs}{\False}|}\steps(\tree)(\snoc{bs}{\False} \concat bs')\\
+\steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\el\\
=& \reason{merge sums}\\
& 9 \bl+ 9*(2^{n - |\snoc{bs}{\True}|} - 1) + 9*(2^{n - |\snoc{bs}{\False}|} - 1) + 2^{n - |bs| + 1}\\
+ \left(\displaystyle\sum_{bs' \in \Addr}^{2 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\right)\\
+ \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\el\\
=& \reason{rewrite binary sum}\\
&9 \bl+ 9*(2^{n - |\snoc{bs}{\True}|} - 1) + 9*(2^{n - |\snoc{bs}{\False}|} - 1) + 2^{n - |bs| + 1}\\
+ \displaystyle\sum_{bs' \in \Addr}^{2 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq 1}\steps(\tree)(bs \concat bs')\el\\
=& \reason{merge sums}\\
&9 \bl + 9*(2^{n - |\snoc{bs}{\True}|} - 1) + 9*(2^{n - |\snoc{bs}{\False}|} - 1) + 2^{n - |bs| + 1}\\
+ \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\hspace{-0.5cm}\steps(\tree)(bs \concat bs')\el\\
\end{derivation}
\begin{derivation}
=& \reason{factoring}\\
&9 + 2*9*(2^{n - |bs| - 1} - 1) + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\\
=& \reason{distribute}\\
&9 + 9*(2^{n - |bs|} - 2) + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\\
=& \reason{distribute}\\
&9 + 9*2^{n - |bs|} - 18 + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\\
=& \reason{simplify}\\
&9*2^{n - |bs|} - 9 + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\\
=& \reason{factoring}\\
&9*(2^{n - |bs|} - 1) + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|}\steps(\tree)(bs \concat bs')\\
=& \reason{definition of $T$}\\
&T(bs, n)
\end{derivation}
%
\item[Case] $1 + |bs| = n$. We obtain the following
configuration.
%
\begin{derivation}
\stepsto&^{\steps(\tree)(\snoc{bs}{\True})} \reason{by Lemma~\ref{lem:inductive-lem-aux}}\\
&\cek{\Return\;W \mid \env'' \mid (\nil, \hclo(P)) \cons \residual(\snoc{bs}{\True}, P)}\\
\whereX{\ba[t]{@{~}l}
\Return\;W = \comp(\tree)(\snoc{bs}{\True}), !b = \labs(\tree)(\snoc{bs}{\True}),\\
\env'' = \envt(\tree)(\snoc{bs}{\True}), \text{ and } \val{W}\env'' = b
\ea}\\
=& \reason{definition of $\arrive$ when $1 + |bs| = n$}\\
&\arrive(\snoc{bs}{\True}, P)\\
\stepsto&^{T(\snoc{bs}{\True}, n)} \reason{induction hypothesis}\\
&\depart(\snoc{bs}{\True}, P)\\
=& \reason{definition of $\depart$ when $1 + |bs| = n$}\\
&\cek{\Return\;i \mid \env \mid \residual(\snoc{bs}{\True}, P)}\\
\whereX{i = c(\snoc{bs}{\True}) \leq 2^{n - |\snoc{bs}{\True}|} = 1 \text{ and } \env = \initial(P)}\\
=& \reason{definition of $\residual$ and $\purecont$}\\
&\langle\Return\;i \mid \env \mid [(\bl(\env',x_\True,\Let\;x_\False\revto r~\False\;\In\;x_\True+x_\False)\\ \cons \purecont(bs, P), \chi_{id})]\rangle\el\\
\whereX{\env' = \descend{\True}(bs, P)}\\
\end{derivation}
\begin{derivation}
\stepsto& \reason{\mlab{RetCont}}\\
&\cek{\Let\;x_\False \revto r~\False\;\In\;x_\True + x_\False \mid \env'[x_\True \mapsto \val{i}\env'] \mid [(\purecont(bs, P), \chi_{id})]}\\
=& \reason{definition of $\val{-}$ (1 value step)}\\
&\cek{\Let\;x_\False \revto r~\False\;\In\;x_\True + x_\False \mid \env'' \mid [(\purecont(bs, P), \chi_{id})]}\\
\whereX{\env'' = \env'[x_\True \mapsto i]}\\
\stepsto& \reason{\mlab{Let}, definition of $\residual$}\\
&\cek{r~\False \mid \env'' \mid \residual(\snoc{bs}{\False}, P)}\\
\stepsto& \reason{\mlab{Resume}}\\
&\cek{\Return\;\False \mid \env'' \mid (\sigma, \hclo(P)) \cons \residual(\snoc{bs}{\False}, P)}\\
\whereX{\sigma = \Pure(\tree)(bs)}\\
\stepsto&^{\steps(\tree)(\snoc{bs}{\False})} \reason{by Lemma~\ref{lem:inductive-lem-aux}}\\
&\cek{\Return\;W \mid \env \mid (\nil, \hclo(P)) \cons \residual(\snoc{bs}{\False}, P)}\\
\whereX{\ba[t]{@{~}l}
\Return\;W = \comp(\tree)(\snoc{bs}{\False}), !b = \labs(\tree)(\snoc{bs}{\False}),\\
\env = \envt(\tree)(\snoc{bs}{\False}), \text{ and } \val{W}\env = b
\ea}\\
=& \reason{definition of $\arrive$ when $1 + |bs| = n$}\\
&\arrive(\snoc{bs}{\False}, P)\\
\stepsto&^{T(\snoc{bs}{\False}, n)} \reason{induction hypothesis}\\
&\depart(\snoc{bs}{\False}, P)\\
=& \reason{definition of $\depart$ when $1 + |bs| = n$}\\
&\cek{\Return\;j \mid \env \mid \residual(\snoc{bs}{\False}, P)}\\
\whereX{j = c(\snoc{bs}{\False}) \leq 2^{n - |\snoc{bs}{\False}|} = 1 \text{ and } \env = \initial(P)}\\
=& \reason{definition of $\residual$ and $\purecont$}\\
&\cek{\Return\;j \mid \env \mid [((\env',x_\False,x_\True+x_\False) \cons \purecont(bs, P), \chi_{id})]}\\
\whereX{\env' = \descend{\False}(bs, P)}\\
\stepsto& \reason{\mlab{RetCont}}\\
&\cek{x_\True + x_\False \mid \env'' \mid [(\purecont(bs, P), \chi_{id})]}\\
\whereX{\env'' = \env'[x_\False \mapsto \val{j}\env'] = \env'[x_\False \mapsto j]}\\
\stepsto& \reason{\mlab{Plus}}\\
&\cek{\Return\;m \mid \env'' \mid [(\purecont(bs, P), \chi_{id})]}\\
\whereX{m = c(\snoc{bs}{\True}) + c(\snoc{bs}{\False}) \leq 2^{n - |bs|}}\\
=& \reason{definition of $\residual$ and $\depart$ when $|bs| < n$}\\
&\depart(bs, P)
\end{derivation}
%
\paragraph{Step analysis} The total number of machine steps
is given by
\begin{derivation}
&9 \bl + \steps(\tree)(\snoc{bs}{\True}) + T(\snoc{bs}{\True}, n)\\
+ \steps(\tree)(\snoc{bs}{\False}) + T(\snoc{bs}{\False}, n)\el\\
=& \reason{reorder}\\
&9 \bl + T(\snoc{bs}{\True}, n) + T(\snoc{bs}{\False}, n)\\
+ \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\el\\
=& \reason{definition of $T$ when $|bs| + 1 = n$}\\
&9 + 2 + 2 + \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\\
=& \reason{simplify}\\
&9 + 2^2 + \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\\
=& \reason{rewrite $2 = n - |bs| + 1$}\\
&9 + 2^{n - |bs| + 1} + \steps(\tree)(\snoc{bs}{\True}) + \steps(\tree)(\snoc{bs}{\False})\\
=& \reason{multiply by $1$}\\
&9*(2^{n - |bs|} - 1) + 2^{n - |bs| + 1} \bl + \steps(\tree)(\snoc{bs}{\True})\\
+ \steps(\tree)(\snoc{bs}{\False})\el\\
=& \reason{rewrite binary sum}\\
&9*(2^{n - |bs|} - 1) + 2^{n - |bs| + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n - |bs|} \steps(\tree)(bs \concat bs')\\
=& \reason{definition of $T$}\\
&T(bs, n)
\end{derivation}
%
\end{description}
\end{description}
\end{proof}
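Before restating the theorem, a quick arithmetic sanity check on the
closed form for $T$: taking $\steps(\tree) \equiv 0$ and writing
$m = n - |bs|$, the two step analyses above collapse to the recurrence
$T(0) = 2$ and $T(m) = 9 + 2\,T(m-1)$, which the closed form indeed
satisfies. A throwaway OCaml check (names ours):

\begin{verbatim}
let rec pow2 m = if m = 0 then 1 else 2 * pow2 (m - 1)
(* The closed form of T with the steps terms dropped. *)
let closed m = 9 * (pow2 m - 1) + pow2 (m + 1)
(* The recurrence extracted from the two step analyses. *)
let rec recur m = if m = 0 then 2 else 9 + 2 * recur (m - 1)
let () = for m = 0 to 20 do assert (closed m = recur m) done
\end{verbatim}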
%
The following theorem restates
Theorem~\ref{thm:complexity-effectful-counting}.
\begin{theorem}\label{thm:complexity-effectful-counting-copy}
For all $n > 0$ and any $n$-standard predicate $P$ it holds that
%
\begin{enumerate}
\item The program $\ECount$ is a generic count program.
%
\item The runtime complexity of $\ECount~P$ is given by the following formula:
%
\[
\displaystyle\sum_{bs \in \Addr}^{|bs| \leq n} \steps(\tr(P))(bs) + \BigO(2^n)
\]
\end{enumerate}
\end{theorem}
\begin{proof}
The proof begins by direct calculation.
\begin{derivation}
&\cek{\ECount\,P \mid \emptyset \mid [(\nil, \chi_{id})]} \\
=& \reason{definition of $\residual$}\\
& \cek{\ECount\,P \mid \emptyset \mid \residual(\nil, P)} \\
\stepsto& \reason{\mlab{App}, $\val{\ECount}\emptyset = (\emptyset, \lambda pred. \cdots)$}\\
& \cek{\Handle\;pred~(\Superpoint)\;\With\;\HCount \mid \env \mid \residual(\nil, P)}\\
\whereX{\env = \initial(P)}\\
\stepsto& \reason{\mlab{Handle}}\\
& \cek{pred~(\Superpoint) \mid \env \mid (\nil, (\env, \HCount)) \cons \residual(\nil, P)}\\
=& \reason{definition of $\hclo$}\\
&\cek{pred~(\Superpoint) \mid \env \mid (\nil, \hclo(P)) \cons \residual(\nil, P)}\\
\stepsto&^{\steps(\tree)(\nil)} \reason{by Lemma~\ref{lem:inductive-lem-aux}}\\
&\cek{z~V \mid \env' \mid (\sigma, \hclo(P)) \cons \residual(\nil, P)}\\
\whereX{\ba[t]{@{~}l}
z~V = \comp(\tree)(\nil), \env' = \envt(\tree)(\nil)[z \mapsto (\initial(P), \Superpoint)],\\
?k = \labs(\tree)(\nil), \val{V}\env' = k, \text{ and } \sigma = \Pure(\tree)(\nil)
\ea}\\
=& \reason{definition of $\arrive$}\\
&\arrive(\nil, P)\\
\stepsto&^{T(\nil, n)} \reason{by Lemma~\ref{lem:inductive-bit-of-thm1}}\\
&\depart(\nil, P)\\
=& \reason{definition of $\depart$}\\
&\cek{\Return\;m \mid \env \mid \residual(\nil, P)}\\
\whereX{\env = \initial(P) \text{ and } m = c(\nil) \leq 2^{n - |\nil|} = 2^n}\\
=& \reason{definition of $\residual$}\\
&\cek{\Return\;m \mid \env \mid [(\nil, \chi_{id})]}\\
\stepsto& \reason{\mlab{Handle-Ret}, $\chi_{id}^{\Return} = \{\Return~x \mapsto \Return\;x\}$}\\
&\cek{\Return\;x \mid \emptyset[x \mapsto m] \mid \nil}
\end{derivation}
%
\paragraph{Analysis}
The machine yields the value $m$.
%
By Lemma~\ref{lem:inductive-bit-of-thm1} it follows that
$m \leq 2^{n - |bs|} = 2^{n - |\nil|} = 2^n$. Furthermore, the total
number of transitions used was
\begin{derivation}
&3 + \steps(\tree)(\nil) + T(\nil, n)\\
=& \reason{definition of $T$}\\
&3 + \steps(\tree)(\nil) + 9*(2^n - 1) + 2^{n + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n}\steps(\tree)(bs')\\
\end{derivation}
\begin{derivation}
=& \reason{simplify}\\
&3 + \steps(\tree)(\nil) + 9*2^n - 9 + 2^{n + 1} + \displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n}\steps(\tree)(bs')\\
=& \reason{reorder}\\
&3 + \left(\displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n}\steps(\tree)(bs')\right) + \steps(\tree)(\nil) + 9*2^n - 9 + 2^{n + 1}\\
=& \reason{rewrite as unary sum}\\
&3 + \left(\displaystyle\sum_{bs' \in \Addr}^{1 \leq |bs'| \leq n}\steps(\tree)(bs') + \displaystyle\sum_{bs' \in \Addr}^{0 \leq |bs'| \leq 0}\steps(\tree)(bs')\right) + 9*2^n - 9 + 2^{n + 1}\\
=& \reason{merge sums}\\
&3 + \left(\displaystyle\sum_{bs' \in \Addr}^{0 \leq |bs'| \leq n}\steps(\tree)(bs')\right) + 9*2^n - 9 + 2^{n + 1}\\
=& \reason{definition of $\BigO$}\\
&\left(\displaystyle\sum_{bs' \in \Addr}^{0 \leq |bs'| \leq n}\steps(\tree)(bs')\right) + \BigO(2^{n})
\end{derivation}
%
\end{proof}
\chapter{Berger count}
\label{sec:berger-count}
In this appendix I give a brief presentation of the
$\BergerCount$ program alluded to in Section~\ref{sec:pure-counting},
in order to fill out our overall picture of the relationship between
language expressivity and potential program efficiency.
\paragraph{Relation to prior work} This appendix is imported from
Appendix D of \citet{HillerstromLL20a}. The code snippets in this
appendix are based on an implementation of Berger count in SML/NJ
written by John Longley. I have transcribed the code snippets, and in
certain places tweaked them for presentation.\medskip
\citeauthor{Berger90}'s original program~\cite{Berger90} introduced a
remarkable search operator for predicates on \emph{infinite} streams
of booleans, and has played an important role in higher-order
computability theory~\cite{LongleyN15}. What we wish to highlight
here is that if one applies the algorithm to predicates on
\emph{finite} boolean vectors, the resulting program, though no longer
interesting from a computability perspective, still holds some
interest from a complexity standpoint: indeed, it yields what seems to
be the best available implementation of generic count within a
PCF-style `functional' language (provided one accepts the use of a
primitive for call-by-need evaluation).
Let us consider an adaptation of Berger's search algorithm on finite
spaces.
%
\[
\bl
\bestshot_n: \Predicate_n \to \Point_n\\
\bestshot_n~pred \defas \bestshot'_n~pred~\nil \medskip\\
\bestshot'_n : \Predicate_n \to \List_\Bool \to \Point_n\\
\bestshot'_n~pred~start \defas
\ba[t]{@{}l}
\Let\; f \revto \dec{memoise}~(\lambda\Unit. \bestshot''_n~pred~start)\; \In\\
\Return\;(\lambda i. \If\; i < |start| \;\Then\; start.i \;\Else\; (f~\Unit).i)
\ea
\el
\]
%
\[
\bl
\bestshot''_n : \Predicate_n \to \List_\Bool \to \List_\Bool\\
\bestshot''_n~pred~start \defas
\ba[t]{@{}l}
\If\; |start| = n \;\Then\; \Return\; start\\
\Else\;
\ba[t]{@{}l}
\Let\; f \revto \bestshot'_n~pred~(\dec{append}~start~[\True])\;\In\\
\If\; pred~f \;\Then\; \Return\; [f~0,\dots,f~(n-1)]\\
\Else\; \bestshot''_n~pred~(\dec{append}~start~[\False])
\ea
\ea
\el
\]%
%
Given any $n$-standard predicate $P$, the function $\bestshot_n$
returns a point satisfying $P$ if one exists, or the dummy point
$\lambda i.\False$ if not. It is implemented via two mutually
recursive auxiliary functions whose workings are admittedly hard to
elucidate in a few words. The function $\bestshot'_n$ is a
generalisation of $\bestshot_n$ that makes a best shot at finding a
point $\pi$ satisfying the given predicate and matching the specified
list $start$ in an initial segment of its components
$[\pi(0),\dots,\pi(i-1)]$. It works `lazily', drawing its values from
$start$ wherever possible, and performing an actual search only when
required. This actual search is undertaken by $\bestshot''_n$, which
proceeds by first searching for a solution that extends the specified
list with true; but if no such solution is forthcoming, it settles for
false as the next component of the point being constructed. The whole
procedure relies on a subtle combination of laziness, recursion and
implicit nesting of calls to the provided predicate, which means that
the search is self-pruning in regions of the binary tree where the
predicate only demands some initial segment $q~0$,\dots,$q~(i-1)$ of
its argument $q$.
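As a concrete reference point, here is a rough OCaml rendering of
$\bestshot'_n$ and $\bestshot''_n$ (all names ours). It presupposes
the \texttt{memoise} operation discussed next, and represents a point
as a total function \texttt{int -> bool}:

\begin{verbatim}
(* best_shot' n pred start: a point agreeing with start on its
   first |start| components and making a best shot beyond them;
   memoisation ensures the search best_shot'' runs at most once. *)
let rec best_shot' n pred start : int -> bool =
  let f = memoise (fun () -> best_shot'' n pred start) in
  fun i ->
    if i < List.length start then List.nth start i
    else List.nth (f ()) i

and best_shot'' n pred start : bool list =
  if List.length start = n then start
  else
    let f = best_shot' n pred (start @ [true]) in
    if pred f then List.init n f
    else best_shot'' n pred (start @ [false])
\end{verbatim}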
The above program makes use of an operation
%
\[
\dec{memoise} : (\One \to \dec{List}~\Bool) \to (\One \to \dec{List}~\Bool)
\]%
%
which transforms a given thunk into an equivalent `memoised' version,
i.e. one that caches its value after its first invocation and
immediately returns this value on all subsequent invocations. Such an
operation may readily be implemented in $\BCalcS$, or alternatively
may simply be added as a primitive in its own right.
%(we omit the details).
The latter has the advantage that it preserves the purely `functional'
character of the language, in the sense that every program is
observationally equivalent to a $\BCalc$ program, namely the one
obtained by replacing $\dec{memoise}$ by the identity.
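For reference, one standard ML realisation of $\dec{memoise}$ uses a
reference cell; a minimal sketch (ours), essentially the $\BCalcS$
implementation alluded to above:

\begin{verbatim}
let memoise (f : unit -> 'a) : unit -> 'a =
  let cache = ref None in
  fun () ->
    match !cache with
    | Some v -> v
    | None -> let v = f () in cache := Some v; v
\end{verbatim}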
We now show how the above idea may be exploited to yield a generic
count program (this development appears to be new).
%
\[
\bl
\BergerCount_n : \Predicate_n \to \Nat\\
\BergerCount_n~pred \defas \Count'_n~pred~\nil~0 \medskip\\
\el
\]
\[
\bl
\Count'_n : \Predicate_n \to \List_\Bool \to \Nat \to \Nat\\
\Count'_n~pred~start~acc \defas
\ba[t]{@{}l}
\If\; |start| = n\; \Then\; acc +
(\bl\If\; pred\,(\lambda i. start.i) \;\Then\;\Return\;1\\\Else\;\Return\;0)\el\\
\Else\;
\ba[t]{@{}l}
\Let\; f \revto \bestshot'_n~pred~start\; \In\\
\If\; pred~f \;\Then\; \Count''_n~pred~start~[f~0,\dots,f~(n-1)]~acc \\
\Else\;\Return\;acc
\ea
\ea \medskip\\
\Count''_n : \Predicate_n \to \List_\Bool \to \List_\Bool \to \Nat \to \Nat\\
\Count''_n~pred~start~leftmost~acc \defas
\ba[t]{@{}l}
\If\; |start| = n \;\Then\; acc+1\\
\Else\;
\ba[t]{@{}l}
\Let\; b \revto leftmost.|start|\; \In\\
\Let\; acc' \revto \Count''_n~pred~\bl(\dec{append}~start~[b])\\leftmost~acc\; \In\el\\
\If\; b \; \Then\; \Count'_n~pred~(\dec{append}~start~[\False])~acc'\\
\Else~\Return\;acc'
\ea
\ea
\el
\]%
%
Again, $\BergerCount_n$ is implemented by means of two mutually
recursive auxiliary functions. The function $\Count'_n$ counts the
solutions to the provided predicate $pred$ that start with the
specified list of booleans, adding their number to a previously
accumulated total given by $acc$. The function $\Count''_n$ does the
same thing, but exploits the knowledge that a best shot at the
`leftmost' solution to $P$ within this subtree has already been
computed. (We are visualising $n$-points as forming a binary tree
with $\True$ to the left of $\False$ at each fork.) Thus, $\Count''_n$
will not re-examine the portion of the subtree to the left of this
candidate solution, but rather will start at this solution and work
rightward.
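Continuing the OCaml rendering (names ours, reusing
\texttt{best\_shot'} and \texttt{memoise} from the sketches above):

\begin{verbatim}
let rec count' n pred start acc =
  if List.length start = n
  then acc + (if pred (fun i -> List.nth start i) then 1 else 0)
  else
    let f = best_shot' n pred start in
    if pred f then count'' n pred start (List.init n f) acc
    else acc

and count'' n pred start leftmost acc =
  if List.length start = n then acc + 1
  else
    let b = List.nth leftmost (List.length start) in
    let acc' = count'' n pred (start @ [b]) leftmost acc in
    if b then count' n pred (start @ [false]) acc'
    else acc'

let berger_count n pred = count' n pred [] 0

(* Smoke test: "q 0 = q 1" on 3 components has four solutions. *)
let () = assert (berger_count 3 (fun q -> q 0 = q 1) = 4)
\end{verbatim}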
This gives rise to an $n$-count program that can work efficiently on
predicates that tend to `fail fast': more specifically, predicates $P$
that inspect the components of their argument $q$ in order $q~0$,
$q~1$, $q~2$, \dots, and which are frequently able to return $\False$
after inspecting just a small number of these components. Generalising
our program from binary to $k$-ary branching trees, we see that the
$n$-queens problem provides a typical example: most points in the
space can be seen \emph{not} to be solutions by inspecting just the
first few components. Our experimental results in
Section~\ref{sec:experiments} attest to the viability of this approach
and its overwhelming superiority over the \naive functional method.
By contrast, the above program is \emph{not} able to exploit parts of
the tree where our predicate `succeeds fast', i.e.\ returns $\True$
after seeing just a few components. Unlike the effectful count
program of Section~\ref{sec:effectful-counting}, which may sometimes
add $2^{n-d}$ to the count in a single step, the Berger approach can
only count solutions one at a time. Thus, supposing $P$ is an
$n$-standard predicate, the evaluation of $\BergerCount_n~P$ that returns a
natural number $c$ must take time $\Omega(c)$. These observations
informally indicate the likely extent of the efficiency gap between
effectful and purely functional computation when it comes to
non-$n$-standard predicates.
%
%% If you want the bibliography single-spaced (which is allowed), uncomment
%% the next line.
%\nocite{*}
\singlespace
%\nocite{*}
%\printbibliography[heading=bibintoc]
\bibliographystyle{plainnat}
\bibliography{\jobname}
%% ... that's all, folks!
\end{document}