Functional Reactive Programming

FRP Performance Trade-Offs

For FRP (and self-adjusting computation), performance and memory usage depend on many factors.

To be successful, an FRP system must tune and optimize these aspects of the program so that it approaches the performance characteristics of an application whose updates are processed by hand-written code.

Signal Granularity

In the simplest form, the whole program is re-run completely on every update. This is inefficient, since some values are recomputed unnecessarily, and if events are represented as streams, the entire history of events might be reprocessed. This is the original approach to implementing Classic FRP, also referred to as "pull FRP"; its drawback is that recomputing everything eventually becomes too slow.
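
As a rough sketch of the pull approach (the names and structure here are purely illustrative, not taken from any particular FRP library), the whole program is a plain function of the current inputs and is re-run from scratch on every update:

    def program(width, height):
        # Every derived value is recomputed on every run, changed or not.
        area = width * height
        perimeter = 2 * (width + height)
        return {"area": area, "perimeter": perimeter}

    state = {"width": 3, "height": 4}

    def on_update(**changes):
        state.update(changes)
        # Pull: nothing is cached, so everything downstream is recomputed.
        return program(state["width"], state["height"])

    print(on_update(width=5))    # both area and perimeter recomputed
    print(on_update(height=10))  # recomputed again, even though width did not change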

On the other side of the spectrum, every calculated value might be a separate signal / subprogram, so that only the values depending on a changed input are recalculated on an update. However, in this case there is a very high overhead in keeping track of every intermediate value and its dependents, typically exceeding the savings from avoiding unnecessary computation. This is closer to the initial implementation of FrTime and might be referred to as "push FRP".
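
A minimal push-style sketch (again, illustrative classes rather than FrTime's actual implementation): every intermediate value is its own signal node that records its dependents, and a changed input is pushed only to the nodes that depend on it. The per-node bookkeeping is exactly the overhead described above:

    class Signal:
        def __init__(self, value=None):
            self.value = value
            self.dependents = []     # nodes to notify when this value changes

        def recompute(self):
            pass                     # input signals have nothing to recompute

        def set(self, value):
            if value != self.value:
                self.value = value
                for node in self.dependents:
                    node.recompute()

    class Computed(Signal):
        def __init__(self, fn, *inputs):
            super().__init__()
            self.fn, self.inputs = fn, inputs
            for i in inputs:
                i.dependents.append(self)
            self.recompute()

        def recompute(self):
            self.set(self.fn(*(i.value for i in self.inputs)))

    width, height = Signal(3), Signal(4)
    area = Computed(lambda w, h: w * h, width, height)
    perimeter = Computed(lambda w, h: 2 * (w + h), width, height)

    width.set(5)                          # pushed only to area and perimeter
    print(area.value, perimeter.value)    # 20 18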

More recent work in FRP tries to combine and balance the push and pull approaches, typically either by having the programmer specify the split manually (as in Elm) or by matching it to some existing boundary, such as a function boundary (as in the lowering transformation for FrTime).
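
One way to picture the hybrid (a sketch assuming a coarse, explicitly chosen boundary; this is not Elm's or FrTime's actual machinery): change notifications are pushed only between a few coarse signals, while the values inside each boundary are recomputed (pulled) as ordinary code:

    def layout(width, height):
        # Inside the boundary: plain code, no per-value dependency tracking.
        area = width * height
        perimeter = 2 * (width + height)
        return area, perimeter

    subscribers = []                     # the only tracked dependency edge

    def on_input_change(width, height):
        result = layout(width, height)   # pull: recompute the whole boundary
        for callback in subscribers:
            callback(result)             # push: only boundary outputs propagate

    subscribers.append(print)
    on_input_change(5, 4)                # prints (20, 18)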

Signal Dependencies / Evaluation Strategy

As part of slicing a program into signals, there is some flexibility in how a computation is broken down into steps (sequentialization) and into signals (dependency analysis).

For example, if the application performs a transformation of each element of a list and returns a new list, the elements of the new list could depend on the entire input list (corresponding to an eager evaluation strategy) or only on the corresponding element in the input list (corresponding to a lazy evaluation strategy).

A lazier evaluation strategy should reduce the cost of an incremental update, but it may also incur higher memory overhead from keeping cached values and dependencies around.
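
A sketch of the list example (the helper names are made up for illustration): with per-element dependencies, changing one input element recomputes only the corresponding output, but the cached outputs have to be kept around, which is exactly the memory cost mentioned above:

    def expensive(x):
        return x * x                             # stand-in for a costly per-element transformation

    class IncrementalMap:
        def __init__(self, fn, xs):
            self.fn = fn
            self.inputs = list(xs)
            self.outputs = [fn(x) for x in xs]   # cached per-element results

        def update(self, index, value):
            # Only the element at `index` is recomputed; the rest are reused.
            self.inputs[index] = value
            self.outputs[index] = self.fn(value)
            return self.outputs

    m = IncrementalMap(expensive, [1, 2, 3, 4])
    print(m.outputs)         # [1, 4, 9, 16]
    print(m.update(2, 10))   # [1, 4, 100, 16] -- only one new call to expensive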
