Reflecting on Transducers

So over the holidays I have had something of a little bug in my brain: transducers. It started when I was thinking of working on an unrelated side-project, but I more or less got frustrated and asked myself “why doesn’t Scheme have a library that is as good as Rust’s Iterator trait?” I am a strong proponent of Rust both at work and outside of work; however, I don’t always want to use Rust. Sometimes I’m scripting something fairly quick-and-dirty, but largely I’m part of the Lisp Cult1 and I have fun writing Scheme. This isn’t about picking languages or favourites; it’s about finding out why Scheme lacks such a library.

Needless to say, this eventually led me to writing my own egg (module) for transducers in Scheme. If all has gone well, it should already be available to download and install by running:

chicken-install -s transducers

I only note this because I think this blog post will serve as a good companion piece of documentation for the egg. I think the rationale is especially worth digging into, because this isn’t new ground and through the process of writing this egg I really felt like I was reinventing the wheel.

Likewise: before we start, I should note here that I’m going to mostly talk about CHICKEN Scheme here since that’s the Scheme I’m most familiar with. I do think that almost everything I write here is pretty applicable to R5RS / R7RS-small Scheme; however, if your specific Scheme system has some (non-standard) way of handling anything I talk about here please do forgive me — I am very likely not aware of it. I’ll also be talking heavily about concepts such as fold, map, etc. If you don’t know what those are and haven’t done any kind of functional programming before you may want to skip this post. I’d be happy to write a tutorial if needed, but I’ll avoid doing that here for brevity.

Rationale — Why transducers?

Okay, so back to the bug in my brain: why transducers? Well, this journey for me actually started with me thinking about Rust’s Iterator trait. This trait is excellent and is probably one of the traits I use the most when programming at work. Regardless of the type that I start with, Iterator allows me to write code using the same set of meta-functions (map, filter, etc.) without having to write each implementation individually for each data structure. Just by providing a next implementation, I get access to all of the goodies provided by the Iterator trait relatively cheaply.

But Scheme doesn’t really have anything like this! Rust can manage the above because traits are a construct that happens at the type level. They’re a way of being able to abstract types in a polymorphic way. You can write a map such that it works for all T that are Iterator — the equivalent in Scheme isn’t really possible. Scheme is a dynamic language, so this kind of polymorphism isn’t gonna fly.

The idea of “iterate over an entire collection” is a pretty common one in programming. For the sake of brevity, I’m not going to justify why I think this is important. But let’s look at a couple examples of where Scheme fails because it doesn’t have some generic way to operate over different collections.

A list? of Scheme’s failures to abstract

There is a -map / -fold / -etc for every data structure, except when there isn’t

The most recent SRFI (Scheme Request for Implementation)2 for working with Scheme vectors is SRFI-133 - Vector Library (R7RS Compatible). This library standardizes some of the API for working with vectors. Operations such as folding, unfolding, copying, etc. are all made available by importing srfi-133.

Except for one - vector-filter. Unlike in other languages, vectors in Scheme are dynamically allocated but fixed in size. This means that you can’t just push or push_back on a Scheme vector like you can in e.g. Rust or C++. Because of this, SRFI 133 does not provide a procedure to filter elements, because the final result of such a procedure could have an unknown size, and any vector-filter would need to know how many elements to allocate in the vector before it ran.

This is very very very annoying. It’s not that amortized O(1) pushing is hard to implement, but Scheme as a language tries as hard as it can to remain small. As a result, this kind of stuff is left to the last developer in the chain to work around. Now every time you want to do something like the following Rust code:

fn filter_odd_values(v: Vec<u8>) -> Vec<u8> {
    v.into_iter()
        .filter(|t| t % 2 != 0)
        .collect()
}

You have to make the following choice:

  1. Implement and carefully manage an implementation of vector-filter, as well as add tests, documentation, etc.
  2. Convert to another data type first, run that data type’s filter (e.g. the list version of filter in SRFI-1), and then convert back to a vector.
  3. Something else???
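For what it’s worth, option 2 looks innocuous enough in code; the hidden cost is the intermediate list and the extra traversals. A minimal sketch using SRFI-1’s filter:

```scheme
(import srfi-1)

;; Option 2: round-trip through a list to "filter" a vector.
;; Allocates an intermediate list and walks the data twice.
(define (vector-filter pred vec)
  (list->vector (filter pred (vector->list vec))))

(vector-filter odd? (vector 1 2 3 4 5)) ; => #(1 3 5)
```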

These kinds of operations (map, filter, etc.) are not universal across all Scheme types, usually because there’s a missing piece (cannot do amortized O(1) push to vector, has hidden internal state, etc.). Every new collection type has to define these operations from scratch. This is a lot of work for very little gain!

Now let’s actually talk a little more in-depth about that second point, which is another failure of Scheme…

Using lists as intermediates

I’m going to quote here from the Google LISP guidelines:

You must select proper data representation. You must not abuse the LIST data structure.

Even though back in 1958, LISP was short for “LISt Processing”, its successor Common Lisp has been a modern programming language with modern data structures since the 1980s. You must use the proper data structures in your programs.

You must not abuse the builtin (single-linked) LIST data structure where it is not appropriate, even though Common Lisp makes it especially easy to use it.

You must only use lists when their performance characteristics are appropriate for the algorithm at hand: sequential iteration over the entire contents of the list.

Just because Scheme (and Lisps) are primarily based around lists does not mean that we should still be abusing the list data structure so. Let me clarify the actual point I am making here: I think library authors aren’t coordinating and as a result downstream users are forced to abuse the list data structure even when they don’t want to.

I am explicitly NOT saying that people writing Scheme today aren’t using appropriate data structures. There is certainly some of that happening in undergraduate classes somewhere, but that is irrelevant to my point. Instead, I want to point out how almost every SRFI or Scheme-specific module (such as CHICKEN eggs) provides the following two procedures:

(type->list t)

(list->type lst)

And yet, despite everything you almost never see type->vector, type->u8vector, type-1->type-2, etc. There are some rare exceptions to this (e.g. SRFI 158), but often even those exceptions are providing type->vector by first doing:

(define (type->vector t)
  (list->vector (type->list t)))

This… is exhausting. No matter what you do in Scheme you just can’t get away from this. We have a fragmented language, and that is very much by design. The fact that there are multiple implementations of the R5RS/R6RS/R7RS standards that can all be considered “Scheme” is not a problem. The problem is that when we build libraries to extend the core language we do so poorly and incentivize users to go through lists as an intermediate as much as possible.

To emphasize I want to point out what Rust does because the standard library and traits are cohesive in a way that Scheme isn’t. Look at the following snippet:

let new_collection: NewCollection = values
    .into_iter()
    .map(/* ... */)
    .filter(/* ... */)
    .collect();

This snippet of code shows so much about language design and the amount of thought that went into developer ergonomics just to map and filter over a collection of items. You don’t even know what values is, but it doesn’t matter! You don’t know what new_collection is, but it doesn’t matter! That code will still work as long as the type of values implements IntoIterator and the type of new_collection implements FromIterator.

Fold ordering is a (f k x)ing nightmare

Let’s talk one more time about fold, vector-fold, et al. These functions provide a “folding” given some function f, a sentinel value k, and a collection xs. Usually the signature is:

(fold f k xs)

But what about that function f? What is its signature?

(f x k)

Okay, that can sometimes look a lot like cons, which takes in a value (x) and a sentinel (k) and does something with it. You can then write:

(fold cons '() (list 1 2 3))
; => (3 2 1)

which will reverse a list. Okay, great! Let’s do the same thing for a vector of xs and reverse it into a list:

(import srfi-133)

(vector-fold cons '() (vector 1 2 3))
;=> (((() . 1) . 2) . 3)

What the hell? Why isn’t that the same? Aren’t these supposed to be abstractions over folding? Well, it’s because fold in SRFI-1 takes in an f with the ordering (f x k) and vector-fold in SRFI-133 takes in an f with the ordering (f k x). SRFI-113 provides set-fold that takes in the ordering (f x k) whereas SRFI-158 provides generator-fold that takes in the ordering (f x k) whereas SRFI-160 provides u8vector-fold that takes in the ordering (f k x)… I could keep going.
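To make the inconsistency concrete, here is the same “reverse into a list” written against both orderings; note how the reducing function has to be flipped by hand for SRFI-133:

```scheme
(import srfi-1 srfi-133)

;; SRFI-1 fold: f takes (item state), so cons works directly.
(fold cons '() (list 1 2 3))                               ; => (3 2 1)

;; SRFI-133 vector-fold: f takes (state item), so cons must be flipped.
(vector-fold (lambda (k x) (cons x k)) '() (vector 1 2 3)) ; => (3 2 1)
```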

This is the dumbest inconsistency in the language and we’re all losing brain cells thinking about it. Universally I think cons-like ordering is the wrong ordering, but here’s the thing: I wouldn’t even care as long as we picked one and were consistent with it.

My point in all of this is to show that as it stands there’s a lot of complexity and a lot of annoyance in trying to use the right data structure(s) in Scheme. Even if we dodge the whole type->list and list->type anti-pattern, we still have to be vigilant about fold ordering and about which procedures are available (filter may not be!), and all of this takes patience, time, and experience, all for a whole lot of nothing.

One last point…

One last point I want to make is that libraries often come with a collection of their own map / fold / filter etc. procedures, but it is never really clear what the performance of these is. I mean, one can generally think of map as being O(n), but you can’t be sure that the person who implemented the standard was careful and wrote the best map possible.

A lot of SRFIs provide a reference implementation that can work on just about any Scheme, but will aim to be portable before it aims to be fast or efficient. I think among the complaints I could have this is actually not a big deal. Benchmark and then make it faster based on what you measure, etc. But the key here is that we should probably be in a position where folding / mapping / filtering are optimized already, and we shouldn’t be suspecting those are our bottleneck. I think generally this can be assumed true because most SRFIs are pretty high quality, but every time I switch between data structures I now have to ask myself:

  • What should I expect for performance here?
  • What should I expect in terms of documentation?
  • What should I expect in terms of convention?
  • What should I expect in terms of …?

I fully admit that you can’t abstract across different types without having to ask this in some form. This is a lot to deal with when programming though. I don’t ask these kinds of questions in Rust or C++ — Iterator::map and std::transform operate the same as long as the type implements the correct trait / interfaces (e.g. std::begin and std::end). Incrementing a pointer or calling an expensive Iterator::next are possible, but it’s a lot easier to know where to look if you find iteration to be slow!

While I love what one can do with Scheme, I hate that this is the mind-space I have to put myself in every time I open my editor. We absolutely should strive for better, can do better, and we deserve better.

Why not use…?

So I started seeking out if there was something in Scheme that allowed me to solve this problem. There were a lot of different ways to crack it, but I want to focus on two specifically that ended up being the closest to something that actually begins to touch on the issues I outlined.

Why not use SRFI-158 (Generators and Accumulators)?

SRFI-158 is the latest attempt by the SRFI committee3 to abstract over things that “generate” and “accumulate” values. Let’s consider an example: a list can generate values by popping off the head of the list (car), and the rest of the values are held in the cdr. Lists can also be accumulated by consing the different parts together. You often have to reverse a list after consing all the parts, but you can do that as a finalization step.

Generators and Accumulators abstract over the generation of values and the accumulation of values. This can be done for just about any type, because generators are just regular thunks, that is, functions that take in zero arguments. I suggest reading the SRFI; it is very instructive and the document explains its expected use-case very well. It builds on the previous SRFI-121, which only outlined generators, but there are notes about that too.

My main problem with generators and accumulators is that they basically replace all our existing data types with a new type (generators) that can then be mapped / filtered over in a unified way. After one is done with gmap / gfilter / etc. they can then convert this generator back into some kind of type using an accumulator or one of the built in generator->type procedures. This solves a problem of abstraction by routing around it. Rather than worry about what operations are defined on a type, we instead just create a type that has all operations work on it.
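As a quick illustration of that workflow (using SRFI-158’s list->generator and generator->list), everything is funneled through the generator type and then converted back at the end:

```scheme
(import srfi-158)

;; list -> generator -> gfilter -> gmap -> back to a list
(generator->list
  (gmap (lambda (x) (* x x))
        (gfilter odd?
                 (list->generator (list 1 2 3 4 5)))))
; => (1 9 25)
```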

This kind of approach is a good first abstraction, but fails because it is generally impossible to make this fast. It also doesn’t solve the fact that we have type->list and list->type proliferation. If anything, it makes it worse because most libraries are not generator-aware, and writing generators correctly can be tricky. That’s not to say writing any other code cannot be tricky, but the obvious way to write a generator is often to use make-coroutine-generator, which relies on call/cc4 and as a result is pretty slow on most Schemes.

Why not use SRFI-171 (Transducers)?

Aha, so we finally get to transducers! Well, I think if you’re familiar with Clojure’s transducers, you’re already seeing where I was heading with this. But if SRFI-171 exists and works on CHICKEN Scheme, then why did I go ahead and make my own library? Why am I rambling about this here?

Well, I had some problems with SRFI-171. Mostly, I was unhappy with the interface and the sparsity of the API. It provides some basic transduction operations like list-transduce, vector-transduce, etc. as well as a bunch of different transducers (map / filter / etc). However, it is missing some key components:

  • It cannot collect vectors, and in particular it does not provide any reducer / collection procedure to do so.
  • It fails to correctly parameterize what makes list-transduce different from e.g. vector-transduce. I will get into this a bit more later.
  • It provides transducers that definitely incentivize users to do the wrong thing with their programs. In particular it provides tdelete-neighbor-duplicates and tdelete-duplicates. This is very much in the realm of my opinions but if you want to delete duplicates what you want is a “set” data structure of some kind. SRFI-113 provides a data structure in that vein (based on SRFI-69 hash-tables) but one could very well imagine a bunch of different set-like data structures that one might use depending on the context of the problem.5
  • Documentation is sorely lacking. I actually read the SRFI documents and the mailing lists and I still came away thinking “this is it?”. I think this can be fixed, and this comment definitely deserves a blog post on its own, but SRFI documents are not sufficient on their own to constitute good documentation for a library / API.6

SRFI-171 was actually not what I wanted to even use at first either, but I slowly realized that transducers were the only form that would tick all of the boxes for me. I tested it for performance and well… It remains faster than SRFI-158. There’s at least one benchmark example on the mailing list where SRFI-158 is slower. I’ve seen this across a lot of examples in my own testing, so I’m not convinced it’s a fluke. In fact, I actually think I know why SRFI-171 is faster, but let’s table that for now.

Needless to say, I felt that there was something lacking in SRFI-171 and that I could do better. Unfortunately the things I wanted to change weren’t really possible to retro-actively adapt into SRFI-171. Unlike a normal library, the whole point of SRFIs is that once they’re finalized, that SRFI is locked in stone. You can’t just add or remove or deprecate functions. You can change the default implementation if there’s a bug or add a post-finalization note about how the implementation should behave if there’s ambiguity, but you can’t just start picking and ripping at an SRFI. This kind of attitude towards software / APIs is anathema to change. This can be seen as good or bad, but I think this kind of ideology is mostly dead outside of Scheme (and maybe parts of C++). I want to clarify that I don’t mean to disparage the SRFI team; there are a lot of people in the community who want to standardize across Scheme implementations. But this is not how any other language works with its libraries in the year 2023.

Transducers in Clojure

Okay, okay, let’s shift the tone a bit. I have been complaining a lot about Scheme and our libraries (with love), but let’s finally talk about what transducers actually are. Transducers were an “invention” of Rich Hickey, the creator of Clojure. I put “invention” in quotes here because I think that a lot of the groundwork and ideas somewhat predate Rich Hickey in the literature that he references in the original video I linked. But for the sake of making this easier to refer to, we’ll say “invented.”

Anyways, the main trick with transducers is that we can treat almost any map / filter / etc. operation as a left-fold over some data structure. That data structure must be some kind of “totally ordered multi-set,”7 that is:

  • It must have some total ordering (i.e. we can get the items in it out in some order)
    • We don’t actually need that ordering to be anything specific, it just needs to exist.
    • Lists have an ordering if you start from the head and move to the tail
    • There doesn’t just have to be one ordering, vectors can be ordered forward and in reverse for example.
  • It must contain elements (items) that we can sequentially operate (fold) over.

Anyways, given some “multi-set” whose items “can be ordered” we can basically replace any map / filter / etc. with an equivalent fold operation. For example one could take the fold from SRFI-1 and implement map:

;; Assume some mapping function `f`

(fold cons
      '()
      (fold (lambda (x k) (cons (f x) k))
            '()
            (list 1 2 3 4 5)))

I added in the reverse above as another fold as well. Anyways, the key is that “map” really only depends on the reducing function (in the above example, cons) to be implemented. So the difference between e.g. SRFI-1 map and SRFI-133 vector-map is really just parameterizing on how we “reduce” across the fold. If it’s cons, then we get SRFI-1 map. If it’s something more complicated (like creating a vector and pushing items into it), then we could get a vector-map instead.
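To see that parameterization concretely, swapping cons out for + in the same fold shape turns “map to a list” into “map and sum” in one pass (a small sketch using SRFI-1’s fold; map-sum is just a name I made up here):

```scheme
(import srfi-1)

;; Same fold shape as above, but the reducing function uses +
;; instead of cons: we "map" f over the items and sum the results.
(define (map-sum f xs)
  (fold (lambda (x acc) (+ (f x) acc)) 0 xs))

(map-sum (lambda (x) (* 2 x)) (list 1 2 3 4 5)) ; => 30
```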

So the first trick to transducers is that every “transducer” function (map / filter / etc.) is parameterized based on the “reducer.” The video goes into good detail about this, but basically we change map from something that looks like fold into:

(define (map f)
  (lambda (reducer)
    (case-lambda
      (() (reducer))
      ((result) (reducer result))
      ((result item)
       (reducer result (f item))))))

Notice the ordering of our reducer is different than cons ordering, but we’ll ignore that for now. The second trick can also be seen in the above snippet as well: by using case-lambda we can have a few different forms for our transducer. If I were to give names to them I’d call them:

  • No arguments (sentinel): The sentinel value to use. As we can see here it depends on the reducer to decide what that default value is.
  • One argument (finalize / collect): This is what to do once we have our final “almost reduced” value. In the case of the example I gave earlier this was just returning the list, but one could imagine reversing the list, shrinking a vector, etc. You can see map depends on the reducer here too.
  • Two arguments (reduce-step): This is what we typically think of when thinking of the reduce function. In the original example I gave for mapping-as-a-fold this was (lambda (x k) (cons (f x) k)).
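Putting those three arities together, a list-collecting reducer might look like this (a sketch; rcons is a name I’m inventing for illustration, not one from any SRFI):

```scheme
;; Hypothetical reducer that collects items into a list.
(define rcons
  (case-lambda
    (() '())                             ; sentinel: start from the empty list
    ((result) (reverse result))          ; finalize: undo the consing order
    ((result item) (cons item result)))) ; reduce-step: note (result item) order
```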

Okay, so that’s about the two tricks that one needs to know about transducers in order to understand how to use them in Clojure. Mostly, you can take any “collection” in Clojure and you can then use the following form:

(transduce (comp
             (filter odd?)
             (map (fn [x] (* 3 x))))
           +
           (range 5))

or, you can provide a sentinel value (e.g. 100) directly:

(transduce (comp
             (filter odd?)
             (map (fn [x] (* 3 x))))
           +
           100
           (range 5))

Okay, cool. This does seem to solve what I want in Scheme, at least in principle. Let’s tick off some boxes:

  • All transducers are independent of the input collection / type
  • We don’t use lists as intermediates; in fact, we can ignore lists entirely and use whatever data structures make the most sense.
  • [?] Fold ordering is consistent. It is in Clojure, but mostly because when we define transducers or collectors our reduce-step must always be in the same order. In the map definition I gave above it is always (result item) not (item result) like it is with cons. This could go either way but transducers only work if that is uniform!
  • Performance is linked to the type we’re iterating over, but map and filter aren’t getting faster or slower as a result of that. We avoid inefficiency in intermediate types / copies though, and we don’t have to invent new type-1->type-2 methods everywhere as we go.

Transducers in Scheme

So transducers are really the solution I’ve been looking for. SRFI-171 suggests they are fast, and Clojure shows that they have most if not all of the properties I’m looking for. But SRFI-171 is limited and doesn’t provide everything I’m looking for. Additionally, Clojure seems to be able to do some magic. For example, why can I do the following in Clojure:

(transduce (map add1) + <list-or-vector-or-hashmap-or...>)

but in Scheme I seem to have to do:

(list-transduce (map add1) + <list>)

(vector-transduce (map add1) + <vector>)

It certainly seems like Clojure is doing some magic underneath the hood, but what is it?

Polymorphism and monomorphization

Well, in short, Clojure can provide a form of parametric polymorphism via Clojure Protocols. This allows them to dispatch the transduce call according to what type the argument in the last place of the call-site is. So if you passed a list, you get effectively list-transduce, and if you pass a hashmap, you’d get hashmap-transduce.

This works well enough in Clojure because protocols leverage the JVM; specifically, they basically map 1:1 with a Java Interface on the backend. But Scheme isn’t on the JVM (at least, not all Schemes), and implementing some kind of dynamic-dispatch is going to introduce some overhead. Some Scheme systems have object systems like COOPS or GOOPS, but this is:

  1. A lot of code to depend on
  2. Not going to be generally portable

Portability wasn’t really my concern here. After all, I’m really only concerned with CHICKEN Scheme. If I wanted to use Guile, I’d use it, but it is not the Scheme I reach for usually. Nevertheless, I don’t want to dive into something so CHICKEN-specific here. The point I really want to make is one that’s not too dissimilar to what Rich Hickey made about how map / filter et al. are just parameterizations of a left-fold: Clojure’s polymorphic dispatch is just an assumed parameterization of the transduce form.

I wrote some of these thoughts on Mastodon but to summarize here: I think the main difference here is that Scheme is aggressively monomorphic by default. As a dynamic language that doesn’t have a single implementation or host platform / environment, it is always going to be. So we have to think about transduce not as it is in Clojure, but as it should be if we parameterized the assumed traversal of the data structure. Thinking this way gets us a new kind of transduce:

(transduce folder     ; A procedure that knows how to fold across a data structure
           transducer ; The xforms / transducers used in clojure
           collector  ; The reducer that "collects" the final result
           sentinel   ; A sentinel value (optional) to seed the collector
           data)      ; The data to fold over

This gives us a transduce that shows us a pipeline of different pieces we can put together:

  • folder: Think of this as the routine that can pick apart our data
  • transducer: These are the operations that happen on our data as it passes through the pipeline.
  • collector: This is the structure in which we want to collect our output.

Folder, transducer, and collector all happen in that order, so I have placed the fold operation as the first argument. In my egg, one can then do:

(transduce list-fold
           (compose
             (filter odd?)
             (map add1))
           collect-list
           (list 1 2 3 4 5 6 7 8 9))

What does this do under the hood? Well, here’s the code:

(define transduce
  (case-lambda
    ((folder xform collector iterable)
     (transduce folder xform collector (collector) iterable))
    ((folder xform collector sentinel iterable)
     (let* ((xf (xform collector))
            (result (folder xf sentinel iterable)))
       (xf result)))))

The important bit is in that second case-lambda arm. We leverage the folder to be specific to the iterable (list, vector, set, mapping, etc.). This maintains the monomorphic dispatch that Scheme requires, without losing generality. We don’t have to proliferate multiple transduce procedures that then need to be tested / documented / remembered. Instead, we just need a special fold for each type.

(define (list-fold f sentinel xs)
  (if (null? xs)
      sentinel
      (let ((x (f sentinel (car xs))))
        (if (reduced? x)
            (unwrap x)
            (list-fold f x (cdr xs))))))

In particular, this fold is a bit different than what is usually written. The trick here is that we have one more branch operation for checking if a value is reduced? or not. Some algorithms quit early without traversing a whole list. For example, if we wanted to find the first odd number in a list we wouldn’t want to have to traverse the rest of the list after we find it. Likely, you’ve been trained to use call/cc for this kind of early exit, but transducers don’t need it. If any transducer (or collector) returns (make-reduced x) early, then the transduction will end at that item.

By branching on reduced? in the above list-fold, we’re able to quit early in a generic and call/cc-free way. No continuations means that our code is very easy to convert into continuation-passing-style (CPS) in a very straightforward-to-optimize way. In practice, there isn’t really any material performance hit for doing this. It is required to make our transduce behave correctly, so we need new “fold” operations for every type in order to use transduction over that type. This admittedly isn’t ideal, but I’ve reduced the problem space from “reinvent every map / filter / fold / any / every / chain / flatten / zip operation” to “reinvent fold to allow early termination.”
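For example, a collector that stops at the first matching item could signal early exit like this (a sketch reusing the reduced? / make-reduced / unwrap names from the list-fold above; the actual egg may spell these differently):

```scheme
;; Hypothetical collector: return the first item satisfying pred, or #f.
(define (collect-first pred)
  (case-lambda
    (() #f)                  ; sentinel: nothing found yet
    ((result) result)        ; finalize: hand back whatever we found
    ((result item)
     (if (pred item)
         (make-reduced item) ; wrap the answer; the fold stops here
         result))))
```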

In truth, that’s not the whole story because flatten / chain / interleave / zip operations are type specific as well. However, these can be managed mostly through macros (which are publicly exported and documented with clear examples). I would like to come up with a better solution here, but I’m not sure that I will anytime soon. I’ve spent enough time faffing about on it already so I think I’ll take a break and accept what I have.

On Naming

I’m admittedly taking a deviation in normal Scheme naming in some places. More importantly though, I’ve chosen to not provide map / filter / etc. under prefixed names. I strongly believe we need to move out of the era of pretending that lists are the default for these. Transducers should be the default, and we should just train people to use them.

Are they confusing? Certainly at first they are very confusing. If you’ve made it this far into the post though I’m certain you (the reader) are not that far from understanding it completely.8

On Performance

Performance is great. I’m going to mostly avoid benchmarks and let you do your own, because there’s no way I’ll be able to fairly represent whatever you’re trying to do. But just for fun run this:

$ cat transducer-bench.scm
(import transducers)

(time
  (transduce range-fold
             (compose
               (filter odd?)
               (map (lambda (x) (* 3 x))))
             +
             (iota 100000000)))

Note that iota above is not from SRFI-1. On my machine I get the following output:

$ csc -O3 -static transducer-bench.scm
$ time ./transducer-bench
3.518s CPU time, 0.001s GC time (major), 1/108400 GCs (major/minor), maximum live heap: 877.83 KiB

real	0m3.540s
user	0m3.535s
sys	0m0.004s

That’s 3.540s to filter, multiply, and sum 100_000_000 numbers. That’s pretty good! Admittedly this isn’t representative of a real workflow in any way, but the generator equivalent is much slower:

$ cat gen-bench.scm
(import srfi-158)

(time
  (generator-fold + 0
    (gmap (lambda (x) (* 3 x))
          (gfilter odd?
                   (make-iota-generator 100000000)))))
$ csc -O3 -static gen-bench.scm
$ time ./gen-bench
9.666s CPU time, 0.001s GC time (major), 100000002/81404 mutations (total/tracked), 5/256433 GCs (major/minor), maximum live heap: 368.68 KiB

real	0m9.674s
user	0m9.607s
sys	0m0.064s

Why are transducers faster than generators?

I mentioned earlier that I think I know the reason transducers are faster than generators (generally speaking). In the definition of transduce:

(define transduce
  (case-lambda
    ((folder xform collector iterable)
     (transduce folder xform collector (collector) iterable))
    ((folder xform collector sentinel iterable)
     (let* ((xf (xform collector))
            (result (folder xf sentinel iterable)))
       (xf result)))))

We generate the procedure xf by calling (xform collector). The final function that we call on every item is the composition and execution of all the xforms over the collector. By the time we start to fold over some data type by calling the folder our function xf is:

  1. Already finally defined - it can thus be compiled and inlined pretty aggressively
  2. Local - we don’t have to call multiple functions to do all our mapping / filtering / etc. Generators like gmap have to loop internally and manage internal state, which means that they’re calling other generators, which might be calling other generators, and so on. Transducers loop over data locally and call one function (which can be cached / inlined) that does everything across the whole algorithm. Transducers’ speed is defined by the complexity of the process, not by the number of intermediate layers taken to express that process.
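Concretely, for something like (compose (filter odd?) (map f)) applied to a collector, the per-item reduce-step that the folder calls behaves roughly like this single fused procedure (a conceptual sketch, not the literal expansion; collector and f are assumed in scope):

```scheme
;; Roughly what the composed xf does per item: one local procedure,
;; with no intermediate generators or collections in between.
(define (xf-step result item)
  (if (odd? item)
      (collector result (f item)) ; passed filter: map, then reduce
      result))                    ; rejected by filter: result unchanged
```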

Now I might be wrong on one or both of those points (after all, I did not inspect the compiled output to really confirm this), but my intuition is telling me I can’t be that far off. Overall, I’m not sure that a well-written transducer is ever going to be slower than an equivalent generator. Many in the Scheme community won’t care about performance as much as I do here, so it may be immaterial. ¯\_(ツ)_/¯

Other Goodies

Amortized O(1) collection of any vector type

Let’s say that the entire Scheme community looks at this project and thinks: yeah no thanks. If you’re going to dismiss what is done here PLEASE PLEASE PLEASE just see src/transducers.vectors.scm and how I add vector collection. With this library you can do:

(import transducers)

(transduce range-fold
           (compose (filter odd?) (map add1))
           (collect-u8vector)
           (iota 100))

The magic here is (collect-u8vector #!optional (size-hint 0)), which will push all results to a vector in amortized O(1) time, using a strategy reminiscent of what Rust’s Vec and C++’s std::vector do. This operation (or at least one like it, named differently) is available for all:

  • Scheme vectors (aka vector)
  • SRFI-4 & SRFI-160 vectors (including c64 and c128 vectors)

There is no intermediate list built and no conversion from another data structure. When the vector being collected runs out of capacity and needs to be extended, the capacity is doubled (exponential growth) and the existing data is copied over with what effectively amounts to a memcpy. It is fast.
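The growth step can be sketched in a few lines. This uses plain Scheme vectors and R7RS vector-copy! for clarity; the egg's actual collectors work on SRFI-4 / SRFI-160 vectors directly, and vec-push here is a hypothetical name, not part of the egg's API:

```scheme
;; Illustrative sketch of amortized O(1) push with doubling growth.
;; State is a (vector . fill-index) pair.
(define (vec-push state item)
  (let ((vec (car state))
        (fill (cdr state)))
    (if (< fill (vector-length vec))
        (begin (vector-set! vec fill item)       ; room left: O(1) write
               (cons vec (+ fill 1)))
        ;; Out of capacity: double it, then copy the old contents over,
        ;; which on flat numeric vectors amounts to a memcpy.
        (let ((new (make-vector (max 1 (* 2 (vector-length vec))))))
          (vector-copy! new 0 vec)
          (vector-set! new fill item)
          (cons new (+ fill 1))))))
```

Doubling on every reallocation is what makes the cost amortized O(1): each element is copied at most a constant number of times over the life of the collection.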

If nothing else, these very same procedures work with SRFI-171 transducers as well. I have released the egg under the MIT license, so copy and paste them into your code if you must, but can we please incorporate this more wholly into the Scheme ecosystem?

Lastly - this can be optimized even further if one decides to add a size-hint to the collector:

(import transducers)

(transduce range-fold
           (compose (filter odd?) (map add1))
           (collect-u8vector 100)
           (iota 100))

While the size may not be precise (filter could let through 100% of the data or 0% of it), the size-hint can reduce the amount of re-allocation you do if you have a rough idea of the final collection’s size. Also notice that iota here is from the transducers library. It’s a custom range structure that should be faster to use with transducers than constructing a full list with SRFI-1’s iota procedure.

reader-fold works with generators today

I probably won’t use SRFI-158 (generators and accumulators) ever again but you might already have incorporated them into some of your code. The reader-fold procedure works with generators as they are today, as well as procedures like read / read-char / read-byte / etc.

Transducers are general enough that they can support generators without having to do anything special:

(import transducers srfi-158)

(transduce reader-fold
           (map add1)
           (collect-u8vector)
           (make-iota-generator 10))

So if you have a generator today I suspect that switching will be easy.

Some open problems with the transducers egg

transducers isn’t really perfect — far from it. There are still a lot of things that I’m not sure about. A short list of my gripes and open questions with the code I’ve written so far:

  1. Is passing the folder procedure into transduce ergonomic or should I be making list-transduce / vector-transduce / reader-transduce and such like SRFI-171 does? I like how explicit the current form is so I am likely to leave it, but I’m open to hearing if the Scheme community has strong dissent here.
  2. I currently have not made this an R7RS module. Do folks outside of the CHICKEN community want me to make this more general? Some shims would be needed (mostly for the check-errors egg), but I think it could be done. I’ve avoided it for the most part because CHICKEN’s module system has been enough for me, but if there’s interest I can look into it.
  3. Macros for different flatten / chain / interleave / zip transducers. What to do with these? Is the macro-based approach the most extensible for outside types? While the macros I’ve written aren’t too hard to come to terms with, it can be a bit dizzying to get started with someone else’s macros vs. just implementing the procedure yourself.
  4. Strings. One thing missing from all of this is the fact that I haven’t provided string-fold / collect-string variants in the same way that I did for vectors. This is mostly because strings suck, and I’m not sure how to best dance on the UTF-8 / non-UTF-8 minefield that CHICKEN has going on. If one wants, the reader-fold operations can be used with the corresponding read-char and with-input-from-string procedures to get string support for transducers, but that seems likely to encourage bad performance. Hopefully there’s a version of CHICKEN someday where I can just ship UTF-8 string support only. That’s a possibility, maybe?
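For what it’s worth, the workaround described in point 4 would look something like this. This is a sketch only: collect-list is a hypothetical collector name, and I’m assuming reader-fold accepts the reader procedure itself as its final argument, the same way it accepts a generator:

```scheme
(import transducers)

;; Sketch: upcase every character of a string via reader-fold.
;; collect-list is hypothetical; substitute whichever list collector
;; the egg actually provides.
(with-input-from-string "hello"
  (lambda ()
    (transduce reader-fold
               (map char-upcase)
               collect-list
               read-char)))
```

It works, but every character takes a trip through the port machinery, which is the performance concern mentioned above.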

SRFIs and Standardization Efforts

Before I’m done, I do want to clear some air on my thoughts on the SRFI and standardization efforts within the Scheme community. First and foremost, I want to be clear that I do not disparage those who work on SRFIs, nor do I think the work they are doing is useless or otherwise harmful to Scheme as a community. I believe some of my post will come off as rather critical of the SRFI process, which I think is appropriate (in some measure, at least).

I am not going to name or point to any particular member of the SRFI process, nor am I going to try and blame them all for my criticisms here. I think of a lot of this as evolutionary, not revolutionary. But to get to the point, my feedback for the SRFI process:

  • There is too much wasted focus on -map / -filter / etc. procedures as part of each SRFI process. This is especially true when discussing data structure SRFIs. It’s a waste of time - transducers have all but won (in my mind at least, and in the mind of the Clojure community). I think given the level of abstraction allowed by transducers, some version of them (similar to my egg or otherwise) should probably be the base standard to which we hold our collections.
  • As a corollary to the above point, there are things transducers should not do. I didn’t really go into it because it’s a bit of a distraction from the article I was writing, but e.g. procedures like set-union, sort!, etc. are prime examples. For these kinds of algorithms, we should absolutely spend more time fleshing out SRFIs to figure out how to make those work across different Scheme implementations in an ergonomic way. But there’s a big difference between operating on the whole collection vs. type-specific functionality.
  • More thought and leadership needs to be put in to direct the “standard library” of R7RS-large (or R8RS-large or whatever) to be more interoperable and compatible across the board. Rust is a good example here — while the language is young, the core team goes to brutal lengths to make sure that traits and features integrate well together. It’s a lot of work (and I realize I’m trying to both beg and choose here) but the current state of affairs isn’t enough in my opinion. This isn’t even some argument or doom-saying about commercial use of Scheme either, because regular users have to put up with inconsistencies like (f x k) vs (f k x) ordering all the dang time! I know we’re all hacking for fun and education and to scratch our own itches but we really ought to get the core bits right.

Does this mean we need a regular Scheme conference where we can all get together and help guide this? Are we not inter-mingling ideas across different Schemes and Lisps and languages and programs enough? I won’t prescribe an answer but I can for sure say that I have hope that the SRFI process can adapt.

Overall I don’t even think most of the SRFIs are useless or anything; heck, I use a few of them in some capacity as dependencies of transducers. I don’t even think things like vector-copy! or reverse! need to be in Scheme proper but ultimately we should make sure that we’re not directing users to interfaces like vector-map or vector-append when in practice the code that folks want to write is a transducer.9

Final thoughts

In short: use transducers, whether it be the egg I just published or SRFI-171 or something you’ve built yourself. They’re performant, they’re composable, and they’re a hell of a lot easier to test. They stop us all from having to reinvent the wheel that is mapping / folding / filtering every time a new data type comes along. You and your code and your users will thank you.

I set out this blog post trying to understand why Scheme didn’t have anything as powerful or flexible as Rust’s Iterator trait. I don’t know if my transducers egg is as powerful or flexible, but it certainly is a step towards it. Scheme is a great language regardless of the criticisms or holes I’ve poked about different features of the language and community here, and it’s helped me conceptualize a lot of important ideas that I’ve taken with me to other languages. My hope is that by introducing a different kind of transducer into Scheme I’ll be helping someone else skip past a lot of the churn that comes with learning and working with Scheme.

To end this, I want to provide a few helpful URLs:

I highly suggest giving those a read over and of course feel free to leave feedback (either on the repo or the CHICKEN bug tracker) if you find anything good worth discussing. You can also find me in #chicken or #scheme on LiberaChat.

  1. I don’t know if I’m really a believer but I love the sense of community. 

  2. Scheme-Requests-for-Implementation is a process in which libraries are “standardized” in the Scheme world. I’m not sure other language communities have anything like it. Most of the time it’s easiest to think about this as a process in which one can submit a library, discuss the API and how it would function across a number of different Scheme implementations, and eventually standardize the library by offering a reference implementation that works on one or more Scheme systems.

    Ultimately the SRFI process is driven by the person who submits their library, so in many cases the result is dependent on which authors present what work to the mailing list. You can see previous SRFIs and get a better understanding of the process at

  3. Again, this is a bit air-quotey. I don’t want to disparage individuals on this because I think generators are a bit of a clever solution to laziness but ultimately not what I was looking for. 

  4. There’s a lot of take-downs of call/cc, such as Oleg Kiselyov’s famous writing on the subject. call/cc is almost always going to cause weirdness either with your program or with the garbage collector. Delimited continuations are universally better on that front, but I think that’s mostly just my opinion so I’ll avoid scrawling a diatribe against call/cc across the entire article :) 

  5. Yeah, so this one is admittedly a flimsy point. Maybe there’s a joke in here about how I just wanted to reinvent the wheel in my own image, but I will fully defend the point on tdelete-duplicates. If you want to do something like this, you want a collector / reducer, not a transducer.

    i.e. you should call:

    (transduce list-fold
               (map add1)
               collect-set      ; NOTE: doesn't exist in transducers as of 0.1.0
               (list 1 1 2 3 4 2 1 3 4))


    rather than:

    (transduce list-fold
               (compose tdelete-duplicates (map add1))
               collect-list     ; any ordinary collector; name is illustrative
               (list 1 1 2 3 4 2 1 3 4))

    Of course, most transducers in SRFI-171 are interchangeable with my own egg, so one could probably port it over if they really feel differently. 

  6. I maintain several SRFI eggs for CHICKEN Scheme (including SRFIs 113, 116, 133, and more) and I have absolutely just copied the SRFI document over to the CHICKEN wiki, same as SRFI-171 here. So understand that when I levy the criticism of “poorly documented” at someone else’s library, I’m actually more or less thinking of my own crimes here.

    I do really like Scheme and want it to be better and more accessible, so I’m going to leave the criticism in there even if I don’t have an immediately better solution in place today. 

  7. Thanks to J✦rg Pre✦send✦rfer 🇪🇺🏳️‍🌈 for pointing me towards this series of papers. It ultimately didn’t help what I was doing with transducers, but it did open my brain a bit. So maybe it did help, but not in the way that I was hoping / expecting! 

  8. I generally think that people are pretty smart and that this isn’t that difficult of a concept to figure out. If you can work your way through the Little Schemer then you’re more than capable of figuring out transducers. Even more so if you’re using them everywhere.

    Scheme is often used as an instructional language, but there’s a good group of us who are trying to use it as a general-purpose programming language. There’s no reason Scheme can’t be both, so let’s dispense with any ideas that transducers are “too hard” to add to Scheme or that providing map outside of SRFI-1 is blasphemy. I… am projecting some arguments I’ve seen hashed out several times, sorry. 

  9. You may think that I’m wrong here and that the code that users want to write shouldn’t be restricted to transducers (at least, in the case of map / filter / append / etc). However, even SRFI-171 represents a huge step towards the kind of future I’m hoping Scheme moves towards. One where we don’t have to import half a dozen different SRFIs in order to be able to map and convert across different types made available to a particular Scheme implementation. 

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.