@jakearchibald
Last active August 13, 2023 06:46
Call operator vs pipeline

Using a call operator

function addToEach(add) {
  return this.map(val => val + add);
}

function sum() {
  return this.reduce((a, b) => a + b);
}

const result = [1, 2, 3]::addToEach(10)::sum();

Using pipeline

const addToEach = add => arr => arr.map(val => val + add);
const sum = arr => arr.reduce((a, b) => a + b);

const result = [1, 2, 3] |> addToEach(10) |> sum;

Analysing the complexity of the call operator

// Complexity: what is `this`?
// Answer: Like all other instances of `this` in JS, it's the context object, or the global,
// unless it's been explicitly set by call/apply/bind.
function addToEach(add) {
  return this.map(val => val + add);
}

function sum() {
  return this.reduce((a, b) => a + b);
}

// Complexity: With ::, what is the relation between the left- and right-hand sides?
// Answer: It's sugar for func.call, so foo::bar(10) desugars to bar.call(foo, 10).
// This means bar is called with argument 10, and within it `this` is set to foo.
const result = [1, 2, 3]::addToEach(10)::sum();

The complexities here already exist in JS, so you may already know them. If not, learning them will come in handy with other JavaScript patterns.
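
For comparison, here's roughly what the example above boils down to in today's JS, using the desugaring described in the comments (foo::bar(10) → bar.call(foo, 10)); this is just a sketch reusing the addToEach/sum definitions from earlier:

const result = sum.call(addToEach.call([1, 2, 3], 10)); // 36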

Analysing the complexity of pipeline

// Complexity: What's with the two instances of =>?
// Answer: It's a function that returns a function. That'll become clear(ish) later.
// Complexity: Why is the subject (arr) after the thing that'll happen to it (add)?
// Answer: It just has to be backwards to work (despite pipeline aiming to solve this
// same problem with function nesting).
const addToEach = add => arr => arr.map(val => val + add);
// Complexity: Why doesn't this have two =>?
// Answer: Because it doesn't have args, it doesn't need a function within a function.
const sum = arr => arr.reduce((a, b) => a + b);

// Complexity: With |>, what is the relation between the left- and right-hand sides?
// Answer: The right-hand side is called with the left-hand side as its single argument.
// Complexity: Why is addToEach called as a function, whereas sum is just passed as a value?
// Answer: The left-hand side is called with the right, so addToEach(10) is a function that
// returns a function, whereas sum doesn't return a function so you just use its value.
const result = [1, 2, 3] |> addToEach(10) |> sum;

Although the character count is lower, the complexity is higher & pipeline-specific.

Other benefits of the call operator

You can use other instance methods directly:

const { map, sort } = Array.prototype;

const headings = document.querySelectorAll('a')
  ::map(el => el.textContent)
  ::sort();

This is because instance methods already use this.
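
For comparison, here's roughly what the snippet above desugars to with today's Function.prototype.call (a sketch of the equivalent, not new syntax):

const { map, sort } = Array.prototype;

const headings = sort.call(
  map.call(document.querySelectorAll('a'), el => el.textContent)
);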

Pipeline is really easy to implement yourself

Since it's a functional pattern, you can implement it as a function:

const pipe = (val, ...funcs) => funcs.reduce((val, func) => func(val), val);

const result = pipe([1, 2, 3], addToEach(10), sum);
@juliensnz

I find the pipeline way less easy to read at first but way more natural after a while. I really don't like the fact that we need this in the call operator method. this is already too complex and weird for newcomers; let's forget about it and never use it. Also, the call operator method will not allow the use of fat arrow functions.

@keithamus

keithamus commented Jul 31, 2018

All of these operators can exist in the same space. But additionally I'd like to point you to the "partial application" operator which can solve the issue of complexity of pipeline without using the call operator:

// Complexity: nothing really. They're just functions. No functions returning functions, no `this`, arguments can be ordered however you like.
const addToEach = (arr, add) => arr.map(val => val + add);
const sum = (arr) => arr.reduce((a, b) => a + b);

// Complexity: what does `?` mean?
// Answer: `addToEach(?, 10)` becomes a function that waits for the first argument. `?` in this context means the argument needs to be "filled in" - it gets filled in from the pipeline value
const result = [1, 2, 3] |> addToEach(?, 10) |> sum;

// This could also be written as
const result = [1, 2, 3] |> addToEach(?, 10) |> sum(?);
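
Roughly speaking, and glossing over the proposal's evaluation-timing details, addToEach(?, 10) behaves like an arrow function waiting for the piped value (a sketch in today's JS; addTenToEach is just an illustrative name):

const addTenToEach = x => addToEach(x, 10); // ~ addToEach(?, 10)

const result = sum(addTenToEach([1, 2, 3])); // 36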

@fvsch

fvsch commented Jul 31, 2018

The pipeline example could be made less complex if you don’t create reusable functions:

const result = [1, 2, 3]
  |> arr => arr.map(val => val + 10)
  |> arr => arr.reduce((a, b) => a + b);

Most examples I’ve seen look like this. Could be that in a big application it results in a lot of duplication, I’m not sure.

I like the example of the call operator for using Array.prototype methods, though.

@jakearchibald
Author

@keithamus

I'd like to point you to the "partial application" operator which can solve the issue of complexity

Additional syntax raises complexity.

@tabatkins

@keithamus

What I really don't like about the partial syntax is that it's unclear to me precisely what its "scope" is. If I write foo(bar(?, 2), 3), am I immediately calling foo and passing a function as its first argument, or am I defining a function that will later call foo with the result of bar(x,y) as its first argument?

I'm certain there's an answer to this, but it's not clear from reading what that answer would be.

(Plus you can only use each arg once, and have to take them in the precise order that they happen to be used. There are variants that solve these, but that raises the complexity too.)

@tabatkins

@jakearchibald

Using jschoi's syntax makes the pipeline operator much simpler. (I agree that the "basic" pipeline operator doesn't really pay for itself - what it gains in conceptual simplicity it loses, badly, in practical simplicity.)

function addToEach(arr, add) {
  return arr.map(val => val + add);
}

function sum(arr) {
  return arr.reduce((a, b) => a + b);
}

const result = [1, 2, 3] |> addToEach(#, 10) |> sum;

The only differences from your call operator example are that the functions take their argument directly, rather than implicitly via this, and that when calling addToEach you have to indicate that argument explicitly. (You can skip it with sum as it's just a single name taking the piped value as its sole argument, thus satisfying the constraints of the "bare syntax", but you could also write that as |> sum(#) if you wanted; whatever's more readable at that moment.)
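
Read step by step, that pipeline roughly evaluates like this (a sketch of the semantics, not the proposal text; step1/step2 are just illustrative names):

const step1 = [1, 2, 3];
const step2 = addToEach(step1, 10); // addToEach(#, 10), with # bound to step1
const result = sum(step2);          // bare `sum` is called with the piped value: 36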


I agree that pulling methods out of prototypes or modules and then calling them on arbitrary objects is a pretty good benefit, and I think it might justify the call operator on its own. Using .bind/.call/.apply is sufficiently painful it's worthwhile to bake this into syntax imo. The call operator only actually solves .call (and .apply with spread syntax) tho; .bind is still left unsolved and annoying. jschoi's pipeline solves that with the additional +> operator; arr.map(+>foo.bar) == arr.map(x=>foo.bar(x)).

@albanx

albanx commented Jul 31, 2018

My first thought is: do we really need both pipeline and call?

@jadbox

jadbox commented Jul 31, 2018

Hrmm.. to @keithamus's point, while the call operator seems cleaner for these small examples, it doesn't play well with the current JS ecosystem. If I want to use Lodash to transform data, I couldn't just use a Call Operator, because the lodash functions don't look at this for their input values. The pipeline operator (+ "partial application" operator) would allow using transformers like lodash right out of the box at your call sites.

@mAAdhaTTah

I think @fvsch covers my primary objection to this characterization. I'm also not entirely convinced of the complexity of x => y => { ... }, given how widespread this pattern has become in the React community.

I'd also point out (as noted on Twitter) that there's a wide ecosystem of packages that export functions, all of which are immediately useful in a pipeline, even without the gymnastics of the x => y => { ... }. An ecosystem of packages that export reusable methods for bind would have to spring up after / as the proposal progresses.

Lastly, as a consumer, x |> map(x => x + 1) and x::map(x => x + 1) are effectively the same thing; I personally think functions as a consumer are going to be more flexible than methods, so I'm inclined to see the former as more useful. I would definitely expect to see libraries export methods for use with the pipeline operator, and I don't really mind the (...args) => instance => { .... } stuff inside a library.

@TehShrike

I agree pretty well with @tabatkins and @mAAdhaTTah.

The call operator would push developers to write functions that would be strange to use without the call operator.

These two functions could be put in a shared file and used anywhere:

const multiply = (x, y) => x * y
const strong = x => `<strong>${x}</strong>`

const output = 3 |> _ => multiply(_, 2) |> strong

By taking their main argument in via this, these functions don't make much sense outside of the context of the call operator:

function multiply(y) {
	return this * y
}

function strong() {
	return `<strong>${this}</strong>`
}

const output = 3::multiply(2)::strong()

It creates two weird sets of functions - some that can be called like multiply(2, 3) and some that have to be called like multiply.call(2, 3).

@dman777

dman777 commented Aug 1, 2018

I agree, I like the call operator more. Code should be easy to read. Less character count is nice, but never at the cost of readability.

@WebReflection

Object.defineProperty(
  Object.prototype,
  '::',
  {
    configurable: false,
    get: function () {
      var self = this;
      return function (callback) {
        return function () {
          return callback.apply(self, arguments);
        };
      };
    }
  }
);

const {map, sort} = Array.prototype;
document.querySelectorAll('*')
  ['::'](map)(el => el.nodeName)
  ['::'](sort)();

\ o /

Jokes apart, I think: why not have both? It's good to have options for both functional and OO programmers, IMO.

@jakearchibald
Author

@WebReflection I've been told it's one or the other due to a syntax budget. I agree we could have both.

@WebReflection

@jakearchibald those two cover different use cases though: one is for methods, one is for pure functions, as mentioned here.

It's like saying we don't want both . to access a property and + to concatenate strings 🤷‍♂️

@raphaeleidus

to @jadbox's point about the JS lib ecosystem not using this
it would be trivial to do something like this play on a partial-application pattern:

function callable(fn) {
  return function () {
    return fn(this, ...arguments);
  };
}

function addToEach(add) {
  return this.map(val => val + add);
}

function sum() {
  return this.reduce((a, b) => a + b);
}

const result = [1, 2, 3]::callable(addToEach)(10)::callable(sum)();

now any function can be transformed to be used with the call operator, with whatever is in this as the first parameter instead
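
For example, the same adapter would work for a generic utility library (using lodash's _.uniq purely as an illustration, since it takes the collection as its first argument):

import _ from 'lodash';

const unique = [1, 1, 2, 3]::callable(_.uniq)(); // [1, 2, 3]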

@jkoudys

jkoudys commented Aug 1, 2018

I really dislike all that placeholder (1, ?) syntax - it looks like a SQL prepared statement and sort of feels like a new type of "binding". It's like a bind, because what you're making when you say foo(1, ?) is a new function to pass back, but since you have parens in it, it looks like it's a function call (which it's not). I don't love filling my pipes up with arrow functions, but it's readable and consistent.

Ultimately this becomes a debate between binding a context, and currying, so it makes no sense to compromise with the placeholders and re-introduce a pseudo-binder.

Nobody's really "right" in an absolute sense between which should be used, because it's really a question of values. If you're looking backwards, and thinking in terms of previous patterns, the :: wins hands-down. Being able to code like this is beautiful:

const { map } = Array.prototype;
const shoutingText = document.querySelectorAll('p.foobar')
  ::map(({ textContent: t }) => t.toUpperCase());

but this only works for things that follow the this approach. This is especially clunky when working with collections you hope will work like Arrays in their context but don't, e.g.

const things = new Set([1, 2, 3]);
const stuff = things::map(v => v * 2); // oops, it's []! Empty Array

My favourite thing with :: is its behaviour when prefixed, so ::console.log is console.log.bind(console). This is something I see popping up constantly - pretty much anywhere I pass a function into an event handler.
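
For example, in an event handler the two lines below would be equivalent (a sketch of the prefix form described above):

document.body.addEventListener('click', ::console.log);              // proposed prefix form
document.body.addEventListener('click', console.log.bind(console));  // today's equivalent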

If we're looking forward, even with class and everything, the concept of this seems to be less popular every year. Curried, context-free functions are easy to reason about, test, etc. and gaining in use.

Personally I'm on team ::bind right now, but maybe I'll have a different opinion next year. Especially when dealing with async types -- Promises and Observables -- passing along a return as a single argument in a pipe is pretty normal. If your one arg is the Observable, worrying about placeholders is moot. Pipes are built to carry streams, after all.

const yelling$ = number$
  |> map(v => v * 2)
  |> delay(1000)
  |> map(async v => someAsyncOp(v))
  |> map(v => `We have ${v}!!!!`)

You could have almost identical syntax with ::, but it reads backwards to me. The pipe's saying "take these results and pass them along" (makes sense for async results), but the :: is saying "take this observable, and give it a method it can run".

@appsforartists

Not to bikeshed, but I think you've strawmanned the pipeline a bit by favoring terseness over clarity in your sample code. You used a less dense style in your call example, which (unfairly) makes it look more approachable.

function addToEach(amount) {
   return function (numbers) {
     return numbers.map(number => number + amount);
  }
}

function sum(numbers) {
  return numbers.reduce((a, b) => a + b);
}

const result = [1, 2, 3] |> addToEach(10) |> sum;

I've used functions here, because that's what you used in the call example. You could, of course, make addToEach return an arrow instead.

The conceptual tradeoffs you've pointed at (understanding higher-order functions vs. understanding this) are fair. The pipeline approach favors explicitly declared arguments over an implicit this. I've noticed that JS developers tend to avoid this, except inside class declarations, because it can be a bit of a footgun.

I suspect folks will get comfortable with either approach over time.

@jakearchibald
Author

jakearchibald commented Aug 2, 2018

@appsforartists

Not to bikeshed, but I think you've strawmanned the pipeline a bit by favoring terseness over clarity in your sample code. You used a less dense style in your call example, which (unfairly) makes it look more approachable.

If I'd used functions in my examples like that, FP folks would have said "but you can just use arrow functions here! You're deliberately avoiding one of the benefits of pipeline!".

@satya164

satya164 commented Aug 2, 2018

The bind operator proposal works great for prototype methods, but what about non-prototype methods which the pipeline operator solves? For example, a utility library.

Say I have a utility library which exports various methods. The following will work fine with the bind operator proposal:

import { map, filter } from 'utils'

items::map(x => x + 2)::filter(x => x % 2)

But what about the following?

import * as utils from 'utils'

items::utils.map(x => x + 2)::utils.filter(x => x % 2)

The code stops working depending on the way we import the utilities, because this will now refer to the module context. Granted, this is a nuance of how this works, but I think it's safe to assume that it may not be immediately obvious and is potentially confusing. There is no way to prevent such invalid usage either.

Agreed that developers should be familiar with this because it already exists, but that doesn't mean this is very approachable. I spent significant time understanding the nuances of this when learning, and I still run into mistakes from time to time involving this even though I understand how it works.

I'm just a random developer, I'm not even an "FP folk". I find the pipeline operator easier to use. Sure, to use them with existing prototype methods it's a bit more work when declaring the utility functions (const map = (...args) => arr => Array.prototype.map.apply(arr, args)), but you define them only once and use them several times.
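
For example (a sketch; the wrapper names are just illustrative):

const map = (...args) => arr => Array.prototype.map.apply(arr, args);
const filter = (...args) => arr => Array.prototype.filter.apply(arr, args);

const result = [1, 2, 3, 4] |> map(x => x * 2) |> filter(x => x > 4); // [6, 8]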

Also, regarding the complexity part: you mention that the complexity of this already exists in JavaScript, so we should learn it; however, regarding pipeline's complexity, the complaint is the usage of closures. Closures also exist in JavaScript, so why prefer the complexity of this over the complexity of closures? One distinction to make is that this shifts the complexity towards the call site, whereas closures shift the complexity to the declaration site.

I've used functions here, because that's what you used in the call example. You could, of course, make addToEach return an arrow instead.

That's unfair. You need to use a non-arrow function with the bind operator, but it's fine to use an arrow function with the pipeline operator. Examples should represent how people will use it in the real world, and I'd never use a normal function over an arrow function, at least in the anonymous function you have there.

@mAAdhaTTah

Also worth mentioning that the current pipeline proposals also include support for handling await within a pipeline whereas the :: operator does not (cannot?).

@spion

spion commented Aug 2, 2018

We already have a solution for pipelining for promises. It's called then:

x.then(f).then(g).then(h)

@spion

spion commented Aug 2, 2018

For everyone complaining about using this with lodash: if there can be lodash/fp, there can also be lodash/this

@Fishrock123

We already have a solution for pipelining for promises. It's called then:

x.then(f).then(g).then(h)

This is how to throw your memory and cpu perf into the dumpster.

Us support companies see this crap in the wild. It's awful, performs awful, debugs awful, and people have a terrible time with it.

@spion

spion commented Aug 2, 2018

It performs great with Bluebird and has decently nice long stack traces. That it's still awful with native promises means there is more work to be done on that front, in node and V8. The same work would need to be done for the pipeline operator too.

I'm not sure what you're comparing with, but async/await still debugs awful in node. It's been a year, yet still no stack traces after the first await. I wish I didn't have to use Bluebird and chain then calls, but the alternative is still unusable.

@mAAdhaTTah

@spion:

We already have a solution for pipelining for promises. It's called then:

Chaining promises & supporting async / await aren't the same thing:

array::map(x => x + 1)
  ::requestAsync()
  .then(handleResponse) // we can't use the bind operator on the response :(
  .then(value => value::filter(x => x > 1)) // or you're basically doing the same inline arrows as |>

vs

array |> map(x => x + 1)
  |> requestAsync // or |> await requestAsync(#) in the smart pipeline proposal
  |> await
  |> handleResponse // don't need to operate on the promise directly
  |> x => x.filter(x => x > 1)

For everyone complaining about using this with lodash: if there can be lodash/fp, there can also be lodash/this

The point isn't that we couldn't do it; the point is that it doesn't already exist and would need to be built, whereas not only does lodash/fp already exist, but there's a massive ecosystem of libraries that are already functions and already work with |>.

@spion

spion commented Aug 3, 2018

lodash/fp did not exist before, but it does now. So could lodash/this - it's not that difficult (in fact it can be automatically generated, too). The ecosystem would quickly adjust - it's not a fundamental, difficult incompatibility, just a mechanically different calling convention.

@spion

spion commented Aug 4, 2018

I am completely confused about how x |> await |> handleResponse even works. What does x |> await even do? Does it unpack a promise? What is the return type of that expression, standalone? Can it even exist stand-alone, or is it a syntax error not to add |> anotherFunction after it?

edit: oh I get it, it's actually a special case of passing an operator instead of a function, but it looks like (p:Promise<T>) => T. Clever.

People did have misgivings about the other clever meaning of ::bindOperator. Wonder how they will feel about the above gymnastics.

@jichang

jichang commented Aug 5, 2018

Nowadays, if you write code with ES6 syntax, I bet everyone has written nested arrow functions, so does the complexity of the pipeline operator really come from that? Also, changing the order of parameters feels easier with the pipeline operator than with the call operator.

@theJian

theJian commented Aug 5, 2018

I do prefer the pipeline operator. It's more readable for me and for people who are familiar with FP. Is it possible that the pipeline feels more complex simply because of unfamiliarity?

@mAAdhaTTah

mAAdhaTTah commented Aug 6, 2018

@spion You're missing my point; I'm not suggesting it cannot exist. I'm pointing out that because it does not currently exist, and will not broadly exist until bind gets ratified (or advances), pipeline has an advantage coming into an existing ecosystem already compatible with it. At best, there are a handful of libraries that currently exist that work w/ this (trine being the only one I know of); basically everything that currently exists on npm right now will work w/ pipeline.
