The Immutability Fad

Introduction

The programming community goes through fads regularly, and they start repeating after a while. It’s very similar to the fashion industry, where at any moment there is a force to be both not wildly different than what everyone else is doing (out of fashion) while at the same time being slightly different (fashion forward), which sets up an oscillator on variables like tie width, tight vs. loose fit, etc.

I suspect the programming community is going to see-saw back and forth between the object-oriented (OOP) fad and the functional (FP) fad in perpetuity. OOP was “in” in the 90s and 00s, and FP started taking over in the 10s and is now very much “in”. Just look at all the new GUI frameworks these days. Since programming, unlike fashion, is aimed at solving objective problems, it may seem strange that an engineering discipline would experience the same aesthetics-driven phenomenon. But the same forces are at play: a balance of conservatism (this is the official way everyone does it, and if you do it differently other coders won’t be familiar with it) and progressivism (programming is hard, and if the patterns we’re following are at all to blame for that, doing things differently is the way to make it not hard).

They don’t balance, so a gradual evolution occurs. But it’s not the Whig theory of history: the progressivism doesn’t just plainly make things better over time. The source of today’s difficulty gets misdiagnosed, and a lot of the pitch is simply wishful thinking (trying to make programming Simply Not Hard™ anymore), so it really does come down to “different for the sake of different”. And because the new patterns don’t actually make things any less hard, once they become the norm and it turns out things are still hard, people are going to want to be different again, heading back in the opposite direction.

The primary mechanism by which this occurs in programming is that old patterns get followed for years, giving birth to large, complex years-old codebases that follow those patterns, implementing the behavior of very complex, sophisticated programs (because entire “product” departments of businesses are paid to do nothing but invent more complexity to add), and gradually accumulating cruft, tech debt and design warts due mostly to a combination of being rushed by the business and being built by novice or low-skill programmers.

The revolutionaries present these complex codebases alongside tiny, trivial “sample apps” contrived solely for the purpose of demonstrating their patterns as better, and say “behold! Look how much simpler this tiny sample app is compared to the production codebase you’ve been working on!” People buy this argument, spend years building real production apps with the new patterns, they degrade for the same reasons, and people learn the hard way that they, too, are complex and messy. Rinse and repeat.

This is obviously what’s going on because the propaganda of “Paradigm X is dead, long live Paradigm Y” is laced with wild promises (precisely that the new paradigm will make programming Simply Not Hard™). As time goes on, these sales pitches become easier to believe because they’re mostly refuting the unrealistic sales pitch of the previous paradigm shift. OOP promised to make programming Simply Not Hard™, it certainly didn’t do that, so FP is now here demonstrating (correctly) that OOP failed its promise, giving the discussion an air of authority, and making it easier to believe that FP is the true way to make programming Simply Not Hard™.

I’m not suggesting genuine progress doesn’t happen. It obviously does. No one’s writing articles titled “Compilers are Dead, Long Live Assembly”. There’s little controversy over genuine progress. The controversy is over stuff that’s either largely a matter of personal taste, or a battle of mutual false promises driven by people who don’t understand why programming is hard and that it never won’t be (it’s hard because it’s logical problem solving, where all tiny details need to be worked out and accounted for. This is never going away).

Mutability: Ontological Evil Manifest

One major, if not the central, feature of the FP fad is immutability. Apparently, mutability in computer programs is the source of pretty much anything bad. Mutability causes logic errors, destroys thread safety, makes programs harder to reason about, contributes to global warming and probably harbors extremist political views.

You know this belongs to the “oscillating fashion trend” category because of these ridiculous promises, like that if you program in an immutable functional style you’ll basically never have logic errors or thread safety issues. I’ll explain later the specific fallacy people use to justify this claim.

I’m not saying this because I’m an OOP fanatic. I think both of them are fads.

I’ve always found “mutability is bad” to be a strange assertion. Taken to its logical conclusion, yes a program working with only immutable state would surely be much, much easier to reason about and not suffer from any logic errors or race conditions… because it wouldn’t do anything. A completely immutable GUI app is otherwise known as a PDF. It just sits there. It displays state that doesn’t and can’t change while the program is running.

But this has to be a straw man. The FP crowd isn’t telling people to stop writing, well… programs. They’re claiming that somehow behavior, dynamism, change, or what I ask you to please recognize is a synonym for all of those: mutation, can be achieved in a paradigm of total immutability. It sounds like a naked contradiction. What’s going on here?

Well, if I have some state, S1, even if it’s immutable, I can create new state, S2, by calling a function that takes S1 as an input and returns S2 as an output. I didn’t mutate S1, I just created a similar but different-in-a-few-ways state S2. S2 is also immutable. If I want something in my program to change, I create another new app state, S3, from S2.
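In JavaScript terms, this is just a pure function from one frozen value to the next. A minimal sketch (all names here are hypothetical, for illustration only):

```javascript
// "Change" without mutation: a pure function takes S1 and returns S2.
const s1 = Object.freeze({ count: 0, title: "Hello" });

function increment(state) {
  // Build a new frozen value; the input is never touched.
  return Object.freeze({ ...state, count: state.count + 1 });
}

const s2 = increment(s1);
console.log(s1.count); // 0 — s1 is unchanged
console.log(s2.count); // 1
```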

Okay fine, each instance of this app state is immutable, but if what I’m seeing on screen is supposed to change, then certainly the state of the UI is changing. Somewhere the UI has to track the current app state, and that is going to mutate from S1 to S2 to S3, right? I haven’t eliminated mutability, I just concentrated it in the reference to a “current” value instead of in the individual fields of the value (we’ll explore the implications of this difference more below).
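Concretely, the mutation hasn’t vanished; it has been squeezed into one place: the binding that tracks which value is current. A sketch (hypothetical names):

```javascript
// Every state value is frozen; the only thing that mutates is the
// variable tracking which value is "current".
function increment(state) {
  return Object.freeze({ ...state, count: state.count + 1 });
}

let current = Object.freeze({ count: 0 }); // S1
current = increment(current);              // the reference now points at S2
current = increment(current);              // ...and now at S3
console.log(current.count); // 2
```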

Ah, but we continue the argument into the UI! The UI state isn’t mutating, there’s another function that takes an app state S and returns a UI state U. For each app state S1, S2 and S3, each one is put into this function to create corresponding UI states U1, U2, and U3. Just as the app state is not mutating but simply new slightly different copies are being created, so too is the UI state not mutating, new copies are just getting created.

Um… okay (I’m starting to get slightly annoyed now, you’ll see why soon). So you make new immutable copies of the UI state every time the app changes (*cough* mutates *cough*) what it displays. But you’ve just shifted the problem again, not eliminated it. Only one of those states is going to be sent to the frame buffer, which is what the computer hardware is actually drawing on my computer monitor. So the frame buffer is still going to have a current UI state, and that is going to be mutated by setting it to U1, then U2, then U3, etc.

But you don’t understand! We simply keep pushing forward in the same way. The frame buffer isn’t mutable! There’s a function that takes a UI state as an input and returns a frame buffer as an output. Each time a new copy of the UI state is created we just use that function to make a new copy of the frame buffer.

Oh for f*ck’s sake…

Alright, so you’ve got copies of the frame buffer being created. You still have to decide which one is read by the computer hardware and sent to the monitor at any given moment. You have to have a current frame buffer, and that will mutate!

This is where it’s going to get plainly ridiculous. This isn’t a straw man. Hold onto your hats. Here we go:

There isn’t one monitor! As every moment in time passes, a new copy of your computer monitor is created, each one potentially being slightly different than the last copy. New copies of your monitor are being created on every tick of time in the physical universe in which we live. Those copies are created by calling a function that takes the copy of the frame buffer created from the copy of the UI state created from the copy of the app state, and since each copy can be slightly different than the previous one, each copy of the monitor for each moment in time will be displaying something slightly different.

Reality is, in fact, entirely immutable! Mutation is just a pedestrian way of modeling things. The brilliant FP philosophers have realized that nothing changes, instead for every instant in time there is a copy of the entire universe, created from a function that takes the previous copy as an input. “Mutability” would mean you could go back in time and edit the past. But you can’t! The past, meaning already constructed copies of the universe, is immutable. You can only move forward in time, which means creating more slightly different copies of the universe that existed in those past moments.

I have a phrase that I do not hesitate to use when I start hearing stuff like this, and I spent four years in college, so I’ve heard plenty of it:

“This is so stupid only a smart person could have come up with it.”

I mean, seriously, does anyone really conceive of the world like this? If you paint your car, you aren’t changing the color of your car, you’re just causing the copies of your car that get produced every Planck time unit to be slightly different from the previous one by having one extra microscopic stroke of new paint? And you believe that’s anything but an extremely convoluted and confused way to say “you changed the f*cking color of your car”?

But let’s be fair, I’m just having an emotional reaction to this. One that I think is entirely warranted (who’s more foolish, the fool or the fool who debates him?), but nevertheless my “reeee”ing doesn’t count as a refutation. So let’s formally refute this once and for all.

Even if it is true that the universe creates copies of the entire state of all physical objects and sticks them in a dictionary mapping instants in time to states of all physical objects, this still fundamentally cannot eliminate mutability. First of all, does this dictionary mutate each time the clock ticks and a new moment in time is created? Well, maybe you’re an ultra-Calvinist and you think everything is perfectly predetermined, the dictionary is already fully built up from the beginning to the end of time itself. If not, you’d have to concede that on every tick, new copies of this dictionary are created, discarding the previous one that contained all time-state pairs up to the previously latest moment in time, with the new one containing all the same time-state pairs as the previous, plus a new one for the now latest moment in time.

Regardless, we now come to the crux of the matter. Whether new dictionaries are being created constantly, or there’s one ultra-predetermined dictionary, there’s still one thing in this ultra-immutable universe that must be mutable:

Now

Whatever instant in time is considered to be “now” has to mutate. The clock has to tick. Trying to model this as immutable gets stuck in an infinite regression. “No, the moment that is now is copied on every new moment”, but you have to pick which one is the actual “now”, instead of a historical “now”. If all the “now”s are stored in an array where each new copy is one larger than the last, you still have to say that the true “now” is the last element of the array… but which array (what if you hold all the arrays in another array, and say the current one is the last element? Again, that array changes when a new array is added, and we’re back to the same problem)? All the historical ones are still out there. You have to assign a “latest”, and that’s a mutating variable.

Honestly we’re doing rigorous super-abstract philosophizing to remind ourselves of what everyone already knows: things change. You aren’t experiencing all moments in time. You experience a particular moment at a particular moment (this is already becoming unworkably circular), and that particular moment changes. That’s what time means. Time is the parameterization of mutation (our minds recording snapshots of the changing state of physical objects and storing them in order). Modeling the universe as copies being keyed by moments in time is failing to recognize what time is. At best those keys are proxies for time, not time itself.

Yes you store all the frames of a movie sequentially in memory somewhere, where the index of a frame indicates the time of the frame. But that data is static. Those characters aren’t alive, they aren’t moving, they don’t do anything. The index isn’t literally time, it’s literally an index in an array, an offset (meaning a location in space) in memory. The only way to make it come alive is to play the movie, which means to create a connection between that index and literal time: you use actual time, what an actual clock tells you, to pick a frame to show. Because clocks tick, because the time they read changes, the displayed frame changes. Suddenly the movie becomes dynamic, a living simulation of whatever was filmed, not a dead one that sits still in memory banks.
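The “playing” step just described can be sketched directly: the frame data is static, and only a reading of an actual clock turns it into motion (names here are hypothetical):

```javascript
// Static frame data: nothing here moves on its own.
const frames = ["frame0", "frame1", "frame2"];

// Connect an index to literal time: given a start time, the current
// clock reading, and a frame rate, pick which frame to show now.
function currentFrame(startMs, nowMs, fps) {
  const index = Math.floor((nowMs - startMs) * fps / 1000);
  return frames[Math.min(index, frames.length - 1)];
}

// Because nowMs comes from a ticking clock, the displayed frame changes.
console.log(currentFrame(0, 0, 24));   // "frame0"
console.log(currentFrame(0, 100, 24)); // "frame2"
```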

You can never eliminate the mutability of “now”. If you did, the universe would simply be the frames of a movie sitting in memory banks somewhere, but with no external time (there’s nothing external, this array of frames is everything that exists), there’s no way, it’s not even meaningful, to “play” the movie. This describes a dead universe where nothing happens, nothing changes, nothing mutates.

Of course it does. Of course immutability implies nothing mutates! Come on!!

What Does “Immutable” Actually Mean?

Now, what was the point of that exercise? I thought we were trying to write software, not get stoned and talk about the meaning of existence.

Strained sophistry notwithstanding, it’s plainly not true that there are multiple frame buffers. There’s one (well, probably two or three, to prevent visual tearing), it has an address in memory, and its content is changed (mutated) by your running app. The “interpretations” to make that immutable only prove that “viewing things as immutable” in this way is actually not different than viewing things as mutable. The whole distinction becomes almost meaningless. Where and how it remains meaningful is what we need to talk about now.

It’s not hard to spot the obvious contradictions in articles about this stuff. Look at this article, not just at the title but within:

However, if you’re using Redux, you need to know how to modify your state immutably (without changing it).

Does anyone else not find this sentence hilarious? How to change something unchangingly without changing it!?

This plainly contradictory phrase appears in none other than the Redux docs themselves:

It’s also not obvious just by looking at the code that this function is actually safe and updates the state immutably

Update something immutably!? What are these guys smoking?

Yes I know what it actually means, but they’re obviously using the wrong words for it. None of this has anything to do with immutability. To say that in redux, you can only mutate the app state through reducers, is saying something meaningful, and in fact very important, it just isn’t about immutability. The app state can obviously mutate. But no one has unfettered permission to do so.

See, that’s the point. Mutability is controlled. You don’t just give naked write access to all the fields inside your app state to everyone who wants to read the app state. You make them go through functions that limit how mutations occur. You protect your app state by making sure it mutates only in a limited set of ways that you allow.

Now someone’s going to tell me I’m wrong, that in Redux the app state really is immutable, and that reducers don’t mutate state, they emit new values of state. Yes there’s a real distinction there, we’ll talk about that below, but I’m not wrong to say the state that comes out of the reducer gets reassigned back to the mutable reference-semantics state variable inside the Store, and thereby mutates what the app considers to be the state.

The difference in Redux isn’t the mutability of the app state. It’s who is allowed to do what mutations on the app state. We have a word in programming for this concept, and it’s not “immutable”. It’s “encapsulation”.

All this is telling us is that the app state is encapsulated. It’s a specific type of encapsulation in redux patterns: reading is not encapsulated, but writing is. You could choose to encapsulate reads as well, but the overarching point of this really seems to be that encapsulating mutation is far far more important than encapsulating reading.

I absolutely agree.

But see, “encapsulation” is one of the “four principles of object-oriented programming” (whether it’s legitimate for OOP to lay claim to it is another matter), so FP fanboys can’t be caught using that word.

That’s what’s kind of funny, and at times confusing, about these discussions. They get framed as big philosophical defenses of paradigms that, taken at face value, are totally wrong, but the framing itself is also wrong, and these two wrongs end up producing rights: it’s wrong to say that immutability solves these problems, and it’s wrong to frame redux’s limitation of mutation as a form of immutability (obviously), and these combine to a correct statement “it’s good to protect writing to your app state much more strongly than reading from it”.

Of course there are plenty of ways to do this that don’t require reducers. Make your fields public get and private set, and define your “reducers” to just be public functions on your app state type. The only difference between this and redux is that you don’t save copies of your state before updating it. But you easily could, just make it observable, subscribe to it and save every published update to an array somewhere.
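That alternative might look something like the following sketch (hypothetical names, and a minimal stand-in rather than a full Redux replacement): a private field, a public getter, “reducers” as ordinary methods, plus an array of historical snapshots.

```javascript
class Counter {
  #count = 0;
  #history = [0];

  get count() { return this.#count; }          // public read
  get history() { return [...this.#history]; } // a copy, so callers can't edit it

  increment() {                                // controlled write
    this.#count += 1;
    this.#history.push(this.#count);
  }
}

const c = new Counter();
c.increment();
c.increment();
console.log(c.count);   // 2
console.log(c.history); // [0, 1, 2]
```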

I’m not saying you should or shouldn’t pick one or the other, I’m just saying the raison d’être of redux is not at all an essential part of that pattern, it’s something that’s achievable in a myriad of ways in all widely used programming languages in the industry. Once that’s clear, it really makes you wonder: what’s the point of all of it? If reducers and actions are just a (very convoluted if you ask me) way to create data that’s publicly readable but not publicly writable, with some public methods for modifying it, why not just make a class?

Well, there’s a frequent assertion I hear from the “OOP sucks, let’s all write enterprise software in Haskell” people: that the hallmark of OOP, and the principal reason it sucks so hard, is that it “pairs data and functions together”. The article by Joe Armstrong is a canonical example.

The complaint isn’t about the irrelevant syntax difference between x.doThis(y) and doThis(x, y), or the fact that functions for a class are written inside the class, next to data. No, functions and data are “paired” in a more significant way than the mere mechanics of OOP languages: the methods are the only functions that pierce the encapsulation boundary of the object. Any language that supports encapsulating data will support pairing functions with that data (otherwise, that data becomes literally inaccessible to every part of the program, and might as well not exist).

Now, the weird part about this is that the functional languages they praise often support this. Both Haskell and OCaml call them “modules”, and they exist to create encapsulation, which is the fundamental mechanism of creating abstractions, i.e. an X that is simply an X and not the sequence of bytes that happen to be used, today, to implement an X. This is literally pairing data with functions, in the exact same way that an OOP class closes over its fields (data) and members (functions). I don’t think it has ever been stated by FP language designers that encapsulation, which involves pairing data and functions, is somehow incompatible with a paradigm of immutable values and pure functions. The two seem quite unrelated.

In that article Joe Armstrong, after declaring that data and functions “should not be bound together” (because they aren’t identical concepts… I’m not sure how “X should not be bound with Y” logically follows from “X is not Y”), goes on to directly attack encapsulation. This section is honestly headache inducing. The argument seems to be: FP languages eliminate all state from a computer program by exposing all state as input parameters and return values of every function, which hides all the state by giving you full access to the state, which is good because hiding state is bad.

It was reading stuff like this that made me start to think all the Redux immutable functional stuff I keep encountering, and consistently not knowing what the hell the point is, is just a bunch of deeply confused gibberish from people who’ve managed to convince themselves they write computer programs that do something other than mutate state, and defend this by saying everyone through all of history has fundamentally misunderstood how the universe works and really there’s no such thing as state or change at all (if that’s true, surely OOP programs aren’t mutating state either, because state and mutation don’t exist, they’re just emitting new copies of the heap and stack on every clock cycle).

The Real Problem: Reference Semantics

Anyways, I can’t help but conclude that the insistence on doing Redux, instead of just writing a type with private fields and controlled methods for modifying those fields, is due only to this confusion: pairing mutable state and functions into a class is an anti-pattern that causes unnecessary bugs, but we need encapsulated data that can change over time, so we still need a way to do this, but in a way where we can, with a liberal dose of squinting, tell ourselves that everything is immutable and we’ve kept data and functions completely separate.

Look at this diagram:

This is literally just a class. The “state” is a private field, and the “actions” are methods:

class State {
  #account = { ... };

  deposit(amount) {
    this.#account.balance += amount;
  }

  ...
}

Wrap it in an Rx Observable (or some comparable equivalent) and now you’ve got your subscribeTo capability, which you can use to store a growing array of the historical values.

There’s a problem though.

Recall that Redux originated in JavaScript. Well, because of the way objects in JavaScript (and, in fact, most OOP languages) work, simply wrapping a State instance in an RxJS Observable won’t correctly publish an update event when we call one of the methods:

const state = new BehaviorSubject(new State());

state.subscribe(state => console.log(`New state: ${state}`));

state.value.deposit(10); // Doesn't trigger the log statement

Why not? Well, because of “mutability”… but that’s not the whole story, and saying it like that is confusing because we’re trying to mutate the state, how could mutability be the problem!?

No, the problem is that the mutability is on the internal #account field. Actually, that’s not even right. What we’re mutating is the balance member inside the #account. The State instance itself doesn’t change; not even #account changes. Notice that we’re not calling .next anywhere to publish a new value. We’re reaching inside the BehaviorSubject and getting its value, which is “read-only”, and indeed we’re only reading it. But then we’re calling deposit, which doesn’t mutate this, it mutates stuff inside of this.

Really, making value read-only is almost useless, at least when it’s holding an array or object. Claiming that the value is read-only, but that you can reach inside of it and change whatever you want (as far as the type’s encapsulation allows) is borderline fraudulent. It’s not necessarily bad modelling that I can go through an object’s member and make modifications to it while claiming I didn’t modify the object, but in many cases it most certainly is bad modelling.

For example, consider me as a value. I have two members: my left arm, and my best friend. If you replace my original left arm with a cyborg arm, have you mutated me? Absolutely! If you go to my best friend and replace his arm with a cyborg arm, have you mutated me? No. What’s the difference? My leftArm member is a value, the arm itself, but my bestFriend member is not the person himself (including all his members, like his left arm), it’s a reference to the person. Changing something about the person who happens to be my best friend doesn’t change me. What would change me is to change who my best friend is: that is, to change the reference itself stored in bestFriend.
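In code, the distinction this example draws is invisible to JavaScript: both “my arm” and “my best friend” are references, and only our interpretation differs. A hypothetical sketch:

```javascript
const bob = { name: "Bob", leftArm: { kind: "organic" } };
const me  = { name: "Me",  leftArm: { kind: "organic" }, bestFriend: bob };

// Intuitively this mutates me: my arm is a part of me...
me.leftArm.kind = "cyborg";

// ...and this does not: my best friend is a separate person.
me.bestFriend.leftArm.kind = "cyborg";

// But to the language both are the same operation: a write through a
// reference held in a member. The value/reference distinction exists
// only in our heads.
console.log(me.leftArm.kind);            // "cyborg"
console.log(me.bestFriend.leftArm.kind); // "cyborg"
```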

This is the well-known problem of “value vs. reference types”. Objective-C, Java, C#, Swift and JavaScript treat the distinction on the metatype level (they have types of types that are values and references respectively, i.e. struct and class), and only C# and Swift let you define custom value types (unless you count custom C structs in Objective-C, which you usually don’t because they don’t interop well with Objective-C). For the rest, the only value types are the handful of primitives that come with the language. C/C++ treat the distinction on the type level (all types have value semantics, but some primitive types, namely pointers, provide referential access to other values. C++ references are a sort of unique oddball). Kotlin defines a value type as a class that obeys certain conventions (strictly speaking they always have reference semantics, but if you make them immutable enough you can’t tell that they have reference semantics… interesting, there’s that word “immutable” showing up in a discussion of references vs. values).

In JavaScript, an object has to be a “reference type”, which means you can’t actually work directly with object values. When you assign a variable to an object, you’re really assigning the variable to a reference to an object instance. This means you have no choice but to model members of one object that are themselves objects as being references. If you model an arm in JavaScript as an object, then my leftArm can’t be the arm itself, it has to be a reference to an arm, in the same way bestFriend is a reference to my best friend. Then, just as modifying my best friend doesn’t modify me, neither does modifying my arm modify me. Sure, replacing my arm, as in assigning my leftArm to refer to another arm instance modifies me, but I can go inside the existing arm reference and change stuff without changing me.

The problem repeats on every level. If an arm is an object made up of members, if those members are also objects, you can once again go inside the referenced instances and change stuff without changing the arm, and repeat if those objects’ members are once again objects.
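One concrete consequence: even freezing the top level protects nothing underneath it, because Object.freeze is shallow and the nesting repeats the problem at every level:

```javascript
const arm = { hand: { fingers: 5 } };
const person = Object.freeze({ leftArm: arm }); // top level is "immutable"

// Reassigning the member itself is blocked (silently ignored, or a
// TypeError in strict mode):
// person.leftArm = { hand: { fingers: 6 } };

// ...but reaching through the reference is not:
person.leftArm.hand.fingers = 4;
console.log(person.leftArm.hand.fingers); // 4
```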

Being forced to model all complex objects, on every level, as references, is a major restriction in a language’s ability to model things. And it is a direct consequence of OOP fanaticism in the 90s: if “everything is an Object“, well what exactly is an Object? It’s not just a synonym for “thing”, that would make it meaningless. This means a lot of things, including that literally every type you can define inherits from an Object type… and this implies one key capability of Objects is inheritance, which is assumed to imply polymorphism (that if I have a variable of type BaseType, I can assign a DerivedType instance to it), and the straightforward way to implement variables that work this way is as references (literally as pointers, which is how C++ and Objective-C did it, and Java imitated C++ in this way).

So, there’s a legitimate point buried under all the pseudo-philosophical gobbledygook about “stateless immutable programs”: a misplaced insistence that “everything must be an object”, which specifically means every variable must really be a polymorphic reference, caused mutability to be shattered and spread across the model when in many cases this is incorrect and mutability of parts should be equivalent to mutability of the whole, which also implies those parts should have value semantics: copying the whole should also copy the parts.

The problem is not the existence of “mutable state”: that’s called a computer program. The problem is an overly restricted, and therefore often wrong, model of mutability, wherein the parts of something can always be mutated without considering it a mutation of the thing itself, and there’s no way in these languages to express that no, that’s not right, in this case they are one and the same.

Since we can’t express this directly in these languages, what do we do instead? We enforce it in a more ad-hoc manner. We simply can’t make object members have value semantics. But what problems does that cause? It makes it possible to mutate those objects even on “immutable” (read-only) instances of the containing object, and it means when we copy the containing object the member objects don’t also get copied.

The second problem isn’t really a problem because it’s reference semantics all the way: the containing object never gets copied in the first place, unless we specifically call a copy function in our code. Since we must define this copy function ourselves, we can decide there that it will copy the member objects too.

The first problem is about mutability of the members. We fix this with encapsulation: make it impossible, from the outside, to mutate the members without also mutating the containing object, and then simply avoid doing such mutations within the class itself. What does that take? In our example above, the trouble comes from the deposit function that mutates the #account field. We must not write any methods that mutate fields. We instead replace the “mutating” method with an “immutable” flavor of it, which creates a new value:

class State {
  #account = { ... };

  deposit(amount) {
    const result = new State();
    result.#account = this.#account;
    result.#account.balance += amount;
    return result;
  }

  ...
}

But this isn’t correct yet. We’re assigning the #account of the copy to our #account, which means we’re assigning the reference. That means we’re sharing the value: when we mutate balance, it affects both the copy and us. We have to also copy the value of #account first. Well, how do you do that? If you did a memberwise assign with a spread ({ ...this.#account }) you’d copy the top-level object, but not any members that are themselves objects. You’d have to do the same to those, and so on.

Is there a way to do this in general for any object in JavaScript, so that a deep copy is made on all levels of members, ensuring you have a fully independent instance? Yes: serialize it to JSON, then deserialize it back to an object. But that also might not be correct. Some of those members might genuinely be references (this doesn’t make the containing object mutable as long as the object being referred to follows the immutability rules itself), and you don’t want to copy those. Ultimately, you might have to make a bespoke copy function for your structure that understands at each level whether an object is a reference or a value. This is because the language has no idea about the distinction and can’t discover it.
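The JSON round trip just mentioned, with its caveats made visible (it only handles plain data: functions are dropped, Dates become strings, and genuine references get copied whether you want them to be or not):

```javascript
const account = { balance: 100, owner: { name: "Alice" } };

// Deep copy on every level: serialize, then deserialize.
const copy = JSON.parse(JSON.stringify(account));

copy.owner.name = "Bob";
console.log(account.owner.name); // "Alice": fully independent
console.log(copy.owner.name);    // "Bob"
```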

This is already turning into a major pain in the a**. So I’m starting to have more sympathy for JavaScript developers inventing the Redux stuff instead of dealing with all of this.

Anyways, once we are properly copying the #account object, we have to actually assign this value, which for the BehaviorSubject means calling .next:

const state = new BehaviorSubject(new State());

state.subscribe(state => console.log(`New state: ${state}`));

state.send(state.value.deposit(10)); // Triggers the log statement!

What if we want to be able to read part of the state, like the value of #account, from the outside, but not be able to write to it? If #account is an object, returning it in a getter returns a reference to it, and then we can fiddle with its insides. Instead we need to return a copy. Luckily we already had to figure out how to copy the thing, so we just use that:

class State {
  #account = { ... };

  get account() {
    return this.#copyAccount();
  }

  deposit(amount) {
    const result = new State();
    result.#account = this.#copyAccount();
    result.#account.balance += amount;
    return result;
  }

  ...

  #copyAccount() {
    ... // Do whatever it takes here
  }
}

This way, the object that comes out is guaranteed to be independent of the one inside the State object, so no fiddling with it will affect that object.

Alternatively, and I think this is a better approach, we can define classes at every level and design them all to be immutable. The custom copying logic, i.e. the question of which members have value vs. reference semantics, then manifests in which members are immutable types and which are not. If you want to alter an immutable-type member inside of an immutable type, you have to make a new instance of the member type, and then a new instance of the parent type. That means you write a similar function on the member type that creates a near-copy, or “mutator” (it returns a new value that is identical except for the one part you specify to be different), and this repeats all the way down to primitives.

This shifts the custom copying logic. Instead of deciding which members need to be deep-copied when making a copy (with those members in turn defining what deep copying means for them), different copies share instances of members; but since those instances cannot be modified in any way, they act as semantic copies (you can’t tell they’re being shared), and you only make a deep copy when you need a modified instance. Both approaches achieve the same result: distinguishing members that are values themselves from members that are really references.

Here’s an example with the “mutator” methods that produce near-copies:

class State {
  #account;
  ...
  
  constructor(
    account, 
    ...
  ) {
    this.#account = account;
    ...
  }

  get account() {
    return this.#account; // No need to copy, because Account is immutable
  }

  deposit(amount) {
    return new State(
      this.#account.deposit(amount),
      ...
    );
  }

  ...
}

class Account {
  #balance;

  constructor(balance) {
    this.#balance = balance;
  }

  get balance() {
    return this.#balance;
  }

  deposit(amount) {
    return new Account(this.#balance + amount);
  }
}
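
To see what this buys us, here’s a usage sketch (re-stating minimal versions of the two classes so the snippet stands alone):

```typescript
class Account {
  constructor(readonly balance: number) {}
  deposit(amount: number): Account { return new Account(this.balance + amount) }
}

class State {
  constructor(readonly account: Account) {}
  deposit(amount: number): State { return new State(this.account.deposit(amount)) }
}

const s1 = new State(new Account(100))
const s2 = s1.deposit(10) // a new State holding a new Account

// s1 is untouched: every "mutation" produces a new value, and any instances
// the two states do share are immutable, so the sharing is undetectable.
console.log(s1.account.balance, s2.account.balance) // 100 110
```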

If we upgrade to TypeScript (which of course you’ve done, right? Right?), it’s even better, because we can actually (sort of) enforce some of this “immutability”, or at least express that it’s the intention. That also eliminates the need for getters; we can expose fields directly:

class State {
  readonly account: Account

  constructor(account: Account) {
    this.account = account
  }

  deposit(amount: number): State {
    return new State(this.account.deposit(amount))
  }
}

class Account {
  readonly balance: number // Oh yeah, storing a bank balance in floating point format, what could go wrong!?

  constructor(balance: number) {
    this.balance = balance
  }

  deposit(amount: number): Account {
    return new Account(this.balance + amount)
  }
}

You can be sure that any class you write is immutable as long as it is composed only of primitives and readonly class/array members… but keep in mind that those class/array members will have reference semantics unless you ensure the type of that member is an immutable type. For example, the array type you get out of the box in JavaScript is not an immutable type, so if you have an array member of your class, even marked as readonly, it is a reference to a particular array. If you can obtain the member through a getter, you can then start modifying the array.
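
Here’s that pitfall in miniature (a sketch; the class name is mine):

```typescript
class Tags {
  constructor(readonly values: string[]) {}
}

const tags = new Tags(["a"])
// tags.values = []     // compile error: readonly blocks reassignment...
tags.values.push("b")   // ...but not mutation of the referenced array
console.log(tags.values) // ["a", "b"]
```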

If you want an array (or tuple) that behaves as a read-only value, TypeScript gives you one and even supports nice syntax for it (readonly T[] and readonly tuple types). It also gives you read-only (immutable) versions of other collections, like ReadonlyMap and ReadonlySet, and a Readonly&lt;T&gt; wrapper for any type. Keep in mind these are compile-time guarantees only; at runtime the underlying objects are still mutable.
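
For example (all of these are compile-time checks; nothing changes at runtime):

```typescript
const xs: readonly number[] = [1, 2, 3]
// xs.push(4)  // compile error: 'push' does not exist on 'readonly number[]'
// xs[0] = 9   // compile error: index signature only permits reading

const pair: readonly [string, number] = ["a", 1] // read-only tuple

const m: ReadonlyMap<string, number> = new Map([["a", 1]])
// m.set("b", 2)  // compile error: 'set' does not exist on ReadonlyMap

type Config = Readonly<{ retries: number }>
const cfg: Config = { retries: 3 }
// cfg.retries = 4  // compile error: 'retries' is a read-only property
```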

If you aren’t using TypeScript you have to build a read-only array yourself:

class ReadonlyArray {
  #values;

  constructor(values) {
    this.#values = values;
  }

  *[Symbol.iterator]() {
    yield* this.#values;
  }

  get(index) {
    return this.#values[index];
  }

  set(index, value) {
    let values = [...this.#values];
    values[index] = value;
    return new ReadonlyArray(values);
  }

  append(values) {
    return new ReadonlyArray([...this.#values, ...values]);
  }

  remove(index) {
    let values = [...this.#values];
    values.splice(index, 1);
    return new ReadonlyArray(values);
  }
}

Before you get any crazy ideas (and despite what the FP fanatics say), you can’t, and wouldn’t want to try to, make everything an immutable value type. Your program is going to need to work with identifiable things. You need to choose judiciously which members of which types are values (they obey the immutability rules) and which are references (they are mutable arrays/objects), which you do by choosing immutable or mutable types for those members. But what if you want to define a particular type and use it as a value in some places and a reference in others? Following this approach, you’d have to write two versions of the type, an immutable flavor and a mutable one (much like the two flavors of arrays in TypeScript).

Another approach is to define all of your types as immutable, making them represent values not references, then define a dedicated type for a reference:

class Reference<T> {
  protected _value: T

  constructor(value: T) {
    this._value = value
  }

  get value(): T {
    return this._value
  }
}

class MutableReference<T> extends Reference<T> {
  // Redeclare the getter: defining only a setter here would shadow the
  // inherited getter with an accessor pair whose getter is undefined.
  get value(): T {
    return this._value
  }

  set value(value: T) {
    this._value = value
  }
}

This allows you to more finely control encapsulation. You can define a MutableReference&lt;T&gt; instance somewhere, and hand that out to whoever should have write access to that identifiable object. Then you can hand it out upcast to a Reference&lt;T&gt; to anyone who should only be allowed to read it (but will see modifications those with write access make to it). Done with inheritance, this is easy to defeat by simply downcasting, but if that’s a concern to you, you can do this with composition as well (get rid of the inheritance and make Reference&lt;T&gt; hold a private MutableReference&lt;T&gt; member that it forwards the value getter to).
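
A sketch of that composition variant (this one can’t be defeated by downcasting, since there is no subclass relationship):

```typescript
class MutableReference<T> {
  constructor(public value: T) {}
}

// Read-only view that forwards reads to a private mutable reference.
// There is no setter to defeat and no inheritance to downcast through.
class Reference<T> {
  constructor(private ref: MutableReference<T>) {}
  get value(): T { return this.ref.value }
}

const writable = new MutableReference(42)
const readable = new Reference(writable)

writable.value = 43
console.log(readable.value) // 43: readers see writers' changes, but can't write
```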

This approach is really an attempt to emulate what C++ gives you out of the box: define all your types like MyType to have value semantics then use MyType* (or a smart pointer like shared_ptr<MyType>) as a mutable reference and const MyType* (or a smart pointer like shared_ptr<const MyType>) as an immutable reference.

Well, if the solution to the problem is to make State (and sub-objects, and so on) an “immutable” type, then was the problem actually “mutability”? Well, not the existence of mutability, but its distribution. We aren’t eliminating mutability, we’re bringing several little islands of mutability together into a big island nation of mutability.

Since the language makes it so difficult to model deep trees of aggregates as a single value whose parts cannot change without that constituting a change to the whole, yeah… Redux starts to make a little more sense. But it’s ultimately not solving the problem. In Redux you get a “copy” of the current state passed into the reducer. Well, that’s whatever type of object you initially specified, and if you fiddle with its insides, you’re fiddling with the internals of the store’s private state instance. And then stuff just breaks. It still comes down to “just don’t do that”. If you want to prevent this, you have to go through the trouble of defining classes that follow the immutability rules.

So… this doesn’t really make things any better, and you have to do silly stuff like define enums with a 1-1 correspondence to functions, and then write out the switch statement confirming that the two are, in fact, 1-1 (why define a new concept just to make it equivalent to an existing concept and then spell out the identity mapping between them?). Maybe this is because people got a vague notion that the original problem has something to do with mutability, they heard about these “functional languages” where everything is immutable and it’s impossible to have mutability bugs, and the way you model the state (which mutates as the inputs come into the program) in a functional program is as a list built out of a reducer over the sequence of program inputs. So then the solution to this mutability issue in JavaScript must be to model application state with reducers the same way you’d model it in a Haskell program.

Blaming mutability means not thinking about this carefully: the problem is not mutability itself but reference semantics, and the implications it has for how mutability is distributed and linked (or not linked) together. Understanding that this is the root cause makes the proper solution clear: figure out how to model aggregates as value types. The fundamental mechanism of value semantics is copying, so you either define the copy function for your state object, or use immutability to ensure that shared references act as semantic copies (you can’t tell they’re not literal copies, because you can’t modify one “copy” and see the modification in the other) and write “near-copy” (mutator) functions for mutations, which require any mutation to something deep inside your app state to cascade all the way up to a mutation of the entire app state.

Is It Just JavaScript?

How do you deal with this in other languages? In Objective-C and Java, it’s the same as JavaScript: you make “immutable” classes with hand-written copy-mutate methods and compose them. Objective-C makes this pattern more canonical as its core collection types (NSArray, NSDictionary and NSSet) are immutable (and have those hand-written copy methods). In C#, you get true value type aggregates called structs, which alleviates you of having to write the copy function yourself because the compiler can tell (by your choice of struct vs. class) whether a variable is the value itself or a reference and can therefore write the correct copy function for you.

In all these cases, syntactically this means replacing in-place mutation style code with whole-value replacement through immutable methods. This:

state.account.amount += 10

Becomes this:

state = state.withAccount(state.account.withAmount(state.account.amount + 10))

This is pretty close to the TypeScript example above. You’ll probably write a lot of helper functions in places where you have to reach down through multiple levels. That’s what the deposit methods are.

C# has this awkward problem where, by default, you can write the code in the first example, but it ends up creating a copy, mutating it, and throwing it away. More specifically, when you call .account, assuming the result is a struct, you get a copy, not the one inside state; you change its amount, but it’s an unnamed temporary that just gets immediately thrown away. I’m pretty sure the compiler at least warns you about this. You can prevent it by marking the structs as readonly, which bans the first line entirely and forces you to write the second line or some equivalent. It’s unfortunate this isn’t the default, so the conventional wisdom now is to just mark all your structs as readonly.

C++ is on par with C# here: you pick values vs. references by making the types of your members either T itself or T* (or some pointer-like type such as unique_ptr<T> or shared_ptr<T>). But it’s more powerful in two ways: first, it writes the default copy constructor for you, but you’re allowed to customize it too. You need to make sure you follow const correctness, i.e. if you return references to private fields, make sure it’s a const reference. Second, you immediately get to declare any field of any type to be a value or a reference, by choosing between T itself as the type or one of those above-listed “reference to T” types you get for free. Since it’s trivial to give a type reference semantics, a good rule of thumb in C++ is make all your types have value semantics (if it doesn’t make sense to copy it, delete the copy constructor), instead of writing classes that have reference semantics you can’t strip off.

As I mentioned, you can emulate this in C# (and Swift, which has the same struct vs. class dichotomy), by making everything a struct and writing a generic Reference<T> class that just holds a struct member, to use wherever you need reference semantics. You can’t change the fact other code doesn’t do this, but you can make your code follow this pattern. It’s an idea I’ve been increasingly exploring, particularly in Swift. The main downside is you can’t suppress copying… but Swift 5.9 introduced non-copyable structs!

The language that I think best solves this problem is Swift. The language designers realized that people want to write the in-place mutation style of code, but have that actually mean whole-value replacement that doesn’t sneak around the const-ness of the top-level value. So they defined mutating functions on structs (value types, like in C#) to work this way via a special inout function parameter. Since the “immutable style” so often involves reading a value, calling a function to produce a new related value, then assigning it back to where it came from, they formalized this pattern directly into the language. This way, you can write the in-place mutation line and it will, in fact, replace the whole value and call whatever setter the top-level read-write variable defines… which implies if the top-level variable is read-only (doesn’t have a setter) you aren’t allowed to write this line.

This:

struct State {
  private(set) var account: Account = .init()

  mutating func deposit(amount: Decimal) {
    account.deposit(amount: amount)
  }
}

struct Account {
  private(set) var balance: Decimal = .init() // Ahh, that's better.  No more mystery pennies.

  mutating func deposit(amount: Decimal) {
    balance += amount
  }
}

let state = CurrentValueSubject<State, Never>(State())
let subscription = state.sink { state in print("New State: \(state)") }
state.value.deposit(amount: 10) // Line is printed!

Is equivalent to this:

struct State {
  let account: Account

  init(account: Account = .init()) {
    self.account = account
  }

  func deposit(amount: Decimal) -> State {
    .init(account: account.deposit(amount: amount))
  }
}

struct Account {
  let balance: Decimal

  init(balance: Decimal = .init()) {
    self.balance = balance
  }

  func deposit(amount: Decimal) -> Account {
    .init(balance: balance + amount)
  }
}

let state = CurrentValueSubject<State, Never>(State())
let subscription = state.sink { state in print("New State: \(state)") }
state.value = state.value.deposit(amount: 10) // Line is printed!

Really, the compiler turns the former into the latter. In the latter example, all the members of the structs are lets, so they are formally “read-only” structs. But structs in Swift (unlike in C#!) are effectively always read-only. Making “mutable lookalike” structs with var members just generates all these helper copy-mutate methods automatically and lets you call them with mutation-style syntax.

Notice that the fact the CurrentValueSubject subscription fires is pretty obvious in the second example, because it’s explicit: we’re assigning value. But in the first example (without helpers we might have written state.value.account.balance += 10) it’s perhaps surprising, especially if you’re familiar with other languages and the fact the syntactically equivalent code in other languages would not fire the subscriber. That’s what is cool about this approach. You get this behavior you probably want for free, which is fundamentally because Swift lets you write the direct mutation syntax you want but ensures the side effects of mutating the whole occur even if you (apparently) mutate the part. No sneaking around invariants and triggers by reaching inside of stuff.

What’s funny is I’ve seen Swift devs scream “mutability, nooooo!!!” when they see the first code, after reading about how evil “mutable state” is, and then demand this be replaced with the immutable style of creating new values and assigning them back to the read-write value at the top… not realizing apparently they’re the exact same thing: Swift’s “mutation” of value types is just a more clear and straightforward syntax for achieving the exact behavior that the immutable “assign back” code spells out more explicitly.

“Eliminating” Bugs?

Now, here’s the fallacy that almost everyone uses when they make the “my pattern eliminates this whole class of bugs from your programs” arguments: it is usually true by definition, because that classification of bugs requires that you not follow their pattern. When you do follow their pattern, it doesn’t eliminate bugs, it just reclassifies them.

If we follow the pattern of making our state immutable, which we now understand doesn’t really mean immutable, but rather that particular instances of that state are immutable and the values meaningful to the program are emitted as copies-with-mutation by special functions like reducers, then yes, this will eliminate the “improper mutation” class of bugs in a formal sense. And it will replace them with “improper emission of an incorrect copy by a broken reducer” bugs.
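
For instance, here’s a reducer sketch (the shape and names are mine, not Redux’s API) that follows every immutability rule and is still just wrong:

```typescript
type State = { balance: number }
type Action =
  | { type: "deposit"; amount: number }
  | { type: "withdraw"; amount: number }

// Dutifully immutable: every branch returns a fresh copy.  Also buggy.
function reduce(state: State, action: Action): State {
  switch (action.type) {
    case "deposit":
      return { ...state, balance: state.balance + action.amount }
    case "withdraw":
      // Bug: should subtract.  No immutability rule catches this; the
      // "improper mutation" bug has become an "incorrect copy" bug.
      return { ...state, balance: state.balance + action.amount }
  }
}

const s = reduce({ balance: 100 }, { type: "withdraw", amount: 30 })
console.log(s.balance) // 130, not the 70 we wanted
```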

Redux (even ignoring that it doesn’t stop you from reaching inside the state instances handed to you in the reducers) doesn’t stop you from writing the wrong reducer or dispatching the wrong action. So any idea that it somehow prevents you from incorrectly mutating the application state is obviously mistaken. Since the mechanism of achieving what is needed changes from X to Y, the bugs get reclassified from improper X bugs to improper Y bugs.

This same claim is made about reference semantics. If you force everything to be value semantics, well I’ve seen claims as fantastical as “you’ll never have concurrency issues again”. Plus reference semantics is “hard to reason about” and it creates a bunch of surprising and implicit coupling between far away parts of the program, and that means side effects (another naughty term in FP circles, synonymous with bad) that are not clear when reading part of a program (you see something being changed, who knows who else is referring to that thing and will be affected by the change).

The solution is value semantics, i.e. giving each part of the program its own copy, which ensures they aren’t coupled to each other (no side effects).

Well, okay, but what if they should be coupled to each other? Again, a program with no coupling between any parts is a program that doesn’t do anything interesting. Why are we calling everything a “side” effect? Aren’t some of those just effects? If I take ibuprofen, I don’t call the headache relief a side effect!

And that’s the problem. Value semantics doesn’t automatically eliminate bugs, it reclassifies bugs. After all, if you need coupling, i.e. if it’s correct for A to affect B by mutating X, then how would you do this in FP? The answer is by nesting a bunch of monads inside each other and zipping or otherwise merging together multiple lists to express that one list (representing what one part of the program is doing) is in fact a function of another list (representing what another far away part of the program is doing).

Once you’ve got that mindf*ckery in every place where you need coupling, I’ll bet it’s pretty damn easy to get it wrong. Then guess what: incorrect coupling. You’ve turned “A shouldn’t be coupled to B” into “I flatMapped over the wrong higher order function that produces the wrong list and zipped it into the wrong other list”. Yes, this makes the coupling explicit: you just read the program there and see that A and B are coupled… if you can read any of that in the first place, that is.

What’s more likely is that using value semantics will in fact reduce coupling in your program as intended, and do so far beyond what is correct. You’ll turn “too much coupling” bugs into “not enough coupling” bugs. Specifically, now that you’re passing copies of data around everywhere (that’s the whole point of value semantics, isn’t it?), your “side effects” bugs become “stale data” bugs. You end up with copies you don’t want, because there really should be one source of truth for some data, and you forget to update everyone’s copy.
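
A stale-data bug in miniature (the variable names are mine):

```typescript
// One part of the program holds the source of truth...
let currentUser = { name: "Ada" }

// ...and another part takes a copy for itself (value semantics!).
const headerModel = { ...currentUser }

// The truth changes, but nobody updates the copy.
currentUser = { name: "Grace" }
console.log(headerModel.name) // "Ada": the header now shows stale data
```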

These kinds of bugs caused by way too much caching (holding copies) of data in GUI apps are pervasive. All the React-ish UI frameworks are designed to prevent them (i.e. forgetting to update your view when state changes, which these new frameworks are supposed to achieve automagically). They don’t accomplish that with value semantics, they accomplish it with observability (i.e. reactivity) on reference-semantics “component state”.

The idea that all coupling is locally explicit might seem nice, but I’ll bet it doesn’t scale at all. There’s tons of (appropriate) coupling in very large, sophisticated applications. I don’t want to have to change tons of places to add a new explicit expression of coupling every time more coupling is added, and then be blasted with all that information every time I’m reading any part of it. That’s actually bad for the level of abstraction. Explicitness is another way of saying “implementation details spelled out”, which is the exact opposite of the direction you want to go in when you scale up software.

Then again, these guys explicitly don’t like encapsulation, so they probably think this is good.

Inappropriate coupling isn’t something I think you can solve with some blanket programming paradigm. It requires careful analysis of the system you’re building, and I think it’s primarily addressed by naming things well. If you’re coupling to something you shouldn’t be, it should be self-evident from the name of what you’re coupling to. A big part of this is the tendency of coders to name variables after their type instead of what they represent. That’s confusing because it doesn’t tell you what the variable means to the program, and it’s especially bad for reference types, because it doesn’t tell you what that specific reference means to the entire program: it’s a specific instance that is used in other parts of the program. Instead of calling it theScreenCoords, you should call it primaryCursorScreenCoords, to make it clear it’s a reference to the special instance that means something special. If you follow that rule, hopefully you’ll notice when a class is either inappropriately holding a reference to that instance, or inappropriately mutating it.

Both OOP and FP are Turing-complete language paradigms. Any OOP program, including all of its bugs caused by mutability and reference semantics, can be translated into an equivalent FP program that behaves exactly the same, including all those bugs. How is this possible? How would that program have mutability and reference-semantics bugs when it’s written in a language that has neither mutability nor reference semantics? Because it translates those things into different constructs. Mutable state is transformed into a reducer over a list of inputs that produces a list of the different values that state takes on as the program runs. So mutation bugs get transformed into improper-reduction bugs. Reference semantics is transformed into more deeply nested monads of lists that eventually get flatMap’d and zip’d together. So reference-semantics bugs get transformed into improper flatMapping of monads and zipping of lists.

Now, maybe it’s the case that the buggy parts of the OOP program, which look totally innocuous in OOP code, are translated into super convoluted and plainly wrong FP code that no one would write. That’s the hope: that while it isn’t impossible to recreate the undesired behavior of the OOP program, it’s very easy to create it in the OOP language but very hard to create it in the FP language, and when you do, it’s pretty self-explanatory that you did so.

That’s possible in principle, and I don’t buy for a second that it ever works that way in practice.

Every useful program mutates state and requires two distant places to communicate with each other by sharing a reference to something. If the FP way of modeling this is convoluted, then every nontrivial program will be extremely convoluted if written in a FP language. That’s not going to eliminate bugs, it’s going to generate them.

I’m not saying I’m opposed to the FP paradigm. That’s the funny part: I probably bring in functional style constructs (gratuitous use of monadic transforms like map and flatMap on all sorts of types, from collections to optionals to observables, and so on) more than any industry developer I’ve worked with… but I see right through nonsense like “state is the root of all evil” (what could that possibly even mean?).

I just don’t think FP is the One True Way, and I think it’s a terrible idea to restrict a language to only work in that paradigm. I have the same opinion of the OOP paradigm. The problem is actually the “One True Way” part, so it’s funny that I’m reading people telling me the solution is to declare FP the actual One True Way. Why won’t these guys just let go of the search for a silver bullet? Programming Is Hard. You’re not going to get anywhere declaring that every computer program that can ever be created can be written with a single tool or concept or approach. We already know what that universal tool is: it’s called a computer, and the language for it is machine language. Everything the software industry has done for the last near-century is a desperate attempt to avoid having to write code with this universal tool.

I very much want a Swiss Army Knife programming language that supports as many paradigms as possible and allows them to interact with each other. This is why I’m such a fan of C++, whose designers share this philosophy. And of course I see endless whining about it for being “too complex” and hear that teams at major companies like Google declare large parts of the language as off limits, fracturing it into endless dialects because having to learn multiple dialects is apparently “easier” than just learning the language (I still have to learn everything because I’m not going to spend my entire career on one team, and additionally I have to learn how each team restricts things and how I’m supposed to do something that I would normally do with some part of the language they ban).

Plus, I mean, anyone suggesting that programming would get easier if we all just wrote Haskell… LOL. Have these guys ever seen Haskell? It’s a fascinating language, meant for language research, not building Microsoft Word, and for that purpose it’s been extremely fruitful, producing several concepts of type theory that eventually made their way into industry languages. Since it’s an extremely abstract research vehicle that requires you to be familiar with a bunch of super abstract math like category theory, the last word in the entire English language I would ever use to describe it is “easy” (for who, the most Asperger’s-y idiot savant who’s ever lived!?).

I should note that there seems to be some overlap of two groups here: one like Joe Armstrong who makes these nonsensical philosophical claims and says FP is the One True Way, and the other who is angry programming languages evolved past the 1970s and think everything should be written in C. They agree that OOP is a “disaster” but I suspect they’d hate each others’ paradigms even more. Addressing the people who want me to write GUI apps in C (which will, I guarantee, result in building an ad-hoc object model with abstraction, inheritance and dynamic dispatch… why do I have to build all that myself when C++ ships with it and its compiler understands it?) is a whole other topic.

Conclusion

These are the key takeaways:

  • When someone says a tool will eliminate an entire class of bugs, that doesn’t imply you’ll have fewer bugs, they’ll just be classified differently.
  • There is no One True Way to write computer software. Carpenters don’t look for one tool to bring to the job, they bring an entire truck full of different tools, each with high specialization, and walk around with an entire toolbelt strapped to their waist.
  • Anyone who genuinely thinks they’re writing computer programs that don’t have state and don’t mutate it is smoking some sh*t I wish I could have found in college.
  • OOP obsession and the languages influenced by it did create a problem of improper modeling of mutability. The solution is to make judicious selection between value and reference types, and in languages where you can’t make custom aggregate value types, use a combination of encapsulation and hand-written copy constructors or copy-mutators to simulate aggregate value types.

I promise at some point, people are going to look at giant apps written with stuff like Redux, freak out and start writing articles saying “FP is Dead, Long Live OOP” (this has already sort of happened with ReactiveX, which is a highly functional paradigm, it got massively overused and misused and now lots of people hate it). It’s going to go right back to where it started, no one’s going to learn the real lesson (stop looking for silver bullets!), and while history may not repeat, it’s going to rhyme.

That cynicism (it’s not so bad if you take the correct approach of just laughing at it) notwithstanding, I think programming languages have made genuine progress recently. Supporting a whole plethora of approaches, including value and reference types, imperative and declarative, procedural, object-oriented and functional, threads and coroutines, etc., lots of advanced work in generics, higher-order type systems and metaprogramming… this is all genuine progress, and this is happening in several languages. I’m sure at every point a group of people are going to be too proud or lazy to learn the new tools and complain about it… don’t be one of them and don’t let them drag you down with them.

At the same time, be prepared to look skeptically at a new trendy tool (especially ones that sell themselves as “simple” and work by banning large swaths of techniques that are available elsewhere), recognize the pattern of a programming community suddenly latching onto a new concept and becoming outright drunk on its wild promises, and anticipate the epic hangover that’s about to strike. In my opinion it’s very rare that a tool or technique gets added to a language or library that’s outright useless. Rather, tools and techniques simply get used inappropriately, beyond their range of utility (hammers aren’t useless, but they’re bad screwdrivers). When entire communities start declaring that a widely used tool ought never to have been invented and needs to be removed, that’s when the bender is starting, and that’s when you can bow out, so you don’t wake up three years later with a pounding headache, no memory of what you did, and a codebase you can’t believe you thought was a good idea to write.