Current JavaScript
Implicit type coercion weird parts: wtfjs.com
Implicit type coercion happens with ==, +, /, etc., or in if (value) {...} where value is coerced to a boolean. Here are some examples:
[] + []; // ""
[] + {}; // "[object Object]" (the string "[object Object]")
{} + []; // 0 when written as a statement, because the leading {} is parsed as an empty block and the expression becomes +[]
{} + {}; // NaN or "[object Object][object Object]" depending on whether the leading {} is parsed as a block or as an object literal (varies by environment)
true + false; // 1, because it is converted to 1 + 0
"" == "0"; // false, both operands are strings, so they are compared as (different) strings
0 == ""; // true, because "" is converted to the number 0
Three types of conversions: string, boolean and number
In JavaScript there are only 3 types of conversion:
- to string
- to boolean
- to number
These conversions are applied either to primitives or to objects, through explicit coercion (with String(), Boolean() or Number()) or implicit coercion (using operators: ==, +, |, etc.).
Primitives and objects have different rules for conversion. But both primitives and objects can only be converted in those three ways.
The rules can be found here: http://www.ecma-international.org/ecma-262/5.1/#sec-11.9.3
Primitive conversion
We can convert a primitive to each of the 3 types in the following ways:
- to string: implicitly with x + "" or explicitly with String(x)
  123 + "";                    // "123"
  -12.3 + "";                  // "-12.3"
  null + "";                   // "null"
  undefined + "";              // "undefined"
  true + "";                   // "true"
  false + "";                  // "false"
  String(Symbol("my symbol")); // "Symbol(my symbol)"
  Symbol("my symbol") + "";    // TypeError, implicit conversion does not work with symbols
- to boolean: implicitly with x || y, x && y, !x, or a logical context like if (x) ..., or explicitly with Boolean(x)
  2 || "hello";           // 2: the operands are tested for truthiness and the first truthy operand is returned as-is (the original value, not a boolean)
  let x = "hello" && 123; // x is 123: "hello" is truthy, so && returns the second operand as-is
  - Falsy values:
    Boolean('');        // false
    Boolean(0);         // false
    Boolean(-0);        // false
    Boolean(NaN);       // false
    Boolean(null);      // false
    Boolean(undefined); // false
    Boolean(false);     // false
  - Truthy values: all non-falsy values
    Boolean({});             // true
    Boolean([]);             // true
    Boolean(Symbol());       // true
    !!Symbol();              // true
    Boolean(function () {}); // true
- to number: explicit with Number(x), but implicit numeric conversion is trickier because it is triggered in more cases:
  - comparison operators >, <, <=, >=
    4 > "5"; // 4 > 5: false
  - bitwise operators |, &, ^, ~
    true | 0; // 1 | 0: 1
  - arithmetic operators -, +, *, /, %
    + Exception 1: binary + does not trigger numeric conversion when any operand is a string.
  - unary +
    +"1"; // 1
  - loose equality operators == and !=
    123 != "123"; // 123 != 123: false
    == Exception 1: == does not trigger numeric conversion when both operands are strings
    "123" != "123"; // compared as strings: false
    == Exception 2: == does not trigger numeric conversion on undefined or null
    null == 0;              // false, null is not converted to 0
    null == null;           // true
    undefined == undefined; // true
    null == undefined;      // true
  - Primitive conversions to numbers:
    Number("\n");       // 0
    Number(false);      // 0
    Number("");         // 0
    Number(null);       // 0, different from undefined
    Number(undefined);  // NaN, different from null
    Number("12s");      // NaN
    Number(true);       // 1
    Number(" 12 ");     // 12
    Number(" -12.34 "); // -12.34
  - Symbols do not convert to numbers even explicitly; a TypeError is thrown instead of returning NaN:
    Number(Symbol('')); // TypeError
    +Symbol('');        // TypeError
Special rules related to numbers:
- When applying == to null or undefined, numeric conversion does not happen: null == undefined is directly true by definition (see == Exception 2 above). This is not the case with ===.
- NaN does not equal anything, not even itself: NaN !== NaN. So to test that a value is NaN we can do: if (value !== value) console.log('value is NaN')
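For comparison, a small sketch of that self-inequality check next to the built-in helpers (Number.isNaN is ES6; the global isNaN coerces its argument first):
const value = 0 / 0;                     // NaN
value !== value;                         // true, only NaN is not equal to itself
Number.isNaN(value);                     // true, no coercion involved
isNaN("definitely not a number");        // true, but only because the string is first coerced to NaN
Number.isNaN("definitely not a number"); // false, the string is not coerced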
Object conversion
When the engine encounters an object in an expression like [1, 2] + [3] it:
- Converts the object to a primitive. This is done with an internal method [[ToPrimitive]] which is passed the input value and optionally a preferredType of conversion: Number or String (to boolean, an object is always true).
  - The conversion is then carried out by two methods of the input object: valueOf and toString, which are declared on Object.prototype and are thus available for any derived type, such as Date, Array, etc.
  - The algorithm goes like this (the order in which toString and valueOf are tried depends on the preferredType: toString first for String, valueOf first for Number and for the default):
    - If the input is already primitive, return it
    - Call the first method; if the result is primitive, return it
    - Call the second method; if the result is primitive, return it
    - Otherwise throw a TypeError
- Converts that primitive using the rules of primitive conversion above
So, for step 1, an object is converted:
- to boolean: any non-primitive value is coerced to true, no matter whether the object or array is empty or not
- to string: using input.toString(), or input.valueOf() if the former did not return a primitive
- to number: using input.valueOf(), or input.toString() if the former did not return a primitive
Most built-in types do not have valueOf implemented, or when they do it returns this, which is still not primitive, so the algorithm ends up calling toString.
Different operators trigger either numeric or string conversion with the help of the preferredType parameter. But there are two exceptions which trigger the default conversion algorithm because they do not pass the optional parameter:
Operators using the default conversion algorithm:
- loose equality operator ==
- binary + operator
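A small sketch of how valueOf and toString drive these conversions, using a hypothetical object that implements both:
const account = {
  balance: 42,
  valueOf() { return this.balance; },   // picked up by numeric conversion
  toString() { return "account(42)"; }, // picked up by string conversion
};
account * 2;     // 84, arithmetic triggers numeric conversion, so valueOf is used
`${account}`;    // "account(42)", string conversion uses toString
account + 1;     // 43, binary + uses the default algorithm, and valueOf already returns a primitive here
String(account); // "account(42)"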
ES6 Symbol.toPrimitive method
In ES5 you can hook into the object-to-primitive conversion logic by overriding the toString and valueOf methods.
In ES6 you can go further and completely replace the internal [[ToPrimitive]] routine by implementing the [Symbol.toPrimitive] method on an object.
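A minimal sketch of such an object (hypothetical example); the method receives the hint ("number", "string" or "default") directly:
const temperature = {
  celsius: 21,
  [Symbol.toPrimitive](hint) {
    if (hint === "number") return this.celsius;
    if (hint === "string") return `${this.celsius} °C`;
    return `temperature(${this.celsius})`; // "default" hint, used by binary + and ==
  },
};
+temperature;     // 21
`${temperature}`; // "21 °C"
temperature + ""; // "temperature(21)"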
Examples: Primitive & Object Coercion
true + false // notString binary+ notString => numeric conversion: 1 + 0: 1
12 / "6" // arithmeticOperator => 12 / Number("6"): 12 / 6 : 2
"number" + 15 + 4 // string binary+ any, left to right => string conversion: (String("number") + String(15)) binary+ 4: ("number" + "15") binary+ 4: "number15" + 4: String("number15") + String(4): "number154"
15 + 4 + "number": // notString binary+ notString => numeric conversion: (15 + 4) + "number": 19 + "number" => any binary+ string => string conversion: "19number"
[1] > null // Object arithmetic notString => numeric conversion ([1].valueOf() > Number(null): 1 > 0: true
"foo" + + "bar" // "foo" binary+ (unary+"bar") => "foo" + (number conversion): "foo" + (NaN) then string binary+ any => string conversion: "fooNaN"
"true" == true // not both operands are strings => numeric conversion: Number("true") == Number(true): NaN == 1: false
false == 'false' // not both operands are strings => numeric conversion: Number(false) == Number("false"): 0 == NaN: false
null == "" // not both operands are strings BUT null does not trigger numeric conversion with == so => null == Number(""): null == 0: false
!!"false" == !!"true" // !any => boolean conversion: !!Boolean("false") == !!Boolean("true") !!true == !!true: !false == !false: true == true: true
["x"] == "x" // Object == string => Exception for == and objects => default algo, start with toString()) ["x"].toString(): "x" == "x" => both operands are strings, so no numeric conversion: "x" == "x": true
[] + null + 1 // (Object binary+ any) binary+ number => (Exception Default conversion algo so start with toString()): [].toString(): "" + null +_1 => one operand is string so string conversion: "" + String(null) + String(1): "" + "null" + "1": "null1"
[1,2,3] == [1,2,3] // Object == Object => Exception Default conversion algo: [1,2,3].toString() == [1,2,3].toString(): "1,2,3" == "1,2,3": true
{}+[]+{}+[1] // ((Object + Object) + Object) + Object => Exception Default conversion algo: ((({}).toString() + [].toString()) + Object.toString()) + Object.toString(): (("[object Object]" + "") + "[object Object]") + "1": "[object Object][object Object]1"
!+[]+[]+![] // ! (unary+ Object) ... => !+[].valueOf() ...: !+[].toString() ...: !+Number("") + ...: !0 + [] +... : binary+ exception default algo: true + [].toString() + ...: true + "" ... => one operand is string so: String(true) + "" + ...: "true" + ![]: boolean conversion of [] which is always true: "true" + !true: "true" + false: one operand is string so string conversion: "true" + String(false): "true" + "false": "truefalse"
new Date(0) - 0 // Object arithmetic any => numeric conversion first: (new Date(0)).valueOf() - 0: 0 - 0: 0
new Date(0) + 0 // Object binary+ any => Default algorithm, toString first: (new Date(0)).toString() + 0: "Thu Jan 01 1970 ... Time)" + 0: one operand is string so string conversion: "Thu Jan 01 1970 ... Time)" + String(0): "Thu Jan 01 1970 ... Time)0"
Equality == vs. ===
== does implicit type coercion between the two sides, e.g. converting a string to a number.
=== does not attempt any type coercion.
5 == "5"; // true
5 === "5"; // false
Structural equality
To compare two objects for having the same structure, == or === is not enough. You can use the deep-equal npm package:
import * as deepEqual from "deep-equal";
console.log(deepEqual({ a: 123 }, { a: 123 })); // true
But usually, when you need to compare objects, the best practice is to give them IDs and compare those instead of the whole structure.
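A minimal sketch of that ID-based comparison (hypothetical shape):
const userA = { id: "u-123", name: "Ada" };
const userB = { id: "u-123", name: "Ada Lovelace" }; // same entity, different snapshot
userA === userB;       // false, two different references
userA.id === userB.id; // true, identity compared through the ID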
References and Equality
Beyond primitives, every object in JavaScript (including functions, arrays, regexps, etc.) is handled by reference.
let foo = {};
let fooRef = foo;
foo.hey = "hey";
fooRef.hey; // "hey"
===
let foo = {};
let fooRef = foo;
let bar = {};
foo === fooRef; // true
foo === bar; // false
foo == bar; // false: both operands are objects, so == compares references just like === does and no conversion happens
"" === ""; // true: primitives are compared by value, no references involved
Null vs. Undefined
null and undefined are intended to mean different things:
- undefined: not initialized
- null: not available right now
Also note that typeof null == "object", whereas typeof undefined == "undefined".
To check for the presence of any:
undefined == null; // true: Exception, this is a special rule in the SPEC: if x is undefined and y is null: return true
undefined === null; // false
undefined === undefined; // true
null === null; // true
Convention: use == null or != null to check for the presence of both null and undefined, EXCEPT for root-level stuff.
Checking for root-level undefined: the typeof operator
Before using == null or === undefined or any other operator (other than typeof), you need to be sure that the variable was declared. Otherwise you get a ReferenceError.
To check if a variable is defined you cannot do:
someRandomName === undefined; // [js] Uncaught ReferenceError: someRandomName is not defined at ..
Instead use:
if (typeof someRandomName != "undefined") {
// someRandomName is safe to be used inside here
}
JSON and serialization
The JSON standard supports encoding null but it does not support undefined. So when stringifying an object with an attribute set to undefined, that attribute is skipped entirely, whereas a null attribute is encoded properly:
let s = JSON.stringify({ a: undefined, b: null });
console.log(s); // "{"b":null}"
Just know that an attribute set to undefined will not be transmitted through JSON, whereas one set to null will transmit null.
So, for example, use null to signal that you want to clear the value on the back end, and use undefined when you do not want to send that attribute to the back end at all.
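A small sketch of that convention with a hypothetical profile-update payload:
JSON.stringify({ nickname: null });      // '{"nickname":null}' => tells the back end to clear the field
JSON.stringify({ nickname: undefined }); // '{}' => the field is simply not sent, so the back end leaves it alone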
Final thoughts on null and undefined
The TypeScript team doesn't use null in the development of TypeScript: they use undefined.
Douglas Crockford does not use null, because:
- the bottom value is a single concept, so a single value should signify it
- it removes the typeof null === typeof {a:"a"} confusion (both are "object"), which raises ambiguity with other, non-bottom objects; undefined does not have this problem
But those arguments can be countered by these:
"use strict";
let b = undefined;
if (typeof a == "undefined" && typeof b == "undefined") {
a = 10; // Uncaught ReferenceError: a is not defined
b = 10;
}
if (typeof b == "undefined") {
let b = 222;
}
console.log(b); // 10
Douglas Crockford
Use functional programming: never use for or while loops, use forEach instead.
In ES6, functional style is meant to be as fast as the iterative style (proper tail calls are in the spec, though engine support is limited), so tail recursion can be used instead of while loops:
function repeat(func) {
while (func() !== undefined) {}
}
// same using tail recursion
function repeat(func) {
if (func() !== undefined) {
return repeat(func);
}
}
Classes vs. Prototypes vs. Nothing
Classes are bad because you need to think of how to break things up, and taxonomy. Taxonomy is how each class relates to each other (inheritance and all). And because you decide about the division and taxonomy of concepts at a moment where you least understand your project (at the beginning) you will get it wrong all the time. So you end up wishing for multiple inheritance. And eventually you simply need to refactor which is a pain. And can introduce errors.
So if you get rid of classes, you get rid of all these problems. Getting rid of classes also means getting rid of classification and taxonomy.
So the recent introduction of class to JavaScript is actually a bad idea, because people will not understand that they do not need to do that crap.
So Crockford used to be an advocate of prototypal inheritance, whose principal benefit is memory conservation. For example, the advantage of using Object.create() instead of copying the object is that you save memory. That may have made sense in 1995 but it doesn't anymore: we have loads of RAM, and unless you are creating thousands of objects, optimising for that is not really any gain.
Prototypal inheritance also introduces its own errors and confusion. You have own and inherited properties, which are kind of the same but different, and sometimes that confusion causes bugs. It also exhibits Retroactive Heredity: what an object inherits can be changed after its creation. Crockford has never seen any value come from that property, but many problems. One of the worst is that it is performance inhibiting: modern JavaScript engines can go faster by making assumptions about the shapes of objects, but they have to be pessimistic about what is in the prototype, which makes the language slower.
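A small sketch of both points (own vs. inherited properties, and heredity changing after creation):
const proto = { shared: true };
const obj = Object.create(proto); // obj inherits from proto
obj.own = 1;
"shared" in obj;              // true, inherited properties show up with `in`
obj.hasOwnProperty("shared"); // false, it is not an own property
Object.keys(obj);             // ["own"], only own enumerable properties
proto.addedLater = 42;        // retroactive heredity: changing the prototype after obj was created
obj.addedLater;               // 42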
Crockford is now an advocate not of prototypal inheritance, but of class-free object-oriented programming. That is JavaScript's contribution to humanity.
The addition of the let statement brings block scope. We've always had function scope: an inner function can see the scope of the outer function but not the other way around (it can be expressed as nested sets). This is what we know as closure, and it is one of the best ideas of humankind; it came from Scheme. The tricky part was answering the question: "what happens if the inner function outlives the outer function?"
function green() {
let a;
// a of green
return function yellow() {
// a of green
let b;
// b of yellow
};
// a of green
}
In a stack-based environment, when the green frame is popped all of its variables (like a) are destroyed; but if we still hold a reference to yellow, and yellow needs a, we are in trouble. This was solved by keeping such environments on the heap, where they survive as long as something references them.
Note: the stack and heap memory differ in a few senses:
- Stack
  - Accessed: directly
  - Size: static memory allocation; the size is decided at compile time and cannot be changed
  - In C: plain local variables
- Heap
  - Accessed: indirectly, via a pointer (a stack variable containing the address of the dynamically allocated memory in the heap)
  - Size: dynamic allocation; the size is decided at runtime, and only the pointer holding the heap address is stored on the stack
  - In C: malloc and pointers
Here is how Crockford constructs objects now:
// single spec parameter that gets destructured into the individual named params
function constructor(spec) {
  let {member} = spec,
      {other} = other_constructor(spec),
      method = function () {
        // has access to member, other, method and spec via closure
      };
  return Object.freeze({
    method,
    other,
  });
}
Once the returned object is frozen, nobody can corrupt it, confuse it or get at its internals. That is the only way in JavaScript to get the kind of security/reliability properties we really need: interfaces that cannot be broken or confused, which is extremely valuable.
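A minimal concrete sketch of the same pattern (a hypothetical counter, with no other_constructor involved):
function counter_constructor(spec) {
  let { start = 0 } = spec,
      count = start,
      increment = function () { count += 1; return count; },
      current = function () { return count; };
  return Object.freeze({ increment, current });
}
const c = counter_constructor({ start: 5 });
c.increment();      // 6
c.count;            // undefined, the internal state is not reachable from outside
c.increment = null; // silently ignored (throws in strict mode): the object is frozen
c.increment();      // 7, the interface cannot be broken or replaced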
Object oriented programming's base idea was to have a kind of record, to which we would attach methods to act on that data. But mixing both concepts of data and methods in the same object wasn't a good direction to take. Instead we should have two kinds of objects:
- Method objects, which can be shared with applications
- Data objects, with only data
Crockford: ints are really bad
Having ints brings a lot of errors. For starters, there shouldn't be so many number types in a programming language. They were designed that way because of the cost of vacuum tubes back in the 50s, but now memory is cheap, so we should reconsider them.
When an int has an overflow problem, it can either:
- Halt the machine
- throw an error
- be converted to NaN
- discard the overflow and say nothing about it
The worst case is when the overflow gets discarded and no one knows about it.
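JavaScript's single Number type avoids classic integer overflow, but past Number.MAX_SAFE_INTEGER it shows the same "say nothing" failure mode: precision is silently discarded. A small sketch:
Number.MAX_SAFE_INTEGER;                // 9007199254740991
9007199254740991 + 1;                   // 9007199254740992
9007199254740991 + 2;                   // 9007199254740992, silently wrong, nothing signals the lost precision
Number.isSafeInteger(9007199254740992); // false, results beyond the safe range should not be trusted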
Fireship: JavaScript do This, Not that
Debugging like a pro
const foo = { name: "tom", age: 30, nervous: false };
const bar = { name: "dick", age: 40, nervous: false };
const baz = { name: "harry", age: 50, nervous: true };
// BAD code : we do not know the name of the variable when this gets logged
console.log(foo);
console.log(bar);
console.log(baz);
// GOOD code: using shorthand property names so the variable names appear as keys
console.log({ foo, bar, baz });
// BETTER code: using some CSS style
console.log("%c My friends", "color: orange; font-weight: bold;");
// EVEN BETTER: the objects share common properties (name, age and nervous), therefore presenting them as a table is clearer
console.table([foo, bar, baz]);
For benchmarking performance, you can keep track of time in the console with console.time and console.timeEnd.
// start tracking time
console.time("looper");
let i=0;
while (i < 1000000) { i++ }
console.timeEnd("looper");
// outputs:
// looper: 4.21349839934ms
To keep track of where a function was called from, and where it was defined, use console.trace:
const deleteMe = () => console.trace('bye bye database');
deleteMe(); // bye bye database, deleteMe @ file.js:35, (anonymous) @ file.js:37
deleteMe(); // bye bye database, deleteMe @ file.js:35, (anonymous) @ file.js:38
Destructuring and Template literals
const turtle = {
name: 'Bob',
legs: 4,
shell: true,
type: 'amphibious',
meal: 10,
diet: 'berries',
}
function feed(animal) {
const {name, meal, diet} = animal;
return `Fed ${name} ${meal} kilos of ${diet}`;
}
// A step further: tagged template literals, building strings in a purely functional way
function horseAge(str, age) {
  // str is the array of literal string parts, age is the single interpolated value
  const ageStr = age > 5 ? 'old' : 'young';
  return `${str[0]}${ageStr} at ${age} years`;
}
const horse = { age: 7 }; // example value so the snippet runs
const bio2 = horseAge`This horse is ${horse.age}`; // "This horse is old at 7 years"
Spread syntax
Let's say we want to enhance an object with more attributes:
const pikachu = { name: 'Pikachu' };
const stats = { hp: 40, attack: 60, defense: 45 };
// BAD object code
pikachu['hp'] = stats.hp;
pikachu['attack'] = stats.attack;
pikachu['defense'] = stats.defense;
// or better, but not the most concise (pass {} as the target, otherwise Object.assign still mutates pikachu)
const lvl0pikachu = Object.assign({}, pikachu, stats);
// or if we wanted to update a single property
const lvl1pikachu = Object.assign({}, pikachu, { hp: 45 });
// GOOD and concise with syntactic sugar
const lvl0pika = {...pikachu, ...stats };
Let's say we have an array of strings and we need to push additional items to it:
let pokemon = ['Arbok', 'Raichu', 'Sandshrew', ]
// BAD code old school (mutating so bad)
pokemon.push('Bulbasaur');
pokemon.push('Charizard');
// GOOD code: creates a new array (the variable is reassigned rather than the array mutated)
pokemon = [...pokemon, 'Bulbasaur', 'Charizard', ];
// GOOD: prepending and appending at once, no unshift or push needed
pokemon = ['Bulbasaur', ...pokemon, 'Charizard', ];
Loops
You should use functional programming patterns much more, like below:
// sample data so the snippets run
const orders = [45, 120, 10, 300];
// BAD code (imperative, one loop doing three jobs)
let total = 0;
const withTax = [];
const highValue = [];
for (let i = 0; i < orders.length; i++) {
  // Reduce
  total += orders[i];
  // Map
  withTax.push(orders[i] * 1.1);
  // Filter
  if (orders[i] > 100) {
    highValue.push(orders[i]);
  }
}
// GOOD functional way
const totalFn = orders.reduce((carry, o) => carry + o, 0);
const withTaxFn = orders.map(o => o * 1.1);
const highValueFn = orders.filter(o => o > 100);
this
Closure
Number
Truthy
Promises
Future Javascript
Classes
Arrow Functions
Rest Parameters
let
const
Destructuring
Spread Operator
for...of
Iterators
Template Strings
Promise
Generators
async
await
The event loop
The call stack
JavaScript is a single-threaded runtime, which means it has a single call stack (the stack):
oneThread === oneCallStack === oneThingAtATime
The call stack is a data structure that keeps track of where in the program we are. When we call a function, we push a frame onto the top of the stack; when we return from a function, we pop it off the top of the stack.
When your code throws an error, it actually prints the stack trace, which is the state of the stack at that moment.
We can also blow up the stack with an infinite recursion:
function foo() {
  return foo();
}
foo();
Above we keep pushing foo() onto the stack before any call returns, so the stack fills up and never gets emptied, and we get: RangeError: Maximum call stack size exceeded
Blocking
What happens when things are slow?
If we have synchronous calls that are slow, then all the code gets halted until the slow function returns.
So why is this a problem? Because we are running code in the browser, and if we run slow code, everything in the browser gets stuck.
Non Blocking Async Callbacks and the Call Stack
Let's see an example with setTimeout not blocking the stack.
console.log('Hello');
setTimeout(function cb() {
console.log('I love you');
}, 5000);
console.log('Suzanne,');
// Hello
// Suzanne
// --- after 5 seconds
// I love you
Here setTimeout seems, by some mechanism outside the simple stack model, to hand its work off and return immediately, letting the last console.log run; only after 5 seconds does cb get pushed onto the call stack.
Why? The reason we can do more than one thing is that the browser is more than just the runtime. The browser effectively gives us access to WebAPIs which can do work on separate threads, outside the runtime, and those pieces of the browser are where the concurrency kicks in.
For Node, it is basically the same, but instead of WebAPIs we have access to C++ APIs, and the running of those calls is hidden from you in C++ land.
The code above can then be understood: setTimeout is a WebAPI, so the timer runs outside of V8, inside the browser.
The WebAPI cannot just add callbacks to your stack, because if it did it would make them appear somewhere random in your code execution. That is where the Task Queue comes in. When a WebAPI is done, it pushes the callback into the Task Queue. And finally we get to the Event Loop.
The event loop is the simplest piece in this whole equation. And it has a very simple task: it looks at the call stack and the task queue:
- if the stack is empty, it takes the first thing on the queue and pushes it onto the stack (which directly runs it)
- otherwise it waits until the stack gets emptied.
Remember the stack is in the javascript land (so runs inside V8).
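A rough sketch of that rule in pseudocode (callStack and taskQueue are not real APIs, just the idea):
while (true) {
  if (callStack.isEmpty() && !taskQueue.isEmpty()) {
    const task = taskQueue.dequeue(); // oldest queued callback
    callStack.run(task);              // pushing it onto the stack runs it to completion
  }
  // otherwise: keep waiting until the stack empties
}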
SetTimeout 0
Now you can understand the point of running setTimeout with a delay of 0:
console.log('Hello');
setTimeout(function cb() {
console.log('I love you');
}, 0);
console.log('Suzanne,');
It lets you push a callback into the task queue without blocking (polluting) the call stack. The code below the setTimeout keeps executing, and the callback runs only once the event loop finds the call stack empty. In effect, it defers the timed-out code until the end (because the main script only returns at the end).
Notes
Callbacks from subsequent calls to setTimeout end up being queued. For example, we pass 3 callbacks to setTimeout with a 1s delay:
setTimeout(function timed1() {
console.log('in at least 1 second, or more');
}, 1000);
setTimeout(function timed2() {
console.log('in at least 1 second, or more');
}, 1000);
setTimeout(function timed3() {
console.log('in at least 1 second, or more');
}, 1000);
When the first call hands its timer to the WebAPI, the timer counts down there, off the stack, and after 1s the timed1 callback is pushed onto the callback queue, where it waits until the stack is empty. The other two setTimeout calls do the same almost immediately after, so all three timers run concurrently in the WebAPI and their callbacks land in the queue in order. Each callback then has to wait both for the stack to empty and for the callbacks queued before it to finish. So if the earlier callbacks (or any other code on the stack) are slow, timed3 can easily run well after the desired 1s.
So the delay passed to setTimeout is a lower bound: the callback can run much later than the second parameter suggests, but never earlier.
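A small sketch that makes the lower bound visible by keeping the stack busy:
console.time("zero delay timer");
setTimeout(() => console.timeEnd("zero delay timer"), 0); // logs roughly 500ms, not 0ms
const start = Date.now();
while (Date.now() - start < 500) {} // block the call stack for about 500ms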
A callback can be any function that another function calls, or, more specifically, an asynchronous callback.
function callback(i) {
console.log(i);
}
// synchronous call
[1, 2, 3, 4,].forEach(callback);
// Asynchronous
function asyncForEach(array, cb) {
  array.forEach(function (item) {
    setTimeout(cb, 0, item); // extra arguments to setTimeout are passed on to the callback
  });
}
// asynchronous call
asyncForEach([1, 2, 3, 4,], callback);
Using the standard forEach on an array calls the callback as many times as there are elements, but synchronously (i.e. within the current stack). So until the entire array has been looped over and every callback has executed, the call stack stays occupied and no other code can run.
However, we can use setTimeout to define an asynchronous forEach, where the only blocking work is handing each callback to the WebAPI through setTimeout. Once the timers hand the cb calls over to the callback queue, the call stack is free to process any other code, and each cb executes whenever the call stack is empty.
Render Queue
There is an additional piece we have not touched yet: the render queue. This is what allows the browser to repaint itself. It would like to do so at 60Hz (roughly every 16ms), but that is the ideal case; in reality it is also constrained by what you are doing in JavaScript. The browser can't render while there is a call on the stack: the render is almost like a callback, it has to wait until the stack is clear. However, the render queue is given a higher priority than the callback queue. So every ~16ms it queues a render, then waits until the stack is clear before it can actually paint.
Conclusion
Do not block the event loop. That means: avoid, as much as possible, slow code that occupies the stack when you could instead hand it off (through a WebAPI) to the callback queue, which lets the render queue jump onto the stack whenever it needs to, since it has higher precedence than the callback queue.
This is also why animation in JavaScript can feel sluggish if you are not careful about how you queue up the code.
An example of that is a scroll handler, which gets triggered a lot, possibly on every frame. So code like the following:
function animateSomething() {
  delay(); // stands in for some slow synchronous work
}
$(document).on('scroll', animateSomething);
Will get queued like crazy in the callback queue, and then the stack has to work through all of those callbacks. So you are not blocking the stack, but you are flooding the queue with scroll callbacks. A solution is to debounce (or throttle) that handler: keep collecting the events but only do the slow work every so often.
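A minimal debounce sketch (hand-rolled here; utility libraries such as lodash ship an equivalent):
function debounce(fn, wait) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId); // reset the timer on every new event
    timerId = setTimeout(() => fn.apply(this, args), wait); // run only once the events stop for `wait` ms
  };
}
$(document).on('scroll', debounce(animateSomething, 200)); // at most one heavy run per pause in scrolling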