on Rust concurrency
On one hand, futures in Rust are exceedingly small and fast, thanks to their cooperatively scheduled, stackless design. But unlike other languages with userspace concurrency, Rust tries to offer this abstraction while also promising the programmer total low-level control.
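To make that concrete, here is a rough sketch using only the standard library (no executor needed, since we never poll these futures). A Rust future is just a compiler-generated state machine, so its size is roughly the data it must keep alive across `.await` points; the variable names here are my own:

```rust
use std::future::ready;
use std::mem::size_of_val;

fn main() {
    // Holds nothing across an await point: the state machine stays tiny.
    let small = async {
        ready(1u8).await;
    };

    // Holds a 4 KiB buffer across an await point, so the future grows to match.
    let big = async {
        let buf = [0u8; 4096];
        ready(()).await;
        // `buf` is used after the await, so it must be stored inside the future.
        println!("{}", buf.len());
    };

    println!("small future: {} bytes", size_of_val(&small));
    println!("big future:   {} bytes", size_of_val(&big));
}
```

There is no per-task stack anywhere in this picture; the entire "stack" of each task is the struct the compiler generates for it.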
There’s a fundamental tension between the two, and the poor async Rust programmer is perpetually caught in the middle, torn between the language’s design goals and the massively-concurrent world they’re trying to build. Rust attempts to statically verify the lifetime of every object and reference in your program, all at compile time. Futures promise the opposite: that we can break code and the data it references into thousands of little pieces, runnable at any time, on any thread, based on conditions we can only know once we’ve started! A future that reads data from a client should only run when that client’s socket has data to read, and no lifetime annotation will tell us when that might be.
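Here is a minimal sketch of that tension. `spawn_static` is a hypothetical stand-in for an executor's spawn function; like real ones (tokio's `spawn`, for instance), it demands a `'static`, `Send` future, because the runtime may run it at any time, on any thread, long after the caller's stack frame is gone:

```rust
use std::future::Future;

// Stand-in for an executor's spawn: the 'static bound is the whole point.
fn spawn_static<F: Future<Output = ()> + Send + 'static>(_fut: F) {}

fn main() {
    let mut buf: Vec<u8> = Vec::new();

    // Rejected by the borrow checker: this future borrows `buf`, and no
    // lifetime annotation can tell the executor when that borrow ends.
    // spawn_static(async { buf.push(0); });

    // The usual escape hatch: move ownership into the future instead.
    spawn_static(async move { buf.push(0); });
}
```

The `async move` version compiles only because the future now owns the buffer outright; the borrowed version is exactly the situation described above, where the compiler has no way to prove the borrow outlives every possible poll.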
We are not using Rust directly because of our domain, but it is still instructive to study the limitations and workarounds found in other programming languages.