Initial impressions on Rust

2025-03-18

Programming · Rust · Rustlang

6 minutes

Last year, I explored the possibility of adding a complementary systems programming language to our stack, and I mentioned that I was leaning toward Zig since it resonates with me and my biases. Well, last week I spent a bit of my time porting spindle to Rust (as spindle-rs). This post is me sharing some of my initial impressions.

The first thing is, I quite like it, which, for me, is a very important criterion. Mind you, as a piece of software, spindle-rs is not really that complicated, so I’m barely scratching the surface of what it’s going to be like writing the kind of system software Rust is built for. Still, from that experience, here are some of my impressions.

Tooling is pretty good. Toolchain installation is handled by rustup, which makes switching between stable, beta, and nightly easy. rust-analyzer, Rust’s LSP server, works out of the box and behaves the way you’d expect most of the time, albeit a tad slow at times. rustfmt is good, with decent defaults. And cargo, Rust’s package manager and build tool, is excellent. Coming from C and C++, and having experienced CMake, Meson, Ninja, and vcpkg, cargo, with its tight integration with the language, is a breath of fresh air, especially in its ease of use. As far as tooling is concerned, the overall development experience is top notch.
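
For reference, the day-to-day loop is just the standard rustup/cargo workflow; nothing exotic here:

# toolchains
rustup toolchain install nightly
rustup default stable
rustup override set nightly    # use nightly for the current project only

# everyday cargo workflow
cargo new spindle-rs
cargo build
cargo test
cargo fmt
cargo clippy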

As a language, well, it takes some getting used to. The biggest friction for me comes from “fighting the borrow checker” (as Rustaceans call it). No surprise there, as it’s what really makes Rust, Rust. My mental programming model, which translates pretty well between C, C++, Go, and Zig, doesn’t map that well onto Rust’s semantics. Not yet, at least. This is not a criticism, of course; programming in Rust just means doing it “the Rust way”. I’ll just have to learn it through experience.
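
To make that concrete, here’s a contrived sketch (not from spindle-rs) of the kind of thing my C/Go habits produce and the borrow checker rejects:

fn main() {
    let mut items = vec![1, 2, 3];

    // In C, C++, Go, or Zig, keeping a pointer into a collection while
    // appending to it is allowed (if not always safe). The borrow checker
    // rejects it outright:
    //
    //     let first = &items[0];
    //     items.push(4); // error[E0502]: cannot borrow `items` as mutable
    //                    // because it is also borrowed as immutable
    //     println!("{}", first);
    //
    // The "Rust way" here is to copy the value out (or finish the read)
    // before mutating.
    let first = items[0];
    items.push(4);
    println!("{first} {:?}", items);
}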

Syntax-wise, nothing really stood out to me; it feels similar to Zig. But if I don’t put too much thought into the flow structure, it’s easy to end up with deeply nested code due to how pattern matching combines with Result and Option. Something like:

pub fn run(&mut self) {
    ...
    thread::spawn(move || {
        ...
        loop {
            // First, see if already locked (could be us or somebody else).
            let (tx, rx): (Sender<DiffToken>, Receiver<DiffToken>) = channel();
            if let Err(_) = tx_main.send(ProtoCtrl::CheckLock(tx)) {
                continue;
            }

            let mut locked = false;
            match rx.recv() {
                Ok(v) => {
                    'single: loop {
                        // We are leader now.
                        if token.load(Ordering::Acquire) == v.token as u64 {
                            ...
                            break 'single;
                        }

                        // We're not leader now.
                        if v.diff > 0 {
                            ...
                            break 'single;
                        }

                        break 'single;
                    }
                }
                Err(_) => continue,
            }

            if locked {
                continue;
            }

            if initial {
                // Attempt first ever lock. The return commit timestamp will be our fencing
                // token. Only one node should be able to do this successfully.
                initial = false;
                let (tx, rx): (Sender<i128>, Receiver<i128>) = channel();
                if let Ok(_) = tx_main.send(ProtoCtrl::InitialLock(tx)) {
                    if let Ok(t) = rx.recv() {
                        if t > -1 {
                            token.store(t as u64, Ordering::Relaxed);
                        }
                    }
                }
            } else {
                // For the succeeding lock attempts.
                let (tx, rx): (Sender<Record>, Receiver<Record>) = channel();
                if let Ok(_) = tx_main.send(ProtoCtrl::CurrentToken(tx)) {
                    if let Ok(v) = rx.recv() {
                        if v.token < 0 {
                            continue;
                        }

                        // Attempt to grab the next lock. Multiple nodes could potentially
                        // do this successfully.
                        let mut update = false;
                        let mut token_up: i128 = 0;
                        let mut name = String::new();
                        write!(&mut name, "{}_{}", lock_name, v.token).unwrap();
                        let (tx, rx): (Sender<i128>, Receiver<i128>) = channel();
                        if let Ok(_) = tx_main.send(ProtoCtrl::NextLockInsert { name, tx }) {
                            if let Ok(t) = rx.recv() {
                                if t > 0 {
                                    update = true;
                                    token_up = t;
                                }
                            }
                        }

                        if update {
                            // We got the lock. Attempt to update the current token
                            // to this commit timestamp.
                            let (tx, rx): (Sender<i128>, Receiver<i128>) = channel();
                            if let Ok(_) = tx_main.send(ProtoCtrl::NextLockUpdate { token: token_up, tx }) {
                                if let Ok(t) = rx.recv() {
                                    if t > 0 {
                                        // Doesn't mean we're leader, yet.
                                        token.store(token_up as u64, Ordering::Relaxed);
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    });

    ...
}

In Zig, it’s easier to tame very deep nesting thanks to its support for labeled blocks: most blocks can be labeled, and you can do an early exit with break :label;. There is also support for labels in Rust but, as far as I’m aware, only on loops and block expressions.

For example, in Zig, you can do something like:

label: {
    const on = (msg.proto2 & 0x8000000000000000) >> 63;
    if (on == 0) break :label; // early return

    const lmin = ((msg.proto2 & 0x7FFFFFFF80000000) >> 31) * 1000;
    const lmax = ((msg.proto2 & 0x700000007FFFFFFF)) * 1000;

    @atomicStore(
        u64,
        &self.elex_tm_min,
        lmin,
        std.builtin.AtomicOrder.seq_cst,
    );

    ...
}
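
That said, Rust’s labeled blocks (stable since 1.65) can give you a similar shape. Here’s a rough, self-contained sketch loosely mirroring the Zig snippet above; Elex, proto2, and elex_tm_min are made-up names, not code from spindle-rs:

use std::sync::atomic::{AtomicU64, Ordering};

struct Elex {
    proto2: u64,            // hypothetical packed message field
    elex_tm_min: AtomicU64, // hypothetical atomic destination
}

impl Elex {
    fn apply(&self) {
        // Labeled block: `break 'decode;` exits early, like Zig's `break :label;`.
        'decode: {
            let on = (self.proto2 & 0x8000_0000_0000_0000) >> 63;
            if on == 0 {
                break 'decode; // early exit
            }

            let lmin = ((self.proto2 & 0x7FFF_FFFF_8000_0000) >> 31) * 1000;
            self.elex_tm_min.store(lmin, Ordering::SeqCst);

            // ...
        }
    }
}

fn main() {
    let e = Elex { proto2: 1u64 << 63, elex_tm_min: AtomicU64::new(0) };
    e.apply();
    println!("elex_tm_min = {}", e.elex_tm_min.load(Ordering::SeqCst));
}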

And then, there’s async Rust. The only experience I have with async programming (with async/await and its viral nature) was with C#, and quite frankly, I didn’t really warm up to the idea. I understand the need for a fast, runtime-assisted, concurrent programming model, but coming from Go, async Rust feels like a totally different language. At least in Go, concurrent code using goroutines and channels still looks and feels like normal Go code; mixing concurrent and synchronous Go code doesn’t feel “weird”. Not so with async Rust. When writing spindle-rs, I had to use the google-cloud-spanner crate, which is async-only. Since I didn’t want to write spindle-rs in async, I had to “bridge” the async crate with my sync code, and mixing async with sync Rust code feels “weird”. At the moment, my understanding of how Tokio’s runtime works (Tokio seems to be “the runtime” to use when doing async Rust) isn’t deep enough for me to know whether it’s even a good idea to mix the two. Now, with that said, this is, again, not a criticism of async Rust. I can even see myself using it when doing embedded programming.
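
For the curious, the “bridge” amounts to owning a Tokio runtime on the sync side and calling block_on whenever a result is needed. A minimal sketch of the idea (read_row and SyncClient are made-up stand-ins, not the actual spindle-rs or google-cloud-spanner code; assumes tokio with the rt-multi-thread feature in Cargo.toml):

use tokio::runtime::Runtime;

// Stand-in for an async-only crate API (the real code talks to Cloud Spanner).
async fn read_row(key: &str) -> Result<String, Box<dyn std::error::Error>> {
    Ok(format!("row for {key}"))
}

// Hypothetical sync wrapper: owns a Tokio runtime and blocks on futures.
struct SyncClient {
    rt: Runtime,
}

impl SyncClient {
    fn new() -> Self {
        Self {
            rt: Runtime::new().expect("failed to build Tokio runtime"),
        }
    }

    // The "bridge": block the calling thread until the async call completes.
    fn read_row_blocking(&self, key: &str) -> Result<String, Box<dyn std::error::Error>> {
        self.rt.block_on(read_row(key))
    }
}

fn main() {
    let client = SyncClient::new();
    println!("{}", client.read_row_blocking("locks/spindle").unwrap());
}

One caveat: block_on panics if called from within an async context, which is part of why I’m wary of mixing the two too casually.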

The only real complaint I have at the moment is the slow compile times. I know this is being actively worked on, so I won’t hold it against Rust too much for now. For all the criticism hurled at Go, it got compilation times right. Even Zig, despite being so young, has much better compilation times. But I’m sure Rust will get better in time.

So, going back to my choices: personally, I enjoy writing Zig more, so I will continue using it for personal projects, but I can’t, in good conscience, push it to the company; not until it reaches 1.0 with a relatively mature ecosystem of libraries. Then again, by that time we’ll probably have established our Rust usage anyway, so it won’t make sense to “replace” it once Zig becomes viable. So there you go. Rust it is. And I believe it’s objectively the better choice.