I bought Luca Palmieri's book on the recommendation of a colleague and wanted to share my own thoughts on it as a production Rust user. I offer my thoughts as I go through the book linearly. I intentionally don't go out of my way to contextualize my commentary as this is not intended to be a substitute for Luca's excellent book. Also, it saves me time writing this blog post.

IDEs

He recommends IntelliJ Rust as of March 2022. I think this was a sound recommendation up until recently, but rust-analyzer used via an editor like Visual Studio Code is significantly better these days. Each has cases where it will fall on its face with a project or macro, but rust-analyzer's improved performance and compatibility have made VS Code + rust-analyzer my primary IDE for several months now.

Development loop

I think the dev loop stuff is fine. I will say that a good IDE will obviate the need for some of the chaining Luca describes, like:

cargo watch -x check -x test -x run

IntelliJ Rust and VS Code + rust-analyzer are going to run check for you anyway, so skip the extra pass and chain just test and run. You can save a little more time by passing cargo-watch a command that builds the test suite and the target binary and then executes the pre-built binary.
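
Something like this (my sketch, assuming cargo-watch; the second variant is the pre-built-binary trick as I understand it):

cargo watch -x test -x run

# or build the test suite and the binary in one pass so the later
# steps reuse the cached artifacts:
cargo watch -x 'build --tests --bins' -x test -x run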

Linkers

lld and zld are fine, but they don't save that much time. If you really want a responsive workflow, mold on Linux is the best I've been able to muster.

Seriously, it isn't even close right now: https://old.reddit.com/r/rust/comments/un8efy/considering_mac_studio_for_rust_development/i8cdalq/

Fortunately, Rui has made significant progress on the macOS version of mold, expected to be part of the 2.0 release: https://github.com/rui314/mold/issues/548#issuecomment-1159617435
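
If you want to try mold today, the usual wiring is a couple of lines in .cargo/config.toml (a sketch: the target triple is illustrative, and it assumes clang and mold are installed):

[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]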

It's not worth working inside Docker on macOS for speed, FWIW. The poor I/O performance and VM overhead will probably eat up any gains you would've made.

Continuous integration

I don't put a lot of stock in code coverage as a primary metric or driver of code quality but YMMV. Some projects are more test quality/coverage sensitive than others. The work I do can usually be handled with some tests targeted at high-risk hot-spots, relying on code generation, macros, and good leverage of the type system to mitigate human error.

Of the recommendations he makes, I most strongly recommend GitLab and GitLab CI. I've used GitLab CI for a long time, and the diverse integrations combined with the ability to hitch up your own build machines have made it a great choice for my work.

Email newsletter project

The "as a ${USER}" verbiage is retraumatizing me but I'm an old programmer. It's good to boil down and outline your project goals and user needs.

I agree with the choice to use actix-web. It's performant, not too hard to use, and well-exercised. The middleware and integration points in actix-web are detailed enough that you can usually make it do just about anything you want without needing to patch the framework, which is a real concern with less mature or less widely used web frameworks.

Routing

I usually split my routes out into a separate module and function to keep main.rs tidy; it usually looks something like this:

// main.rs
use actix_web::{App, HttpServer};

    HttpServer::new(move || {
        App::new()
            // all route registration lives in router.rs
            .configure(router::app_routes)
    })
    // ...

// router.rs
use actix_web::web;

pub fn app_routes(cfg: &mut web::ServiceConfig) {
    cfg
        .service(web::resource("/").route(web::get().to(views::home::home)))
        .service(web::resource("/health_check").route(web::get().to(views::health_check::health_check)))
        .service(actix_files::Files::new("/static", "./static/"));
}

Here's where I declare some inconsistency on my part: I usually prefer to have centralized routes for SSR pages and colocated routes for API endpoints. Yeah, I know, I'm weird. Sorry.

Integration tests

We will opt for a fully black-box solution: we will launch our application at the beginning of each test and interact with it using an off-the-shelf HTTP client (e.g. reqwest).

Going to strongly disagree here. This isn't necessary in most cases. You likely do not need to test actix-web. actix-web already has more tests exercising its correctness than you can possibly imagine. So why do you need to black-box test it? Further, if your concern is an API client integrating with the API, use code generation, not tests, to ensure correctness! Generate your clients from a spec generated from your types! I recommend Swagger/OpenAPI or JSON Schema. Here's a nice library for doing this: https://github.com/juhaku/utoipa

When I write integration tests against my actix-web handlers, I split the API receiver function out from the "do the thing" function. The "do the thing" function just takes normal inputs and returns an ordinary Result. The API handler wrapping that function holds the actix-web jiggery-pokery and impl Responder, leaving the core logic much tidier to test. I usually make resource mounting / bootstrap helper functions as well. This will make your tests faster, nicer to write, and easier to maintain.
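
A minimal sketch of the split, with hypothetical names and a toy validation rule:

use actix_web::{web, HttpResponse, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
pub struct SignupForm {
    email: String,
}

// "Do the thing": plain inputs, ordinary Result, no actix types anywhere.
pub fn register_email(email: &str) -> Result<String, String> {
    if email.contains('@') {
        Ok(format!("registered {email}"))
    } else {
        Err("invalid email".to_string())
    }
}

// Thin API receiver: the actix-web jiggery-pokery lives here and only here.
pub async fn register(form: web::Form<SignupForm>) -> impl Responder {
    match register_email(&form.email) {
        Ok(msg) => HttpResponse::Ok().body(msg),
        Err(e) => HttpResponse::BadRequest().body(e),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Testing the core logic requires no HTTP server at all.
    #[test]
    fn rejects_bad_email() {
        assert!(register_email("not-an-email").is_err());
    }
}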

Forms

Just commenting to say I'm glad he's teaching people how to do this. The world needs more server-side rendered applications and fewer SPAs.

Database client library

Use Diesel. The initial grind to get comfy with it is worth it, I promise. If you get stuck, please email me. We lose much of the benefit of Rust's nice type system when we fail to deploy it for our persistent data. Diesel with a few escape-hatch SQL queries is better than 100% of your queries going unchecked. The type-provider-style approach SQLx takes isn't worth it either.

If you'd like to try async with Diesel, check out: https://github.com/weiznich/diesel_async

I've found spawn_blocking to be sufficient for integrating Diesel into my projects.
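
A rough sketch of the shape this takes, assuming diesel's r2d2 feature plus tokio and anyhow, with a hypothetical schema:

use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel::r2d2::{ConnectionManager, Pool};
use tokio::task;

diesel::table! {
    users (id) {
        id -> Integer,
        email -> Text,
    }
}

#[derive(Queryable)]
pub struct User {
    pub id: i32,
    pub email: String,
}

type DbPool = Pool<ConnectionManager<PgConnection>>;

// Move the blocking Diesel work onto the blocking thread pool so the
// async executor stays responsive.
async fn load_user(pool: DbPool, user_id: i32) -> anyhow::Result<User> {
    task::spawn_blocking(move || -> anyhow::Result<User> {
        let mut conn = pool.get()?;
        let user = users::table.find(user_id).first::<User>(&mut conn)?;
        Ok(user)
    })
    .await? // the outer ? handles a panicked or cancelled blocking task
}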

There's this table in the book on this topic:

Crate          Compile-time safety   Query interface   Async
tokio-postgres No                    SQL               Yes
sqlx           Yes                   SQL               Yes
diesel         Yes                   DSL               No

This table requires some qualification.

First, you can use Diesel in async projects, either with the diesel_async crate or by using spawn_blocking. Yes, async isn't officially supported, but the option does exist. Further, characterizing the query interface as a "DSL" isn't fair and makes people think they're going to get an ActiveRecord-looking thing. Nothing could be further from the truth: it's a type-safe SQL DSL. Take a look for yourself: https://docs.diesel.rs/diesel/query_dsl/trait.QueryDsl.html
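
A quick illustration of what the DSL actually reads like, with a hypothetical schema:

use diesel::pg::PgConnection;
use diesel::prelude::*;

diesel::table! {
    users (id) {
        id -> Integer,
        email -> Text,
        active -> Bool,
    }
}

// Reads like SQL, and every column reference and predicate is type-checked:
// SELECT email FROM users WHERE active ORDER BY email
fn active_emails(conn: &mut PgConnection) -> diesel::QueryResult<Vec<String>> {
    use self::users::dsl::*;

    users
        .filter(active.eq(true))
        .order(email.asc())
        .select(email)
        .load::<String>(conn)
}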

Docker for PostgreSQL

Let’s create a small bash script for it, scripts/init_db.sh, with a few knobs to customise Postgres’ default settings:

Please just use Docker Compose. The whole point of it is to wrap things like this.
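
Something like this covers the same knobs (a minimal sketch; the image tag, credentials, and names are illustrative):

# docker-compose.yml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: password
      POSTGRES_DB: newsletter
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Then docker compose up -d replaces the whole script.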

Database migrations

I recommend you use Diesel's migration kit even if you aren't using Diesel. Linear, forward-only migrations, no down.sql.
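
The diesel_cli flow looks roughly like this (the migration name is illustrative):

cargo install diesel_cli --no-default-features --features postgres
diesel setup                                # creates the database if needed
diesel migration generate add_subscriptions # writes up.sql (and a down.sql you can ignore)
diesel migration run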

Configuration files

Mostly, don't. Rely primarily on environment variables, and secondarily on dotenv for convenience.

Cf. https://12factor.net/ and https://crates.io/crates/dotenv
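
In practice that can be as small as this (a sketch using the dotenv crate; the variable name is illustrative):

use std::env;

fn database_url() -> String {
    // .env is a local-dev convenience only; real config comes from the environment.
    dotenv::dotenv().ok();
    env::var("DATABASE_URL").expect("DATABASE_URL must be set")
}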

Data extractors

Get comfy with these, they're very handy. I use them to enforce auth in my applications. Resource acquisition is policy enforcement :)
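
For example, a hypothetical extractor that makes "this handler requires an authenticated user" part of the handler's signature (the header check is a toy stand-in for real session/token validation):

use std::future::{ready, Ready};

use actix_web::{dev::Payload, error::ErrorUnauthorized, FromRequest, HttpRequest};

pub struct AuthedUser {
    pub id: i32,
}

impl FromRequest for AuthedUser {
    type Error = actix_web::Error;
    type Future = Ready<Result<Self, Self::Error>>;

    fn from_request(req: &HttpRequest, _payload: &mut Payload) -> Self::Future {
        // Toy check; a real implementation validates a session or token.
        let authed = req.headers().get("Authorization").is_some();
        ready(if authed {
            Ok(AuthedUser { id: 42 })
        } else {
            Err(ErrorUnauthorized("missing credentials"))
        })
    }
}

// Any handler that takes an AuthedUser parameter can't run without passing
// this check: async fn profile(user: AuthedUser) -> impl Responder { ... }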

Test isolation

I have a resource mounting helper that generates random suffixes for database names and runs the migrations against the created database. Another perk of Diesel's migration kit: it has a pure Rust entrypoint you can invoke.
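
A sketch of what that helper looks like with diesel_migrations' embedded entrypoint (URLs and names are illustrative):

use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};
use uuid::Uuid;

const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

fn fresh_test_db(server_url: &str) -> PgConnection {
    // e.g. server_url = "postgres://postgres:password@localhost"
    let db_name = format!("test_{}", Uuid::new_v4().simple());

    // Create a uniquely named database...
    let mut admin = PgConnection::establish(server_url).expect("connect to server");
    diesel::sql_query(format!(r#"CREATE DATABASE "{db_name}""#))
        .execute(&mut admin)
        .expect("create test database");

    // ...then run the embedded migrations against it.
    let mut conn = PgConnection::establish(&format!("{server_url}/{db_name}"))
        .expect("connect to test database");
    conn.run_pending_migrations(MIGRATIONS).expect("run migrations");
    conn
}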

Logging

Use tracing with whatever subscribers you need. If you want to recapitulate the functionality of log + env_logger use tracing and tracing_log: https://docs.rs/tracing-log/latest/tracing_log/
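
A sketch of an env_logger-style setup on top of tracing (assumes tracing-subscriber with its env-filter feature):

use tracing_log::LogTracer;
use tracing_subscriber::{fmt, EnvFilter};

fn init_telemetry() {
    // Forward `log` records emitted by dependencies into `tracing`.
    LogTracer::init().expect("failed to set logger");

    // Roughly env_logger-equivalent: RUST_LOG filtering, human-readable output.
    let subscriber = fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .finish();
    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to set subscriber");
}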

Facade pattern

Use tracing and you get the pattern and integrations largely for free.

Open Telemetry: https://lib.rs/crates/tracing-opentelemetry
Elasticsearch: https://lib.rs/crates/tracing-elastic-apm
Error reporting: https://lib.rs/crates/tracing-error
Sentry: https://lib.rs/crates/sentry-tracing
Flamegraphs for span timings: https://lib.rs/crates/tracing-flame
journald: https://lib.rs/crates/tracing-journald
Grafana Loki: https://lib.rs/crates/tracing-loki

You get the idea.

Logs must be easy to correlate

Luca's recommendation is the same as mine, but the implementation isn't. Yes, generate a UUID for each request, but do it in middleware, and make wrapper macros for tracing that automatically request the UUID or expect it to be in scope, injecting it as a standard parameter. Another problem with plain log macros is that you have to manually interpolate things that are data when they should really be key-value pairs.
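
The spirit of it, minus the middleware plumbing (a sketch; in real code the span would wrap the whole request inside middleware):

use uuid::Uuid;

fn handle_request() {
    let request_id = Uuid::new_v4();

    // Everything logged while the span guard is alive automatically
    // carries request_id as a structured field.
    let span = tracing::info_span!("http_request", %request_id);
    let _guard = span.enter();

    tracing::info!("handling request"); // includes request_id
}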

tracing crate

Oh, now we move on to the good stuff, I guess. Not sure why we were subjected to the foregoing.

He's still interpolating the UUIDs into the event string for some reason. Part of the point of the tracing macros' key-value pair support is that subscribers can pull those values out as structured data if their data sink supports it. They can use them for filtering too.
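
Concretely, the difference looks like this:

// Interpolated into the message: subscribers only ever see an opaque string.
tracing::info!("failed to create subscriber, request_id {}", request_id);

// As a key-value field: subscribers can extract it as structured data and filter on it.
tracing::info!(%request_id, "failed to create subscriber");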

We’d like to put together a subscriber that has feature-parity with the good old env_logger.

What? Why? Use tracing_log. What are you doing.

Protecting your secrets

secrecy is sick; I love the Zeroize trait. This was a discovery for me. In the past I've had to add manual annotations telling serde to skip fields; this is much better and more explicit.
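
For example (a sketch against secrecy's 0.8-era API; the struct and connection string are illustrative):

use secrecy::{ExposeSecret, Secret};

pub struct DatabaseSettings {
    pub username: String,
    // Can't be Debug-printed or serialized by accident; zeroized on drop.
    pub password: Secret<String>,
}

impl DatabaseSettings {
    pub fn connection_string(&self) -> Secret<String> {
        Secret::new(format!(
            "postgres://{}:{}@localhost/newsletter",
            self.username,
            // Reading the value requires an explicit, greppable call.
            self.password.expose_secret()
        ))
    }
}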

Database transactions

Diesel gives you a nice way to invoke transactions, and it even handles nested transactions by lowering inner transactions to savepoints that can be rolled back to. It's been really helpful in my projects.
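
For example (raw SQL inside for brevity; a real project would use the DSL):

use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel::result::Error;

fn transfer(conn: &mut PgConnection) -> Result<(), Error> {
    // BEGIN/COMMIT around the closure, ROLLBACK on Err.
    // A nested conn.transaction(...) call inside would become a SAVEPOINT.
    conn.transaction(|conn| {
        diesel::sql_query("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
            .execute(conn)?;
        diesel::sql_query("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
            .execute(conn)?;
        Ok(())
    })
}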

Error handling

Here's a Reddit post of a handy pattern I came up with for lifting errors into Actix compatibility: https://old.reddit.com/r/rust/comments/ozc0m8/an_actixanyhow_compatible_error_helper_i_found/

Error types

I second the recommendation of thiserror and anyhow. I will say that while "thiserror for libraries, anyhow for applications" is a good shorthand, if you find yourself wanting to "handle" your own errors, consider cleaning up your anyhow-isms with explicit thiserror types. Luca discusses this issue well and I agree with his characterization.
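
That cleanup usually amounts to graduating from anyhow::Error to an enum like this hypothetical one:

use thiserror::Error;

#[derive(Debug, Error)]
pub enum SignupError {
    // A case the caller actually wants to match on and handle.
    #[error("email is already registered")]
    DuplicateEmail,
    // An opaque case we only report, with the source preserved.
    #[error("database error")]
    Database(#[from] diesel::result::Error),
}

Callers can now match on SignupError::DuplicateEmail, where an anyhow::Error would force string inspection.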

Hashing passwords

I have no idea why Luca is steelmanning the use of SHA3-256 for passwords. Do not do that. You need computationally expensive hashing functions for passwords unless you can somehow force users not to use passwords susceptible to a dictionary attack. The computational analysis he offers here is irrelevant because it doesn't account for how large a percentage of your users will be using passwords included in most dictionary attacks. What's weird is that he then course-corrects and explains that the dictionary attack will be very effective, without clearly explaining why the "boiling the ocean" analysis was irrelevant.

This is feeling like an anti-pattern in the book where Luca offers a strawman example or advice and then contradicts it later. I don't think this is clear writing or respectful of the reader's time.

I second Luca's recommendation of argon2 with fallbacks to bcrypt and scrypt depending on your needs.
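
With the argon2 crate the happy path looks roughly like this (default Argon2id parameters; tune them for your hardware):

use argon2::password_hash::{rand_core::OsRng, SaltString};
use argon2::{Argon2, PasswordHash, PasswordHasher, PasswordVerifier};

fn hash_password(password: &str) -> Result<String, argon2::password_hash::Error> {
    // A fresh random salt per password; the salt is stored inside the hash string.
    let salt = SaltString::generate(&mut OsRng);
    Ok(Argon2::default()
        .hash_password(password.as_bytes(), &salt)?
        .to_string())
}

fn verify_password(password: &str, stored_hash: &str) -> bool {
    PasswordHash::new(stored_hash)
        .and_then(|parsed| Argon2::default().verify_password(password.as_bytes(), &parsed))
        .is_ok()
}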

Session storage

You don't have to store your sessions in a database. You can use encrypted cookies if you take certain precautions. That said, just use your primary data store (preferably PostgreSQL) for your database-backed sessions.

Fault tolerance

The manual transaction manipulation in sqlx is making me wince. I'm glad he's covering this topic and explicitly addressing transaction isolation, this is an important issue that leads to a lot of long-tail errors in web applications.

I made a helper function for Diesel that automatically re-runs your transactions according to a retry strategy, to make using higher isolation levels less troublesome. I should open source it at some point.
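
A rough sketch of the shape, against diesel 2.x-style APIs (this is a reconstruction, not the actual helper):

use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel::result::{DatabaseErrorKind, Error};

// Re-run the closure in a SERIALIZABLE transaction whenever Postgres
// reports a serialization failure, up to max_retries attempts.
fn serializable_with_retries<T>(
    conn: &mut PgConnection,
    max_retries: usize,
    f: impl Fn(&mut PgConnection) -> Result<T, Error>,
) -> Result<T, Error> {
    let mut attempts = 0;
    loop {
        match conn.build_transaction().serializable().run(|conn| f(conn)) {
            Err(Error::DatabaseError(DatabaseErrorKind::SerializationFailure, _))
                if attempts < max_retries =>
            {
                attempts += 1;
            }
            result => return result,
        }
    }
}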

Summary

Excellent book; I strongly recommend buying it if you'd like to deploy a web API or web application written in Rust. Tons of practical information, comprehensive enough that it even includes modern application deployment strategies. The foregoing blog post may make it seem like I didn't like the book or broadly disagreed with Luca, but bear in mind that the above covers only the points where I disagreed with him, from what was a skim read of 511 pages at the time of writing. A good amount of material, for sure.

The couple of pointers I got from this book, even as a veteran production Rust user, were worth the ~$40 I paid for it.