Let's talk about build.rs in Rust projects: specifically, how to keep it dead simple, dependency-light, and maintainable, with real-world examples from some solid open source projects. If you've ever groaned at the sight of a sprawling CMakeLists.txt or the arcane mysteries of autotools, you're in the right place. For our projects we pretty much have one golden rule: the developer experience should be as close to git clone + cargo run as possible when you're starting on one of them. Our developers shouldn't need to install any additional dependencies (like CMake, autotools, or worse), and they definitely shouldn't have to drop binary libs into arbitrary directories.
Why keep build.rs simple?
- Fewer dependencies: Because build.rs runs early in the build process, any dependencies that have to be compiled before the build.rs file can even be invoked are mostly just serial overhead.
- Predictable builds: No surprises from external tools or system packages and, more importantly, no setup docs that go out of date.
- Easier onboarding: New contributors don’t need to learn or install CMake, autotools, or chase down system packages.
- Reproducibility: Vendoring C/C++ code means your build works everywhere, every time.
Only use cc and bindgen
Seriously, that's usually all you need. The cc crate compiles C/C++ code, and bindgen generates Rust FFI bindings. No need for pkg-config, no need for CMake, no need for autotools. Just point cc at your sources, maybe run bindgen if you need bindings, and you're done.
Avoid pkg-config unless you absolutely must. It pulls in system dependencies and can break reproducibility, and most importantly it's not available in regular Windows setups. If you really need it, put it behind a feature flag so users can opt in; by default, prefer either building from source or shipping pre-built binaries that work for a wide set of target systems.
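If you do end up offering that escape hatch, here's a minimal sketch of what the opt-in could look like in build.rs, assuming a hypothetical system feature, a library named foo, and the pkg-config crate declared as an optional build dependency tied to that feature:
// Hypothetical opt-in path: only touch pkg-config when the user explicitly
// asks for a system library via a (made-up) "system" feature.
#[cfg(feature = "system")]
fn link_system_foo() {
    // probe() emits the cargo:rustc-link-* directives for us on success.
    pkg_config::Config::new()
        .atleast_version("1.2")
        .probe("foo")
        .expect("system libfoo not found; use the default vendored build instead");
}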
The vendored and bindgen Features
A best practice is to offer a vendored feature (enabled by default) that builds the C/C++ code you ship in your repo (often as a git submodule). If someone wants to link against a system library instead, they can disable the feature.
Likewise, a bindgen feature lets you regenerate bindings if you want, but by default you should check in the generated Rust bindings. This way, users don't need to install LLVM or deal with bindgen unless they're hacking on the FFI. One thing to note: you'll most likely need differently generated bindgen files for different platforms. Some libraries won't expose their platform-specific APIs on all platforms (for good reason), and very often you end up with mismatched types (enums are notorious: on some platforms they're represented as signed ints, on others as unsigned ints, etc.).
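One way to wire that up is to keep one pre-generated file per platform and select it in lib.rs; a rough sketch, assuming the checked-in files live under src/bindings/ (the paths and file names here are made up):
// lib.rs: pick the checked-in, pre-generated bindings for the current target
// instead of running bindgen at build time.
#[cfg(target_os = "linux")]
#[path = "bindings/linux.rs"]
mod bindings;

#[cfg(target_os = "windows")]
#[path = "bindings/windows.rs"]
mod bindings;

#[cfg(target_os = "macos")]
#[path = "bindings/macos.rs"]
mod bindings;

pub use bindings::*;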
Example: The anatomy of a minimal build.rs
Let’s see how the best projects do it:
1. Add your sources
Most projects we've migrated away from CMake have a helper function, often called add_sources, that collects the C/C++ files to compile. It comes in one of two flavors: sometimes we walk a directory, other times we keep an explicit, flat list of files. It depends on preference; I prefer the flat file list since it avoids filesystem operations to discover something that's effectively static, at the cost of a slight maintainability trade-off.
fn add_sources(cfg: &mut cc::Build) {
cfg.file("src/foo.c")
.file("src/bar.c");
// Or, recursively walk a directory if you have lots of files
}
2. Set up the build
Here's the basic pattern you'll see everywhere; we'll set up some defines (even platform-specific ones) and includes. Occasionally we'll need to set some compiler-specific flags (check out functions like is_like_msvc and company for this; see the small sketch after the example below).
fn main() {
let mut build = cc::Build::new();
add_sources(&mut build);
build.include("include");
build.define("SOME_DEFINE", Some("1"));
build.compile("libfoo.a");
}
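For the compiler-specific case, here's a minimal sketch of what that branching tends to look like (the flag values are just placeholders, not something your project necessarily needs):
fn set_compiler_flags(build: &mut cc::Build) {
    // get_compiler() tells us what we're actually compiling with, so we can
    // pick MSVC-style or GCC/Clang-style flags accordingly.
    let compiler = build.get_compiler();
    if compiler.is_like_msvc() {
        build.flag("/W3"); // placeholder MSVC flag
    } else {
        build.flag("-Wall"); // placeholder GCC/Clang flag
    }
}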
3. Feature flags
Cargo features let you control whether you build vendored code or use system libraries, and whether you run bindgen. These feature flags can just be passed to cargo like normal (cargo build -F bindgen) so you can run them as a one-off process.
For our projects, we'll typically build with vendored source code, so it's just a clean and easy batteries-included project to get up and going, at the cost of some compile time.
#[cfg(feature = "vendored")]
fn main() {
// Build and link the vendored C code
}
#[cfg(not(feature = "vendored"))]
fn main() {
// Link to system library (if you must)
}
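If you'd rather keep a single main, the same split can be expressed at runtime: Cargo exposes every enabled feature to the build script as a CARGO_FEATURE_* environment variable. A minimal sketch (the library name foo is just a placeholder):
fn main() {
    if std::env::var_os("CARGO_FEATURE_VENDORED").is_some() {
        // Build and link the vendored C code with cc, as above.
    } else {
        // Link against a system library instead, e.g.:
        // println!("cargo:rustc-link-lib=foo");
    }
}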
If you’re generating bindings, also do it behind a feature flag:
#[cfg(feature = "bindgen")]
fn generate_bindings() {
let bindings = bindgen::Builder::default()
.header("wrapper.h")
.generate()
.expect("Unable to generate bindings");
bindings.write_to_file("src/bindings.rs").expect("Couldn't write bindings!");
}
But make sure to check in the generated bindings.rs so users don't need LLVM or bindgen unless they're actively working on the FFI. This makes the crate easy to use for all other developers, and it puts the burden on the people doing the work rather than on the entire team.
4. Avoiding external build systems
Don't call out to CMake, autotools, or anything else. You're not writing a new build system; you're just compiling a few files. If you need to mimic a CMake setup, translate the logic into Rust code with cc and build.define, etc. This is way easier to reason about and debug. This is logic in the wrapped project that, thankfully, in our experience doesn't change often, if at all. Occasionally you'll see folks add a few files here and there, but all in all, nobody likes tinkering with build systems, so it's a pretty reasonable bet that it's often left alone. Once you set up your build.rs, you'll rarely touch it again, unless you update the vendored C/C++ code or tweak a build flag.
If you're porting a CMake-based project, just translate the high-level logic to Rust: add the sources, set the defines, and set up the include directories. You usually don't need to port the entire CMake setup, as it's often filled with tons of minutiae that may not apply. No need to bring in the whole CMake machinery.
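As a rough illustration, a CMake option plus a platform check usually collapses into a handful of lines like this (the define names are hypothetical, and build scripts should read CARGO_CFG_TARGET_OS rather than use cfg! so cross-compilation still works):
// Roughly what an option(FOO_ENABLE_BAR ...) plus an if(WIN32) block in the
// original CMakeLists.txt might translate to (names are made up).
fn configure(build: &mut cc::Build) {
    build.define("FOO_ENABLE_BAR", Some("1"));
    if std::env::var("CARGO_CFG_TARGET_OS").as_deref() == Ok("windows") {
        build.define("FOO_USE_WIN32_THREADS", None);
    } else {
        build.define("FOO_USE_PTHREADS", None);
    }
}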
5. Use git submodules
Most of these projects just add the C/C++ library as a git submodule. Cargo will check out these submodules by default, so they're always available for a build, and it keeps the code available and versioned. You can just point cc at the submodule directory in your build.rs file.
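One small quality-of-life addition: a guard in build.rs that catches an uninitialized submodule early, so contributors get a helpful hint instead of a wall of C compiler errors. A sketch, assuming the submodule lives at vendor/foo (a made-up path):
use std::path::Path;

fn check_submodule() {
    // An initialized submodule contains a .git file (or directory); if it's
    // missing, fail with a useful message instead of obscure compile errors.
    if !Path::new("vendor/foo/.git").exists() {
        panic!(
            "the vendor/foo submodule is missing; \
             run `git submodule update --init --recursive` and rebuild"
        );
    }
}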
Real-world examples
- physx-sys: Uses cc to compile vendored C++ sources, lists files with add_sources, defines features, and never touches CMake.
- freetype-sys: Similar pattern, with minimal dependencies, feature flags for vendored/system, and simple source listing.
- zeromq-src-rs: Again, just cc, no external build tools, and a clean, readable build script.
- spirv-tools-rs: Depends only on cc.
- breakpad-sys: Depends only on cc.
- metis-sys: Depends only on bindgen and cc.
TL;DR
- Keep build.rs minimal: Only use cc and bindgen.
- Feature flags: For vendored/system and bindgen/no-bindgen.
- No external build systems: No CMake, no autotools, no pkg-config if you can help it.
- Vendor C/C++ code via git submodules.
- Check in generated bindings.
- Just compile a few files and link them.
If you want to keep your Rust project as simple as cargo build or cargo run, this is the way. Life's too short to debug CMake errors on a Friday afternoon. Keep it simple, keep it Rusty!
Final pro-tip
Do enable the cc crate's parallel feature. It used to drag in rayon, which had to be compiled serially at the very beginning of the build process; over the years, though, it has been switched over to jobserver, which is a much lighter dependency! Also note that due to the way Cargo's feature unification works, if one crate in your graph enables this, most likely all of them will have it enabled anyway!