Rust What Is The Target Triple For Macos

The llvm-target field specifies the target triple that is passed to LLVM. Target triples are a naming convention that defines the CPU architecture (e.g., x86_64 or arm), the vendor (e.g., apple or unknown), the operating system (e.g., windows or linux), and the ABI (e.g., gnu or msvc). For example, the target triple for 64-bit Linux is x86_64-unknown-linux-gnu, and the triple for 64-bit macOS is x86_64-apple-darwin.

The cycle of development we’re most familiar with is: write code, compile your code, then run this code on the same machine you were writing it on. On most desktop OSes, you pick up a compiler by downloading one from your package manager. Xcode and Visual Studio are toolchains (actually IDEs) that leverage being platform-specific, each including tools tailored around the platform your code will run on and heavily showcasing the parent OS’s design language.

Yet you can also write code that runs on platforms you aren’t simultaneously coding on. Every modern computer architecture supports a C compiler you can download and run on your PC, usually a binary + utils for a new gcc or llvm backend. In practice, using these tools means setting several non-obvious environment variables like CC and searching the internet for magic command line arguments (it takes a lot of work to convince a Makefile not to default to running gcc). Installing a compiler for another machine is easy, but getting a usable result takes trial and error.

If you’ve picked up Rust and are learning systems programming, you might ask: Does Rust, a language whose design addresses C’s inadequacies for developing secure software, also address its shortcomings in generating code on other platforms?

Let’s start with the tiered platform support system Rust maintains to track which platforms it supports and how full that support is (from “it actually might not build at all” to “we test it each release”). On its own, this is a useful reference of the target identifiers of popular consumer OSes, embedded platforms, and some experimental ones (like Redox!). The majority of these different platforms don’t actually support running rustc or cargo from the command line though.

Rust makes up for this by advertising a strong cross-compilation story. Quoting from the announcement post for rustup target:

In practice, there are a lot of ducks you have to get in a row to make [cross-compilation] work: the appropriate Rust standard library, a cross-compiling C toolchain including linker, headers and binaries for C libraries, and so on. This typically involves pouring over various blog posts and package installers to get everything “just so”. And the exact set of tools can be different for every pair of host and target platforms.

The Rust community has been hard at work toward the goal of “push-button cross-compilation”. We want to provide a complete setup for a given host/target pair with the run of a single command.

This is an excellent goal given the infrastructure and design challenges ahead. And I wanted to learn more about this part of the article:

Cross-compilation is an imposing term for a common kind of desire:

  • You want to write, test and build code on your Mac, but deploy it to your Linux server.

This is exactly the scenario I’m in! But… understandably, the article doesn’t actually include an example of how to do this, because cross-compiling for another OS requires making several assumptions about the target platform that may not apply to everyone. Here is a recap of how I made it work for my project.

In my case, the need to build binaries for Linux came up while working on my project edit-text, a collaborative text editor for the web written in Rust. I regularly test changes out in a sandbox environment since you can’t rely on testing code locally to catch behavior that might only appear in production. Yet the issue I kept running into was how long it was taking to deploy to my $10 DigitalOcean server. I spent a long time rereading the same compiler logs before it actually dawned on me—I was compiling on my web server and not my laptop. And that’s really slow.

If you have a git hook that takes new source code pushed via git and loads it into a Docker container, deploying via git just sends up your source directory and leaves compilation to the server. On each new deploy, your server has to rebuild from your Dockerfile from scratch, and unless you configure it to support caching, this throws away the benefit of quickly iterating on your code. If you want faster builds by having cargo incrementally cache compiled files between builds, you’ll find it’s complex to get right in your Dockerfile configuration but entirely natural to manage in your local development environment.

The approach I have the most experience with is to take the compilation environment and just run it locally on my machine. With Docker, we have an easy way to run Linux environments (even on a Mac) and to pin them to the same development environment I have on my server. Since Docker on my machine will be running Linux in a hypervisor, local performance should beat what I can do on my server even with the overhead of not being the host OS.

Did you know Rust has a first-party story for cross-compiling for Linux using Docker? The Rust Docker image rustlang/rust:nightly can help you generate binaries for the Linux target, compiled with nightly Rust, and can be invoked on demand from the command line using docker run. I developed this set of command line arguments to get cross-compilation with caching working:
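
A sketch of that invocation; the volume names are placeholders, and the /usr/local/cargo paths assume the cargo home used by the official Rust images:

    # run a nightly Rust container against the project directory, caching the
    # cargo registry and git checkouts in named volumes between runs
    docker run --rm \
        -v "$(pwd)":/usr/src/edit-text \
        -v cargo-registry:/usr/local/cargo/registry \
        -v cargo-git:/usr/local/cargo/git \
        -w /usr/src/edit-text \
        rustlang/rust:nightly \
        cargo build --release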

The binaries this produced, amazingly, worked when I copied them to and ran them on my Linux server. Compiling locally was marginally faster. But there were drawbacks to this approach for cross-compilation:

  • I had to manage and run Docker locally, which on macOS means keeping a Docker daemon running in the background.
  • I had to cache the cargo and rust directories individually; managed separately from rustup on my machine, they seemingly never got garbage collected. I accrued huge directories of cached files just for compiling for Linux.
  • Intermediate artifacts from successive builds seemed less likely to be cached between builds, meaning builds took longer than they should have on my machine.

I like that Docker gives a reproducible environment to build in—building Debian binaries on the Debian kernel makes things easy—but Rust’s cross-compilation support might allow me to manage all the compilation artifacts that make modern Rust compile times tolerable.

“rustup target add”

So far I’ve only mentioned the Rust compiler’s support for cross-compiling. There are actually a handful of components you need to make cross-compiling work:

  1. A compiler that supports your target
  2. Library headers to link your program against (if any)
  3. Shared library files to link against (if any)
  4. A “linker” for the target platform

Let’s start with compiler support. Passing --target when running cargo build changes the assembly the compiler emits and bundles it into object files in the format that OS supports. But first we have to install support for the new target so that Cargo can use it. This is done with the command rustup target add. Installing support for another “target” is something you do once on your machine, and from then on cargo can build for it via the --target argument.

To install a Linux target on my Mac, I ran:
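
In this case that’s a single command, using the musl triple described below:

    rustup target add x86_64-unknown-linux-musl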

This installs the new compilation target based on its target triplet, which means:

  • We’re compiling for the x86-64 processor architecture (AMD64)
  • from an unknown (generic) vendor, targeting the Linux OS
  • compiled with the musl toolchain.

A note on musl: the alternative to musl is GNU, as in GNU libc, which almost every Linux environment has installed anyway. So why choose musl? musl is designed to be statically linked into a binary rather than dynamically linked; this means I could compile a single binary to deploy to any server without any requirements as to what libraries were installed on that OS. This might even mean we can skip steps 2) and 3) above. And I could go back to GNU if it didn’t work out.

Finally, we need a program that links our compiled objects together. On macOS, you can use brew to download a community-provided binary for Linux + musl cross-compilation. Just run this to install the toolchain including the command “x86_64-linux-musl-gcc”:
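
The formula I used came from a community tap; assuming the FiloSottile musl-cross tap, it looks like this:

    # installs a cross gcc targeting x86_64-linux-musl, including x86_64-linux-musl-gcc
    brew install FiloSottile/musl-cross/musl-cross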

At this point we need to tell Rust about the linker. The official way to do this is to add a new file named .cargo/config in the root of your project and set its content to something similar:
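
A minimal version of that config, assuming the musl-cross linker installed above:

    [target.x86_64-unknown-linux-musl]
    linker = "x86_64-linux-musl-gcc"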

This should instruct Rust, whenever the target is set to --target=x86_64-unknown-linux-musl, to use the executable “x86_64-linux-musl-gcc” to link the compiled objects. But it seems to be the case that if you have any C code compiled by a Rust build script, you also have to set environment variables like TARGET_CC to get it working. So when my code started throwing linking errors, I just ran the following in my shell:
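
Roughly this; TARGET_CC and TARGET_AR are read by the cc crate that most build scripts use, and whether you need both depends on the crate:

    # tell build scripts that compile C (via the cc crate) to use the musl cross compiler
    export TARGET_CC=x86_64-linux-musl-gcc
    export TARGET_AR=x86_64-linux-musl-ar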

Thankfully, this made the compilation steps with linker errors work consistently.

Libraries and Linking

musl doesn’t support shared libraries, that is, libraries that are independently installed and versioned on your system via a package manager. Shared libraries are ones your program links to at runtime (once the program starts), rather than literally embedding them in the binary at compile time (static libraries).

Sometimes you can work around the constraint of requiring static libraries without leaving the cargo ecosystem: some crates support a “bundled” feature, as in libsqlite3-sys, which compiles a static library during your build step and links it into your project. For example, the SQLite driver I was relying on had no problem being compiled with musl once I enabled the “bundled” feature; I didn’t have to apt-get install libsqlite3 on the remote platform, nor did I have to find headers that matched it. An app with only this requirement would be a solid use case for deploying binaries compiled with musl.

If your project depends on openssl, though, you’ll hit an error midway through cargo build.

No need to check the exit code, this clearly isn’t building correctly. The error says “Could not find directory of OpenSSL installation”. This doesn’t mean I didn’t have OpenSSL installed on my computer (I did); it means it couldn’t find the headers and source code to compile against. Compiling a static binary with musl is now much more complicated if I need to download and compile an arbitrary OpenSSL dependency.

Why do I need OpenSSL as a dependency to begin with? One of the libraries I depend on in edit-text’s software stack, reqwest, relies on native-tls to support encryption for downloading data over HTTPS. reqwest is a programmatic HTTP client for Rust, and it uses native-tls to link against the OS’s native SSL implementation and expose a platform-agnostic interface to it in order to support HTTPS. I can imagine a future reqwest feature that substitutes a rustls backend for native-tls, allowing me to compile all my crypto code without needing to touch gcc. But for now, since I don’t want the heavy lift of compiling OpenSSL myself, dynamic linking looks like the only way forward.

Debian Packaging

New plan: compile against the GNU toolchain and use dynamic linking. If we don’t want to cross-compile libraries ourselves, then we have to find a source of pre-compiled libraries and headers (which are sometimes distinct things). Since we’re moving away from musl, we’ll even need to bring our own copy of libc!

Luckily, we just have to recreate the same environment as a Linux compiler. This turns out to be straightforward. When I’m compiling code in Linux and need to, say, link against OpenSSL, I can run the following:
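
On Debian this is just the dev package; for the OpenSSL example:

    # installs the OpenSSL headers and the shared library to link against
    sudo apt-get install libssl-dev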

Now I can compile any binary which relies on OpenSSL headers, because they were installed to my system. Where are these files? One way to find them is to run dpkg-query -L libssl-dev to list which files were installed by my package manager. In this case, most of our header files are installed into /usr/include and the shared libraries into /usr/lib. If we have the .deb file itself, we can actually confirm this by dumping its archive contents:
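
For example, on a machine with dpkg available (the filename depends on the exact version you downloaded):

    # list the files the package would install; the interesting entries land under
    # ./usr/include/openssl/ (headers such as aes.h) and ./usr/lib/
    dpkg -c libssl-dev_<version>_amd64.deb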

Where aes.h is the header we might require linking against.

We can essentially reuse these packages on other platforms. Package managers extract files to specific locations on your machine. If we can extract these same archives locally, we can tell the compiler to look in these folders for headers instead of the OS’s folders.

Let’s describe which archives we want. First, my choice of a broadly-accessible Linux distribution with good tooling is Debian, of which Ubuntu is a derivative, and which has a straightforward packaging system built around apt and its .deb package format. Second, we need to pick a version of Debian old enough that its libraries are ABI-compatible with the system we’ll deploy to. I chose jessie, the version of Debian immediately prior to its current stable release, stretch.

We can’t just fetch a .deb archive via ‘apt-get install’ on a Mac though. Downloading library headers directly means navigating a maze of hyperlinks across computer architectures and CDNs. I poked around for a while to see if there was an obvious way to compute the URL of any Debian package, but it looks like to retrieve a package URL you basically need to reimplement all of aptitude (a package manager frontend used by Debian). Because there were no brew formulae for libapt, and no standalone Rust bindings either, I assumed any solution would be more complicated than just referencing the direct URLs. As such, my build script fetches each package URL in sequence and extracts them into a local folder:
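
A sketch of that part of the script; the mirror URLs and version strings are placeholders, and the real script just references whatever direct URLs you find for each package:

    # fetch each .deb from its direct URL and unpack it into ./linux-deps/
    mkdir -p linux-deps
    for url in \
        "https://deb.debian.org/debian/pool/main/g/glibc/libc6-dev_<version>_amd64.deb" \
        "https://deb.debian.org/debian/pool/main/o/openssl/libssl-dev_<version>_amd64.deb" \
        "https://deb.debian.org/debian/pool/main/o/openssl/libssl1.1_<version>_amd64.deb"
    do
        curl -LO "$url"
        # a .deb is an ar archive whose data.tar.* member holds the files to install
        ar x "$(basename "$url")" data.tar.xz
        tar xf data.tar.xz -C linux-deps
    done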

You can see the dependencies my build script relies on. Note that we only install the packages we need to build with: backtrace-rs requires the libc headers, and openssl-sys requires not only the headers in libssl-dev but also the shared library in libssl1.1. Other than that, these are all the packages I needed when cross-compiling.

Building Linux binaries on macOS

We again need to install a linker, this time one that targets GNU/Linux. Again this is made easy with brew thanks to another community contribution:
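
The tap name here is from memory, so double check it, but the formula I used looked like this:

    # provides a cross gcc whose binaries are prefixed x86_64-unknown-linux-gnu-
    brew install SergioBenitez/osxct/x86_64-unknown-linux-gnu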

Now the executable “x86_64-unknown-linux-gnu-gcc” is available on our PATH.

We next make a series of environment variable updates:
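
Mine looked roughly like this, assuming the Debian packages were extracted into ./linux-deps as sketched above (the folder name is just this post’s placeholder); the linker for this target also needs to point at x86_64-unknown-linux-gnu-gcc in .cargo/config, the same way as for the musl target:

    # use the cross toolchain for any C code compiled by build scripts
    export TARGET_CC=x86_64-unknown-linux-gnu-gcc
    # -isystem makes gcc search the extracted Debian headers as system headers
    export TARGET_CFLAGS="-isystem $(pwd)/linux-deps/usr/include"
    # tell openssl-sys where the extracted OpenSSL headers and shared library live
    export OPENSSL_INCLUDE_DIR="$(pwd)/linux-deps/usr/include"
    export OPENSSL_LIB_DIR="$(pwd)/linux-deps/usr/lib/x86_64-linux-gnu"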

This specifies the header and shared library locations for the linker, plus some OpenSSL-specific flags required by openssl-sys. Take note of -isystem, which changes where gcc looks for system headers. Because we are using only Debian packages, the OpenSSL-specific build flags refer to the same folders as our other system libraries.

Now we can run the Cargo build command to cross-compile for Linux:
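
With those variables exported, the build itself is an ordinary cargo invocation (the standalone feature is described below):

    cargo build --target=x86_64-unknown-linux-gnu --features standalone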

The “standalone” feature is part of the project, and configures everything that can be built without relying on system libraries (like SQLite).

Now in my project’s ./target/x86_64-unknown-linux-gnu/debug/ folder, I can run file on the edit-server binary.

It says it’s a shared object (in this case, an executable) and mentions we compiled it for GNU/Linux. Next, I created an example Dockerfile based on Debian that, when the binary is placed in the same directory, just launches it:
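
A sketch of that Dockerfile; the base image and the runtime libssl1.1 package are my choices here, and the binary name matches the edit-server binary above:

    FROM debian:stretch-slim
    # the binary links against the system OpenSSL, so install the shared library
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libssl1.1 ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    COPY edit-server /usr/local/bin/edit-server
    CMD ["edit-server"]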

I tried it out with docker run on my machine, and saw the server successfully boot up.

This is Debian, running locally on my machine, successfully running the binary we compiled on my Mac. Since this is the same Dockerfile we send to the server, this means the server will be able to deploy it too!

There is one more step here: the binary now has to be sent to the server along with each deploy. Checking a large binary into Git just so Dokku could receive it via git push performs very poorly, and really isn’t what Git is built for. What worked for me: I switched to creating an archive of my Dockerfile’s directory and piping it into ssh running dokku tar:in - on the remote server. This Dokku command loads a tarball from stdin (in this case) and deploys it, making it possible to push new code to the server without needing to check anything into git each time.
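
The deploy step then looks something like this; the host and app names are placeholders, and the exact tar:in arguments depend on your Dokku version:

    # stream the Dockerfile directory as a tarball straight into Dokku's tar deploy
    tar c . | ssh dokku@my-server tar:in edit-text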

Rust advantages in webdev

And the result: it is now much faster to update code running on a server. Compilation speed improved immensely between compiling remotely, where each compile felt as slow as a full rebuild, and compiling locally, where cargo’s incremental cache makes builds feel as fast as targeting your default OS. It’s fast enough that I can deploy new code to a remote test server when it’s too annoying to set up a local one. Yet Rust’s cross-compilation story can’t by itself eliminate the clumsy ritual of setting arbitrary environment variables just to get compilation to succeed.

If rustup target is a blueprint for the future, I imagine an ecosystem of cross-compilation tools will inevitably spring up that makes bundling for other OSes straightforward and configurable. Even though Rust isn’t an interpreted language, if deploying code no longer means compiling code on your server, and local recompilation is fast, it makes deployment in Rust feel much more like modern web development. Cross-compilation support is an undersold factor in Rust’s webdev story.

I’ve recently been working on a Rust project at work which requires compiling for Linux (GNU), Linux (musl - for Alpine Linux) and macOS. I use Linux Mint nearly all the time, so building for macOS targets has required asking very nicely to borrow a spare Macbook Air. This is naturally a bit crap, so I set out to find a Linux-only solution to cross compile for macOS using osxcross. A weekend of pain later, and I have the following post. Hopefully it spares you a weekend of your own pain.

Environment

This process should work in any modern-ish Debian-based environment. This is the setup I used:

  • Linux Mint 19.1, Dell XPS 15, Intel i9 x64
  • Rust 1.32.0 (with Rustup)
  • Clang 6.0.0

I’ve also tested this process in CircleCI and it seems to be working fine.

The only device I have to test on at time of writing is a Macbook Air with macOS Mojave on it. This process should work for other macOS versions, but is untested.

Requirements

There are a few system dependencies required to work with osxcross. I don’t think the version requirements are too strict for the packages listed below. You may want to check the osxcross requirements as well if you’re having problems.
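
For reference, this is roughly what I installed on Mint; the list follows the osxcross README, and exact package names can vary by distribution:

    sudo apt install clang cmake git patch python libssl-dev liblzma-dev libxml2-dev xz-utils zlib1g-dev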

Building osxcross

The following process is based on this tutorial on Reddit and some trial and error. I’m using the macOS 10.10 SDK as I had the least problems getting up and running with it.

Add the following to a script called osxcross_setup.sh and make it executable.
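
A sketch of the script; obtaining and packaging the macOS SDK tarball is left as a placeholder step (the osxcross README covers it):

    #!/usr/bin/env bash
    set -e
    git clone https://github.com/tpoechtrager/osxcross
    cd osxcross
    # obtain MacOSX10.10.sdk.tar.xz separately and drop it into tarballs/
    cp /path/to/MacOSX10.10.sdk.tar.xz tarballs/
    # build the Clang-based cross toolchain without interactive prompts
    UNATTENDED=yes ./build.sh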

Not a lot to it, thanks to the hard work put in by the osxcross developers. Running ./osxcross_setup.sh should create a folder named osxcross with everything you need in it to cross compile to macOS with Clang. This doesn’t modify $PATH or install any system files, so is useful for CI as well.

Append ./build_gcc.sh to osxcross_setup.sh if you want to use GCC to cross compile.

Configuring Cargo

Cargo needs to be told to use the correct linker for the x86_64-apple-darwin target, so add the following to your project’s .cargo/config file:
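
With the 10.10 SDK the osxcross binaries are prefixed x86_64-apple-darwin14, so the config looks something like this:

    [target.x86_64-apple-darwin]
    linker = "x86_64-apple-darwin14-clang"
    ar = "x86_64-apple-darwin14-ar"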

If you’ve used a different macOS SDK version, you might need to replace darwin14 with darwin15. To check what binary to use, look in osxcross/target/bin.

Building the project

Because I chose not to install osxcross at the system level, the $PATH variable must be modified for Cargo to pick up the linker binaries specified previously. The build command changes to:
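
Assuming osxcross was built inside the project directory as above:

    PATH="$(pwd)/osxcross/target/bin:$PATH" \
        cargo build --release --target x86_64-apple-darwin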

This adds [pwd]/osxcross/target/bin to $PATH, which means the linker binaries should get picked up. The path must be absolute to work properly, hence $(pwd).

Now you should have a binary in target/x86_64-apple-darwin/[debug|release] which works on macOS!

Building *-sys crates

You can stop here if none of your crates require any C bindings to function. Quite a few of them do, so read on if you run into compilation or linking errors.

The project I’m cross compiling uses the git2 crate which has libz-sys in its dependency tree. Unfortunately this means digging out a C compiler. The build uses the host system compiler by default, so the architectures for the final binary (target arch) and these linked libraries (host arch) don’t match up.

The solution to this is to set the CC and CXX environment variables in our build command:
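
Roughly this, where o64-clang and o64-clang++ are the osxcross compiler wrappers:

    PATH="$(pwd)/osxcross/target/bin:$PATH" \
    CC=o64-clang CXX=o64-clang++ \
        cargo build --release --target x86_64-apple-darwin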

This uses o64-clang and o64-clang++ in osxcross/target/bin.

Now git2 compiles, but fails to link! This is due to the fact that libz-sys attempts to link to the host system zlib library. Because I’m building on a Linux machine, this is a Linux-native library which won’t work on macOS.

Luckily, libz-sys supports building its own statically linked version of zlib. According to libz-sys’ build.rs, if LIBZ_SYS_STATIC=1 is set in the environment a bundled version of zlib will be built. Because we set CC and CXX, this statically linked code will be compiled for a macOS target. The full build command ends up looking like this:
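
Only one variable is added on top of the previous command:

    PATH="$(pwd)/osxcross/target/bin:$PATH" \
    LIBZ_SYS_STATIC=1 CC=o64-clang CXX=o64-clang++ \
        cargo build --release --target x86_64-apple-darwin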

CI

I got the above process working in CircleCI, but it should be pretty easy to get any Debian-based CI service to work.

It should be possible to cache the osxcross folder so it doesn’t have to be built for every job. The cache should be invalidated when your build script(s) change; for example, I key the cache on a checksum of the setup script (a project-v1- prefixed key) to ensure the osxcross folder is regenerated correctly.

Wrapping up

The final build command is pretty long, so I’d suggest putting it in a script. In my case, I have a build script containing the following snippet:
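
Roughly, with the same flags collected from the sections above:

    #!/usr/bin/env bash
    # build_macos.sh: cross compile for macOS using the osxcross toolchain built earlier
    set -e
    export PATH="$(pwd)/osxcross/target/bin:$PATH"
    export CC=o64-clang
    export CXX=o64-clang++
    # build libz-sys' bundled zlib instead of linking the host's Linux zlib
    export LIBZ_SYS_STATIC=1
    cargo build --release --target x86_64-apple-darwin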

Now you can just run ./osxcross_setup.sh and ./build_macos.sh in your CI.