I’m very confused about build vs host requirements.
I think I get the basic idea:
When cross compiling, some packages (binary ones?) may be built specifically for a particular platform, so the version on the system building the conda package is different from the one on the target platform – and if the package needs to be linked against, the target-platform version has to be there. All good.
So:
In build
: you put stuff needed to build the package – simple enough.
In host
: you put stuff that needs to be on the target machine at build time, e.g., shared libraries:
“shared libraries requirements must be listed in the host section, rather than the build section”
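If I've got that right, a recipe for a compiled package that links against some shared library – say libcurl, just a placeholder here – would look something like this sketch:

```yaml
requirements:
  build:
    # tools that run on the build machine
    - {{ compiler('c') }}
    - make
  host:
    # libraries linked against, built for the target platform
    - libcurl
```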
OK – that all makes sense. So now the confusions:
For some things, you need to add a selector for when cross-compiling, e.g.:
```yaml
build:
  - python   # [build_platform != target_platform]
  ...
host:
  - python
  ...
```
which, I guess, means it's a host requirement always, but only a build requirement when cross-compiling?
What confused me is which things this applies to – how would I define it? Maybe:
Requirements that are BOTH used to build the package, AND provide components that must be available (e.g. linked) at build time.
So this applies to Python itself – it's running the build system, and it also has a run-time lib that we link to (for compiled extensions). All good.
And it applies to, e.g., numpy, because we need the numpy headers at build time.
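So, putting those together, I'd expect a compiled extension to look something like this (a sketch of my understanding, not taken from a real recipe):

```yaml
requirements:
  build:
    - {{ compiler('c') }}
    # only needed here when cross-compiling:
    - python   # [build_platform != target_platform]
    - numpy    # [build_platform != target_platform]
  host:
    - python
    - numpy
```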
But what about, e.g., Cython? I've been told that cython should be in host, and also in build with the cross-compiling selector – but why??
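i.e. something like this (my transcription of the advice, so I may have it wrong):

```yaml
requirements:
  build:
    - cython   # [build_platform != target_platform]
  host:
    - cython
```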
But my main question is: why the selector? Couldn't conda-build figure that out for itself? Or, in fact, simply put these build packages in both sections, whether cross-compiling or natively compiling – in the native case you'd just get two identical copies. Is there any harm in that?