It's assumed that where we use DEPFILE in command or rspfile_content, it
can be quoted by quoting the ninja variable (e.g. $DEPFILE ->
'$DEPFILE')
This is nearly always true, but not for gcc response files, where
backslash is always an escape, even inside single quotes.
So this fails if the value of DEPFILE contains backslashes (e.g. a
Windows path)
Do some special casing, adding DEPFILE_UNQUOTED, so that the value of
depfile is not shell quoted (so ninja can use it to locate the depfile
to read), but the value of DEPFILE used in command or rspfile_content is
shell/response-file quoted.
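A minimal sketch of that split, using shlex.quote for the sh-style
case; the helper function is illustrative, not Meson's actual code:
```python
import shlex

def depfile_vars(depfile: str) -> dict:
    # DEPFILE_UNQUOTED: the raw value, emitted as the 'depfile =' variable
    # that ninja itself opens and reads.
    # DEPFILE: the shell-quoted value, safe to substitute into 'command ='
    # or 'rspfile_content ='.
    return {'DEPFILE_UNQUOTED': depfile, 'DEPFILE': shlex.quote(depfile)}

print(depfile_vars('out dir/foo.d'))
# {'DEPFILE_UNQUOTED': 'out dir/foo.d', 'DEPFILE': "'out dir/foo.d'"}
```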
(It would seem this also exists as a more general problem with built-in
ninja variables: '$out' appearing in command is fine, unless one of the
output filenames contains a single quote. Although forbidding shell
metacharacters in filenames seems a reasonable way to solve that.)
(How does this even work, currently? Backslashes in the value of all
ninja variables, including DEPFILE, were escaped, which protected them
against being treated as escapes in the gcc response file. And
fortunately, the empty path elements indicated by a double backslash in
the value of depfile are ignored when ninja opens that file to read it.)
Now that all command-line escaping for ninja is dealt with in the ninja
backend, escape_extra_args() shouldn't need to do anything.
But tests of existing behaviour rely on all backslashes in defines being
C-escaped: this means that Windows-style paths including backslashes can
be safely used, but makes it impossible to have a define containing a C
escape.
We avoided having to get this right previously, as we'd always use a
response file if possible.
But this is so insane, I can't imagine it's right.
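A sketch of that C-escaping behaviour for defines, assuming the helper
only touches -D arguments (an approximation, not Meson's exact code):
```python
def escape_extra_args(args):
    # Historical behaviour: double every backslash in -D defines so that
    # Windows paths survive, at the cost of making it impossible to put a
    # real C escape like \n in a define.
    return [a.replace('\\', '\\\\') if a.startswith('-D') else a
            for a in args]

print(escape_extra_args([r'-DPATH=c:\foo', '-O2']))
# ['-DPATH=c:\\\\foo', '-O2']
```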
See also: subprocess.list2cmdline() internal method
In certain exotic configurations, the style of quoting expected in the
response file may not match that expected by the shell.
e.g. under MSYS2, ninja invokes commands via CreateProcess (which
results in cmd-style quoting processed by parse_cmdline or
CommandLineToArgvW), but gcc will use sh-style quoting in any response
file it reads.
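The mismatch is easy to demonstrate by quoting the same argument both
ways; subprocess.list2cmdline() is the internal helper mentioned above:
```python
import shlex
import subprocess

arg = r'C:\some dir\file.c'
print(subprocess.list2cmdline([arg]))  # CreateProcess-style: "C:\some dir\file.c"
print(shlex.quote(arg))                # sh-style: 'C:\some dir\file.c'
```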
Future work: The rspfile quoting style should be a method of the
compiler or linker object, rather than hardcoded in ninjabackend.
(In fact, can_linker_accept_rsp() should be extended to do this, since
if we can accept rsp, we should know the quoting style)
Rather than ad-hoc avoiding quoting where harmful, identify arguments
which contain shell constructs and ninja variables, and don't apply
quoting to those arguments.
This is made more complex by some arguments which might contain ninja
variables anywhere, not just at the start, e.g. '/Fo$out'.
(This implementation would fall down if there was an argument which
contained both a literal $ or shell metacharacter and a ninja variable,
but there are no instances of such a thing and it seems unlikely)
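A sketch of that classification, assuming (as noted above) that no
argument mixes a literal $ or shell metacharacter with a ninja
variable; quote_arg and the passed-in shell_quote are illustrative:
```python
import re

NINJA_VAR = re.compile(r'\$\{?\w+\}?')

def quote_arg(arg: str, shell_quote) -> str:
    # Arguments containing a ninja variable (even mid-argument, e.g.
    # '/Fo$out') are passed through unquoted; everything else is quoted.
    if NINJA_VAR.search(arg):
        return arg
    return shell_quote(arg)
```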
$DEPFILE needs special treatment. It's used in the special variable
depfile, so its value can't be shell quoted (as it is used as a filename
to read by ninja). So instead that variable needs to be shell quoted
when it appears in a command.
(Test common/129, which uses a depfile with a space in its name,
exercises that)
If 'targetdep' is not in raw_names, test cases/rust all fail.
We need to count rsp and non-rsp references separately, which we need to
do after build statement variables have been set so we can tell the
difference, which introduces a bit of complexity.
Writing rsp files on Windows is moderately expensive, so only use them
when the command line is long enough to need them.
This also makes the output of 'ninja -v' useful more often (something
like 'cl @exec@exe/main.c.obj.rsp' is not very useful if you don't have
the response file to look at)
For a rule where using a rspfile is possible, write rspfile and
non-rspfile versions of that rule. Choose which one to use for each
build statement, depending on the anticipated length of the command line.
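A sketch of the per-build-statement choice; the _RSP suffix, the
caller-supplied estimate, and the 32000-character limit are all
illustrative:
```python
def choose_rule(rulename: str, estimated_cmdline_len: int,
                limit: int = 32000) -> str:
    # Both a plain and an _RSP variant of the rule were written out; use
    # the response-file variant only when the command line would be too
    # long.
    if estimated_cmdline_len >= limit:
        return rulename + '_RSP'
    return rulename

print(choose_rule('c_LINKER', 120))      # c_LINKER
print(choose_rule('c_LINKER', 40000))    # c_LINKER_RSP
```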
When classifying generated sources, we were treating gir/typelib files
generated by gobject-introspection as headers. This is bad because it
serializes the build by adding order-only dependencies to every target
even though sources will never actually use them for anything.
Treat them as libraries, which is somewhat more accurate.
We do not need to *always* rebuild generated sources when a generated
header changes. We will get that information from the compiler's
dependency file, and ninja will track it for us. This is exactly the
same as static sources.
However, we do need an order-only dependency on all generated headers,
because we cannot know what headers will be needed at compile time
(which is when the compiler's dependency file is generated).
This fixes spurious rebuilds and relinking in many cases.
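In ninja syntax, order-only inputs appear after '||'. A minimal sketch
of the build line this produces (names illustrative):
```python
def build_line(output, rule, sources, order_only):
    # Order-only inputs must exist before the compile runs, but changing
    # them alone does not trigger a rebuild; the compiler's depfile later
    # records which headers were really used.
    return (f'build {output}: {rule} {" ".join(sources)}'
            f' || {" ".join(order_only)}')

print(build_line('foo.o', 'c_COMPILER', ['foo.c'], ['generated.h']))
# build foo.o: c_COMPILER foo.c || generated.h
```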
The current rather untyped storage of options is one of the things that
contributes to the options code being so complex. This takes a small
step towards tidying that up by storing the compiler options in dicts
per language.
Future work might be replacing the language strings with an enum, and
defaultdict with a custom struct, just like `PerMachine` and
`MachineChoice`.
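A sketch of the shape this gives the data, a defaultdict keyed by
language string (the option names are illustrative):
```python
from collections import defaultdict

compiler_options: dict = defaultdict(dict)  # language -> option name -> value
compiler_options['c']['std'] = 'c11'
compiler_options['cpp']['std'] = 'c++17'
print(compiler_options['c'])   # {'std': 'c11'}
print(compiler_options['d'])   # {} - unseen languages get an empty dict
```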
This makes relative paths shorter and also gives us a chance to
de-duplicate -isystem flags just like -I flags.
Fix common test case 203 for OSX build host too
When a source file for a library is changed without adding new extern
symbols, only that library should be rebuilt. Nothing that uses it
should be relinked.
Along the way, also remove trailing `.` in all Ninja rule
descriptions. It's very confusing to see messages like:
```
Linking target mylib.dll.
```
It's confusing that the period at the end of that is not part of the
filename. Instead of removing that period manually in the tests (which
feels wrong!) just don't print it at all.
We actually use this while linking on Windows, and hence we need to
extract symbols from this file, and not the DLL.
However, we cannot pass it instead of the DLL because it's an optional
output of the compiler. It will not be written out at all if there are
no symbols in the DLL, and we cannot know that at configure time. This
means we cannot describe it as an output of any ninja target, or the
input of any ninja target. We must pass it as an argument without
semantic meaning.
This is more correct, and forces the target(s) to be rebuilt if the
PDB files are missing. Increases the minimum required Ninja to 1.7,
which is available in Ubuntu 16.04 under backports.
We can't do the same for import libraries, because it is impossible
for us to know at configure time whether or not an import library will
be generated for a given DLL.
This is similar to what we currently do for scan-build except there is
no environment variable to choose a specific clang-format to run. If an
environment variable is needed for better control, we can add it later.
Detect scan-build the same way when trying to launch it and when
generating the target.
The detection method is:
1. look within SCANBUILD env variable
2. shutil.which('scan-build')
3. *on non-linux platforms only*: go through all the possible
name candidates and test them individually.
The third step is added following this comment
https://github.com/mesonbuild/meson/pull/5857#issuecomment-528305788
However, going through a list of all the possible candidates is neither
easily maintainable nor performant, and is therefore skipped on
platforms that should not require such a step (currently, only Linux
platforms).
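A sketch of that detection order; the versioned candidate names are
illustrative, not the exact list used:
```python
import os
import shutil
import sys

def detect_scanbuild():
    # 1. explicit override via the SCANBUILD environment variable
    exe = os.environ.get('SCANBUILD')
    if exe:
        return exe
    # 2. plain 'scan-build' on PATH
    exe = shutil.which('scan-build')
    if exe:
        return exe
    # 3. versioned candidates, only on non-Linux platforms
    if not sys.platform.startswith('linux'):
        for name in ('scan-build-9.0', 'scan-build-8.0', 'scan-build-7.0'):
            exe = shutil.which(name)
            if exe:
                return exe
    return None
```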
This is a follow-up to the issue raised by @lantw44 during PR:
https://github.com/mesonbuild/meson/pull/5857
as what was done with clang-format, test the presence of the tool before
generating a dedicated target. Pass silently if scan-build is not found.
Signed-off-by: Gabriel Ganne <gabriel.ganne@mindmaze.ch>
Return the command line from serialize_executable, which is then
renamed to as_meson_exe_cmdline. This avoids repeating code that
is common to custom targets and generators.
- AttributeError: 'ValaCompiler' object has no attribute 'get_program_dirs'
Fixed by adding a `get_program_dirs()` function to the base Compiler
class, to match `get_library_dirs()`
- KeyError: 'vala_COMPILER'
Fixed by creating the Vala compile rules for all machines, not just
the build machine.
In most cases instead pass `for_machine`, the name of the relevant
machines (what compilers target, what targets run on, etc). This allows
us to use the cross code path in the native case, deduplicating the
code.
As one can see, environment got bigger as more information is kept
structured there, while ninjabackend got smaller. Overall a small number
of lines were added, but the hope is that what's added is a lot simpler
than what's removed.
When we are generating the include directories for a build target, we
iterate over all include directories, check whether they are . or ..,
and if not, generate compile args for each. However, the join calls and
the generation of the compile args are quite expensive; if we cache the
results, we can cut the time spent in _generate_single_compile from 60%
to roughly 50%.
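A sketch of the caching, using lru_cache as a stand-in for the
per-backend cache the real code would use (names illustrative):
```python
import os
from functools import lru_cache

@lru_cache(maxsize=None)
def include_args(build_dir: str, incdir: str, is_system: bool) -> tuple:
    # The join and argument construction are cheap individually but hot in
    # _generate_single_compile, so repeated (dir, flag) pairs are cached.
    prefix = '-isystem' if is_system else '-I'
    expdir = os.path.normpath(os.path.join(build_dir, incdir))
    return (prefix + expdir,)
```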
Switch from build.compiler to environment.coredata.compiler and likewise
from build.cross_compiler to environment.coredata.cross_compiler in
backends. (Only seems to be actually used in ninja backend).
Currently C++ inherits C, which can lead to diamond problems. By pulling
the code out into a standalone mixin class that the C, C++, ObjC, and
Objc++ compilers can inherit and override as necessary we remove one
source of diamonding. I've chosen to split this out into its own file
as the CLikeCompiler class is over 1000 lines by itself. This also
breaks the VisualStudio derived classes inheriting from each other, to
avoid the same C -> CPP inheritance problems. This is all one giant
patch because there just isn't a clean way to separate this.
I've done the same for Fortran since it effectively inherits the
CCompiler (I say effectively because what it actually did was gross
beyond explanation); it's probably not correct, but it seems to work for
now. There really is a lot of layering violation going on in the
Compilers, and a really good scrubbing would do this code a lot of good.
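Roughly, the class layout this gives (bodies elided; only the
inheritance shape matters):
```python
class Compiler:
    pass  # language-agnostic base

class CLikeCompiler:
    # standalone mixin with the shared C-like behaviour (header checks,
    # -I handling, library naming, ...), deliberately not deriving from
    # Compiler itself
    pass

class CCompiler(CLikeCompiler, Compiler):
    pass

class CPPCompiler(CLikeCompiler, Compiler):
    # no longer inherits CCompiler, so no diamond through C
    pass
```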
After the previous commit, outfile is now passed down to lots of things
which don't use it, as they only create built statements, rather than
writing them out. Remove these unnecessary args.
Store the build statements and then write them all out, rather than
writing them out as we go.
Construct a NinjaBuildElement for the 'PHONY' target, rather than
writing it literally to the build.ninja file.
After the previous commit, outfile is now passed down to lots of things
which don't use it, as they only create rules, rather than writing them
out. Remove these unnecessary args.
Store the rules and then write them all out, rather than writing them
out as we go.
Store the rule broken down into parts which do and don't go into
rspfile, so we can construct either a rsp or non-rsp version of the
rule.
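A minimal sketch of the store-then-write flow; the class and method
names are illustrative:
```python
class BackendSketch:
    def __init__(self):
        self.rules = []
        self.build_elements = []

    def add_rule(self, rule):
        self.rules.append(rule)              # created now, written later

    def add_build(self, element):
        self.build_elements.append(element)

    def write(self, outfile):
        # single serialisation pass at the end, instead of writing as we go
        for rule in self.rules:
            rule.write(outfile)
        for element in self.build_elements:
            element.write(outfile)
```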
Setting this variable to contain additional commands to symlink shlib
aliases was removed in commit c0bf3e8d, so it's always unset now, and
thus does nothing.
This isn't safe given the way Python implements default arguments.
Basically, Python stores a reference to the instance it was passed, and
then if that argument is not provided it uses the default. That means
that two calls to the same function get the same instance, and if one of
them mutates that instance, every subsequent call that gets the default
will receive the mutated instance. The idiom for this in Python is to
use None and replace the None inside the function:
    def contains(value: str, container: Optional[List[str]]) -> bool:
        return value in (container or [])
if there is no chance of mutation it's less code to use or and take
advantage of None being falsy. If you may want to mutate the value
passed in you need a ternary (this example is stupid):
    def add(value: str, container: Optional[List[str]]) -> None:
        container = container if container is not None else []
        container.append(value)
I've used or everywhere I'm sure that the value will not be mutated by
the function and erred toward caution by using ternaries for the rest.
If find_program() returns a file from the source directory, anything
that uses it should add the file to the dependencies, so that they are
rebuilt whenever the script changes. Generator is not doing that.
While at it, I am doing two related fixes:
- Generator is not checking whether the generator program actually was
found, resulting in a Python error involving NoneType if it isn't. To
minimize backwards compatibility issues, I am only raising the error
when g.process() is actually called.
- the error message for custom_target with a nonexisting program
erroneously mentions a not-found external program "nonexistingprogram".
The new error is similar to the one I am adding for generators.
Instead of having special casing of threads in the backends and
everywhere else, do what we did for OpenMP: create a real dependency.
Then make use of the fact that dependencies can now have
sub-dependencies to add threads.
commit b02b2d6d0d462310b313588ca7705d391e830eeb
Author: Michael Hirsch, Ph.D <scivision@users.noreply.github.com>
Date: Sun Mar 10 03:51:09 2019 -0400
cleanup
commit 3311ff5fb12577c78671bf2ff2787d28b86ba5fa
Author: Michael Hirsch, Ph.D <scivision@users.noreply.github.com>
Date: Sun Mar 10 03:50:30 2019 -0400
more robust
commit 8030dcb76698b148ee47ecded1f33b6d3821cca2
Author: Michael Hirsch, Ph.D <scivision@users.noreply.github.com>
Date: Sun Mar 10 03:30:05 2019 -0400
inwork compiles OK but needs smod filenames
This patch creates an enum for selecting libtype as static, shared,
prefer-static, or prefer-shared. This also renames 'static-shared'
with 'prefer_static' and 'shared-static' with 'prefer_shared'. This is
just a refactor with no behavioral changes or user facing changes.
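A sketch of what such an enum looks like; the member values are
arbitrary:
```python
from enum import Enum

class LibType(Enum):
    SHARED = 0         # only consider shared libraries
    STATIC = 1         # only consider static libraries
    PREFER_SHARED = 2  # was 'shared-static': shared first, static fallback
    PREFER_STATIC = 3  # was 'static-shared': static first, shared fallback
```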
Currently we special-case OpenMP like we do threads, with a special
`need_openmp` method. This seems like a great idea, but doesn't work
out in practice, and it complicates the OpenMP implementation. If GCC
is built without OpenMP support, for example, we still add -fopenmp to
the command line, which results in compilation errors.
This patch discards that and treats it like a normal dependency,
removes the need_openmp() method, and sets the compile_args attributes
from the compiler.
Fixes #5115
This does two things:
* On windows GCC-like compilers, the subsystem is always explicitly
specified (either -mwindows or -mconsole). MSVC is already explicit.
* The gui_app linker flags are now added after those mandated by
external dependencies. This is because some misguided libraries (such
as SDL) think that hijacking `main()` and forcing `-mwindows` in link
flags is clever. We must unconditionally override such misuses to let
gui_app work as intended.
In addition to MSVC, which was worked around previously, GCC also does
not list includes from the PCH in the depfile by default, unless
-fpch-deps is given. I think it's best to stay safe and not rely on any
particular behavior from the compiler here.
Instead use coredata.compiler_options.<machine>. This brings the cross
and native code paths closer together, since both now use that.
Command line options are interpreted just as before, for backwards
compatibility. This does introduce some funny conditionals. In the
future, I'd like to change the interpretation of command line options so
- The logic is cross-agnostic, i.e. there are no conditions affected by
`is_cross_build()`.
- Compiler args for both the build and host machines can always be
controlled by the command line.
- Compiler args for both machines can always be controlled separately.
macOS provides the tool `lipo` to check the archs supported by an
object (executable, static library, dylib, etc). This is especially
useful for fat archives, but it also helps with thin archives.
Without this, the linker will fail to link to the library we mistakenly
'found' like so:
ld: warning: ignoring file /path/to/libfoo.a, missing required architecture armv7 in file /path/to/libfoo.a
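A sketch of such a check using `lipo -archs` (available on recent
macOS; the helper name is illustrative):
```python
import subprocess

def archs_in(path: str) -> list:
    # Lists the architectures contained in an object, archive, or dylib;
    # a fat archive reports several (e.g. ['x86_64', 'arm64']).
    out = subprocess.check_output(['lipo', '-archs', path], text=True)
    return out.split()
```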
Since trying to cross compile for Windows from Linux (WSL) and having
paths like this:
'-L/mnt/c/Program Files (x86)/Microsoft Visual Studio/2017/\
Community/VC/Tools/MSVC/14.15.26726/lib/x64'
I found that the spaces and brackets in the paths weren't properly
escaped by the Ninja backend.
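For reference, ninja's own escape character is '$', and only a handful
of characters are special; a minimal sketch (parentheses need only
shell quoting inside commands, not ninja escaping):
```python
def ninja_quote(text: str) -> str:
    # '$' must be escaped first so the added escapes don't get re-escaped;
    # spaces matter in build-line paths, ':' separates outputs from rules.
    for ch in ('$', ' ', ':'):
        text = text.replace(ch, '$' + ch)
    return text

print(ninja_quote('-L/mnt/c/Program Files (x86)/lib'))
# -L/mnt/c/Program$ Files$ (x86)/lib
```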
Building a cross compiler (`build == host != target`) is not cross
compiling. As such, it doesn't make sense to handle it under
`is_cross_build`.
(N.B. Building a standard library for a cross compiler would require
cross compiling, but Meson has no support to do such a thing as part of
a compiler build currently.)
Accommodate clang-cl /showIncludes output in detect_vs_dep_prefix().
clang-cl outputs lines terminated with \n, not \r\n
v2:
should invoke the detected compiler, not hardcode 'cl'
Handle clang's cl or clang-cl being in PATH, or set in CC/CXX
Future work: checking the name of the executable here seems like a bad idea.
These compilers will fail to be detected if they are renamed.
v2:
Update compiler.get_argument_type() test
Fix comparisons of id inside CCompiler, backends and elsewhere
v3:
ClangClCPPCompiler should be a subclass of ClangClCCompiler, as well
Future work: mocking in test_find_library_patterns() is affected, as we
now test for a subclass, rather than self.id in CCompiler.get_library_naming()
* Don't try to import empty-string custom target include dirs
* Import current directory if custom target dir is empty
This restores the previous behavior and fixes test failures caused by
the previous commit.
We now use the soversion to set compatibility_version and
current_version by default. This is the only sane thing we can do by
default because of the restrictions on the values that can be used for
compatibility and current version.
Users can override this value with the `darwin_versions:` kwarg, which
can be a single value or a two-element list of values. The first one
is the compatibility version and the second is the current version.
Fixes https://github.com/mesonbuild/meson/issues/3555
Fixes https://github.com/mesonbuild/meson/issues/1451
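A sketch of that defaulting logic; the helper name and the tuple
return are illustrative:
```python
def resolve_darwin_versions(soversion, darwin_versions=None):
    # Both versions default to the soversion; an explicit kwarg may be a
    # single value (used for both) or a [compatibility, current] pair.
    if darwin_versions is None:
        return soversion, soversion
    if not isinstance(darwin_versions, list):
        return darwin_versions, darwin_versions
    compatibility, current = darwin_versions
    return compatibility, current

print(resolve_darwin_versions('3'))                 # ('3', '3')
print(resolve_darwin_versions('3', ['2', '3.1']))   # ('2', '3.1')
```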
This means that we will take into account all the flags set in the
cross file when fetching the list of library dirs, which means we
won't incorrectly look for 64-bit libraries when building for 32-bit.
Signed-off-by: Nirbheek Chauhan <nirbheek@centricular.com>
Closes https://github.com/mesonbuild/meson/issues/3881
Ninja buffers all commands and prints them only after they are
complete. Because of this, long-running commands such as `cargo
build` show no output at all and it's impossible to know if the
command is merely taking too long or is stuck somewhere.
To cater to such use-cases, Ninja has a 'pool' with depth 1 called
'console', and all processes in this pool have the following
properties:
1. stdout is connected to the program, so output can be seen in
real-time
2. The output of all other commands is buffered and displayed after
a command in this pool finishes running
3. Commands in this pool are executed serially (normal commands
continue to run in the background)
This feature is available since Ninja v1.5
https://ninja-build.org/manual.html#_the_literal_console_literal_pool
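What attaching a rule to the console pool looks like when written out,
as a sketch:
```python
def console_rule(name: str, command: str) -> str:
    # Rules in the built-in 'console' pool (depth 1, ninja >= 1.5) run
    # serially, with stdout connected directly to the terminal.
    return f'rule {name}\n  command = {command}\n  pool = console\n'

print(console_rule('cargo_build', 'cargo build'))
```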
Meson already had code to propagate link dependencies from static
libraries to programs that use those static libraries.
Unfortunately, it was not handling the special cases of 'threads' and
'openmp' dependencies.
* get_library_naming: Use templates instead of suffix/prefix pairs
This commit does not change functionality, and merely sets the
groundwork for a more flexibly naming implementation.
* find_library: Fix manual searching on OpenBSD
On OpenBSD, shared libraries are called libfoo.so.X.Y where X is the
major version and Y is the minor version. We were assuming that it's
libfoo.so and not finding shared libraries at all while doing manual
searching, which meant we'd link statically instead.
See: https://www.openbsd.org/faq/ports/specialtopics.html#SharedLibs
Now we use file globbing to do searching, and pick the first one
that's a real file (see the sketch after this list).
Closes https://github.com/mesonbuild/meson/issues/3844
* find_library: Fix priority of library search in OpenBSD
Also add unit tests for the library naming function so that it's
absolutely clear what the priority list of naming is.
Testing is done with mocking on Linux to ensure that local testing
is easy
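A minimal sketch of the globbing search from the first bullet; the
helper name and the sort order are illustrative:
```python
import glob
import os

def find_openbsd_shared(libdir: str, name: str):
    # On OpenBSD shared libraries are libfoo.so.X.Y; glob for any version
    # and return the first match that is a real file.
    pattern = os.path.join(libdir, f'lib{name}.so.*')
    for candidate in sorted(glob.glob(pattern)):
        if os.path.isfile(candidate):
            return candidate
    return None
```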
We used to immediately try to use whatever exe_wrapper was defined in
the cross file, but some people generate the cross file once and use
it for several projects, most of which do not even need an exe wrapper
to build.
Now we're a bit more resilient. We quietly fall back to using
non-exe-wrapper paths for compiler checks and skip the sanity check.
However, if some code needs the exe wrapper, f.ex., if you run a built
executable using custom_target() or run_target(), we will error out
during setup.
Tests will, of course, continue to error out when you run them if the
exe wrapper was not found. We don't want people's tests to silently
"pass" (aka skip) because of a bad CI setup.
Closes https://github.com/mesonbuild/meson/issues/3562
This commit also adds a test for the behaviour of exe_wrapper in these
cases, and refactors the unit tests a bit for it.
We already have code to fetch and find binaries specified in a cross
file, so use the same code for exe_wrapper. This allows us to handle
the same corner-cases that were fixed for other cross binaries.
ninja chokes when building FFmpeg's static libraries, as the
command line can be longer than 32000 characters.
This was disabled on purpose in #1649, but the rsp syntax was
different: this commit makes it so the options and output file
are still passed on the command line, gcc-ar didn't work
otherwise.
Since `build_always` also adds a target to the set of default targets,
this option is marked deprecated in favour of the new option
`build_always_stale`.
`build_always_stale` *only* marks the target to be always considered out
of date, but does *not* add it to the set of default targets.
The old behaviour can still be achieved by combining
`build_always_stale` with `build_by_default`.
fixes #1942
On macOS, we set the install_name for built libraries to
@rpath/libfoo.dylib, and when linking to the library, we set the RPATH
to its path in the build directory. This allows all built binaries to
be run as-is from the build directory (uninstalled).
However, on install, we have to strip all the RPATHs because they
point to the build directory, and we change the install_name of all
built libraries to the absolute path to the library. This causes the
install name in binaries to be out of date.
We now change that install name to point to the absolute path to each
built library after installation.
Fixes https://github.com/mesonbuild/meson/issues/3038
Fixes https://github.com/mesonbuild/meson/issues/3077
With this, the default workflow on macOS matches what everyone seems
to do, including Autotools and CMake. The next step is providing a way
for build files to override the install_name that is used after
installation for use with, f.ex., private libraries when combined with
the install_rpath: kwarg on targets.
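A sketch of the post-install fixup described above; install_name_tool
ships with the Xcode tools, and the helper name is illustrative:
```python
import subprocess

def fix_install_name(installed_dylib: str) -> None:
    # After stripping build-dir RPATHs, point the library's install_name
    # at its absolute installed location so dependents resolve it.
    subprocess.check_call(
        ['install_name_tool', '-id', installed_dylib, installed_dylib])
```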
On Windows, if we are going to link with a shared module, we need the
implib.
Use case: The Xorg server builds some X protocol extensions as modules. The
implibs for these modules need to be shipped as part of the SDK, to enable
building of 3rd party extensions which reference symbols in (and hence on
Windows, need to be linked with) these modules.
Normally, people would just pass -fembed-bitcode in CFLAGS, but this
conflicts with -Wl,-dead_strip_dylibs and -bundle, so we need it as
an option so that those can be quietly disabled.
When the exe runner is `wine` or `wine32` or `wine64`, etc.
This allows people to run tests with wine.
Note that you also have to set WINEPATH to point to your custom
prefix(es) if your tests use external dependencies.
Closes https://github.com/mesonbuild/meson/issues/3620
This makes it possible to customize permissions of all installable
targets, such as executable(), libraries, man pages, header files and
custom or generated targets.
This is useful, for instance, to install setuid/setgid binaries, which
was hard to accomplish without access to this attribute.
To allow the javac -implicit:class behaviour to know where to find
generated .java files, the build directory for the target is also
added to the -sourcepath.
Although only one file is passed to javac at a time, if your code has
any inter-file dependencies javac still needs to know how to find other
source files for its -implicit:class feature to work whereby it will
automatically also compile files that the given file depends on.
-implicit:class is the default, practical, behaviour of javac since
otherwise it would be necessary to declare the class dependencies
for parallel java builds to be feasible.
Passing "include_directory: include_directory('.')" to jar() causes
-souredir <path/to/top/of/java/src> to be passed to javac which then
enables your source code to have inter-file class dependencies -
assuming none of your source code is generated.
This ensures that '.' is included by default.
The -sourcepath option can't be passed multiple times to javac, since
later occurrences simply override earlier ones. Instead -sourcepath
takes a colon- (or semicolon- on Windows) separated list of paths.
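A sketch of assembling that single -sourcepath argument; os.pathsep
yields ':' on Unix and ';' on Windows:
```python
import os

def java_sourcepath_args(dirs):
    # javac honours only the last -sourcepath, so every directory -
    # including '.' and the target's build directory - must be joined
    # into one separated list.
    return ['-sourcepath', os.pathsep.join(dirs)]

print(java_sourcepath_args(['.', 'build/gen']))
# ['-sourcepath', '.:build/gen']  (on Unix)
```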
The entire subdirectory was getting duplicated, which was exceeding the
max path limit in Python on Windows and causing build failures.
Example:
subprojects/gst-plugins-bad/gst-libs/gst/uridownloader/subprojects@gst-plugins-bad@gst-libs@gst@uridownloader@@gsturidownloader-1.0@sha/subprojects/gst-plugins-bad/gst-libs/gst/uridownloader/gsturidownloader-1.0-0.dll.symbols
This path is too long and opening it will cause a FileNotFoundError on
Windows.
- determine_ext_objs: What matters is if extobj.target is a unity build,
not if the target using those objects is a unity build.
- determine_ext_objs: Return one object file per compiler, taking into
account generated sources.
- object_filename_from_source: No need to special-case unity build, it
does the same thing in both code paths.
- check_unity_compatible: For each compiler we must extract either none
or all its sources, taking into account generated sources.
This option controls the permissions of installed files (except for
those specified explicitly using install_mode option, e.g. in
install_data rules.)
An install-umask of 022 will install all binaries, directories and
executable files with mode rwxr-xr-x, while all data and non-executable
files will be installed with mode rw-r--r--.
Setting install-umask to the string 'preserve' will disable this
feature, keeping the permissions of installed files the same as the
files in the build tree (or source tree for install_data and
install_subdir.)
Note that, in this case, the umask used when building and that used when
checking out the source tree will leak into the install tree.
Keep the default as 'preserve', to show that no behavior is changed and
all tests keep passing unchanged.
Tested: ./run_tests.py
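The arithmetic behind the mode examples above, as a sketch:
```python
def apply_umask(mode: int, umask: int) -> int:
    # An install-umask simply clears the masked permission bits.
    return mode & ~umask

print(oct(apply_umask(0o777, 0o022)))  # 0o755 -> rwxr-xr-x
print(oct(apply_umask(0o666, 0o022)))  # 0o644 -> rw-r--r--
```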
This patch exploits the information residing in ltversion to set the
-compatibility_version and -current_version flags that are passed to the
linker on macOS.
This way they override all other arguments. This matches the order of
link arguments too.
Note that this means -I flags will come in afterwards and not override
anything else, but this is correct since that's how toolchain paths
work normally too -- they are searched last.
Closes https://github.com/mesonbuild/meson/issues/3089
The linkers currently do not support ninja compatible output of
dependencies used while linking. Try to guess which files will be used
while linking in python code and generate conservative dependencies to
ensure changes in linked libraries are detected.
This generates dependencies on the best match for both static and
shared linking; at worst this causes a spurious rebuild when only one
of them changes, which is harmless.
Also makes sure to ignore any libraries generated inside the build, to
keep the optimisation working where changes in a shared library only
cause relink if the symbols have changed as well.
Restore subproject exclusion for the html coverage report that existed
in the ninja backend legacy target.
Also exclude subprojects for the gcovr generated reports.
ninja coverage -> generate all possible reports (text, xml, html)
depending on gcovr and/or lcov/genhtml availability.
ninja coverage-html -> generate only html report
ninja coverage-xml -> generate only xml report
ninja coverage-text -> generate only text report
Make all targets phony; the old legacy rules were just annoying, as
you would have to remove the old report before being able to generate
a new one.
ninja coverage succeeds if it can generate at least one report.
ninja coverage-* only succeeds if it can generate the requested report
Fixes the bug with flat layout and identical target names in subprojects.
Without this change, directories are not created with a subproject prefix
and they can collide.
Remove dead makedirs code in Backend.__init__(), during initialization
of backend build.targets is empty. Create output directories in
Vs2010Backend.generate_projects() instead.
Also use double blank line in run_unittests.py according to
https://www.python.org/dev/peps/pep-0008/#blank-lines.
Modern gcovr includes html generation support, so if lcov and
genhtml are not available, fall back to gcovr.
Kept lcov and genhtml as default so as not to surprise existing
users of coverage-html with the different output of gcovr.
gcovr added html support in 3.0 but as there already is a test
for 3.1 because of the changes to -r/--rootdir I opted to only
allow html generation for >= 3.1 to keep things simple.
In gcovr 3.1 the -r/--rootdir argument changed meaning causing
reports generated with gcovr 3.1 to not find the source files
and look for *.gcda in the whole source tree rather than the
build dir.
So, detect gcovr version and if 3.1 give build_root to -r instead
of source_root.
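A sketch of that switch, with the version given as a tuple for
simplicity:
```python
def gcovr_rootdir_args(gcovr_version, source_root, build_root):
    # gcovr >= 3.1 changed the meaning of -r/--rootdir, so the build root
    # must be passed instead of the source root.
    root = build_root if gcovr_version >= (3, 1) else source_root
    return ['-r', root]

print(gcovr_rootdir_args((3, 1), '/src', '/src/build'))
# ['-r', '/src/build']
```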
Change the code to store D properties as plain data. Only convert them
to compiler flags in the backend. This also means we can fully parse D
arguments without needing to know the compiler being used.
When building a Rust target with Rust library dependencies, an
`--extern` argument is now specified to avoid ambiguity between the
dependency library, and any crates of the same name in `rustc`'s
private sysroot.
Includes an illustrative test case.
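A sketch of the argument in question; the crate name and path are
illustrative:
```python
def rust_extern_arg(crate_name: str, rlib_path: str) -> list:
    # --extern pins the crate name to the exact library we built, so
    # rustc cannot pick up a same-named crate from its private sysroot.
    return ['--extern', f'{crate_name}={rlib_path}']

print(rust_extern_arg('mylib', 'subdir/libmylib.rlib'))
# ['--extern', 'mylib=subdir/libmylib.rlib']
```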
The documentation doesn't require it and the interpreter code works around the
possibility of it being None. The ninja backend code however fails with
File "/home/whot/code/meson/mesonbuild/backend/ninjabackend.py", line 796, in generate_data_install
dstabs = os.path.join(subdir or None, plain_f)
File "/usr/lib64/python3.6/posixpath.py", line 78, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
If install_dir is missing, default to datadir/projectname
We missed one particular edge-case in #2413: when the generated vala
file is inside --basedir, the path is not just the basename.c
Since this case can never happen in a project test, this includes a unit
test for the same.
Closes https://github.com/mesonbuild/meson/issues/815
- Pass exclude_files and exclude_directories relative to src_dir,
same as specified by user and documented in public install_subdir().
- Make do_copydir() interface similar to do_copyfile():
install src_dir contents to dst_dir.
- Remove src_prefix/src_dir code, it adds confusion and duplicates arguments.
Use single src_dir parameter instead.
- Make callers specify that src_dir contents should be installed
under dst_dir/basename(src_dir) if necessary.
- Use os.path.relpath() instead of string manipulations on paths.
- Add documentation to do_copydir(): specify types and add usage example.
According to Python documentation[1] dirname and basename
are defined as follows:
os.path.dirname() = os.path.split()[0]
os.path.basename() = os.path.split()[1]
For the purpose of better readability split() is replaced
by appropriate function if only one part of returned tuple
is used.
[1]: https://docs.python.org/3/library/os.path.html#os.path.split