This makes the typing annotations basically impossible to get right, but
if we only have one key then it's easy. Fortunately python provides
comprehensions, so we don't even need the ability to pass multiple keys,
we can just [extract_as_list(kwargs, c) for c in ('a', 'b', 'c')] and
get the same result.
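As a rough sketch of the resulting pattern (the helper below is a simplified stand-in, not the real extract_as_list signature):
```
# Simplified single-key helper, roughly what the comprehension relies on.
from typing import Any, Dict, List

def extract_as_list(kwargs: Dict[str, Any], key: str) -> List[Any]:
    value = kwargs.get(key, [])
    return value if isinstance(value, list) else [value]

kwargs = {'a': 1, 'b': [2, 3]}
a, b, c = (extract_as_list(kwargs, k) for k in ('a', 'b', 'c'))
print(a, b, c)  # [1] [2, 3] []
```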
listify shouldn't be unholdering, it's a function to turn scalar values
into lists, or flatten lists. Having a separate function is clearer,
easier to understand, and can be run recursively if necessary.
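For illustration only, a minimal helper along those lines might look like this (not the actual code):
```
# Minimal sketch of a listify-style helper: wrap scalars in a list and,
# when asked, flatten nested lists recursively.
def listify(item, flatten=True):
    if not isinstance(item, list):
        return [item]
    if not flatten:
        return item
    result = []
    for i in item:
        if isinstance(i, list):
            result.extend(listify(i))
        else:
            result.append(i)
    return result

print(listify('foo'))          # ['foo']
print(listify([1, [2, [3]]]))  # [1, 2, 3]
```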
The old logic was completely broken, and didn't even assert that the
specified section was found at all. The CPU families test was broken
because of this. Luckily, the table didn't go out of sync with the
code.
It now also doesn't assume that each section has only one table. This
fixes the test now that we document the buildtype/optimization/debug
mapping in a second table inside the `Universal options` section.
* unittests: fix finding python2 if the binary is named python2
Because of the way the python module works, the simple test function is no
longer sufficient; we need an additional name parameter to make the python
module work, as it doesn't look for an entry called "python2" or
"python3", only "python".
* unittests: Don't make our python 2.x check debian specific
* unittests: On macOS the python2 binary is still called python
This adds a warnings counter for subprojects that passed. This is to
encourage developers to check warnings in the logs and hopefully fix
them. Otherwise they could be hidden in hundreds of lines of logs.
This also prints the error message for subprojects that did not pass. The
error message is often enough to fix the issue (e.g. a missing
dependency) and it's easier than searching the logs for why a subproject
failed.
When a source file for a library is changed without adding new extern
symbols, only that library should be rebuilt. Nothing that uses it
should be relinked.
Along the way, also remove trailing `.` in all Ninja rule
descriptions. It's very confusing to see messages like:
```
Linking target mylib.dll.
```
It's confusing that the period at the end of that is not part of the
filename. Instead of removing that period manually in the tests (which
feels wrong!) just don't print it at all.
This makes two basic changes. First, it moves the name of the linker into
the linker class; this should reduce the number of errors and typos, and
ensures that a linker always has exactly one name. Second, it renames the
linkers to have more consistent names:
Posix/gnu linkers are called ld.<name>: ld.gold, ld.lld, ld.solaris.
Apple linkers are renamed ld64.
This allows users to disable writing out the inbuilt variables to
the pkg-config file, as they might actually not be required.
One reason to have this is for architecture-independent pkg-config
files in projects which also have architecture-dependent outputs.
For example: https://gitlab.freedesktop.org/wayland/weston/issues/269
Fixes #4011
* Extend test_prefix_dependent_defaults unit test to cover default case
Extend test_prefix_dependent_defaults unit test to cover the default
case, when the default prefix is '/usr/local'. (On Windows, the default
prefix is 'c:/')
* Restore adjusting option defaults depending on the default prefix
Restore adjusting option defaults, depending on the default prefix.
Dropped in d778a371
An error is raised because the Elbrus Fortran compiler can't generate
debug information for now: it's a 2-step compiler whose first step
converts the Fortran code to C, so the debug information which the C
compiler would produce is useless.
When a dependency is required, not found on the system, and its fallback
is disabled with --wrap-mode=nofallback, meson should abort instead of
returning not-found.
pkgconf automatically prunes "system library paths" from its output. The
system library paths depend on the system toolchain. A common value on a
64-bit system is as follows:
/lib64:/usr/lib64:/usr/local/lib64
So, if -L/usr/lib64 appears in the Libs section, it will be pruned from
the output of pkg-config --libs.
The pc files generated for this test contain something like this:
libdir=/usr/lib
Libs: -L${libdir} ...
pkgconf may not consider /usr/lib to be a system library path, so it is
not pruned as the test expects. To work around this, override the
compiled-in list of paths via the PKG_CONFIG_SYSTEM_LIBRARY_PATH
environment variable.
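A sketch of the workaround as it might look in a test harness (the package name is illustrative):
```
# Override pkgconf's compiled-in system library paths so that -L/usr/lib is
# never considered "system" and therefore never pruned from --libs output.
import os
import subprocess

env = os.environ.copy()
env['PKG_CONFIG_SYSTEM_LIBRARY_PATH'] = '/usr/lib/a-path-that-does-not-exist'
out = subprocess.run(['pkg-config', '--libs', 'somepkg'],  # 'somepkg' is illustrative
                     env=env, capture_output=True, text=True)
print(out.stdout)
```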
Fixes: https://github.com/mesonbuild/meson/issues/6004
The rust code is ugly, because rust is annoying. It doesn't invoke a
linker directly (unless that linker is link.exe or lld-link.exe),
instead it invokes the C compiler (gcc or clang usually) to do its
linking. Meson doesn't have good abstractions for this, though we
probably should because some of the D compilers do the same thing.
Either that or we should just call the c compiler directly, like vala
does.
This changes the public interface for meson, which we don't do unless we
absolutely have to. In this case I think we need to do it. A fair number
of projects have already been using 'ld' in their cross/native files to
get the ld binary and call it directly in custom_targets or generators,
and we broke that. While we could hit this problem again, names like
`c_ld` and `cpp_ld` are far less likely to cause collisions than `ld`.
Additionally this gives a way to set the linker on a per-compiler basis,
which is probably in itself very useful.
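For illustration, a machine file along these lines (written from Python purely for the example; the exact entries are assumed from the c_ld/cpp_ld naming above) would select the linker per compiler:
```
# Write an illustrative native file that picks the linker per compiler,
# then point meson at it.
import pathlib
import textwrap

pathlib.Path('my-native.ini').write_text(textwrap.dedent('''\
    [binaries]
    c = 'clang'
    c_ld = 'lld'
    cpp = 'clang++'
    cpp_ld = 'lld'
'''))
# meson setup builddir --native-file my-native.ini
```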
Fixes #6442
When there is more than one path in PKG_CONFIG_PATH, it is almost always
preferable to find things in the order specified by PKG_CONFIG_PATH
instead of assuming pkg-config returns flags in a meaningful order.
For example:
/usr/local/lib/libgtk-3.so.0
/usr/local/lib/pkgconfig/gtk+-3.0.pc
/usr/local/lib/libcanberra-gtk3.so
/usr/local/lib/pkgconfig/libcanberra-gtk3.pc
/home/mesonuser/.local/lib/libgtk-3.so.0
/home/mesonuser/.local/lib/pkgconfig/gtk+-3.0.pc
PKG_CONFIG_PATH="/home/mesonuser/.local/lib/pkgconfig:/usr/local/lib/pkgconfig"
libcanberra-gtk3 is a library which depends on gtk+-3.0. The dependency
is mentioned in the .pc file with 'Requires', so flags from gtk+-3.0 are
used in both dynamic and static linking.
Assume the user wants to compile an application which needs both
libcanberra-gtk3 and gtk+-3.0. The application depends on features added
in the latest version of gtk+-3.0, which can be found in the home
directory of the user but not in /usr/local. When meson asks pkg-config
for linker flags of libcanberra-gtk3, pkg-config picks
/usr/local/lib/pkgconfig/libcanberra-gtk3.pc and
/home/mesonuser/.local/lib/pkgconfig/gtk+-3.0.pc. Since these two
libraries come from different prefixes, there will be two -L arguments
in the output of pkg-config. If -L/usr/local/lib is put before
-L/home/mesonuser/.local/lib, meson will find both libraries in
/usr/local/lib instead of picking libgtk-3.so.0 from the home directory.
This can result in linking failure such as undefined references error
when meson decides to put linker arguments of libcanberra-gtk3 before
linker arguments of gtk+-3.0. When both /usr/local/lib/libgtk-3.so.0 and
/home/mesonuser/.local/lib/libgtk-3.so.0 are present on the command
line, the linker chooses the first one and ignores the second one. If
the application needs new symbols that are only available in the second
one, the linker will throw an error because of missing symbols.
To resolve the issue, we should reorder -L flags according to
PKG_CONFIG_PATH ourselves before using it to find the full path of
library files. This makes sure that we always follow the preferences of
users, without depending on the unreliable part of pkg-config output.
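A minimal sketch of that reordering, with made-up helper names rather than Meson's actual code:
```
# Move -L flags whose directory appears (via its pkgconfig subdir) earlier in
# PKG_CONFIG_PATH to the front, preserving the user's preference order.
import os

def reorder_link_dirs(link_args, pkg_config_path):
    # '/home/mesonuser/.local/lib/pkgconfig' -> '/home/mesonuser/.local/lib'
    preferred = [os.path.dirname(p) for p in pkg_config_path.split(os.pathsep) if p]

    def rank(arg):
        libdir = arg[2:]
        return preferred.index(libdir) if libdir in preferred else len(preferred)

    ordered = iter(sorted((a for a in link_args if a.startswith('-L')), key=rank))
    return [next(ordered) if a.startswith('-L') else a for a in link_args]

args = ['-L/usr/local/lib', '-L/home/mesonuser/.local/lib', '-lcanberra-gtk3']
print(reorder_link_dirs(args,
      '/home/mesonuser/.local/lib/pkgconfig:/usr/local/lib/pkgconfig'))
# ['-L/home/mesonuser/.local/lib', '-L/usr/local/lib', '-lcanberra-gtk3']
```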
Fixes https://github.com/mesonbuild/meson/issues/4271.
There are two awful things about CompilerArgs, one is that it directly
inherits from list, and there are a lot of subtle gotchas with
inheriting from builtin types. The second is that the class allows
arguments to be passed in whatever order. That's bad. This also fully
annotates the CompilerArgs class, so mypy can type check it for us.
This dumps xild on mac and linux. After a lot of reading and banging my
head I've discovered we (meson) don't care about xild, xild is only
useful if you invoke ld directly (not through icc/icpc) and you want to
do ipo/lto/wpo. Instead just make icc report what it's actually doing,
invoking ld or ld64 (for linux and mac respectively) directly. This
allows us to get -fuse-ld working on linux.
Previously, if a user tried to pass a command line build option that
contained a '%' character, the command line parser assumed that there was
string interpolation to be done. Since interpolation makes no sense in
this scenario, no code provides any input for it, and this then leads to
a failure.
In this commit we specifically override the defaults in
ConfigParser and set interpolation to None, which disables
command line build option interpolation.
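Concretely, the fix amounts to something like this (section and option names are illustrative):
```
# With interpolation=None, a literal '%' in an option value is preserved
# instead of being treated as an interpolation placeholder.
import configparser

parser = configparser.ConfigParser(interpolation=None)
parser.read_string('[options]\ncpp_args = -DPCT="%"\n')
print(parser['options']['cpp_args'])  # -DPCT="%" comes back untouched
```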
Fixes #6157
A build with a cross file should always be identified as a cross build, even if
the host and build machine are identical. This was the case in 0.50, regressed
in 0.51, and is fixed again in 0.52, so add a test case to ensure it doesn't
regress again.
Now that the linkers are split out of the compilers this enum is only
used to know what platform we're compiling for, which is what the
MachineInfo class is for.
test3-static was actually always using the shared library because that
warning was not fatal:
WARNING: Static library 'func6' not found for dependency 'func6', may
not be statically linked
The reason libfunc6.a wasn't found is that the prefix in the generated pc
file was not set to the install dir.
In qemu, minikconf generates a depfile that meson could use to
automatically reconfigure on dependency change.
Note: someone clever can perhaps find a way to express this with a
ninja rule & depfile=. I didn't manage, so I wrote a simple depfile
parser.
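For illustration, a simple depfile parser of the kind described might look like this (not the code added in this commit):
```
# Split a Makefile-style depfile into {target: [dependencies]}, handling
# '\' line continuations and '\ ' escaped spaces.
import re

def parse_depfile(text):
    rules = {}
    text = text.replace('\\\n', ' ')  # join escaped newlines first
    for line in text.splitlines():
        if ':' not in line:
            continue
        target, deps = line.split(':', 1)
        parts = re.split(r'(?<!\\)\s+', deps.strip())
        rules[target.strip()] = [d.replace('\\ ', ' ') for d in parts if d]
    return rules

print(parse_depfile('config-host.h: configure \\\n scripts/minikconf.py\n'))
# {'config-host.h': ['configure', 'scripts/minikconf.py']}
```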
According to http://testanything.org/tap-specification.html
"Any output line that is not a version, a plan, a test line, a
diagnostic or a bail out is considered an “unknown” line. A TAP parser
is required to not consider an unknown line as an error but may
optionally choose to capture said line and hand it to the test
harness, which may have custom behavior attached [...] TAP::Harness
reports TAP syntax errors at the end of a test run".
(glib gtest can generate empty lines)
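A toy classifier illustrating that rule (not Meson's TAP parser):
```
# Anything that is not a recognized TAP construct, including glib's empty
# lines, is classified as 'unknown' and must not be treated as an error.
import re

def classify(line):
    if re.match(r'(not )?ok\b', line):
        return 'test'
    if re.match(r'\d+\.\.\d+', line):
        return 'plan'
    if line.startswith('#'):
        return 'diagnostic'
    if line.startswith('Bail out!'):
        return 'bail out'
    if line.startswith('TAP version'):
        return 'version'
    return 'unknown'

print(classify(''))  # 'unknown'
```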
The main library must come before extra libraries, because they are
likely to be dependencies of the main library that get promoted from
private to public. This was causing static link issues with glib-2.0.pc.
On illumos (and presumably Solaris, though I can't test) cc normally
points to Sun CC, which we don't support. So ensure that gcc is used
explicitly in that case.
* Do not strip static archives
Stripping static archives without more fine-grained options (e.g. `-g`)
leads to failures such as
ld: libfoo.a: error adding symbols: archive has no index; run ranlib to add one
because GNU strip removes *every* symbol in a static archive by default.
Given that static archives are not final build artifacts (unlike
executables and shared libraries), stripping them gains little and only
causes more edge case failures.
* Gentoo's portage only strips debug information:
86f211e3a5/bin/estrip (L322)
* Fedora also only strips debug information:
e9c13c6565/scripts/brp-strip-static-archive (L18)
* Debian also only does some very light stripping:
72ed1d3261/dh_strip (L374)
Fixes #4138
* Add test case for static archive stripping
Instead of the DynamicLinker returning a hardcoded value like
`-Wl,-foo`, it is now passed a value that could be '-Wl,', or could be
something like '-Xlinker='.
This makes a few things cleaner, and will make it possible to fix using
clang (not clang-cl) on windows, where it invokes either link.exe or
lld-link.exe instead of a gnu-ld compatible linker.
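A toy illustration of the idea, with made-up class and method names rather than Meson's:
```
# The compiler supplies the wrapper prefix instead of the linker
# hardcoding '-Wl,'.
class GnuLikeLinkerSketch:
    def __init__(self, prefix: str):
        self.prefix = prefix  # '-Wl,' for gcc/clang, or another prefix as above

    def get_soname_args(self, soname: str):
        return [self.prefix + '-soname,' + soname]

print(GnuLikeLinkerSketch('-Wl,').get_soname_args('libfoo.so.1'))
# ['-Wl,-soname,libfoo.so.1']
```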
Clang doesn't really like having no-undefined plus the address sanitizer, but
gcc doesn't mind. This all happens to work with clang + gnu ld, but with clang
+ apple ld this turns into a dumpster fire. Just add b_lundef=false to make
everyone happy.
@TingPing has a repository that contains a grammar for meson which is
used by linguist (GitHub), and by many editors such as Atom, VS Code,
TextMate, Sublime Text, etc. Add CI so that we notice when the function
list in it is out of date, as in https://github.com/TingPing/language-meson/pull/3
It's harder to do this generically for other syntax such as the `in`
keyword, but it's better than nothing.
This reverts the changes to the `section` key for the buildoptions and
moves the machine choice into its own `machine` key.
With this commit the __undocumented__ breaking change
to the introspection format (introduced in 0.51.0) is
reverted and a new key is added instead.
Instead of trying to guess whether we need py or python3, and then
falling over when whatever we guessed isn't in the path or isn't right,
just use sys.executable which should always work.
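A sketch of the approach (the script name is illustrative):
```
# Re-run helper scripts with the interpreter that is already running,
# instead of guessing 'py' vs 'python3'.
import subprocess
import sys

subprocess.check_call([sys.executable, 'run_tests.py', '--backend=ninja'])
```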
* coredata: Correctly handle receiving a pipe for native/cross files
In some cases a cross/native file may be a pipe, such as when using bash
process substitution, e.g. `meson --native-file
<(echo "[binaries]"; echo "llvm-config='/opt/bin/llvm-config'")`. In this
case we copy the contents of the pipe into a file in the meson-private
directory so we can create a proper ninja dependency, and be able to
reload the file on --wipe/--reconfigure. This requires some extra
negotiation to preserve these native/cross files.
Fixes #5505
* run_unitests: Add a unit test for native files that are pipes
Using mkfifo.
We were setting the base options for the Objective-C compiler
manually, due to which options such as b_bitcode and b_ndebug were not
getting set at all.
The base options here are the same as for C code with the Clang
compiler, so just use the same inherited list.
Also expand the bitcode test to ObjC and ObjC++ so this doesn't happen
again.
In most cases instead pass `for_machine`, the name of the relevant
machines (what compilers target, what targets run on, etc). This allows
us to use the cross code path in the native case, deduplicating the
code.
As one can see, environment got bigger as more information is kept
structured there, while ninjabackend got a bit smaller. Overall a fair
number of lines were added, but the hope is that what's added is a lot
simpler than what's removed.
It was using ':' as a path separator while GCC uses ';' resulting in bogus
paths being returned. Instead assume that the compiler uses the platform native
separator.
The previous splitting code still worked sometimes because splitting
"C:/foo;C:/bar" resulted in the last part "/bar" being valid if "<DriveOfCWD>:/bar"
existed.
The fix also exposes a clang Windows bug where it uses the wrong separator:
https://reviews.llvm.org/D61121 . Use a regex to fix those first.
This resulted in linker errors when statically linking against a library which
had an external dependency linking against system libs.
Fixes #5386
Currently this test assumes that the user doesn't have XDG_DATA_HOME set
in their environment, but this isn't a good assumption, and can result in
the test not actually testing what it's meant to.
Meson itself *almost* only cares about the build and host platforms. The
exception is it takes a `target_machine` in the cross file and exposes
it to the user; but it doesn't do anything else with it. It's therefore
overkill to put target in `PerMachine` and `MachineChoice`. Instead, we
make a `PerThreeMachine` only for the machine infos.
Additionally fix a few other things that were bugging me in the process:
- Get rid of `MachineInfos` class. Since `envconfig.py` was created, it
has no methods that couldn't just go on `PerMachine`
- Make `default_missing` and `miss_defaulting` work functionally. That
means we can just locally bind rather than bind as class vars the
"unfrozen" configuration. This helps prevent bugs where one forgets
to freeze a configuration.
The Intel compiler is strange. On Linux and macOS it's called ICC, and
it tries to mostly behave like gcc/clang. On Windows it's called ICL,
and tries to behave like MSVC. This makes the code that's used to
implement ICC support useless for supporting ICL, because their command
line interfaces are completely different.
pkg-config(1) on OpenBSD is not the one from freedesktop.org and hence has
subtle differences (which don't impact real usage). The meson test fails
because white space around operators is stripped by our pkg-config:
$ grep Require /usr/local/lib/pkgconfig/xmlsec1.pc
Requires: libxml-2.0 >= 2.8.0 libxslt >= 1.0.20
$ pkg-config --print-requires xmlsec1
libxml-2.0>=2.8.0
libxslt>=1.0.20
There actually is an ICC for windows that can be used to cross compile
from Windows to Linux, but it's not supported in meson currently and I
don't plan to enable it.
Currently C++ inherits C, which can lead to diamond problems. By pulling
the code out into a standalone mixin class that the C, C++, ObjC, and
Objc++ compilers can inherit and override as necessary we remove one
source of diamonding. I've chosen to split this out into its own file
as the CLikeCompiler class is over 1000 lines by itself. This also
breaks the VisualStudio derived classes inheriting from each other, to
avoid the same C -> CPP inheritance problems. This is all one giant
patch because there just isn't a clean way to separate this.
I've done the same for Fortran since it effectively inherits the
CCompiler (I say effectively because what it actually did was gross
beyond explanation); it's probably not correct, but it seems to work for
now. There really is a lot of layering violation going on in the
Compilers, and a really good scrubbing would do this code a lot of good.
For consistency, it can be useful to have an explicit empty test suite list
for a test:
test('test-name', binary, suite: [])
This currently passes meson but fails when running meson tests:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/mesonbuild/mesonmain.py", line 122, in run
return options.run_func(options)
File "/usr/lib/python3.7/site-packages/mesonbuild/mtest.py", line 1005, in run
return th.doit()
File "/usr/lib/python3.7/site-packages/mesonbuild/mtest.py", line 756, in doit
self.run_tests(tests)
File "/usr/lib/python3.7/site-packages/mesonbuild/mtest.py", line 896, in run_tests
visible_name = self.get_pretty_suite(test)
File "/usr/lib/python3.7/site-packages/mesonbuild/mtest.py", line 875, in get_pretty_suite
rv = TestHarness.split_suite_string(test.suite[0])[0]
IndexError: list index out of range
Fix it by simply checking that the test suite is a valid list before we pass it on.
Fixes #5340
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Before this we only covered >, <, and ==, but we can apply some basic
logic to know that a > b == !(a <= b), or that if a > b then a != b.
This uncovered some bugs I wrote while working on this code.
We really should be testing using the operators themselves, not the
internal implementations, i.e. we should use a > b, not a.__cmp__(b) == 1,
in our tests, because __cmp__ is just an implementation detail.
This isn't safe given the way python implements default arguments.
Basically, python stores a reference to the instance it was passed, and
then if that argument is not provided it uses the default. That means
that two calls to the same function get the same instance; if one of
them mutates that instance, every subsequent call that gets the default
will receive the mutated instance. The idiom for this in python is to use
None and replace the None inside the function:
def contains(value: str, container: Optional[List[str]]) -> bool:
    return value in (container or [])
If there is no chance of mutation it's less code to use `or` and take
advantage of None being falsy. If you may want to mutate the value
passed in, you need a ternary (this example is stupid):
def add(value: str, container: Optional[List[str]]) -> None:
    container = container if container is not None else []
    container.append(value)
I've used `or` everywhere I'm sure that the value will not be mutated by
the function, and erred toward caution by using ternaries for the rest.
* coredata: store cross/native files in the same form they will be used
Currently they're forced to absolute paths when they're stored in the
coredata datastructure, then when they're loaded we de-absolute path
them to check if they're in the system wide directories. This doesn't
work at all, since the ninja backend will generate a dependency on a
file that is in the source directory unless the path was already given
as absolute. This results in builds being retriggered forever due to
a non-existent file.
The right way to do this is to figure out whether the file is in the
build directory, is absolute, or is in one of the system paths at
creation time, and store that path as absolute. Then the code that
reads the file and the code that generates the dependencies in the
ninja backend just takes the computed list and there is no mismatch
between them.
Fixes #5257
* run_unittests: Add a test for correct native file storage
This tests the bug in #5257
Warn when someone tries to use append() or prepend() on an env var
which already has an operation set on it. People seem to think that
multiple append/prepend operations stack, but they don't.
Closes https://github.com/mesonbuild/meson/issues/5087
This creates a new command line option to store pkg_config_path into,
and store the environment variable into that option. Currently this
works like the environment variable, for both cross and native targets.
Since sanity check now includes CFLAGS, the test fails earlier.
But if the compiler is ICC, it will only fail during the build proper as
before, since that's where the flag making `-std=unknown` an error rather
than a warning is used.
This patch creates an enum for selecting libtype as static, shared,
prefer-static, or prefer-shared. This also renames 'static-shared' to
'prefer_static' and 'shared-static' to 'prefer_shared'. This is just a
refactor with no behavioral or user-facing changes.
OpenBSD doesn't have any support for the compiler sanitizers yet.
While this may change in the future, it's better to fix the test suite run
in "failfast" mode for now. This can be revisited once (if) we get support
in the future.
* clang 7.0.1
$ make CFLAGS=-fsanitize=address foo
cc -fsanitize=address -o foo foo.c
cc: error: unsupported option '-fsanitize=address' for target 'amd64-unknown-openbsd6.5'
* gcc 4.2.1
*** Error 1 in /tmp (<sys.mk>:85 'foo')
$ make CC=gcc CFLAGS=-fsanitize=address foo
gcc -fsanitize=address -o foo foo.c
cc1: error: unrecognized command line option "-fsanitize=address"
* gcc 8.2.0
$ make CC=egcc CFLAGS=-fsanitize=address foo
egcc -fsanitize=address -o foo foo.c
ld: error: unable to find library -lasan
collect2: error: ld returned 1 exit status
This provides an initial support for parsing TAP output. It detects failures
and skipped tests without relying on exit code, as well as early termination
of the test due to an error or a crash.
For now, subtests are not recorded in the TestRun object. However, because the
TAP output goes on stdout, it is printed by --print-errorlogs when a test does
not behave as expected. Handling subtests as TestRuns, and serializing them
to JSON, can be added later.
The parser was written specifically for Meson, and comes with its own
test suite.
Fixes #2923.
This makes the testsuite work better with other test runners, like
pytest. This is important because better test runners are very useful to
development (e.g. avoiding running succeeding tests again and again),
even if we want to still support zero-dependency testing of Meson through
keeping the default test runner working.
This allows the person running configure (either a developer, user, or
distro maintainer) to keep a configuration of where various kinds of
files should end up.
Instead use coredata.compiler_options.<machine>. This brings the cross
and native code paths closer together, since both now use that.
Command line options are interpreted just as before, for backwards
compatibility. This does introduce some funny conditionals. In the
future, I'd like to change the interpretation of command line options so
- The logic is cross-agnostic, i.e. there are no conditions affected by
`is_cross_build()`.
- Compiler args for both the build and host machines can always be
controlled by the command line.
- Compiler args for both machines can always be controlled separately.
macOS provides the tool `lipo` to check the archs supported by an
object (executable, static library, dylib, etc). This is especially
useful for fat archives, but it also helps with thin archives.
Without this, the linker will fail to link to the library we mistakenly
'found' like so:
ld: warning: ignoring file /path/to/libfoo.a, missing required architecture armv7 in file /path/to/libfoo.a
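A rough sketch of the kind of check lipo enables (the path and architecture are illustrative, and this is not Meson's actual code):
```
# Use `lipo -info` to see which architectures an archive contains.
import subprocess

def archs_in(path: str):
    out = subprocess.check_output(['lipo', '-info', path], text=True)
    # e.g. "Architectures in the fat file: libfoo.a are: armv7 arm64"
    return out.strip().rsplit(':', 1)[1].split()

print('armv7' in archs_in('/path/to/libfoo.a'))
```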
Instead of only doing a naive filesystem search, also run the linker
so that it can tell us whether the -F path specified actually contains
the framework we're looking for.
Unfortunately, `extraframework` searching is still not 100% correct,
since we want to search in either /Library/Frameworks or
in /System/Library/Frameworks but not in both. The -Z flag disables
searching in those prefixes and would in theory allow this, but then
you cannot force the linker to look in those by manually adding -F
args, so that doesn't work.
Also add a test for it. In the process, also remove an overly-zealous
try..except statement that was catching *all* exceptions, not just
expected ones, which was masking programming errors.
Also ensure that the test's no-pkg-config codepath will always be run,
even on the CI where we always have pkg-config available.
This counts as a test case for #4728
The native file tests were never run on the CI since they were skipped
on Windows and also skipped on Linux and macOS since CC/CXX/etc are
always set by the CI.
Also fix test failure on macOS. The test was assuming that because
/usr/bin/gcc and /usr/bin/clang exist on macOS, they must be different
compilers. They're not. gcc is just a wrapper around clang, and we
correctly detect it as such.
In some cases (see #4817) it's helpful if the output file uses the
same newlines as the input file without translating them to the
platform defaults.
open() by default recognizes all newline styles and translates them
to "\n" and then to the platform default when writing.
Passing "" to "newline" disables the translation and lets us pass through
the original newline characters.
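A minimal sketch of the idea (file names are placeholders):
```
# newline='' turns off universal-newline translation, so whatever line
# endings the input uses ('\n', '\r\n', ...) survive into the output.
with open('template.txt', newline='') as src:
    content = src.read()
with open('output.txt', 'w', newline='') as dst:
    dst.write(content)
```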
Our builddir ABI is stable across minor (stable) releases, so there is
no need to force a wipe. We already release pretty often, no need to
force people to wipe twice as often.
* Fixed spelling
* Merged the Buildoptions and Projectinfo interpreter
* Moved detect_compilers to Environment
* Added removed test case
* Split detect_compilers and moved even more code into Environment
* Moved set_default_options to coredata
* Small code simplification in mintro.run
* Move cmd_line_options back to `environment`
We don't actually wish to persist something this unstructured, so we
shouldn't make it a field on `coredata`. It would also be data
denormalization since the information we already store in coredata
depends on the CLI args.
The returned not-found object can be of any type because we were
returning the first of the failed attempts. It also can happen that we
don't have any dependency object in which case we should just return
NotFoundDependency() object as well instead of raising an exception.
That exception was happening before, but dependency_impl() was
calling find_external_dependency() in a try block so it was hidden.
This seems to be related to deleting the current working directory.
Simply deleting all of the trees inside the build directory instead
seems to fix it. This only appears with some combination of generated
targets, running the test case against say "1 trivial" doesn't show the
bug.
See this mesa bug: https://bugs.freedesktop.org/show_bug.cgi?id=109071
Since `_process_libs` already appends the lib's dependencies to this list,
the final return value of `_process_libs` will end up after its
dependencies, which is the wrong way around. (The lib must come first,
then its dependencies)
The easiest solution is to simply pre-pend the return value of
`_process_libs` rather than appending it, so that its dependencies come
after the library itself.
Closes #4091.
llvm-config --libfiles --link-shared wants to link to a bunch of shared
libraries which don't exist, so we end up at dev.py:308, but the guess
that it makes ('libLLVM*.dll') doesn't take into account the existence of
implibs (which is fixable), but even if it did, 'libLLVM-7.0.dll.a'
doesn't seem to exist... so not sure how to fix this.
Also some steps towards making that work:
Adjust helper_create_binary_wrapper for MSYS2. The .bat wrapper should
run msys2 python, not try to invoke the 'py' python launcher (which may
not be present)
Suppress echoing of the command in helper_create_binary_wrapper
(otherwise the echoed command can interfere in interpreting the output
of the wrapped command, which seems to be the case when it's
llvm-config)
This variant was added to allow introspection before configuring a build
directory. This is useful for IDE integration to allow displaying and/or
setting options for the initial configuration of the build directory.
It also allows showing basic information about the project even if it's
not yet configured or configuring failed.
The project 'name' field in --projectinfo is used inconsistently:
For the top level project it always shows the name configured in
the top level meson.build file. For subprojects it refers to the name of
the directory the subproject's meson.build is contained in.
To have a consistent output and preserve the existing behavior this adds
the 'descriptive_name' field which always shows the name set in the
project.
To be consistent the 'descriptive_name' field was also added to the
--projectfiles variant that uses an already configured build.
It also extends the information shown with the list of buildsystem-files.
This is currently only implemented in the variant for unconfigured
projects.
Fixed-size hash makes paths shorter and prevents doubling of path length
because of subdir usage in target id: "subdir/id" would generate
"subdir/{subdir-without-slashes}@@id" target otherwise.
Export construct_id_from_path() to aid tests.
Add a separate unit test for this function to make sure it is not broken unexpectedly.
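An illustrative sketch of the scheme (hash length and separator are assumptions, not necessarily the exact Meson implementation):
```
# Replace the raw subdir in a target id with a short fixed-size hash so the
# id length no longer grows with the subdir path.
import hashlib

def construct_id_from_path(subdir: str, name: str, type_suffix: str) -> str:
    my_id = name + type_suffix
    if not subdir:
        return my_id
    subdir_hash = hashlib.sha1(subdir.encode()).hexdigest()[:7]
    return subdir_hash + '@@' + my_id

print(construct_id_from_path('some/sub/dir', 'mylib', '@sha'))
```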
Closes #4226.
this adds support for generating pkgconfig files for c#.
The difference from c and cpp is that the -I flag is not known to the c#
compiler; instead the -r flag is used to link a .dll file into the
compiled library.
However this opens the question of validating which pkgconfig files can
be generated (depending on the language).
This implements 4409.
Part of using ICC is configuring LD_LIBRARY_PATH so that you can link
with several Intel-specific .so's. Currently meson blanket-overrides
LD_LIBRARY_PATH in several tests, which breaks them. Instead, prepend
the test dir to LD_LIBRARY_PATH. Fixes 6 tests with ICC.
This commit adds a nice decorator helper for skipping tests when they
require the compiler to implement a specific base option, and uses it to
turn off b_sanitize tests, which fixes some tests on ICC.
samu prints a different message when the build is a no-op, so make
assertBuildIsNoop consider that as well.
Also, if compile_commands.json cannot be found, just skip the test. This
seems reasonable since meson just produces a warning if `ninja -t compdb`
fails.
Finally, only capture stdout in run_meson_command_tests.py, since the
backend may print messages the tests don't recognize to stderr.
Fixes #3405.
When trying to cross-compile mesa on an aarch64 system, I noticed some
strange behavior. Meson would only ever find the wayland-scanner binary
in my host machine's sysroot (/mnt/amethyst):
Native dependency wayland-scanner found: YES 1.16.0
Program /mnt/amethyst/usr/bin/wayland-scanner found: YES (/mnt/amethyst/usr/bin/wayland-scanner)
It should be finding /usr/bin/wayland-scanner instead, since the
wayland-scanner dependency is created as native. On closer inspection,
it turned out that meson was ignoring the native argument passed to
dependency(), and would always use the pkg-config binary specified in my
toolchain instead of the native one (/usr/bin/pkg-config):
Native dependency wayland-scanner found: YES 1.16.0
Called `/home/lyudess/Projects/panfrost/scripts/amethyst-pkg-config
--variable=wayland_scanner wayland-scanner` -> 0
Turns out that if we create a dependency() object with native:false, we
end up caching the pkg-config path for the host machine in
PkgConfigDependency.class_pkgbin, instead of the build machine's
pkg-config path. This results in all pkg-config invocations for
dependency() objects using the host machine's pkg-config binary,
regardless of whether or not 'native: true' was specified when the
dependency() object was instantiated.
So, fix this by never setting PkgConfigDependency.class_pkgbin for cross
dependency() objects. Also, add some test cases for this. Since
triggering this bug can be avoided by creating a dependency() object
with native:true before creating any with native:false, we make sure
that our test has two modes: one where it starts with a native
dependency first, and another where it starts with a cross dependency
first.
As a final note here: We currently skip this test on windows, because
windows doesn't support directly executing python scripts as
executables: something that we need in order to point pkgconfig to a
wrapper script that sets the PKG_CONFIG_LIBDIR env appropriately before
calling pkg-config.
Signed-off-by: Lyude Paul <thatslyude@gmail.com>
AR wasn't reset in the environment, so this test could fail if more than one
language compiler was specified in the environment and the linker wasn't
'ar'
Handle clang's cl or clang-cl being in PATH, or set in CC/CXX
Future work: checking the name of the executable here seems like a bad idea.
These compilers will fail to be detected if they are renamed.
v2:
Update compiler.get_argument_type() test
Fix comparisons of id inside CCompiler, backends and elsewhere
v3:
ClangClCPPCompiler should be a subclass of ClangClCCompier, as well
Future work: mocking in test_find_library_patterns() is affected, as we
now test for a subclass, rather than self.id in CCompiler.get_library_naming()
Remove the code responsible for implicitly compressing manpages as .gz
files. It has been established that manpage compression is a distro
packager's task, with existing distros already having their own
implementations of compression.
Fixes #4330
Occasionally Darwin libraries can be .so rather than .dylib e.g. tensorflow_cc.so
tensorflow_cc is a c++ API for Tensorflow (https://github.com/FloopCZ/tensorflow_cc)
which was primarily written for Linux but is also compilable on Darwin. Possibly
through laziness, possibly just to have consistent filenames, the developers did not
opt to change the suffix from the Linux default when this is compiled on Darwin.
Also, the Darwin linker will find libraries with a .so suffix if they are
in its path. find_library() needs to match the linker behaviour.
Fixes Issue #4323.
The check to see if a call to configure_file() overwrites the output of
a preceding call should perform the substitution for the output file
before doing the check.
Added tests to ensure the proper behaviour.
- It tests both Qt4 and Qt5 detection -> qtdependency_pkgconfig_detection
- It doesn't need to skip the test if Qt4 isn't found, and if Qt5 isn't
found, the meson.build file already skips the test.
- The regex was outdated and, since it was skipped because of Qt4, it has
been silently broken for a long time.
Signed-off-by: Alexis Jeandet <alexis.jeandet@member.fsf.org>
* Enums are strongly typed and make the whole
`gcc_type`/`clang_type`/`icc_type` distinction
redundant.
* Enums also allow extending via member functions,
which makes the code more generalisable.
Correct version_compare_condition_with_min() for the case where no minimum
version is established by the version constraint. Add a simple test.
Also fix test_feature_check_usage_subprojects by escaping regex
metacharacters.
if |condition| is '<', '<=' or '!=', the minimum version satisfying the
condition is 0, so the minimum version for a feature is never met.
if |condition| is '>=' or '==', the minimum version satisfying the condition
is the version compared with, so the minimum version for a feature must be
less than or equal to that.
if |condition| is '>', the minimum version satisfying the condition is
greater than the version compared with, so the minimum version for a feature
must be less than that
(it's this last condition that makes this function necessary, as in all
other cases we could establish a definite minimum version which we could
compare to see if it's less than or equal to the current version)
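A rough sketch of that rule (a simplification with naive version parsing, not Meson's implementation):
```
# Does a version constraint guarantee that the feature's minimum version
# is met?
def _v(s):
    return tuple(int(p) for p in s.split('.'))

def condition_meets_minimum(condition: str, compared: str, minimum: str) -> bool:
    if condition in ('<', '<=', '!='):
        return False                       # satisfiable by version 0
    if condition in ('>=', '=='):
        return _v(minimum) <= _v(compared)
    if condition == '>':
        return _v(minimum) < _v(compared)  # strictly less, per the last case above
    return False

print(condition_meets_minimum('>=', '0.48.0', '0.47.0'))  # True
```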
Currently this trims '0.48.0.dev1' to '0.48.0', and then requires exactly
that version in the generated meson.build for the test.
Just use the exact version.
Also only use a 'project(meson_version:)' constraint in the generated
project if a version is specified
Also remove unused grab_leading_numbers