t.pic won't be defined. We can only hope it has been built with -fPIC;
the linker will complain otherwise anyway.
t.extract_all_objects_recurse() won't be defined. We could support this
case by extracting the archive somewhere and picking out the object files.
Python 3.5 doesn't support some things, like `method[...](...)` or
`x: ... = ...`, so I made those comments instead, with the intention
that they can someday be made into real annotations.
In most cases, instead pass `for_machine`, the name of the relevant
machine (what compilers target, what targets run on, etc.). This allows
us to use the cross code path in the native case, deduplicating the
code.
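A minimal sketch of the idea (the enum and lookup are assumptions, not
Meson's exact API):
from enum import Enum

class MachineChoice(Enum):
    BUILD = 0  # the machine doing the compiling
    HOST = 1   # the machine the compiled code will run on

def get_compiler(compilers_per_machine, lang, for_machine):
    # one code path for both native and cross builds: the caller simply
    # names the machine it cares about
    return compilers_per_machine[for_machine][lang]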
As one can see, environment got bigger as more information is kept
structured there, while ninjabackend got smaller. Overall a small number
of lines were added, but the hope is that what's added is a lot simpler
than what's removed.
This function is currently set up with keyword arguments defaulting to
None. However, it is never called without passing all of its arguments
explicitly, and only one of its arguments would actually be valid as
None. So just drop that, make them all positional, and annotate them.
This isn't safe given the way Python implements default arguments.
Basically, Python stores a reference to the instance given as the
default value, and if the argument is not provided at the call site it
uses that instance. That means that two calls to the same function get
the same instance; if one of them mutates that instance, every
subsequent call that gets the default will receive the mutated instance.
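A minimal demonstration of the pitfall (hypothetical function):
def append_bad(value, container=[]):  # the [] is created once, at definition time
    container.append(value)
    return container

append_bad('a')  # returns ['a']
append_bad('b')  # returns ['a', 'b'] -- the same list, mutated by the first call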
The idiom for this in Python is to use None and replace it inside the
function:
def is_in(value: str, container: Optional[List[str]] = None) -> bool:
    return value in (container or [])
If there is no chance of mutation, it's less code to use `or` and take
advantage of None being falsy. If you may want to mutate the value
passed in, you need a ternary (this example is stupid):
def add(value: str, container: Optional[List[str]] = None) -> None:
    container = container if container is not None else []
    container.append(value)
I've used `or` everywhere I'm sure that the value will not be mutated by
the function, and erred toward caution by using ternaries for the rest.
If find_program() returns a file from the source directory, anything
that uses it should add the file to its dependencies, so that it is
rebuilt whenever the script changes. Generator is not doing that.
While at it, I am making two related fixes:
- Generator is not checking whether the generator program actually was
found, resulting in a Python error involving NoneType if it isn't. To
minimize backwards-compatibility issues, I am only raising the error
when g.process() is actually called.
- The error message for custom_target with a nonexistent program
erroneously mentions a not-found external program "nonexistingprogram".
The new error is similar to the one I am adding for generators.
Warn when someone tries to use append() or prepend() on an env var
which already has an operation set on it. People seem to think that
multiple append/prepend operations stack, but they don't.
Closes https://github.com/mesonbuild/meson/issues/5087
I'll be using this later, but it seems useful to allow dependencies
that have special handlers to declare that they depend on other
dependencies. This should allow us to stop treating threads as special
internally and just make it a normal dependency.
Since the "-l<lib>" flags in the build.ninja file are passed in
"--start-group"/"--end-group" flags, there should be no need to have any
library listed twice, even if there are circular dependencies. Therefore we
can eliminate duplicates. For speed, rather than deduplicating at the end
of the process, it's faster to not add the duplicate flags in the first
place.
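A minimal sketch of the approach (variable names hypothetical): consult
a set before appending, instead of deduplicating afterwards.
libs = ['m', 'z', 'm']  # example input containing a duplicate
seen = set()
link_args = []
for lib in libs:
    flag = '-l' + lib
    if flag not in seen:  # skip the duplicate up front
        seen.add(flag)
        link_args.append(flag)
# link_args == ['-lm', '-lz']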
This should help fix #2150
A custom_target, if install is set to true, will always be built by
default even if build_by_default is explicitly set to false.
Ensure that this does not happen if it's set explicitly. To keep
backward compatibility, if build_by_default is not set explicitly and
install is true, set build_by_default to true.
Fixes #4107
Previously, the configuration worked fine, but the compiler raised an
error. Now, we explicitly check for the existence of files and print a
useful error message if they do not exist.
A fixed-size hash makes paths shorter and prevents the doubling of path
length caused by using the subdir in the target id: "subdir/id" would
otherwise generate a "subdir/{subdir-without-slashes}@@id" target.
Export construct_id_from_path() to aid tests.
Add a separate unit test for this function to make sure it is not broken unexpectedly.
Closes #4226.
It's a (presumably unintentional) quirk of the current implementation of
SharedLibrary.determine_filenames() that if both name_suffix and
name_prefix are set, an import library isn't generated.
Adjust test 'common/25 library versions': make the library have exports,
so an implib is generated with MSVC. Add the implib to the set of files
expected to be installed.
Adjust test 'common/122 shared module': add the libnosyms implib to the
set of files expected to be installed, except for MSVC, where none is
generated as it has no exports.
Use the specified name_prefix for implib, rather than hardcoding it.
(This is needed to allow an installed library given name_prefix:'' and a
name starting with 'lib' to be linked with using -l. (This case is
handled specially in the pkgconfig module))
Handle clang's cl or clang-cl being in PATH, or set in CC/CXX
Future work: checking the name of the executable here seems like a bad idea.
These compilers will fail to be detected if they are renamed.
v2:
Update compiler.get_argument_type() test
Fix comparisons of id inside CCompiler, backends and elsewhere
v3:
ClangClCPPCompiler should be a subclass of ClangClCCompiler as well.
Future work: mocking in test_find_library_patterns() is affected, as we
now test for a subclass, rather than self.id, in CCompiler.get_library_naming()
If a subproject is not required and fails during its configuration, the
parent project continues, but should not include any target or state set
by the failed subproject. This fixes ninja still trying to build targets
that a subproject generated before failing its configuration.
The 'build' object is now per-interpreter instead of being global. Once
a subproject interpreter succeeds, values from its 'build' object are
merged back into its parent's 'build' object.
Before, the mappings were created over all the links, while actually
only the shared or static targets were used. The structure is now
tree-like and cached, so the results can be computed a lot faster.
For EFL, the generator step generate_install went from 6 sec. down to
0.3 sec., which improves the overall run time from ~20 sec. to ~14 sec.
There is a huge number of isinstance calls; this reduces the number of
these calls while splitting up a rather big function. It also associates
every target type with its default install directory.
The problem with the earlier position of the generation code was that
the results could not be cached, because the full list of link_deps
differed between targets. However, it shared certain subsets with
other build targets.
Generating the set of subdirs that are required for linking, alongside
the link dependencies, makes it possible to cache this.
This reduces the run time for EFL from 1 min. down to 20 sec. and
greatly reduces the previous 30872534 calls.
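A minimal sketch of the caching idea (names hypothetical):
_subdir_cache = {}

def link_dep_subdirs(target):
    # the subset of subdirs needed for linking is shared between many
    # targets, so compute it once per target and reuse it
    key = target.get_id()
    if key not in _subdir_cache:
        result = {target.subdir}
        for dep in target.link_targets:
            result |= link_dep_subdirs(dep)
        _subdir_cache[key] = result
    return _subdir_cache[key]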
This saves ~40 sec.
With this it is now possible to do
foobar = executable('foobar', ...)
meson.override_find_program('foobar', foobar)
Which is convenient for a project like protobuf which produces both a
dependency and a tool. If protobuf is updated to use
override_find_program, it can be used as
protobuf_dep = dependency('protobuf', version : '>=3.3.1',
  fallback : ['protobuf', 'protobuf_dep'])
protoc_prog = find_program('protoc')
We now use the soversion to set compatibility_version and
current_version by default. This is the only sane thing we can do by
default because of the restrictions on the values that can be used for
compatibility and current version.
Users can override this value with the `darwin_versions:` kwarg, which
can be a single value or a two-element list of values. The first one
is the compatibility version and the second is the current version.
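A sketch of the defaulting described above (a simplification, not the
exact implementation):
def get_darwin_versions(soversion, darwin_versions=None):
    # both versions default to the soversion; a single value sets both,
    # a pair sets [compatibility_version, current_version]
    if darwin_versions is None:
        return soversion, soversion
    if isinstance(darwin_versions, list):
        compat, current = darwin_versions
        return compat, current
    return darwin_versions, darwin_versions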
Fixes https://github.com/mesonbuild/meson/issues/3555
Fixes https://github.com/mesonbuild/meson/issues/1451
Ninja buffers all commands and prints them only after they are
complete. Because of this, long-running commands such as `cargo
build` show no output at all and it's impossible to know if the
command is merely taking too long or is stuck somewhere.
To cater to such use-cases, Ninja has a 'pool' with depth 1 called
'console', and all processes in this pool have the following
properties:
1. stdout is connected to the program, so output can be seen in
real-time
2. The output of all other commands is buffered and displayed after
a command in this pool finishes running
3. Commands in this pool are executed serially (normal commands
continue to run in the background)
This feature is available since Ninja v1.5
https://ninja-build.org/manual.html#_the_literal_console_literal_pool
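A sketch of what the emitted rule might look like (rule and variable
names hypothetical):
with open('build.ninja', 'a') as f:
    f.write('rule CUSTOM_COMMAND_CONSOLE\n')
    f.write('  command = $COMMAND\n')
    f.write('  description = $DESC\n')
    f.write('  pool = console\n')  # depth-1 pool: serial, output shown in real time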
We now pass the current subproject to every FeatureNew and
FeatureDeprecated call. This requires a bunch of rework to:
1. Ensure that we have access to the subproject in the list of
arguments when used as a decorator (see _get_callee_args).
2. Pass the subproject to .use() when it's called manually.
3. We also can't do feature checks for new features in
meson_options.txt because that's parsed before we know the
meson_version from project()
* Use _get_callee_args to unwrap function call arguments, needed for
module functions.
* Move some FeatureNewKwargs from build.py to interpreter.py
* Print a summary for featurenew only if conflicts were found. The
summary now only prints conflicting features.
* Report and store featurenew/featuredeprecated only once
* Fix version comparison: use le/ge and resize arrays so that
'0.47.0>=0.47' does not fail (see the sketch below)
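A minimal sketch of the resizing fix: right-pad the shorter component
list with zeros so that '0.47.0' and '0.47' compare as equal.
def version_parts(v, width):
    parts = [int(p) for p in v.split('.')]
    return parts + [0] * (width - len(parts))

a, b = '0.47.0', '0.47'
width = max(a.count('.'), b.count('.')) + 1
print(version_parts(a, width) >= version_parts(b, width))  # True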
Closes https://github.com/mesonbuild/meson/issues/3660
If the external program is a string that is meant to be searched in
PATH, we can't add a dependency on it at configure time because we don't
know where it will be at compile time.
D is not a 'c-like' language, but it can link to C libraries. The same
might be true of Rust in the future and Go when we add support for it.
This contains no functionality changes.
Since `build_always` also adds a target to the set of default targets,
this option is marked deprecated in favour of the new option
`build_always_stale`.
`build_always_stale` *only* marks the target to be always considered out
of date, but does *not* add it to the set of default targets.
The old behaviour can still be achieved by combining
`build_always_stale` with `build_by_default`.
fixes #1942
On Windows, if we are going to link with a shared module, we need the
implib.
Use case: The Xorg server builds some X protocol extensions as modules. The
implibs for these modules need to be shipped as part of the SDK, to enable
building of 3rd party extensions which reference symbols in (and hence on
Windows, need to be linked with) these modules.
Refine #3277
According to what I read on the internet, on OSX, both MH_BUNDLE (module)
and MH_DYLIB (shared library) can be dynamically loaded using dlopen(), but
it is not possible to link against MH_BUNDLE as if they were shared
libraries.
Mention this as an issue in the documentation.
Emitting a warning, and then going on to fail during the build with
mysterious errors in symbolextractor isn't very helpful, so make attempting
this an error on OSX.
Add a test for that.
See also:
https://docstore.mik.ua/orelly/unix3/mac/ch05_03.htm
https://stackoverflow.com/questions/2339679/what-are-the-differences-between-so-and-dylib-on-osx
This makes it possible to customize permissions of all installable
targets, such as executable(), libraries, man pages, header files and
custom or generated targets.
This is useful, for instance, to install setuid/setgid binaries, which
was hard to accomplish without access to this attribute.
Libraries that have been linked with link_whole: are internal
implementation details and should never be exposed to the outside
world in either Libs: or Libs.private:
Closes https://github.com/mesonbuild/meson/issues/3509
To maintain backward compatibility we cannot add recursive objects by
default. Print a warning when there are recursive objects to be pulled
in and the argument is not set. After a while we'll pull recursive
objects by default.
- determine_ext_objs: What matters is if extobj.target is a unity build,
not if the target using those objects is a unity build.
- determine_ext_objs: Return one object file per compiler, taking into
account generated sources.
- object_filename_from_source: No need to special-case unity build, it
does the same thing in both code paths.
- check_unity_compatible: For each compiler we must extract either none
or all its sources, taking into account generated sources.
When flattening the chained dependencies of an object, we don't need to
create any new internal dependencies if all the fields that would be
added to them are empty.
For projects with a lot of libraries and dependency objects this can lead
to noticeable performance improvements.
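A minimal sketch of the short-circuit (names hypothetical):
def make_internal_dependency(dep, sources, link_args):
    ...  # hypothetical stand-in for constructing a new InternalDependency

def flattened(dep, extra_sources, extra_link_args):
    if not extra_sources and not extra_link_args:
        return dep  # all fields empty: reuse the existing object
    return make_internal_dependency(dep, extra_sources, extra_link_args)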
When getting dependencies, we don't need to get the same dependencies and
dependency chains multiple times. If library a depends on x, y and z, and
library b depends on a, then we should not have to iterate through x, y and
z multiple times. Pruning at the stage of scanning the dependencies leads
to significant time savings when running meson.
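A minimal sketch of the pruning (attribute names hypothetical): a shared
'seen' set ensures x, y and z are visited only once, no matter how many
libraries pull them in.
def collect_deps(target, seen=None):
    if seen is None:
        seen = set()
    for dep in target.dependencies:
        if id(dep) in seen:
            continue  # already visited via another library
        seen.add(id(dep))
        yield dep
        yield from collect_deps(dep, seen)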
Change the code to store D properties as plain data. Only convert them
to compiler flags in the backend. This also means we can fully parse D
arguments without needing to know the compiler being used.
According to Python documentation[1] dirname and basename
are defined as follows:
os.path.dirname() = os.path.split()[0]
os.path.basename() = os.path.split()[1]
For better readability, split() is replaced by the appropriate
function when only one part of the returned tuple is used.
[1]: https://docs.python.org/3/library/os.path.html#os.path.split
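For example:
import os.path

path = 'subdir/meson.build'
os.path.split(path)     # ('subdir', 'meson.build')
os.path.dirname(path)   # 'subdir'
os.path.basename(path)  # 'meson.build'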
Currently, we only consider the build depends of the Executable being
run when serializing custom targets. However, this is not always
sufficient, for example if the executable loads modules at runtime or if
the executable is actually a python script that loads a built module.
For these cases, we need to set PATH on Windows correctly or the custom
target will fail to run at build time complaining about missing DLLs.
We can now specify the library type we want to search for, and whether
we want to prefer static libraries over shared ones or the other way
around. This functionality is not exposed to build files yet.
Currently, run_target does not get namespaced for each subproject,
unlike executable and others. This means that two subprojects sharing
the same run_target name cause meson to crash.
Fix this by moving the subproject namespacing logic from the BuildTarget
class to the Target class.
With executable(), if the link_with argument has a string as one of its
elements, meson ends up throwing an AttributeError exception:
...
File "/home/lyudess/Projects/meson/mesonbuild/build.py", line 868, in link
if not t.is_linkable_target():
AttributeError: 'str' object has no attribute 'is_linkable_target'
Which is not very helpful in figuring out where exactly the project is
trying to link against a string instead of an actual link target. So,
fix this by verifying in BuildTarget.link() that each given target is
actually a Target object and not something else.
Additionally, add a simple test case for this in failing tests. At the
moment, this test case just passes unconditionally due to meson throwing
the AttributeError exception and failing as expected. However, this test
case will be useful eventually if we ever end up making failing tests
more strict about failing gracefully (per advice of QuLogic).
This allows a CustomTarget to be indexed, and the resulting indexed
value (a CustomTargetIndex type) to be used as a source in other
targets. This confers a dependency on the original target, but only
inserts the output file selected by the index. This allows a
CustomTarget that creates both a header and a code file to have its
outputs split, for example.
Fixes #1470
Currently sources, generated sources, or objects are considered to be
sources for a target, but link_whole should also fulfill the sources
requirement.
Fixes #2180
Currently meson only considers what compiler/linker were used by a
Target's immediate sources or objects, not the sources of libraries it's
linked with via the link_with and link_whole keywords. This means that,
given 3 libraries (libA which is C++, libB which is C, and libC which is
also C) where libC links with libB, which links with libA, linking libC
will be attempted with the C linker, and will fail.
This patch corrects that by adding the compilers used by sub libraries
to the collection of compilers considered by meson when picking a
linker.
This adds a new process_compilers_late method to the BuildTarget class,
which is evaluated after process_kwargs is called. This is needed
because some D options need to be evaluated after compilers are
selected, while for C-like languages we need to check the link* targets
for language requirements, and link* targets are passed by kwargs.
This implementation is recursive, since each Target adds its parent's
dependencies.
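A minimal sketch of the recursive collection (attribute names are
assumptions):
def collect_compilers(target, acc=None):
    # consider the compilers of every linked library, not just the
    # target's own sources, when choosing a linker
    if acc is None:
        acc = {}
    acc.update(target.compilers)
    for lib in target.link_targets:
        collect_compilers(lib, acc)
    return acc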
Currently, if a target uses link_whole and one of those archives is
C++, but the files for the target are C, linking will fail when the C
linker attempts to link the C++ files. This patch adds the languages of
link_whole_targets to the languages of the target so the correct linker
will be selected.
Add a boolean 'implib' kwarg to executable(). If true, it is permitted to
use the returned build target object in link_with:
On platforms where this makes sense (e.g. Windows), an implib is generated
for the executable and used when linking. Otherwise, it has no effect.
(Rather than checking if it is a StaticLibrary or SharedLibrary, BuildTarget
subclasses gain the is_linkable_target method to test if they can appear in
link_with:)
Also install any executable implib in a similar way to a shared library
implib, i.e. placing the implib in the appropriate place
Add tests of:
- a shared_module containing a reference to a symbol which is known (at link
time) to be provided by the executable
- trying to link with non-implib executables (should fail)
- installing the implib
(This last one needs a little enhancement of the installed file checking as
this is the first install test we have which needs to work with either
MSVC-style or GCC-style implib filenames)
- Adds a `crate_type` kwarg to library targets, allowing the different
types of Rust [linkage][1].
- Shared libraries use the `dylib` crate type by default, but can also
be `cdylib`
- Static libraries use the `rlib` crate type by default, but can also
be `staticlib`
- If any Rust target has shared library dependencies, add the
appropriate linker arguments, including rpath for the sysroot of the
Rust compiler
[1]: https://doc.rust-lang.org/reference/linkage.html
This class now consolidates a lot of the logic that each external
dependency was duplicating in its class definition.
All external dependencies now set:
* self.version
* self.compile_args and self.link_args
* self.is_found (if found)
* self.sources
* etc
And the abstract ExternalDependency class defines the methods that
will fetch those properties. Some classes still override that for
various reasons, but those should also be migrated to properties as
far as possible.
Next step is to consolidate and standardize the way in which we call
'configuration binaries' such as sdl2-config, llvm-config, pkg-config,
etc. Currently each class has to duplicate code involved with that
even though the format is very similar.
Currently only pkg-config supports multiple version requirements, and
some classes don't even properly check the version requirement. That
will also become easier now.
Currently only strings can be passed to the link_depends argument of
executable and *library, which solves many cases, but not every one.
This patch allows generated sources and Files to be passed as well.
On the implementation side, it uses a helper method to keep the more
complex logic separated from the __init__ method. This also requires
that Targets set their link_depends paths as Files, and the backend is
responsible for converting to strings when it wants them.
This adds tests for the following cases:
- Using a file in a subdir
- Using a configure_file as an input
- Using a custom_target as an input
It does not support using a generator as an input, since currently that
would require calling the generator twice, once for the -Wl argument,
and once for the link_depends.
Also updates the docs.
Meson has a common pattern of using 'if len(foo) == 0:' or
'if len(foo) != 0:'; however, this is a common anti-pattern in Python.
Instead, tests for emptiness/non-emptiness should be done with a simple
'if foo:' or 'if not foo:'
Consider the following:
>>> import timeit
>>> timeit.timeit('if len([]) == 0: pass')
0.10730923599840025
>>> timeit.timeit('if not []: pass')
0.030033907998586074
>>> timeit.timeit("if len(['a', 'b', 'c', 'd']) == 0: pass")
0.1154778649979562
>>> timeit.timeit("if not ['a', 'b', 'c', 'd']: pass")
0.08259823200205574
>>> timeit.timeit('if len("") == 0: pass')
0.089759664999292
>>> timeit.timeit('if not "": pass')
0.02340641999762738
>>> timeit.timeit('if len("foo") == 0: pass')
0.08848102600313723
>>> timeit.timeit('if not "foo": pass')
0.04032287199879647
And for the one additional case of 'if len(foo.strip()) == 0', which can
be replaced with 'if foo.isspace()' (equivalent for non-empty strings):
>>> timeit.timeit('if len(" ".strip()) == 0: pass')
0.15294511600222904
>>> timeit.timeit('if " ".isspace(): pass')
0.09413968399894657
>>> timeit.timeit('if len(" abc".strip()) == 0: pass')
0.2023209120015963
>>> timeit.timeit('if " abc".isspace(): pass')
0.09571301700270851
In other words, it's always a win to not use len(), when you don't
actually want to check the length.
Sometimes you want to link to a C++ library that exports C API, which
means the linker must link in the C++ stdlib, and we must use a C++
compiler for linking. The same is also applicable for objc/objc++ etc,
so we can keep using clike_langs for the priority order.
Closes https://github.com/mesonbuild/meson/issues/1653
This detects and allows passing a generated file as a vs_module_def; it
also adds a test case that uses configure_file to generate the
.def file.
You can now pass a list of strings to the install_dir: kwarg to
build_target and custom_target.
Custom Targets:
===============
Allows you to specify the installation directory for each
corresponding output. For example:
custom_target('different-install-dirs',
  output : ['first.file', 'second.file'],
  ...
  install : true,
  install_dir : ['somedir', 'otherdir'])
This would install first.file to somedir and second.file to otherdir.
If only one install_dir is provided, all outputs are installed there
(same behaviour as before).
To only install some outputs, pass `false` for the outputs that you
don't want installed. For example:
custom_target('only-install-second',
  output : ['first.file', 'second.file'],
  ...
  install : true,
  install_dir : [false, 'otherdir'])
This would install second.file to otherdir and not install first.file.
Build Targets:
==============
With build_target() (which includes executable(), library(), etc),
usually there is only one primary output. However some types of
targets have multiple outputs.
For example, while generating Vala libraries, valac also generates
a header and a .vapi file, both of which often need to be installed.
This allows you to specify installation directories for those too.
# This will only install the library (same as before)
shared_library('somevalalib', 'somesource.vala',
  ...
  install : true)
# This will install the library, the header, and the vapi into the
# respective directories
shared_library('somevalalib', 'somesource.vala',
  ...
  install : true,
  install_dir : ['libdir', 'incdir', 'vapidir'])
# This will install the library into the default libdir and
# everything else into the specified directories
shared_library('somevalalib', 'somesource.vala',
  ...
  install : true,
  install_dir : [true, 'incdir', 'vapidir'])
# This will NOT install the library, and will install everything
# else into the specified directories
shared_library('somevalalib', 'somesource.vala',
  ...
  install : true,
  install_dir : [false, 'incdir', 'vapidir'])
true/false can also be used for secondary outputs in the same way.
Valac can also generate a GIR file for libraries when the `vala_gir:`
keyword argument is passed to library(). In that case, `install_dir:`
must be given a list with four elements, one for each output.
Includes tests for all these.
Closes https://github.com/mesonbuild/meson/issues/705
Closes https://github.com/mesonbuild/meson/issues/891
Closes https://github.com/mesonbuild/meson/issues/892
Closes https://github.com/mesonbuild/meson/issues/1178
Closes https://github.com/mesonbuild/meson/issues/1193
Previously, two functionally identical builds could produce different
build.ninja files. The ordering of the rules themselves doesn't affect
behaviour, but unnecessary changes in command-line arguments can cause
spurious rebuilds, and if the ordering of the overall file is stable
then it's easy to use `diff` to compare different build.ninja files
and spot the differences in ordering that are triggering the unnecessary
rebuilds.
Now as long as you have a C compiler available in the project, it will
be used to compile assembly even if the target contains a C++ compiler
and even if the target contains only assembly and C++ sources.
Earlier, the order in which sources appeared in a target would decide
which compiler would be used.
However, if the project only provides a C++ compiler, that will be
used for compiling assembly sources.
If this breaks your use-case, please tell us.
Includes a test that ensures that all of the above is adhered to.
Use an ordered dict for the compiler dictionary and sort it according
to a priority order: fortran, c, c++, etc.
This also ensures that builds are reproducible because it would be
a toss-up whether a C or a C++ compiler would be used based on the
order in which compilers.items() would return items.
Closes https://github.com/mesonbuild/meson/issues/1370
We were adding them to the CompilerArgs instance in the order in which
they are specified, which is wrong because later dependencies would
override previous ones. Add them in the reverse order instead.
Closes https://github.com/mesonbuild/meson/issues/1495
This changes how generated files are added to the VS project.
Previously, they were all added as a single CustomBuildStep with all
generator commands, inputs and outputs merged together.
Now, each input file is added separately to the project and is given a
CustomBuild command. This adds all generator input files to the files
list in the VS GUI and allows running only some of the generator
commands if only some of the input files have changed.
We check for the existence of PDB files in the install script, so we
don't need to do all this mucking about here. That's more robust too
because we don't need to parse build arguments in buildtype=plain
and decide if the PDB file would be generated.
This means replacing @PLAINNAME@ and @BASENAME@ in the outputs. This is
the same feature as generator().
For obvious reasons, this is only allowed when there is only one input
file. Also adds a failing test for this.
Factor it out into a function in mesonlib.py. This will allow us to
reuse it for generators and for configure_file(). The latter doesn't
implement this at all right now.
Also includes unit tests.
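A minimal sketch of the substitution (not mesonlib's exact signature):
import os

def substitute_templates(output: str, input_path: str) -> str:
    plainname = os.path.basename(input_path)   # e.g. 'parser.y'
    basename = os.path.splitext(plainname)[0]  # e.g. 'parser'
    return (output.replace('@PLAINNAME@', plainname)
                  .replace('@BASENAME@', basename))

substitute_templates('@BASENAME@.c', 'src/parser.y')  # 'parser.c'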
Without this, files() in the arguments gives an error because it
produces a list of mesonlib.File objects:
Array as argument 1 contains a non-string.
It also breaks in nested lists. Includes a test for this.
At the same time, also fix the order in which compile arguments are
added. Detailed comments have been added concerning the priority and
order of the arguments.
Also adds a unit test and an integration test for the same.
With the 'install_mode' kwarg, you can now specify the file and
directory permissions and the owner and the group to be used while
installing. You can pass either:
* A single string specifying just the permissions
* A list of strings with:
- The first argument a string of permissions
- The second argument a string specifying the owner or
an int specifying the uid
- The third argument a string specifying the group or
an int specifying the gid
Specifying `false` as any of the arguments skips setting that one.
The format of the permissions kwarg is the same as the symbolic
notation used by ls -l, with the first character (which specifies the
file type: 'd', '-', 'c', etc.) omitted, since that is always obvious
from the context.
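A minimal sketch of parsing the permissions string (ignoring the
setuid/setgid/sticky bits for brevity):
def perms_str_to_mode(perms: str) -> int:
    # 'rwxr-xr-x' -> 0o755; any character other than '-' sets the bit
    bits = [0o400, 0o200, 0o100, 0o040, 0o020, 0o010, 0o004, 0o002, 0o001]
    mode = 0
    for ch, bit in zip(perms, bits):
        if ch != '-':
            mode |= bit
    return mode

oct(perms_str_to_mode('rwxr-xr-x'))  # '0o755'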
Includes unit tests for the same. Sadly these only run on Linux right
now, but we want them to run on all platforms. We do set the mode in the
integration tests for all platforms but we don't check if they were
actually set correctly.