Summary: The Offload library wraps the underlying plugins. The problem is that we currently initialize every plugin we find, even those the program does not need. This is very expensive for trivial uses, since fully heterogeneous usage is quite rare; in practice it means you always pay a 200 ms penalty just for having CUDA installed.

This patch changes the behavior to provide accessors into the plugins and devices that allow them to be initialized lazily. We use a once_flag, which should take a fast-path check after initialization while still blocking on concurrent first use.

Making full use of this will require a way to filter platforms more specifically, and I'm still thinking about what that API should look like. We could either add an extra iterate function that takes a callback on the platform, or provide a helper that finds all the devices able to run a given image. Maybe both?

Fixes: https://github.com/llvm/llvm-project/issues/159636
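The once_flag scheme above can be sketched roughly as follows. This is an illustrative sketch, not the actual Offload interface: the names `Plugin`, `PluginManager`, and `getPlugin` are invented for the example. The idea is that each discovered plugin carries a `std::once_flag`, and the accessor routes through `std::call_once`, which is cheap once initialization has happened but blocks concurrent first use.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <mutex>

// Hypothetical stand-in for a real offload plugin. Initialization is the
// expensive part (loading the driver, enumerating devices, etc.).
struct Plugin {
  bool Initialized = false;
  void init() { Initialized = true; /* expensive work would go here */ }
};

class PluginManager {
  struct Entry {
    Plugin P;
    std::once_flag Once; // guards one-time initialization of this plugin
  };
  // Pretend discovery found three plugins; none is initialized yet.
  std::array<Entry, 3> Entries;

public:
  // Accessor that lazily initializes the requested plugin exactly once.
  // After the first call, std::call_once takes a fast-path check; concurrent
  // first calls block until initialization completes.
  Plugin &getPlugin(std::size_t Idx) {
    Entry &E = Entries[Idx];
    std::call_once(E.Once, [&E] { E.P.init(); });
    return E.P;
  }

  bool isInitialized(std::size_t Idx) const {
    return Entries[Idx].P.Initialized;
  }
};
```

With this shape, merely discovering a plugin costs nothing; only plugins a program actually touches through the accessor pay their initialization cost.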
The LLVM/Offload Subproject
The Offload subproject aims at providing tooling, runtimes, and APIs that allow users to execute code on accelerators or other "co-processors" that may or may not match the architecture of their "host". In the long run, all kinds of targets are in scope of this effort, including but not limited to: CPUs, GPUs, FPGAs, AI/ML accelerators, distributed resources, etc.
For OpenMP offload users, the project is ready and fully usable; however, the final API design is still under development. More content will show up here and on our webpage soon. In the meantime, people are encouraged to participate in our meetings (see below) and to check our development board as well as the discussions on Discourse.
Meetings
Every second Wednesday, 7:00 - 8:00 AM PT, starting Jan 24, 2024; alternates with the OpenMP in LLVM meeting. See invite.ics and the Meeting Minutes and Agenda.