When trimming old allocations in usm reuse, start from the largest
allocations.
This reduces memory usage more quickly once the max hold time is hit.
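
A minimal sketch of the intent, using made-up types rather than the
actual NEO cache structures: order the cached entries by size and trim
from the largest downward.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct CachedAllocation {
        size_t size = 0;
        // handle to the underlying allocation would live here
    };

    // Trim cached allocations starting from the largest until the
    // requested amount of memory has been released.
    void trimLargestFirst(std::vector<CachedAllocation> &cache, size_t bytesToFree) {
        std::sort(cache.begin(), cache.end(),
                  [](const CachedAllocation &a, const CachedAllocation &b) { return a.size > b.size; });
        size_t freed = 0;
        auto it = cache.begin();
        while (it != cache.end() && freed < bytesToFree) {
            freed += it->size;
            it = cache.erase(it); // real code would also free the underlying allocation here
        }
    }
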
Related-To: NEO-6893, NEO-14429
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
The real allocation size should be used to properly apply limits and to
allow more usm reuse hits.
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Save svmData when putting an allocation into reuse, instead of searching
for it each time.
Change UNRECOVERABLE_IF to DEBUG_BREAK_IF.
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Host usm and device usm for igfx check system memory usage.
Device usm for dgfx checks local memory usage.
If used memory is above the limit threshold:
- no new allocations will be saved for reuse
- the cleaner will use a shorter hold time of 2 seconds
- the cleaner will free all eligible allocations, regardless of whether
the async deleter thread has work
Motivation: when gfx memory is full, making new allocations resident
requires evictions, which leads to massive slowdowns on enqueue calls.
This change aims to minimize cases where extra memory usage from the usm
reuse mechanism leads to that situation.
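
A rough sketch of that decision logic, with made-up names and an
illustrative 90% threshold (the real thresholds and checks live in the
usm reuse code):

    #include <chrono>
    #include <cstdint>

    struct MemoryState {
        uint64_t usedBytes = 0;
        uint64_t totalBytes = 0;
    };

    constexpr double limitThreshold = 0.9; // hypothetical: limit reached at 90% usage

    bool isOverLimit(const MemoryState &state) {
        return state.usedBytes > static_cast<uint64_t>(limitThreshold * static_cast<double>(state.totalBytes));
    }

    // Over the limit: stop saving new allocations for reuse.
    bool shouldSaveForReuse(const MemoryState &state) {
        return !isOverLimit(state);
    }

    // Over the limit: the cleaner switches to the shorter 2 second hold time.
    std::chrono::seconds cleanerHoldTime(const MemoryState &state) {
        return isOverLimit(state) ? std::chrono::seconds(2) : std::chrono::seconds(10);
    }

    // Over the limit: free all eligible allocations even if the async
    // deleter thread still has work; otherwise defer to the deleter.
    bool cleanerMayFreeEligible(const MemoryState &state, bool asyncDeleterHasWork) {
        return isOverLimit(state) || !asyncDeleterHasWork;
    }
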
Related-To: NEO-6893, NEO-14160
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
If the "LogUsmReuse" flag is set, usm reuse will log operations to a csv
file.
Each line will contain: timestamp, host/device, operation type,
allocation size, and true/false indicating whether the operation
succeeded.
This data can then be used to produce graphs and help analyze usm reuse
in a particular workload.
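
As a rough illustration only (the exact column formatting is up to the
driver), each row could be written roughly like this:

    #include <chrono>
    #include <cstddef>
    #include <fstream>
    #include <string>

    // Append one usm reuse event to the csv log; columns follow the order
    // described above: timestamp, host/device, operation type, allocation
    // size, success flag.
    void logUsmReuseOperation(std::ofstream &csv, bool isDevice,
                              const std::string &operationType,
                              size_t allocationSize, bool succeeded) {
        auto sinceEpoch = std::chrono::steady_clock::now().time_since_epoch();
        auto timestampUs = std::chrono::duration_cast<std::chrono::microseconds>(sinceEpoch).count();
        csv << timestampUs << ','
            << (isDevice ? "device" : "host") << ','
            << operationType << ','
            << allocationSize << ','
            << (succeeded ? "true" : "false") << '\n';
    }
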
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
The cleaner thread will run every 15ms instead of every 2s.
Allocations will be held for at least 10s.
If the deferred deleter has elements to release, the cleaner will skip
cleaning the cache.
Only 1 allocation per cache will be cleaned per cleaning run.
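
A simplified sketch of that cadence; the cache type and deleter hook are
placeholders, not the driver's actual classes:

    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <thread>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct ReuseCache {
        struct Entry {
            Clock::time_point lastUsed;
            // handle to the underlying allocation would live here
        };
        std::vector<Entry> entries;

        // Free at most one allocation that has been unused longer than holdTime.
        void trimOneOldAllocation(std::chrono::seconds holdTime) {
            auto now = Clock::now();
            for (auto it = entries.begin(); it != entries.end(); ++it) {
                if (now - it->lastUsed >= holdTime) {
                    entries.erase(it); // real code would free the allocation here
                    return;
                }
            }
        }
    };

    void cleanerLoop(std::vector<ReuseCache *> &caches,
                     const std::function<bool()> &deferredDeleterHasWork,
                     const std::atomic<bool> &keepRunning) {
        using namespace std::chrono_literals;
        while (keepRunning) {
            std::this_thread::sleep_for(15ms);    // run every 15ms instead of every 2s
            if (deferredDeleterHasWork()) {
                continue;                         // skip cleaning while the deleter has work
            }
            for (auto *cache : caches) {
                cache->trimOneOldAllocation(10s); // at most 1 allocation per cache, held >= 10s
            }
        }
    }
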
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Add a mechanism for freeing allocations saved for reuse that have not
been used within a given time.
Related-To: NEO-13425
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Calculate available memory for usm device reuse as (total device
memory - used memory) * fraction for reuse.
Use system memory allocations for devices without local memory.
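
Expressed as a small helper (the function name is illustrative; the
fraction value is whatever the driver configures):

    #include <cstdint>

    // Memory available for usm device reuse:
    // (total device memory - used memory) * fraction reserved for reuse.
    uint64_t availableForDeviceReuse(uint64_t totalDeviceMemory, uint64_t usedMemory,
                                     double reuseFraction) {
        if (usedMemory >= totalDeviceMemory) {
            return 0;
        }
        return static_cast<uint64_t>(static_cast<double>(totalDeviceMemory - usedMemory) * reuseFraction);
    }
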
Related-To: NEO-12902
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Add tracking of memory used by the usm reuse mechanism when multiple cl
contexts are used.
Device tracking is added to NEO::Device, host tracking to
NEO::MemoryManager.
This fixes usm reuse using x% of memory per context instead of
globally.
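
Conceptually the usage counter moves out of the per-context object into
one shared counter per device and one for the host, e.g. (hypothetical
shape, not the actual NEO class layout):

    #include <atomic>
    #include <cstdint>

    // One instance per device (conceptually owned by NEO::Device) and one
    // host-side instance (conceptually owned by NEO::MemoryManager), so
    // every cl context sharing the device or host sees the same usage total.
    struct UsmReuseUsageTracker {
        std::atomic<uint64_t> bytesInReuse{0};
        uint64_t maxBytesForReuse = 0;

        bool tryReserve(uint64_t size) {
            uint64_t current = bytesInReuse.load();
            while (current + size <= maxBytesForReuse) {
                if (bytesInReuse.compare_exchange_weak(current, current + size)) {
                    return true; // fits within the global (cross-context) limit
                }
            }
            return false; // would exceed the limit shared by all contexts
        }

        void release(uint64_t size) {
            bytesInReuse.fetch_sub(size);
        }
    };
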
Related-To: NEO-13308
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
If limiting, disable device usm reuse (set max size to 0).
Do not reserve the vector for allocation infos if reuse is disabled.
Related-To: NEO-12924
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Allocations over a certain size will be checked for memory utilization
when chosen for reuse.
If utilization is below a threshold, they will not be reused.
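
One plausible reading, sketched with made-up thresholds and assuming
"utilization" means how much of the cached allocation the incoming
request would actually use:

    #include <cstdint>

    constexpr uint64_t utilizationCheckSizeThreshold = 4 * 1024 * 1024; // hypothetical size cutoff
    constexpr double minUtilization = 0.5;                              // hypothetical utilization threshold

    // Small cached allocations are reused as-is; larger ones are reused
    // only if the request would utilize enough of them.
    bool allocationUtilizationAllowsReuse(uint64_t cachedSize, uint64_t requestedSize) {
        if (cachedSize <= utilizationCheckSizeThreshold) {
            return true;
        }
        double utilization = static_cast<double>(requestedSize) / static_cast<double>(cachedSize);
        return utilization >= minUtilization;
    }
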
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Related-To: GSD-9385
For indirect allocations, we don't really know their task count
because we can't track their true usage on the GPU.
For a non-blocking free, don't wait for latestSentTaskCount.
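
Sketched with hypothetical names; only the shape of the check is
illustrated here:

    #include <cstdint>

    struct Allocation {
        bool usedByIndirectAccess = false; // true task count on GPU is unknown
        uint64_t latestSentTaskCount = 0;
    };

    void waitForTaskCount(uint64_t taskCount) {
        // stand-in for the real wait on the submission backend
        (void)taskCount;
    }

    // For a non-blocking free of an indirectly used allocation, skip
    // waiting on latestSentTaskCount, since it does not reflect real usage.
    void freeAllocation(Allocation &alloc, bool blocking) {
        bool skipWait = !blocking && alloc.usedByIndirectAccess;
        if (!skipWait) {
            waitForTaskCount(alloc.latestSentTaskCount);
        }
        // release the allocation or return it to the reuse cache
    }
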
Signed-off-by: Szymon Morek <szymon.morek@intel.com>
Do not put an allocation into usm reuse if it is internal.
Set the new isInternalAllocation flag for globals allocations.
Use the actual size on the device for tracking memory usage.
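
Sketch of the gating; isInternalAllocation is the flag introduced above,
everything else is made up for illustration:

    #include <cstdint>

    struct AllocationInfo {
        bool isInternalAllocation = false; // set for globals allocations
        uint64_t actualDeviceSize = 0;     // real size occupied on the device
    };

    // Internal allocations are never kept for reuse; the usage counter is
    // advanced by the actual size on the device, not the requested size.
    bool tryInsertIntoReuse(const AllocationInfo &info, uint64_t &trackedReuseBytes,
                            uint64_t maxReuseBytes) {
        if (info.isInternalAllocation) {
            return false;
        }
        if (trackedReuseBytes + info.actualDeviceSize > maxReuseBytes) {
            return false;
        }
        trackedReuseBytes += info.actualDeviceSize;
        return true;
    }
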
Related-To: NEO-6893
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>
Allocating the global surface expects the usm allocation to be zeroed
out. Reused allocations can be filled with junk data, and this caused
errors.
Resolves: HSD-18038551036, HSD-18038551766, HSD-18038551957, HSD-18038552252
Signed-off-by: Dominik Dabek <dominik.dabek@intel.com>