This prepares for moving this method from MemoryManager to CSR.
Change-Id: I82393289c48990f26ed3ac922bcd64e2b6c11f28
Signed-off-by: Maciej Dziuban <maciej.dziuban@intel.com>
- The Command Stream Receiver should be used for locking instead.
- Remove unneeded synchronization in clSetUserEventStatus.
Change-Id: I17050dc70cb0be03b2003043a9666ba8df1a83c9
- resources are dumped in the makeNonResident call
- in order to dump correct data we need to be sure that the GPU is done
  processing
- the wait needs to be unconditional to handle all cases (see the
  sketch below)
- remove the unneeded parameter from makeSurfacePackNonResident
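A minimal sketch of the intended ordering, with illustrative names
(waitForTaskCount, dumpAllocation, latestFlushedTaskCount are
stand-ins, not the exact driver API):

    #include <atomic>
    #include <cstdint>

    struct GraphicsAllocation {};

    struct AubCsrSketch {
        std::atomic<uint32_t> *tagAddress = nullptr; // written by the GPU as tasks complete
        uint32_t latestFlushedTaskCount = 0;

        void waitForTaskCount(uint32_t taskCount) {
            while (tagAddress->load() < taskCount) {
                // poll until the GPU has reached taskCount
            }
        }

        void dumpAllocation(GraphicsAllocation &) { /* write allocation contents to the AUB file */ }

        void makeNonResident(GraphicsAllocation &allocation) {
            // Wait unconditionally: dumping before the GPU is done would capture stale data.
            waitForTaskCount(latestFlushedTaskCount);
            dumpAllocation(allocation);
        }
    };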
Change-Id: Ib2b065d486cd3a5d86e599c51b24f3c958c3a10b
- remove an unneeded method from the mock device.
- remove duplication from AUB tests.
- the tag allocation now has the desired value.
Change-Id: Ib3161cce6753eae27c60fddb63054fd2e12f7dac
- remove other explicit resets, which are no longer needed.
- change the order of destruction: the command stream receiver needs to
  be destroyed before the memory manager (see the sketch below).
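One way to express the required order is a layout like this (types are
stubs; C++ destroys members in reverse declaration order):

    #include <memory>

    struct MemoryManager {};
    struct CommandStreamReceiver {};

    struct DeviceSketch {
        // Declared first, destroyed last: the memory manager must outlive the CSR.
        std::unique_ptr<MemoryManager> memoryManager;
        // Declared last, destroyed first.
        std::unique_ptr<CommandStreamReceiver> commandStreamReceiver;
    };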
Change-Id: I3c5db46db15a2cb7dc9f6fdb0e06441806fbd9f2
This code adds infrastructure for a special debug purpose: measuring
the execution time of any hardware command.
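A rough sketch of the idea, with hypothetical names: the GPU writes a
timestamp before and after the measured command, and the host computes
the delta once the work has completed.

    #include <cstdint>
    #include <cstdio>

    struct CommandTimestampsSketch {
        uint64_t before = 0; // written by the GPU right before the measured command
        uint64_t after = 0;  // written by the GPU right after it

        void report(uint64_t timestampPeriodNs) const {
            std::printf("command execution time: %llu ns\n",
                        static_cast<unsigned long long>((after - before) * timestampPeriodNs));
        }
    };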
Change-Id: Id12a7979d204734a0c4a6c4700e427b65ac2397f
- replace createGraphicsAllocationWithRequiredBitness with the more
  general method allocateGraphicsMemoryInPreferredPool, driven by the
  passed AllocationData (see the sketch below)
- proper flags for the allocation are selected based on AllocationType
- remove allocateGraphicsMemory(size_t size, size_t alignment)
  and use allocateGraphicsMemory(size_t size) instead where the default
  alignment is sufficient; otherwise use the full-options version:
  allocateGraphicsMemory(size_t size, size_t alignment,
  bool forcePin, bool uncacheable)
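A simplified sketch of the new flow (the real AllocationData layout and
flag selection in the driver are richer than this):

    #include <cstddef>
    #include <memory>

    enum class AllocationType { UNKNOWN, BUFFER, IMAGE, COMMAND_BUFFER };

    struct AllocationData {
        AllocationType type = AllocationType::UNKNOWN;
        size_t size = 0;
        struct {
            bool forcePin = false;
            bool uncacheable = false;
        } flags;
    };

    struct GraphicsAllocation {
        size_t size = 0;
    };

    // Flags are derived from the allocation type in one place...
    inline AllocationData makeAllocationData(AllocationType type, size_t size) {
        AllocationData data;
        data.type = type;
        data.size = size;
        data.flags.forcePin = (type == AllocationType::BUFFER); // illustrative policy only
        return data;
    }

    // ...and a single generic entry point performs the allocation.
    inline std::unique_ptr<GraphicsAllocation> allocateGraphicsMemoryInPreferredPool(const AllocationData &data) {
        return std::make_unique<GraphicsAllocation>(GraphicsAllocation{data.size});
    }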
Change-Id: I2da891f372ee181253cb840568a61b33c0d71fc9
- change GraphicsAllocation::AllocationType to a scoped enumeration
  so that the ALLOCATION_TYPE_ prefix can be removed from every enum
  value (see the sketch below)
- all accesses are typed (example: AllocationType::IMAGE)
- rename allocationType to AllocationUsage to eliminate confusion
  with the multiple AllocationType enums / types
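In code, the change is essentially this (values shown are examples, not
the full enum):

    // Before: unscoped enum, every value carries a prefix and converts implicitly to int.
    enum AllocationTypeOld { ALLOCATION_TYPE_UNKNOWN, ALLOCATION_TYPE_BUFFER, ALLOCATION_TYPE_IMAGE };

    // After: scoped enumeration, every access is typed and the prefix is gone.
    enum class AllocationType { UNKNOWN, BUFFER, IMAGE };

    inline AllocationType exampleUsage() { return AllocationType::IMAGE; }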
Change-Id: I16003297ecfcb0aaa5779ad00706c5d983914bbe
- makeCoherent should be called after the TBX server has finished
  processing
- this is when tagAddress is updated with the taskCount.
  makeCoherent is called from makeNonResident, which is invoked just
  after flush and may happen before the TBX server has finished
  processing, leading to invalid data being read back to CPU-accessible
  memory
- this fix adds a wait for the taskCount to the blocking calls of the
  TBX CSR before makeNonResident is called on surfaces, guaranteeing
  that correct data from the TBX server is ready (see the sketch below).
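A condensed sketch of the fix, with illustrative names; the actual TBX
CSR code is more involved:

    #include <atomic>
    #include <cstdint>

    struct Surface {};

    struct TbxCsrSketch {
        std::atomic<uint32_t> *tagAddress = nullptr; // updated once the TBX server finishes a task
        uint32_t taskCount = 0;

        void waitForTaskCount(uint32_t requiredTaskCount) {
            while (tagAddress->load() < requiredTaskCount) {
                // spin until the TBX server has processed the submission
            }
        }

        void makeCoherent(Surface &) { /* read TBX memory back into CPU-accessible memory */ }

        // Blocking call: wait first, then release the surface; makeCoherent now sees
        // the data the TBX server actually produced.
        void waitAndMakeNonResident(Surface &surface) {
            waitForTaskCount(taskCount);
            makeCoherent(surface);
        }
    };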
Change-Id: I498a5454e0826eec2a5413a08880af40268550e1
This commit adds the capability to selectively enable/disable AUB
capture, e.g. by toggling the registry key from the outside or by
specifying a filter with a kernel name and/or a kernel start index and
kernel end index.
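A simplified sketch of such a filter (field and function names are
illustrative; the real toggles live in the driver's debug-variable
mechanism):

    #include <cstdint>
    #include <string>

    struct AubCaptureFilterSketch {
        std::string kernelName;            // empty means: match any kernel
        int64_t kernelStartIdx = 0;        // first matching enqueue to capture
        int64_t kernelEndIdx = INT64_MAX;  // last matching enqueue to capture
    };

    inline bool shouldCapture(const AubCaptureFilterSketch &filter,
                              const std::string &kernelName, int64_t enqueueIdx) {
        if (!filter.kernelName.empty() && filter.kernelName != kernelName) {
            return false;
        }
        return enqueueIdx >= filter.kernelStartIdx && enqueueIdx <= filter.kernelEndIdx;
    }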
Change-Id: Ib5d39c21863fbc4a95aa73c949b9779ff993de0f
- This is required to enable the N:1 submission model.
- If heaps come from different command queues, that always means
  STATE_BASE_ADDRESS needs to be reloaded.
- In order not to emit any non-pipelined state in the CSR, this change
  moves ownership of the IndirectHeap to one centralized place, the
  CommandStreamReceiver (see the sketch below).
- This way, when there are submissions from multiple command queues,
  they reuse the same heaps, therefore preventing an SBA reload.
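A minimal sketch of the ownership change (hypothetical, simplified
interface): the CSR hands out its own heaps, so every command queue
records into the same allocations and SBA stays valid.

    #include <cstddef>
    #include <memory>

    struct IndirectHeap {
        explicit IndirectHeap(size_t size) : size(size) {}
        size_t size;
    };

    struct CommandStreamReceiverSketch {
        enum class HeapType { DYNAMIC_STATE, INDIRECT_OBJECT, SURFACE_STATE, NUM_TYPES };

        // All command queues call this instead of owning heaps themselves.
        IndirectHeap &getIndirectHeap(HeapType type, size_t minRequiredSize) {
            auto &heap = heaps[static_cast<size_t>(type)];
            if (!heap || heap->size < minRequiredSize) {
                heap = std::make_unique<IndirectHeap>(minRequiredSize);
            }
            return *heap;
        }

      private:
        std::unique_ptr<IndirectHeap> heaps[static_cast<size_t>(HeapType::NUM_TYPES)];
    };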
Change-Id: I5caf5dc5cb05d7a2d8766883d9bc51c29062e980
- Internal allocations may now coexist with non-internal ones on the
  reusable list.
- The caller now specifies whether an internal allocation is needed.
- If the criteria are not met, the allocation is not returned (see the
  sketch below).
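A sketch of the lookup (simplified; names are illustrative):

    #include <cstddef>
    #include <list>
    #include <memory>

    struct GraphicsAllocation {
        size_t size = 0;
        bool isInternal = false;
    };

    // Internal and non-internal allocations share one reuse list; the caller states
    // which kind it needs, and nothing is returned unless type and size both match.
    inline std::unique_ptr<GraphicsAllocation> obtainReusableAllocation(
        std::list<std::unique_ptr<GraphicsAllocation>> &reuseList,
        size_t requiredSize, bool internalAllocationRequired) {
        for (auto it = reuseList.begin(); it != reuseList.end(); ++it) {
            if ((*it)->isInternal == internalAllocationRequired && (*it)->size >= requiredSize) {
                auto allocation = std::move(*it);
                reuseList.erase(it);
                return allocation;
            }
        }
        return nullptr; // criteria not met: no allocation is returned
    }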
Change-Id: I7da3a4f944768b7c8a873e44fd47248f1d76bf9e
- the CPU virtual address was used instead of the GPU VA
- this caused incorrect behaviour of the TBX server when the special
  heap allocator assigning GPU addresses was used
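In essence (illustrative types only), the write to the TBX server must
target the GPU VA rather than the host pointer, since the two differ
once a dedicated heap allocator hands out GPU addresses:

    #include <cstddef>
    #include <cstdint>

    struct AllocationSketch {
        void *cpuPtr = nullptr;   // address the host uses to fill the memory
        uint64_t gpuAddress = 0;  // address the GPU / TBX server operates on
    };

    using TbxWriteFn = void (*)(uint64_t gpuVa, const void *data, size_t size);

    inline void writeAllocationToTbx(const AllocationSketch &allocation, size_t size, TbxWriteFn write) {
        // The fix: pass gpuAddress, not the reinterpreted CPU pointer.
        write(allocation.gpuAddress, allocation.cpuPtr, size);
    }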
Change-Id: I2328cf2441be797311fd6a3c7b331b0fff79d4fc
- This is to make sure those functions are not called when GTPin is not
  used.
- This prevents CPU instruction cache pollution.
- Our enqueue path needs to be as thin as possible; even with this
  small change there is a visible gain in ULT execution time (see the
  sketch below).
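A sketch of the guard (names are hypothetical, not the actual GTPin
integration API):

    // The flag is set only when GTPin attaches; in the common case the branch falls
    // through immediately and the notification code never enters the instruction cache.
    namespace sketch {
    extern bool isGTPinInitialized;

    void gtpinNotifyKernelSubmit(); // hypothetical notification hook

    inline void notifyKernelSubmitIfNeeded() {
        if (isGTPinInitialized) {
            gtpinNotifyKernelSubmit();
        }
    }
    } // namespace sketch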
Change-Id: I44cc2144754cda95ca1fe058184cd8a151b8d35c
- Microseconds offer better precision.
- Some workloads require a threshold of less than 1 millisecond to work
  efficiently.
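For illustration, with std::chrono the threshold simply switches units
(the variable name is a placeholder):

    #include <chrono>

    // A sub-millisecond threshold that could not be expressed in whole milliseconds.
    constexpr std::chrono::microseconds waitThreshold{200};

    inline bool thresholdExceeded(std::chrono::steady_clock::time_point start) {
        return (std::chrono::steady_clock::now() - start) > waitThreshold;
    }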
Change-Id: I1a565049340fb6eeebe5c0a61ededae9959daca8
- Call waitForTaskCountAndCleanAllocationList with the latest flushed
  task count to reflect what was actually sent to HW.
- Refactor cleanAllocationList to waitForTaskCountAndCleanAllocationList.
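A condensed sketch of the call site (member names are illustrative):

    #include <cstdint>

    struct CsrSketch {
        uint32_t latestFlushedTaskCount = 0; // what was actually submitted to HW
        uint32_t taskCount = 0;              // what has merely been recorded

        void waitForTaskCountAndCleanAllocationList(uint32_t requiredTaskCount) {
            // wait until requiredTaskCount completes, then release reusable allocations
        }

        void cleanupAfterFlush() {
            // Use the flushed count, not taskCount, so we never wait on work that was
            // never sent to the hardware.
            waitForTaskCountAndCleanAllocationList(latestFlushedTaskCount);
        }
    };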
Change-Id: I5301185c5fce212e39eb017b952b43c279559cf4
This commit is aimed at adding support for batched dispatch, but it
doesn't make batching the default mode for AubCSR yet.
Change-Id: I4dc366ec5f01adf2c4793009da2100ba0230c60a