performance: Reuse GPU timestamp instead of KMD escape

This can be enabled only if the related debug flag is set.

Related-To: NEO-10615

Signed-off-by: Szymon Morek <szymon.morek@intel.com>
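
For illustration, here is a minimal sketch of the idea behind the
optimization, not the driver's actual implementation: cache the last
GPU/CPU timestamp pair obtained from the kernel-mode driver (KMD) and,
while the cache is fresh, extrapolate the GPU value from elapsed CPU
time instead of issuing another KMD escape. Apart from the
TimeStampData fields, which appear in the hunk below, every name here
(CachedGpuCpuTime, kmdEscapeQuery, maxAgeNs, and so on) is hypothetical.

    #include <cstdint>
    #include <functional>
    #include <utility>

    struct TimeStampData {
        uint64_t gpuTimeStamp; // GPU ticks
        uint64_t cpuTimeinNS;  // CPU time in nanoseconds
    };

    class CachedGpuCpuTime {
      public:
        CachedGpuCpuTime(std::function<bool(TimeStampData *)> kmdEscapeQuery,
                         std::function<uint64_t()> cpuNowNs,
                         double gpuTicksPerNs,
                         uint64_t maxAgeNs)
            : kmdEscapeQuery(std::move(kmdEscapeQuery)),
              cpuNowNs(std::move(cpuNowNs)),
              gpuTicksPerNs(gpuTicksPerNs),
              maxAgeNs(maxAgeNs) {}

        // reuseAllowed would be derived from the debug flag mentioned in
        // the commit message; otherwise fall back to the KMD escape.
        bool get(TimeStampData *out, bool reuseAllowed) {
            const uint64_t nowNs = cpuNowNs();
            if (reuseAllowed && valid && (nowNs - cached.cpuTimeinNS) <= maxAgeNs) {
                // Derive the GPU value from elapsed CPU time instead of
                // paying for another kernel-mode-driver round trip.
                const uint64_t deltaNs = nowNs - cached.cpuTimeinNS;
                out->cpuTimeinNS = nowNs;
                out->gpuTimeStamp = cached.gpuTimeStamp +
                                    static_cast<uint64_t>(deltaNs * gpuTicksPerNs);
                return true;
            }
            if (!kmdEscapeQuery(&cached)) {
                return false; // escape failed; nothing to cache
            }
            valid = true;
            *out = cached;
            return true;
        }

      private:
        std::function<bool(TimeStampData *)> kmdEscapeQuery;
        std::function<uint64_t()> cpuNowNs;
        double gpuTicksPerNs;
        uint64_t maxAgeNs;
        TimeStampData cached{};
        bool valid = false;
    };
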

@@ -652,7 +652,7 @@ EngineControl &Device::getEngine(uint32_t index) {
 bool Device::getDeviceAndHostTimer(uint64_t *deviceTimestamp, uint64_t *hostTimestamp) const {
     TimeStampData timeStamp;
-    auto retVal = getOSTime()->getGpuCpuTime(&timeStamp);
+    auto retVal = getOSTime()->getGpuCpuTime(&timeStamp, true);
     if (retVal) {
         *hostTimestamp = timeStamp.cpuTimeinNS;
         if (debugManager.flags.EnableDeviceBasedTimestamps.get()) {
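
A note on the hunk above: the only change at this call site is the new
boolean argument to getGpuCpuTime(). Its name and exact semantics are
not visible in this hunk; presumably it distinguishes call sites that
must query the KMD directly from those that may serve a cached or
extrapolated value, with the behavior ultimately gated by the debug
flag named in the commit message. A hypothetical gating check inside
the callee might read:

    // Both the parameter name forceKmdCall and the flag name
    // EnableReusingGpuTimestamps are assumptions, shown only to mirror
    // the debugManager.flags pattern visible in the surrounding code.
    bool reuse = !forceKmdCall &&
                 debugManager.flags.EnableReusingGpuTimestamps.get() == 1;
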