This feature is part of a performance improvement to dispatch and start
command buffers as primary batch buffers.
When an exhausted command buffer is closed, reserve exactly the space
needed for a chained batch buffer start and bind it to the next command
buffer. When closing a command buffer, save the ending pointer and
reserve aligned space.
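
A minimal sketch of the two steps, assuming illustrative types and
sizes (CommandBuffer, bbStartCmdSize and the commented-out encoder are
hypothetical, not the actual NEO command stream API):

    #include <cstddef>
    #include <cstdint>

    constexpr size_t bbStartCmdSize = 12;     // assumed MI_BATCH_BUFFER_START size
    constexpr size_t cmdBufferAlignment = 64; // assumed alignment requirement

    struct CommandBuffer {
        uint8_t *cpuBase = nullptr;
        uint64_t gpuAddress = 0;
        size_t used = 0;
    };

    // Closing an exhausted buffer: reserve exactly the space a chained
    // batch buffer start needs and bind it to the next command buffer.
    void chainToNext(CommandBuffer &exhausted, const CommandBuffer &next) {
        uint8_t *bbStart = exhausted.cpuBase + exhausted.used;
        exhausted.used += bbStartCmdSize;
        // encodeBatchBufferStart(bbStart, next.gpuAddress); // hypothetical
        (void)bbStart;
        (void)next;
    }

    // Closing a buffer normally: remember where the commands end, then
    // pad the tail so the buffer stays aligned for primary submission.
    size_t closeBuffer(CommandBuffer &cb) {
        const size_t endingOffset = cb.used;
        cb.used = (cb.used + cmdBufferAlignment - 1) & ~(cmdBufferAlignment - 1);
        return endingOffset;
    }
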
Related-To: NEO-7807
Signed-off-by: Zbigniew Zdanowicz <zbigniew.zdanowicz@intel.com>
MaxDualSubSlicesSupported is filled in the GT_SYSTEM_INFO structure
when querying the KMD, reflecting the number of enabled DualSubSlices.
However, we need to find the highest index of the last enabled
DualSubSlice. For proper allocation of thread scratch space, the
allocation has to be based on the native die config (including unfused
or non-enabled DualSubSlices). Since HW doesn't provide a way to know
the exact native die config, in SW we need to allocate RT stacks with
enough size based on the last used DualSubSlice.
The IsDynamicallyPopulated field in GT_SYSTEM_INFO indicates whether
system details are populated via fuse registers or hard-coded. Based on
this field's value, we calculate numRtStacks appropriately.
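
A hedged sketch of the resulting selection (MaxDualSubSlicesSupported
and IsDynamicallyPopulated are the GT_SYSTEM_INFO fields named above;
the per-DSS enable mask and helper are purely illustrative):

    #include <cstdint>

    // Illustrative helper: 1-based index of the last enabled
    // DualSubSlice in an assumed per-DSS enable mask.
    uint32_t highestEnabledDualSubSlice(uint64_t dssEnabledMask) {
        uint32_t highest = 0;
        for (uint32_t i = 0; i < 64; ++i) {
            if (dssEnabledMask & (1ull << i)) {
                highest = i + 1;
            }
        }
        return highest;
    }

    uint32_t computeNumRtStacks(bool isDynamicallyPopulated,
                                uint32_t maxDualSubSlicesSupported,
                                uint64_t dssEnabledMask) {
        if (isDynamicallyPopulated) {
            // Fuse-populated topology may contain holes, so size the RT
            // stacks up to the last enabled DualSubSlice.
            return highestEnabledDualSubSlice(dssEnabledMask);
        }
        // Hard-coded topology: the reported count already covers the die.
        return maxDualSubSlicesSupported;
    }
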
Related-To: LOCI-3954
Signed-off-by: Raiyan Latif <raiyan.latif@intel.com>
Sizing context (PVC):
When using LargeGRF (a.k.a. GRF256) there are only 4 HW threads per EU
(instead of the default 8). Together with SIMD16 this means there can
be at most 64 work-items per EU. With 8 EUs per subslice this gives 512
work-items on a single subslice. For correct intra-WG synchronization
all its WIs must be executed on the same subslice (to access the same
SLM, where the synchronization primitives are stored). Thus, with SIMD16
and LargeGRF the work-group size must not exceed 512 (PVC example).
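
The arithmetic above, spelled out (constants taken from the PVC
example in this message, not queried from the device):

    #include <cstdint>

    constexpr uint32_t threadsPerEuLargeGrf = 4; // GRF256 halves the default 8
    constexpr uint32_t simdSize = 16;
    constexpr uint32_t eusPerSubslice = 8;
    constexpr uint32_t maxWgSize =
        threadsPerEuLargeGrf * simdSize * eusPerSubslice;
    static_assert(maxWgSize == 512,
                  "SIMD16 + LargeGRF caps the work-group at 512 on PVC");
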
So far `maxWorkGroupSize` has been taken solely from the DeviceInfo
structure, both in `ModuleTranslationUnit::processUnpackedBinary()` and
in `ModuleImp::initialize()`. This approach does not take kernel
parameters (LargeGRF) into account, so it allows submitting a kernel
that uses LargeGRF with SIMD16 and a work-group size of 1024, which
leads to a hang.
Fix the `.maxWorkGroupSize` computation so that it takes the kernel
parameters into consideration.
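
A hedged sketch of the intended computation (parameter names are
illustrative; the actual NEO helpers and kernel descriptor fields
differ):

    #include <algorithm>
    #include <cstdint>

    uint32_t computeMaxWorkGroupSize(uint32_t deviceMaxWorkGroupSize,
                                     uint32_t defaultThreadsPerEu, // e.g. 8
                                     uint32_t eusPerSubslice,
                                     uint32_t simdSize,
                                     bool usesLargeGrf) {
        // LargeGRF (GRF256) halves the HW threads available per EU.
        const uint32_t threadsPerEu =
            usesLargeGrf ? defaultThreadsPerEu / 2 : defaultThreadsPerEu;
        // All WIs of a work-group must fit on one subslice for SLM-based
        // synchronization, so clamp to what one subslice can execute.
        const uint32_t kernelLimit = threadsPerEu * eusPerSubslice * simdSize;
        return std::min(deviceMaxWorkGroupSize, kernelLimit);
    }
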
Add new tests (for discrete platforms >= XeHP), adapt existing ones,
and fix cosmetics along the way.
Similar check for OCL:
https://github.com/intel/compute-runtime/blob/master/opencl/source/command_queue/enqueue_kernel.h#L130
Related-To: NEO-7684
Signed-off-by: Maciej Bielski <maciej.bielski@intel.com>