The machine has 4 NUMA nodes and is booted with the kernel boot parameter default_hugepagesz=1G. I start the VM with libvirt/virsh, and I can see that qemu launches with -m 65536 ... -mem-prealloc -mem-path /mnt/hugepages/libvirt/qemu, i.e. it starts the virtual machine with 64GB of memory and requests that the guest memory be allocated from a temporarily created file in /mnt/hugepages/libvirt/qemu:
% fgrep Huge /proc/meminfo
AnonHugePages: 270336 kB
ShmemHugePages: 0 kB
HugePages_Total: 113
HugePages_Free: 49
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
Hugetlb: 118489088 kB
%
% numastat -cm -p `pidof qemu-system-x86_64`
Per-node process memory usage (in MBs) for PID 3365 (qemu-system-x86)
Node 0 Node 1 Node 2 Node 3 Total
------ ------ ------ ------ -----
Huge 29696 7168 0 28672 65536
Heap 0 0 0 31 31
Stack 0 0 0 0 0
Private 4 9 4 305 322
------- ------ ------ ------ ------ -----
Total 29700 7177 4 29008 65889
...
Node 0 Node 1 Node 2 Node 3 Total
------ ------ ------ ------ ------
MemTotal 128748 129017 129017 129004 515785
MemFree 98732 97339 100060 95848 391979
MemUsed 30016 31678 28957 33156 123807
...
AnonHugePages 0 4 0 260 264
HugePages_Total 29696 28672 28672 28672 115712
HugePages_Free 0 21504 28672 0 50176
HugePages_Surp 0 0 0 0 0
%
This output confirms that the host's 512GB of memory is split equally across the NUMA nodes, and that the hugepages are also distributed (almost) evenly across the nodes.
The question is: how does qemu (or kvm?) determine how many hugepages to allocate, and from which nodes? Note that the libvirt XML has the following directive:
<memoryBacking>
<hugepages/>
<locked/>
</memoryBacking>
However, it is unclear from https://libvirt.org/formatdomain.html#memory-tuning what the defaults for hugepage allocation are and which nodes the pages are taken from. Is it possible to have all of the VM's memory allocated from node 0? What is the right way of doing this?
UPDATE
Since my VM's workload is actually pinned to a set of cores on a single NUMA node 0 using the <vcpupin> element, I thought it would be a good idea to force QEMU to allocate memory from the same NUMA node:
<numatune>
<memory mode="strict" nodeset="0"/>
</numatune>
However, this didn't work; qemu returned the following error in its log:
os_mem_prealloc insufficient free host memory pages available to allocate guest ram
Does it mean it fails to find free huge pages on the numa node 0?
If you use a plain <hugepages/> element, then libvirt will configure QEMU to allocate from the default huge page pool. Given your 'default_hugepagesz=1G', that should mean that QEMU allocates 1 GB sized pages. QEMU will allocate as many as are needed to satisfy the requested RAM size. Given your configuration, these huge pages can potentially be allocated from any NUMA node.
With more advanced libvirt configuration it is possible to request allocation of a specific size of huge page, and pick them from specific NUMA nodes. The latter is only really needed if you are also locking CPUs to a specific host NUMA node.
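For example, a sketch of what that could look like in this case (adapted from the libvirt domain XML documentation linked above, not tested against this setup): 1 GiB pages for the guest, with guest memory strictly bound to host node 0:
<memoryBacking>
  <hugepages>
    <page size="1048576" unit="KiB"/>  <!-- 1 GiB pages -->
  </hugepages>
  <locked/>
</memoryBacking>
<numatune>
  <memory mode="strict" nodeset="0"/>
</numatune>
If the guest also has a NUMA topology defined (<cpu><numa><cell .../></numa></cpu>), per-guest-node placement is possible with <numatune><memnode cellid="0" mode="strict" nodeset="0"/></numatune>. Keep in mind that with mode="strict" the allocation fails outright - as in the UPDATE above - if the chosen host node does not have enough free 1 GiB pages (a 64 GB guest needs at least 64 of them).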
Does it mean it fails to find free huge pages on the numa node 0?
Yes, it does.
numastat -m can be used to find out how many huge pages there are in total, and how many are free, on each node.
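The per-node pools can also be inspected (and grown) through sysfs; for example, for 1 GiB pages on node 0 (standard hugetlb sysfs paths, run as root to change them):
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
# try to reserve 64 pages on node 0; runtime allocation of 1 GiB pages can fail
# once memory is fragmented, so boot-time reservation is more reliable
echo 64 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
In the numastat output above, node 0 has 29696 MB (29 pages) of 1 GiB hugepages in total and 0 free, so a strict 64 GB allocation from node 0 cannot succeed until more pages are reserved on that node.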
I want to calculate the achieved occupancy and compare it with the value that is being displayed in Nsight Compute.
ncu says: Theoretical Occupancy [%] 100, and Achieved Occupancy [%] 93.04. What parameters do I need to calculate this value?
I can see the theoretical occupancy using the occupancy api, which comes out as 1.0 or 100%.
I tried looking for the metrics achieved_occupancy, sm__active_warps_sum, and sm__active_cycles_sum, but all of them say: Failed to find metric sm__active_warps_sum. I can see the formula to calculate the achieved occupancy from this SO answer.
A few details, in case they help:
There are 1 CUDA devices.
CUDA Device #0
Major revision number: 7
Minor revision number: 5
Name: NVIDIA GeForce GTX 1650
Total global memory: 4093181952
Total constant memory: 65536
Total shared memory per block: 49152
Total registers per block: 65536
Total registers per multiprocessor: 65536
Warp size: 32
Maximum threads per block: 1024
Maximum threads per multiprocessor: 1024
Maximum blocks per multiprocessor: 16
Maximum dimension 0 of block: 1024
Maximum dimension 1 of block: 1024
Maximum dimension 2 of block: 64
Maximum dimension 0 of grid: 2147483647
Maximum dimension 1 of grid: 65535
Maximum dimension 2 of grid: 65535
Clock rate: 1515000
Maximum memory pitch: 2147483647
Total constant memory: 65536
Texture alignment: 512
Concurrent copy and execution: Yes
Number of multiprocessors: 14
Kernel execution timeout: Yes
ptxas info : Used 18 registers, 360 bytes cmem[0]
Shorter:
In a nutshell, the theoretical occupancy is given by metric name sm__maximum_warps_per_active_cycle_pct and the achieved occupancy is given by metric name sm__warps_active.avg.pct_of_peak_sustained_active.
Longer:
The metrics you have indicated:
I tried looking for the metric achieved_occupancy, sm__active_warps_sum, sm__active_cycles_sum but all of them say: Failed to find metric sm__active_warps_sum.
are not applicable to nsight compute. NVIDIA has made a variety of different profilers, and these metric names apply to other profilers. The article you reference refers to a different profiler (the original profiler on Windows used the nsight name but was not nsight compute).
This blog article discusses different ways to get valid nsight compute metric names with references to documentation links that present the metrics in different ways.
I would also point out for others that nsight compute has a whole report section dedicated to occupancy, and so for typical interest, that is probably the easiest way to go. Additional instructions for how to run nsight compute are available in this blog.
To come up with metrics that represent occupancy the way the nsight compute designers intended, my suggestion would be to look at their definitions. Each report section in nsight compute has "human-readable" files that indicate how the section is assembled. Since there is a report section for occupancy that includes reporting both theoretical and achieved occupancy, we can discover how those are computed by inspecting those files.
The methodology for how the occupancy section is computed is contained in 2 files which are part of a CUDA install. On a standard linux CUDA install, these will be in /usr/local/cuda-XX.X/nsight-compute-zzzzzz/sections/Occupancy.py and .../sections/Occupancy.section. The python file gives the exact names of the metrics that are used as well as the calculation method(s) for other displayed topics related to occupancy (e.g. notes, warnings, etc.) In a nutshell, the theoretical occupancy is given by metric name sm__maximum_warps_per_active_cycle_pct and the achieved occupancy is given by metric name sm__warps_active.avg.pct_of_peak_sustained_active.
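If the exact version directory isn't known, a search like this should turn up the file, and ncu itself can list the valid metric names (paths vary between installs):
find /usr/local -path '*nsight-compute*/sections/Occupancy.py' 2>/dev/null
ncu --query-metrics | grep -i warps_active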
You could retrieve both the Occupancy section report (which is part of the "default" "set") as well as these specific metrics with a command line like this:
ncu --set default --metrics sm__maximum_warps_per_active_cycle_pct,sm__warps_active.avg.pct_of_peak_sustained_active ./my-app
Here is an example output from such a run:
$ ncu --set default --metrics sm__maximum_warps_per_active_cycle_pct,sm__warps_active.avg.pct_of_peak_sustained_active ./t2140
Testing with mask size = 3
==PROF== Connected to process 31551 (/home/user2/misc/t2140)
==PROF== Profiling "convolution_2D" - 1: 0%....50%....100% - 20 passes
==PROF== Profiling "convolution_2D_tiled" - 2: 0%....50%....100% - 20 passes
Time elapsed on naive GPU convolution 2d tiled ( 32 ) block 460.922913 ms.
________________________________________________________________________
Testing with mask size = 5
==PROF== Profiling "convolution_2D" - 3: 0%....50%....100% - 20 passes
==PROF== Profiling "convolution_2D_tiled" - 4: 0%....50%....100% - 20 passes
Time elapsed on naive GPU convolution 2d tiled ( 32 ) block 429.748230 ms.
________________________________________________________________________
Testing with mask size = 7
==PROF== Profiling "convolution_2D" - 5: 0%....50%....100% - 20 passes
==PROF== Profiling "convolution_2D_tiled" - 6: 0%....50%....100% - 20 passes
Time elapsed on naive GPU convolution 2d tiled ( 32 ) block 500.704254 ms.
________________________________________________________________________
Testing with mask size = 9
==PROF== Profiling "convolution_2D" - 7: 0%....50%....100% - 20 passes
==PROF== Profiling "convolution_2D_tiled" - 8: 0%....50%....100% - 20 passes
Time elapsed on naive GPU convolution 2d tiled ( 32 ) block 449.445892 ms.
________________________________________________________________________
==PROF== Disconnected from process 31551
[31551] t2140#127.0.0.1
convolution_2D(float *, const float *, float *, unsigned long, unsigned long, unsigned long), 2022-Oct-29 13:02:44, Context 1, Stream 7
Section: Command line profiler metrics
---------------------------------------------------------------------- --------------- ------------------------------
sm__maximum_warps_per_active_cycle_pct % 50
sm__warps_active.avg.pct_of_peak_sustained_active % 40.42
---------------------------------------------------------------------- --------------- ------------------------------
Section: GPU Speed Of Light Throughput
---------------------------------------------------------------------- --------------- ------------------------------
DRAM Frequency cycle/usecond 815.21
SM Frequency cycle/nsecond 1.14
Elapsed Cycles cycle 47,929
Memory [%] % 23.96
DRAM Throughput % 15.23
Duration usecond 42.08
L1/TEX Cache Throughput % 26.90
L2 Cache Throughput % 10.54
SM Active Cycles cycle 42,619.88
Compute (SM) [%] % 37.09
---------------------------------------------------------------------- --------------- ------------------------------
WRN This kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance
of this device. Achieved compute throughput and/or memory bandwidth below 60.0% of peak typically indicate
latency issues. Look at Scheduler Statistics and Warp State Statistics for potential reasons.
Section: Launch Statistics
---------------------------------------------------------------------- --------------- ------------------------------
Block Size 1,024
Function Cache Configuration cudaFuncCachePreferNone
Grid Size 1,024
Registers Per Thread register/thread 38
Shared Memory Configuration Size byte 0
Driver Shared Memory Per Block byte/block 0
Dynamic Shared Memory Per Block byte/block 0
Static Shared Memory Per Block byte/block 0
Threads thread 1,048,576
Waves Per SM 12.80
---------------------------------------------------------------------- --------------- ------------------------------
Section: Occupancy
---------------------------------------------------------------------- --------------- ------------------------------
Block Limit SM block 32
Block Limit Registers block 1
Block Limit Shared Mem block 32
Block Limit Warps block 2
Theoretical Active Warps per SM warp 32
Theoretical Occupancy % 50
Achieved Occupancy % 40.42
Achieved Active Warps Per SM warp 25.87
---------------------------------------------------------------------- --------------- ------------------------------
WRN This kernel's theoretical occupancy (50.0%) is limited by the number of required registers
convolution_2D_tiled(float *, const float *, float *, unsigned long, unsigned long, unsigned long), 2022-Oct-29 13:02:45, Context 1, Stream 7
Section: Command line profiler metrics
---------------------------------------------------------------------- --------------- ------------------------------
sm__maximum_warps_per_active_cycle_pct % 100
sm__warps_active.avg.pct_of_peak_sustained_active % 84.01
---------------------------------------------------------------------- --------------- ------------------------------
Section: GPU Speed Of Light Throughput
---------------------------------------------------------------------- --------------- ------------------------------
DRAM Frequency cycle/usecond 771.98
SM Frequency cycle/nsecond 1.07
Elapsed Cycles cycle 31,103
Memory [%] % 40.61
DRAM Throughput % 24.83
Duration usecond 29.12
L1/TEX Cache Throughput % 46.39
L2 Cache Throughput % 18.43
SM Active Cycles cycle 27,168.03
Compute (SM) [%] % 60.03
---------------------------------------------------------------------- --------------- ------------------------------
WRN Compute is more heavily utilized than Memory: Look at the Compute Workload Analysis report section to see
what the compute pipelines are spending their time doing. Also, consider whether any computation is
redundant and could be reduced or moved to look-up tables.
Section: Launch Statistics
---------------------------------------------------------------------- --------------- ------------------------------
Block Size 1,024
Function Cache Configuration cudaFuncCachePreferNone
Grid Size 1,156
Registers Per Thread register/thread 31
Shared Memory Configuration Size Kbyte 8.19
Driver Shared Memory Per Block byte/block 0
Dynamic Shared Memory Per Block byte/block 0
Static Shared Memory Per Block Kbyte/block 4.10
Threads thread 1,183,744
Waves Per SM 7.22
---------------------------------------------------------------------- --------------- ------------------------------
Section: Occupancy
---------------------------------------------------------------------- --------------- ------------------------------
Block Limit SM block 32
Block Limit Registers block 2
Block Limit Shared Mem block 24
Block Limit Warps block 2
Theoretical Active Warps per SM warp 64
Theoretical Occupancy % 100
Achieved Occupancy % 84.01
Achieved Active Warps Per SM warp 53.77
---------------------------------------------------------------------- --------------- ------------------------------
WRN This kernel's theoretical occupancy is not impacted by any block limit. The difference between calculated
theoretical (100.0%) and measured achieved occupancy (84.0%) can be the result of warp scheduling overheads
or workload imbalances during the kernel execution. Load imbalances can occur between warps within a block
as well as across blocks of the same kernel.
<sections repeat for each kernel launch>
$
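As a sanity check on the numbers above: this device allows at most 64 resident warps per SM (which is what the 32-warp/50% theoretical figures for the first kernel imply), so 25.87 achieved active warps / 64 ≈ 40.4% and 53.77 / 64 ≈ 84.0%, matching the reported achieved occupancy for the two kernels. In other words, sm__warps_active.avg.pct_of_peak_sustained_active is just the average number of warps resident per SM while the SM is active, expressed as a percentage of the hardware maximum.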
I am running a memory coalescing experiment on Pascal and getting unexpected nvprof results. I have one kernel that copies 4 GB of floats from one array to another one. nvprof reports confusing numbers for gld_transactions_per_request and gst_transactions_per_request.
I ran the experiment on a TITAN Xp and a GeForce GTX 1080 TI. Same results.
#include <stdio.h>
#include <cstdint>
#include <assert.h>
#define N 1ULL*1024*1024*1024
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
__global__ void copy_kernel(
const float* __restrict__ data, float* __restrict__ data2) {
for (unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
tid < N; tid += blockDim.x * gridDim.x) {
data2[tid] = data[tid];
}
}
int main() {
float* d_data;
gpuErrchk(cudaMalloc(&d_data, sizeof(float) * N));
assert(d_data != nullptr);
uintptr_t d = reinterpret_cast<uintptr_t>(d_data);
assert(d%128 == 0); // check alignment, just to be sure
float* d_data2;
gpuErrchk(cudaMalloc(&d_data2, sizeof(float)*N));
assert(d_data2 != nullptr);
copy_kernel<<<1024,1024>>>(d_data, d_data2);
gpuErrchk(cudaDeviceSynchronize());
}
Compiled with CUDA version 10.1:
nvcc coalescing.cu -std=c++11 -Xptxas -dlcm=ca -gencode arch=compute_61,code=sm_61 -O3
Profiled with:
nvprof -m all ./a.out
There are a few confusing parts in the profiling results:
gld_transactions = 536870914, which means that every global load transaction should on average be 4GB/536870914 = 8 bytes. This is consistent with gld_transactions_per_request = 16.000000: Each warp reads 128 bytes (1 request) and if every transaction is 8 bytes, then we need 128 / 8 = 16 transactions per request. Why is this value so low? I would expect perfect coalescing, so something along the lines of 4 (or even 1) transactions/request.
gst_transactions = 134217728 and gst_transactions_per_request = 4.000000, so storing memory is more efficient?
Requested and achieved global load/store throughput (gld_requested_throughput, gst_requested_throughput, gld_throughput, gst_throughput) is 150.32GB/s each. I would expect a lower throughput for loads than for stores since we have more transactions per request.
gld_transactions = 536870914 but l2_read_transactions = 134218800. Global memory is always accessed through the L1/L2 caches. Why is the number of L2 read transactions so much lower? It can't all be cached in the L1. (global_hit_rate = 0%)
I think I am reading the nvprof results wrong. Any suggestions would be appreciated.
Here is the full profiling result:
Device "GeForce GTX 1080 Ti (0)"
Kernel: copy_kernel(float const *, float*)
1 inst_per_warp Instructions per warp 1.4346e+04 1.4346e+04 1.4346e+04
1 branch_efficiency Branch Efficiency 100.00% 100.00% 100.00%
1 warp_execution_efficiency Warp Execution Efficiency 100.00% 100.00% 100.00%
1 warp_nonpred_execution_efficiency Warp Non-Predicated Execution Efficiency 99.99% 99.99% 99.99%
1 inst_replay_overhead Instruction Replay Overhead 0.000178 0.000178 0.000178
1 shared_load_transactions_per_request Shared Memory Load Transactions Per Request 0.000000 0.000000 0.000000
1 shared_store_transactions_per_request Shared Memory Store Transactions Per Request 0.000000 0.000000 0.000000
1 local_load_transactions_per_request Local Memory Load Transactions Per Request 0.000000 0.000000 0.000000
1 local_store_transactions_per_request Local Memory Store Transactions Per Request 0.000000 0.000000 0.000000
1 gld_transactions_per_request Global Load Transactions Per Request 16.000000 16.000000 16.000000
1 gst_transactions_per_request Global Store Transactions Per Request 4.000000 4.000000 4.000000
1 shared_store_transactions Shared Store Transactions 0 0 0
1 shared_load_transactions Shared Load Transactions 0 0 0
1 local_load_transactions Local Load Transactions 0 0 0
1 local_store_transactions Local Store Transactions 0 0 0
1 gld_transactions Global Load Transactions 536870914 536870914 536870914
1 gst_transactions Global Store Transactions 134217728 134217728 134217728
1 sysmem_read_transactions System Memory Read Transactions 0 0 0
1 sysmem_write_transactions System Memory Write Transactions 5 5 5
1 l2_read_transactions L2 Read Transactions 134218800 134218800 134218800
1 l2_write_transactions L2 Write Transactions 134217741 134217741 134217741
1 global_hit_rate Global Hit Rate in unified l1/tex 0.00% 0.00% 0.00%
1 local_hit_rate Local Hit Rate 0.00% 0.00% 0.00%
1 gld_requested_throughput Requested Global Load Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gst_requested_throughput Requested Global Store Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gld_throughput Global Load Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gst_throughput Global Store Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 local_memory_overhead Local Memory Overhead 0.00% 0.00% 0.00%
1 tex_cache_hit_rate Unified Cache Hit Rate 50.00% 50.00% 50.00%
1 l2_tex_read_hit_rate L2 Hit Rate (Texture Reads) 0.00% 0.00% 0.00%
1 l2_tex_write_hit_rate L2 Hit Rate (Texture Writes) 0.00% 0.00% 0.00%
1 tex_cache_throughput Unified Cache Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 l2_tex_read_throughput L2 Throughput (Texture Reads) 150.32GB/s 150.32GB/s 150.32GB/s
1 l2_tex_write_throughput L2 Throughput (Texture Writes) 150.32GB/s 150.32GB/s 150.32GB/s
1 l2_read_throughput L2 Throughput (Reads) 150.32GB/s 150.32GB/s 150.32GB/s
1 l2_write_throughput L2 Throughput (Writes) 150.32GB/s 150.32GB/s 150.32GB/s
1 sysmem_read_throughput System Memory Read Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 sysmem_write_throughput System Memory Write Throughput 5.8711KB/s 5.8711KB/s 5.8701KB/s
1 local_load_throughput Local Memory Load Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 local_store_throughput Local Memory Store Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 shared_load_throughput Shared Memory Load Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 shared_store_throughput Shared Memory Store Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 gld_efficiency Global Memory Load Efficiency 100.00% 100.00% 100.00%
1 gst_efficiency Global Memory Store Efficiency 100.00% 100.00% 100.00%
1 tex_cache_transactions Unified Cache Transactions 134217728 134217728 134217728
1 flop_count_dp Floating Point Operations(Double Precision) 0 0 0
1 flop_count_dp_add Floating Point Operations(Double Precision Add) 0 0 0
1 flop_count_dp_fma Floating Point Operations(Double Precision FMA) 0 0 0
1 flop_count_dp_mul Floating Point Operations(Double Precision Mul) 0 0 0
1 flop_count_sp Floating Point Operations(Single Precision) 0 0 0
1 flop_count_sp_add Floating Point Operations(Single Precision Add) 0 0 0
1 flop_count_sp_fma Floating Point Operations(Single Precision FMA) 0 0 0
1 flop_count_sp_mul Floating Point Operation(Single Precision Mul) 0 0 0
1 flop_count_sp_special Floating Point Operations(Single Precision Special) 0 0 0
1 inst_executed Instructions Executed 470089728 470089728 470089728
1 inst_issued Instructions Issued 470173430 470173430 470173430
1 sysmem_utilization System Memory Utilization Low (1) Low (1) Low (1)
1 stall_inst_fetch Issue Stall Reasons (Instructions Fetch) 0.79% 0.79% 0.79%
1 stall_exec_dependency Issue Stall Reasons (Execution Dependency) 1.46% 1.46% 1.46%
1 stall_memory_dependency Issue Stall Reasons (Data Request) 96.16% 96.16% 96.16%
1 stall_texture Issue Stall Reasons (Texture) 0.00% 0.00% 0.00%
1 stall_sync Issue Stall Reasons (Synchronization) 0.00% 0.00% 0.00%
1 stall_other Issue Stall Reasons (Other) 1.13% 1.13% 1.13%
1 stall_constant_memory_dependency Issue Stall Reasons (Immediate constant) 0.00% 0.00% 0.00%
1 stall_pipe_busy Issue Stall Reasons (Pipe Busy) 0.07% 0.07% 0.07%
1 shared_efficiency Shared Memory Efficiency 0.00% 0.00% 0.00%
1 inst_fp_32 FP Instructions(Single) 0 0 0
1 inst_fp_64 FP Instructions(Double) 0 0 0
1 inst_integer Integer Instructions 1.0742e+10 1.0742e+10 1.0742e+10
1 inst_bit_convert Bit-Convert Instructions 0 0 0
1 inst_control Control-Flow Instructions 1073741824 1073741824 1073741824
1 inst_compute_ld_st Load/Store Instructions 2147483648 2147483648 2147483648
1 inst_misc Misc Instructions 1077936128 1077936128 1077936128
1 inst_inter_thread_communication Inter-Thread Instructions 0 0 0
1 issue_slots Issue Slots 470173430 470173430 470173430
1 cf_issued Issued Control-Flow Instructions 33619968 33619968 33619968
1 cf_executed Executed Control-Flow Instructions 33619968 33619968 33619968
1 ldst_issued Issued Load/Store Instructions 268500992 268500992 268500992
1 ldst_executed Executed Load/Store Instructions 67174400 67174400 67174400
1 atomic_transactions Atomic Transactions 0 0 0
1 atomic_transactions_per_request Atomic Transactions Per Request 0.000000 0.000000 0.000000
1 l2_atomic_throughput L2 Throughput (Atomic requests) 0.00000B/s 0.00000B/s 0.00000B/s
1 l2_atomic_transactions L2 Transactions (Atomic requests) 0 0 0
1 l2_tex_read_transactions L2 Transactions (Texture Reads) 134217728 134217728 134217728
1 stall_memory_throttle Issue Stall Reasons (Memory Throttle) 0.00% 0.00% 0.00%
1 stall_not_selected Issue Stall Reasons (Not Selected) 0.39% 0.39% 0.39%
1 l2_tex_write_transactions L2 Transactions (Texture Writes) 134217728 134217728 134217728
1 flop_count_hp Floating Point Operations(Half Precision) 0 0 0
1 flop_count_hp_add Floating Point Operations(Half Precision Add) 0 0 0
1 flop_count_hp_mul Floating Point Operation(Half Precision Mul) 0 0 0
1 flop_count_hp_fma Floating Point Operations(Half Precision FMA) 0 0 0
1 inst_fp_16 HP Instructions(Half) 0 0 0
1 sysmem_read_utilization System Memory Read Utilization Idle (0) Idle (0) Idle (0)
1 sysmem_write_utilization System Memory Write Utilization Low (1) Low (1) Low (1)
1 pcie_total_data_transmitted PCIe Total Data Transmitted 1024 1024 1024
1 pcie_total_data_received PCIe Total Data Received 0 0 0
1 inst_executed_global_loads Warp level instructions for global loads 33554432 33554432 33554432
1 inst_executed_local_loads Warp level instructions for local loads 0 0 0
1 inst_executed_shared_loads Warp level instructions for shared loads 0 0 0
1 inst_executed_surface_loads Warp level instructions for surface loads 0 0 0
1 inst_executed_global_stores Warp level instructions for global stores 33554432 33554432 33554432
1 inst_executed_local_stores Warp level instructions for local stores 0 0 0
1 inst_executed_shared_stores Warp level instructions for shared stores 0 0 0
1 inst_executed_surface_stores Warp level instructions for surface stores 0 0 0
1 inst_executed_global_atomics Warp level instructions for global atom and atom cas 0 0 0
1 inst_executed_global_reductions Warp level instructions for global reductions 0 0 0
1 inst_executed_surface_atomics Warp level instructions for surface atom and atom cas 0 0 0
1 inst_executed_surface_reductions Warp level instructions for surface reductions 0 0 0
1 inst_executed_shared_atomics Warp level shared instructions for atom and atom CAS 0 0 0
1 inst_executed_tex_ops Warp level instructions for texture 0 0 0
1 l2_global_load_bytes Bytes read from L2 for misses in Unified Cache for global loads 4294967296 4294967296 4294967296
1 l2_local_load_bytes Bytes read from L2 for misses in Unified Cache for local loads 0 0 0
1 l2_surface_load_bytes Bytes read from L2 for misses in Unified Cache for surface loads 0 0 0
1 l2_local_global_store_bytes Bytes written to L2 from Unified Cache for local and global stores. 4294967296 4294967296 4294967296
1 l2_global_reduction_bytes Bytes written to L2 from Unified cache for global reductions 0 0 0
1 l2_global_atomic_store_bytes Bytes written to L2 from Unified cache for global atomics 0 0 0
1 l2_surface_store_bytes Bytes written to L2 from Unified Cache for surface stores. 0 0 0
1 l2_surface_reduction_bytes Bytes written to L2 from Unified Cache for surface reductions 0 0 0
1 l2_surface_atomic_store_bytes Bytes transferred between Unified Cache and L2 for surface atomics 0 0 0
1 global_load_requests Total number of global load requests from Multiprocessor 134217728 134217728 134217728
1 local_load_requests Total number of local load requests from Multiprocessor 0 0 0
1 surface_load_requests Total number of surface load requests from Multiprocessor 0 0 0
1 global_store_requests Total number of global store requests from Multiprocessor 134217728 134217728 134217728
1 local_store_requests Total number of local store requests from Multiprocessor 0 0 0
1 surface_store_requests Total number of surface store requests from Multiprocessor 0 0 0
1 global_atomic_requests Total number of global atomic requests from Multiprocessor 0 0 0
1 global_reduction_requests Total number of global reduction requests from Multiprocessor 0 0 0
1 surface_atomic_requests Total number of surface atomic requests from Multiprocessor 0 0 0
1 surface_reduction_requests Total number of surface reduction requests from Multiprocessor 0 0 0
1 sysmem_read_bytes System Memory Read Bytes 0 0 0
1 sysmem_write_bytes System Memory Write Bytes 160 160 160
1 l2_tex_hit_rate L2 Cache Hit Rate 0.00% 0.00% 0.00%
1 texture_load_requests Total number of texture Load requests from Multiprocessor 0 0 0
1 unique_warps_launched Number of warps launched 32768 32768 32768
1 sm_efficiency Multiprocessor Activity 99.63% 99.63% 99.63%
1 achieved_occupancy Achieved Occupancy 0.986477 0.986477 0.986477
1 ipc Executed IPC 0.344513 0.344513 0.344513
1 issued_ipc Issued IPC 0.344574 0.344574 0.344574
1 issue_slot_utilization Issue Slot Utilization 8.61% 8.61% 8.61%
1 eligible_warps_per_cycle Eligible Warps Per Active Cycle 0.592326 0.592326 0.592326
1 tex_utilization Unified Cache Utilization Low (1) Low (1) Low (1)
1 l2_utilization L2 Cache Utilization Low (2) Low (2) Low (2)
1 shared_utilization Shared Memory Utilization Idle (0) Idle (0) Idle (0)
1 ldst_fu_utilization Load/Store Function Unit Utilization Low (1) Low (1) Low (1)
1 cf_fu_utilization Control-Flow Function Unit Utilization Low (1) Low (1) Low (1)
1 special_fu_utilization Special Function Unit Utilization Idle (0) Idle (0) Idle (0)
1 tex_fu_utilization Texture Function Unit Utilization Low (1) Low (1) Low (1)
1 single_precision_fu_utilization Single-Precision Function Unit Utilization Low (1) Low (1) Low (1)
1 double_precision_fu_utilization Double-Precision Function Unit Utilization Idle (0) Idle (0) Idle (0)
1 flop_hp_efficiency FLOP Efficiency(Peak Half) 0.00% 0.00% 0.00%
1 flop_sp_efficiency FLOP Efficiency(Peak Single) 0.00% 0.00% 0.00%
1 flop_dp_efficiency FLOP Efficiency(Peak Double) 0.00% 0.00% 0.00%
1 dram_read_transactions Device Memory Read Transactions 134218560 134218560 134218560
1 dram_write_transactions Device Memory Write Transactions 134176900 134176900 134176900
1 dram_read_throughput Device Memory Read Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 dram_write_throughput Device Memory Write Throughput 150.27GB/s 150.27GB/s 150.27GB/s
1 dram_utilization Device Memory Utilization High (7) High (7) High (7)
1 half_precision_fu_utilization Half-Precision Function Unit Utilization Idle (0) Idle (0) Idle (0)
1 ecc_transactions ECC Transactions 0 0 0
1 ecc_throughput ECC Throughput 0.00000B/s 0.00000B/s 0.00000B/s
1 dram_read_bytes Total bytes read from DRAM to L2 cache 4294993920 4294993920 4294993920
1 dram_write_bytes Total bytes written from L2 cache to DRAM 4293660800 4293660800 4293660800
With Fermi and Kepler GPUs, when a global transaction was issued, it was always for 128 bytes, and the L1 cacheline size (if enabled) was 128 bytes. With Maxwell and Pascal, these characteristics changed. In particular, a read of a portion of an L1 cacheline does not necessarily trigger a full 128-byte width transaction. This is fairly easily discoverable/provable with microbenchmarking.
Effectively, the size of a global load transaction changed, subject to a certain quantum of granularity. Based on this change of transaction size, it's possible that multiple transactions could be required, where previously only 1 was required. As far as I know, none of this is clearly published or detailed, and I won't be able to do that here. However I think we can address a number of your questions without giving a precise description of how global load transactions are calculated.
gld_transactions = 536870914, which means that every global load transaction should on average be 4GB/536870914 = 8 bytes. This is consistent with gld_transactions_per_request = 16.000000: Each warp reads 128 bytes (1 request) and if every transaction is 8 bytes, then we need 128 / 8 = 16 transactions per request. Why is this value so low? I would expect perfect coalescing, so something along the lines of 4 (or even 1) transactions/request.
This mindset (1 transaction per request for fully coalesced loads of a 32-bit quantity per thread) would have been correct in the Fermi/Kepler timeframe. It is no longer correct for Maxwell and Pascal GPUs. As you've already calculated, the transaction size appears to be smaller than 128 bytes, and therefore the number of transactions per request is higher than 1. But this doesn't indicate an efficiency problem per se (as it would have in Fermi/Kepler timeframe). So let's just acknowledge that the transaction size can be smaller and therefore transactions per request can be higher, even though the underlying traffic is essentially 100% efficient.
gst_transactions = 134217728 and gst_transactions_per_request = 4.000000, so storing memory is more efficient?
No, that's not what this means. It simply means that the subdivision quanta can be different for loads (load transactions) and stores (store transactions). These happen to be 32-byte transactions. In either case, loads or stores, the transactions are and should be fully efficient in this case. The requested traffic is consistent with the actual traffic, and other profiler metrics confirm this. If the actual traffic were much higher than the requested traffic, that would be a good indication of inefficient loads or stores:
1 gld_requested_throughput Requested Global Load Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gst_requested_throughput Requested Global Store Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gld_throughput Global Load Throughput 150.32GB/s 150.32GB/s 150.32GB/s
1 gst_throughput Global Store Throughput 150.32GB/s 150.32GB/s 150.32GB/s
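To double-check this on other runs without reasoning about transaction sizes at all, profiling just the efficiency and byte-level metrics (all of which appear in the full listing above) is enough, for example:
nvprof -m gld_efficiency,gst_efficiency,dram_read_bytes,dram_write_bytes ./a.out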
Requested and achieved global load/store throughput (gld_requested_throughput, gst_requested_throughput, gld_throughput, gst_throughput) is 150.32GB/s each. I would expect a lower throughput for loads than for stores since we have more transactions per request.
Again, you'll have to adjust your way of thinking to account for variable transaction sizes. Throughput is driven by the needs and efficiency associated with fulfilling those needs. Both loads and stores are fully efficient for your code design, so there is no reason to think there is or should be an imbalance in efficiency.
gld_transactions = 536870914 but l2_read_transactions = 134218800. Global memory is always accessed through the L1/L2 caches. Why is the number of L2 read transactions so much lower? It can't all be cached in the L1. (global_hit_rate = 0%)
This is simply due to the different sizes of the transactions. You've already calculated that the apparent global load transaction size is 8 bytes, and I've already indicated that the L2 transaction size is 32 bytes, so it makes sense that there would be a 4:1 ratio between the total numbers of transactions, since they reflect the same movement of the same data, viewed through 2 different lenses. Note that there has always been a disparity between the size of global transactions and the size of L2 transactions, or transactions to DRAM. It's simply that the ratios of these may vary by GPU architecture, and possibly other factors, such as load patterns.
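As a concrete check (4 GiB = 2^32 bytes moved in each direction): 2^32 / 8 bytes per load transaction = 536,870,912, essentially the reported gld_transactions, and 2^32 / 32 bytes per transaction = 134,217,728, which matches gst_transactions and the l2_tex_read/write_transactions exactly and l2_read_transactions to within about a thousand. Per 128-byte warp request that is 128/8 = 16 load transactions and 128/32 = 4 store transactions, which are exactly the gld_transactions_per_request and gst_transactions_per_request values reported.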
Some notes:
I won't be able to answer questions such as "why is it this way?", or "why did Pascal change from Fermi/Kepler?" or "given this particular code, what would you predict as the needed global load transactions on this particular GPU?", or "generally, for this particular GPU, how would I calculate or predict transaction size?"
As an aside, there are new profiling tools (Nsight Compute and Nsight Systems) being advanced by NVIDIA for GPU work. Many of the efficiency and transactions per request metrics which are available in nvprof are gone under the new toolchain. So these mindsets will have to be broken anyway, because these methods of ascertaining efficiency won't be available moving forward, based on the current metric set.
Note that the use of compile switches such as -Xptxas -dlcm=ca may affect (L1) caching behavior. I don't expect caches to have much performance or efficiency impact on this particular copy code, however.
This possible reduction in transaction size is generally a good thing. It results in no loss of efficiency for traffic patterns such as presented in this code, and for certain other codes it allows (less-than-128byte) requests to be satisfied with less wasted bandwidth.
Although not specifically Pascal, here is a better defined example of the possible variability in these measurements for Maxwell. Pascal will have similar variability. Also, some small hint of this change (especially for Pascal) was given in the Pascal Tuning Guide. It by no means offers a complete description or explains all of your observations, but it does hint at the general idea that the global transactions are no longer fixed to a 128-byte size.
The kernel uses: (--ptxas-options=-v)
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info: Used 45 registers, 49152+0 bytes smem, 64 bytes cmem[0], 12 bytes cmem[16]
Launch with: kernelA<<<20,512>>>(float parmA, int paramB); and it will run fine.
Launch with: kernelA<<<20,513>>>(float parmA, int paramB); and it get the out of resources error. (too many resources requested for launch).
The Fermi device properties: 48KB of shared mem per SM, constant mem 64KB, 32K registers per SM, 1024 maximum threads per block, comp capable 2.1 (sm_21)
I'm using all my shared mem space.
I'll run out of block register space around 700 threads/block. The kernel will not launch if I ask for more than half the number of MAX_threads/block. It may just be a coincidence, but I doubt it.
Why can't I use a full block of threads (1024)?
Any guess as to which resource I'm running out of?
I have often wondered where the stalled thread data/state goes between warps. What resource holds these?
When I did the reg count, I commented out the printf's. Reg count= 45
When it was running, it had the printf's coded. Reg count= 63 w/plenty of spill "reg's".
I suspect each thread really has 64 reg's, with only 63 available to the program.
64 reg's * 512 threads = 32K, the maximum available to a single block.
So I suggest that the number of available "code" reg's for a block = cudaDeviceProp::regsPerBlock - blockDim, i.e. the kernel doesn't have access to all 32K registers.
The compiler currently limits the number of reg's per thread to 63 (beyond that they spill over to lmem). I suspect this 63 is a HW addressing limitation.
So it looks like I'm running out of register space.
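As a quick check of that arithmetic, taking the guess above that about 64 registers end up set aside per thread out of the 32K-register file: 64 * 512 = 32,768, which just fits, while 64 * 513 = 32,832 > 32,768, which would produce exactly the observed "too many resources requested for launch" at 513 threads.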
I was profiling my Cuda 4 program and it turned out that at some stage the running process used over 80 GiB of virtual memory. That was a lot more than I would have expected.
After examining the evolution of the memory map over time and correlating it with the line of code being executed, it turned out that after these simple instructions the virtual memory usage jumped to over 80 GiB:
int deviceCount;
cudaGetDeviceCount(&deviceCount);
if (deviceCount == 0) {
perror("No devices supporting CUDA");
}
Clearly, this is the first Cuda call, thus the runtime got initialized. After this the memory map looks like (truncated):
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 89796 14716 0 r-x-- prg
0000000005db1000 12 12 8 rw--- prg
0000000005db4000 80 76 76 rw--- [ anon ]
0000000007343000 39192 37492 37492 rw--- [ anon ]
0000000200000000 4608 0 0 ----- [ anon ]
0000000200480000 1536 1536 1536 rw--- [ anon ]
0000000200600000 83879936 0 0 ----- [ anon ]
Now there is this huge memory area mapped into the virtual memory space.
Okay, it's maybe not a big problem, since reserving/allocating memory on Linux doesn't do much unless you actually write to it. But it's really annoying because, for example, MPI jobs have to be specified with the maximum amount of vmem usable by the job. And 80 GiB is just a lower bound for CUDA jobs - one has to add all the other stuff on top.
I can imagine that it has to do with the so-called scratch space that CUDA maintains - a kind of memory pool for kernel code that can dynamically grow and shrink. But that's speculation; also, that space is allocated in device memory.
Any insights?
Nothing to do with scratch space; it is the result of the addressing system that allows unified addressing and peer-to-peer access between the host and multiple GPUs. The CUDA driver registers all the GPUs' memory plus host memory in a single virtual address space using the kernel's virtual memory system. It isn't actually memory consumption, per se; it is just a "trick" to map all the available address spaces into a linear virtual space for unified addressing.
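This is easy to observe directly; a minimal sketch (Linux-only) that just reads VmSize from /proc/self/status before and after the first CUDA call, which is where the driver sets up that address space:
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Print the process's current virtual memory size as reported by the kernel.
static void print_vmsize(const char *label)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return;
    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmSize:", 7) == 0) {
            printf("%s %s", label, line);   // line already ends with '\n'
            break;
        }
    }
    fclose(f);
}

int main()
{
    print_vmsize("before first CUDA call:");
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);       // first CUDA call initializes the driver context
    print_vmsize("after first CUDA call: ");
    printf("devices: %d\n", deviceCount);
    return 0;
}
Compiled with nvcc, the jump in VmSize across the cudaGetDeviceCount() call should roughly match the large anonymous mapping seen in the pmap output above.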
I'm having some trouble understanding InnoDB usage - we have a Drupal-based DB (5:1 read:write) running on MySQL (Server version: 5.1.41-3ubuntu12.10-log (Ubuntu)). Our current InnoDB data/index sizing is:
Current InnoDB index space = 196 M
Current InnoDB data space = 475 M
Looking around on the web and reading books like 'High Performance SQL' suggests sizing the buffer pool about 10% larger than the data size - I have set the buffer pool to (data + index) + 10% and noticed that the buffer pool was at 100% usage... even increasing it beyond this to 896MB still leaves it at 100% (even though the data + indexes are only ~671MB).
I've attached the output of the InnoDB section of mysqlreport below. A 'Free' pages value of 1 also seems to suggest a major problem. The innodb_flush_method is set to its default - I will investigate setting this to O_DIRECT, but I want to sort this issue out first.
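(For reference, the two settings being discussed live in my.cnf; a minimal fragment is shown here - the 896M value is just the figure from above, not a recommendation, and both settings require a server restart on 5.1:)
[mysqld]
# buffer pool sized to (data + indexes) + ~10%, per the sizing rule mentioned above
innodb_buffer_pool_size = 896M
# bypass the OS page cache for InnoDB data files (the O_DIRECT option being considered)
innodb_flush_method     = O_DIRECT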
__ InnoDB Buffer Pool __________________________________________________
Usage 895.98M of 896.00M %Used: 100.00
Read hit 100.00%
Pages
Free 1 %Total: 0.00
Data 55.96k 97.59 %Drty: 0.01
Misc 1383 2.41
Latched 0 0.00
Reads 405.96M 1.2k/s
From file 15.60k 0.0/s 0.00
Ahead Rnd 211 0.0/s
Ahead Sql 1028 0.0/s
Writes 29.10M 87.3/s
Flushes 597.58k 1.8/s
Wait Free 0 0/s
__ InnoDB Lock _________________________________________________________
Waits 66 0.0/s
Current 0
Time acquiring
Total 3890 ms
Average 58 ms
Max 3377 ms
__ InnoDB Data, Pages, Rows ____________________________________________
Data
Reads 21.51k 0.1/s
Writes 666.48k 2.0/s
fsync 324.11k 1.0/s
Pending
Reads 0
Writes 0
fsync 0
Pages
Created 84.16k 0.3/s
Read 59.35k 0.2/s
Written 597.58k 1.8/s
Rows
Deleted 19.13k 0.1/s
Inserted 6.13M 18.4/s
Read 196.84M 590.6/s
Updated 139.69k 0.4/s
Any help on this would be greatly appreciated.
Thanks!