mirror of
https://github.com/NVIDIA/cuda-samples.git
synced 2024-12-01 14:39:18 +08:00
44 lines
2.3 KiB
Plaintext
./deviceQueryDrv Starting...

CUDA Device Query (Driver API) statically linked version
Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA H100 PCIe"
  CUDA Driver Version:                           12.0
  CUDA Capability Major/Minor version number:    9.0
  Total amount of global memory:                 81082 MBytes (85021163520 bytes)
  (114) Multiprocessors, (128) CUDA Cores/MP:    14592 CUDA Cores
  GPU Max Clock rate:                            1650 MHz (1.65 GHz)
  Memory Clock rate:                             1593 MHz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 52428800 bytes
  Max Texture Dimension Sizes                    1D=(131072) 2D=(131072, 65536) 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Texture alignment:                             512 bytes
  Maximum memory pitch:                          2147483647 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 193 / 0
  Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Result = PASS
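A few of the figures in the listing are derived rather than queried directly: the total CUDA core count is the multiprocessor count times the cores per multiprocessor, and the MByte figure for global memory is the byte count truncated to whole MiB. A minimal sketch of that arithmetic, with values copied from the H100 PCIe output above (the helper names are illustrative, not part of the deviceQueryDrv sample):

```python
# Cross-check of the derived figures printed by deviceQueryDrv above.
# Values are copied from the H100 PCIe listing; helper names are illustrative.

MIB = 1024 * 1024

def total_cuda_cores(multiprocessors: int, cores_per_mp: int) -> int:
    """Total CUDA cores = SM count x CUDA cores per SM."""
    return multiprocessors * cores_per_mp

def bytes_to_mbytes(n_bytes: int) -> int:
    """deviceQuery reports global memory in whole MBytes (MiB), truncated."""
    return n_bytes // MIB

print(total_cuda_cores(114, 128))    # 14592 CUDA Cores
print(bytes_to_mbytes(85021163520))  # 81082 MBytes
print(52428800 // MIB)               # L2 cache size: 50 MiB
```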