# immaTensorCoreGemm - Tensor Core GEMM Integer MMA

## Description

CUDA sample demonstrating an integer GEMM computation using the Warp Matrix Multiply and Accumulate (WMMA) API for integers, introduced in CUDA 10. The sample uses the CUDA WMMA API to employ the Tensor Cores introduced with the Volta chip family for faster matrix operations. It also demonstrates the new CUDA function attribute cudaFuncAttributeMaxDynamicSharedMemorySize, which allows an application to reserve more shared memory than is available by default.
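The kernel-side idea can be sketched with the integer WMMA fragment types and the shared-memory attribute mentioned above. The code below is a minimal illustration, not the sample's actual kernel: the kernel name, the single-tile scope, and the 64 KB figure are placeholders, and it assumes compilation for one of the supported SM architectures (7.2 or later).

```
// Minimal sketch (not the sample's kernel): one warp multiplies a single pair of
// 16x16 signed 8-bit tiles into a 32-bit integer accumulator using WMMA.
// Assumes compilation for SM 7.2 or later (e.g. nvcc -arch=sm_72).
#include <mma.h>

using namespace nvcuda;

__global__ void tileMmaS8(const signed char *a, const signed char *b, int *c) {
  // Fragments for an m16n16k16 integer MMA: int8 inputs, int32 accumulator.
  wmma::fragment<wmma::matrix_a, 16, 16, 16, signed char, wmma::row_major> aFrag;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, signed char, wmma::col_major> bFrag;
  wmma::fragment<wmma::accumulator, 16, 16, 16, int> accFrag;

  wmma::fill_fragment(accFrag, 0);
  wmma::load_matrix_sync(aFrag, a, 16);            // leading dimension = 16
  wmma::load_matrix_sync(bFrag, b, 16);
  wmma::mma_sync(accFrag, aFrag, bFrag, accFrag);  // acc += a * b
  wmma::store_matrix_sync(c, accFrag, 16, wmma::mem_row_major);
}

// Host-side opt-in to more dynamic shared memory than the default per-block
// limit, as the sample does for its full kernel (64 KB here is a placeholder).
void allowLargeSharedMem() {
  cudaFuncSetAttribute(tileMmaS8,
                       cudaFuncAttributeMaxDynamicSharedMemorySize,
                       64 * 1024);
}
```

The full sample presumably stages input tiles through shared memory before issuing the MMA operations, which is why it raises the dynamic shared-memory limit before launch.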
## Key Concepts

Matrix Multiply, WMMA, Tensor Cores
## Supported SM Architectures

[SM 7.2](https://developer.nvidia.com/cuda-gpus) [SM 7.5](https://developer.nvidia.com/cuda-gpus) [SM 8.0](https://developer.nvidia.com/cuda-gpus)
## Supported OSes

Linux, Windows
## Supported CPU Architecture

x86_64, ppc64le, aarch64
## CUDA APIs involved

### [CUDA Runtime API](http://docs.nvidia.com/cuda/cuda-runtime-api/index.html)

cudaMallocManaged, cudaDeviceSynchronize, cudaFuncSetAttribute, cudaEventCreate, cudaEventRecord, cudaEventSynchronize, cudaEventElapsedTime, cudaFree
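As a rough, self-contained sketch of how these runtime calls typically fit together on the host side (the stub kernel, launch configuration, and 64 KB value below are placeholders, not the sample's actual code):

```
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the sample's GEMM kernel.
__global__ void gemmStub(int *out) { out[0] = 1; }

int main() {
  int *buf = nullptr;
  cudaMallocManaged(&buf, sizeof(int));              // unified-memory allocation

  // Raise the dynamic shared-memory limit before launching the kernel.
  cudaFuncSetAttribute(gemmStub, cudaFuncAttributeMaxDynamicSharedMemorySize,
                       64 * 1024);

  cudaEvent_t start, stop;                           // events used to time the kernel
  cudaEventCreate(&start);
  cudaEventCreate(&stop);

  cudaEventRecord(start);
  gemmStub<<<1, 1>>>(buf);
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);                        // wait for the 'stop' event

  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);

  cudaDeviceSynchronize();                           // flush device work before host reads managed memory
  printf("result=%d, kernel time=%.3f ms\n", buf[0], ms);

  cudaFree(buf);
  return 0;
}
```

This event pattern is how the sample measures the kernel time it reports.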
## Prerequisites

Download and install the [CUDA Toolkit 11.0](https://developer.nvidia.com/cuda-downloads) for your corresponding platform.
## Build and Run

### Windows

The Windows samples are built using the Visual Studio IDE. Solution files (.sln) are provided for each supported version of Visual Studio, using the format:
```
*_vs<version>.sln - for Visual Studio <version>
```

Each individual sample has its own set of solution files in its directory:

To build/examine all the samples at once, the complete solution files should be used. To build/examine a single sample, the individual sample solution files should be used.

> **Note:** Some samples require that the Microsoft DirectX SDK (June 2010 or newer) be installed and that the VC++ directory paths are properly set up (**Tools > Options...**). Check the DirectX Dependencies section for details.
### Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

```
$ cd <sample_dir>
$ make
```
The samples makefiles can take advantage of certain options:

* **TARGET_ARCH=<arch>** - cross-compile targeting a specific architecture. Allowed architectures are x86_64, ppc64le, aarch64.

    By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.<br/>
    `$ make TARGET_ARCH=x86_64` <br/> `$ make TARGET_ARCH=ppc64le` <br/> `$ make TARGET_ARCH=aarch64` <br/>
    See [here](http://docs.nvidia.com/cuda/cuda-samples/index.html#cross-samples) for more details.

* **dbg=1** - build with debug symbols

    ```
    $ make dbg=1
    ```

* **SMS="A B ..."** - override the SM architectures for which the sample will be built, where `"A B ..."` is a space-delimited list of SM architectures. For example, to generate SASS for SM 50 and SM 60, use `SMS="50 60"`.

    ```
    $ make SMS="50 60"
    ```

* **HOST_COMPILER=<host_compiler>** - override the default g++ host compiler. See the [Linux Installation Guide](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements) for a list of supported host compilers.

    ```
    $ make HOST_COMPILER=g++
    ```

## References (for more details)