tf32TensorCoreGemm - tf32 Tensor Core GEMM

Description

This CUDA sample demonstrates a tf32 (e8m10) GEMM computation using the Warp Matrix Multiply and Accumulate (WMMA) API introduced in CUDA 11, which targets the Tensor Cores of the Ampere GPU family for faster matrix operations. The sample also uses the asynchronous copy feature of the CUDA pipeline interface for global memory to shared memory loads, which improves kernel performance and reduces register pressure.
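
For orientation, below is a minimal device-side sketch of the tf32 WMMA pattern the kernel is built around. It is not the sample's full kernel (which tiles the matrices through shared memory using cuda::pipeline async copies); the kernel name, matrix layouts, and leading dimensions are illustrative assumptions, and one warp computes a single 16x16 output tile. Compilation requires -arch=sm_80 or newer.

    /*
     * Minimal tf32 WMMA sketch (illustrative, not the sample's full kernel):
     * one warp computes a single 16x16 tile of C = A * B.
     */
    #include <mma.h>

    using namespace nvcuda;

    // tf32 WMMA tile shape on Ampere Tensor Cores: M x N x K = 16 x 16 x 8.
    constexpr int WMMA_M = 16;
    constexpr int WMMA_N = 16;
    constexpr int WMMA_K = 8;

    // Hypothetical single-warp kernel; A is row-major, B is column-major.
    __global__ void tf32_wmma_tile(const float *A, const float *B, float *C,
                                   int lda, int ldb, int ldc) {
      wmma::fragment<wmma::matrix_a, WMMA_M, WMMA_N, WMMA_K,
                     wmma::precision::tf32, wmma::row_major> a_frag;
      wmma::fragment<wmma::matrix_b, WMMA_M, WMMA_N, WMMA_K,
                     wmma::precision::tf32, wmma::col_major> b_frag;
      wmma::fragment<wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, float> c_frag;

      wmma::fill_fragment(c_frag, 0.0f);

      // Load fp32 data, then round it to tf32 (e8m10) before the MMA.
      wmma::load_matrix_sync(a_frag, A, lda);
      wmma::load_matrix_sync(b_frag, B, ldb);
      for (int i = 0; i < a_frag.num_elements; i++)
        a_frag.x[i] = wmma::__float_to_tf32(a_frag.x[i]);
      for (int i = 0; i < b_frag.num_elements; i++)
        b_frag.x[i] = wmma::__float_to_tf32(b_frag.x[i]);

      // Tensor Core matrix multiply-accumulate with fp32 accumulation.
      wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);

      wmma::store_matrix_sync(C, c_frag, ldc, wmma::mem_row_major);
    }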

Key Concepts

Matrix Multiply, WMMA, Tensor Cores

Supported SM Architectures

SM 8.0 SM 8.6

Supported OSes

Linux, Windows

Supported CPU Architectures

x86_64, ppc64le, aarch64

CUDA APIs involved

CUDA Runtime API

cudaMalloc, cudaDeviceSynchronize, cudaFuncSetAttribute, cudaEventCreate, cudaEventRecord, cudaEventSynchronize, cudaEventElapsedTime, cudaFree
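
As a rough illustration of how these runtime calls fit together on the host side, here is a hedged sketch: allocate device buffers, opt the kernel into a larger dynamic shared memory carve-out with cudaFuncSetAttribute, and time the launch with CUDA events. The kernel name, matrix size, launch configuration, and shared memory amount are illustrative assumptions rather than the sample's exact values.

    /*
     * Hedged host-side sketch of the runtime calls listed above; the kernel
     * name, sizes, and launch configuration are illustrative assumptions.
     */
    #include <cstdio>
    #include <cuda_runtime.h>

    // Stand-in for the sample's tf32 WMMA GEMM kernel (body omitted).
    __global__ void tf32gemm_async_copy(const float *A, const float *B, float *C) {
      extern __shared__ float shmem[];  // dynamic shared memory staging buffer
      (void)shmem; (void)A; (void)B; (void)C;
    }

    int main() {
      const size_t bytes = 4096ull * 4096ull * sizeof(float);  // assumed matrix size
      const int shmemBytes = 64 * 1024;                        // assumed opt-in size

      float *d_A, *d_B, *d_C;
      cudaMalloc(&d_A, bytes);
      cudaMalloc(&d_B, bytes);
      cudaMalloc(&d_C, bytes);

      // Opt the kernel into more than the default 48 KB of dynamic shared memory.
      cudaFuncSetAttribute(tf32gemm_async_copy,
                           cudaFuncAttributeMaxDynamicSharedMemorySize, shmemBytes);

      cudaEvent_t start, stop;
      cudaEventCreate(&start);
      cudaEventCreate(&stop);

      cudaEventRecord(start);
      tf32gemm_async_copy<<<128, 256, shmemBytes>>>(d_A, d_B, d_C);
      cudaEventRecord(stop);

      // Wait for the stop event, then read back the elapsed kernel time.
      cudaEventSynchronize(stop);
      float ms = 0.0f;
      cudaEventElapsedTime(&ms, start, stop);
      printf("GEMM kernel time: %.3f ms\n", ms);

      cudaDeviceSynchronize();
      cudaEventDestroy(start);
      cudaEventDestroy(stop);
      cudaFree(d_A);
      cudaFree(d_B);
      cudaFree(d_C);
      return 0;
    }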

Prerequisites

Download and install the CUDA Toolkit 11.1 for your corresponding platform.

Build and Run

Windows

The Windows samples are built using the Visual Studio IDE. Solution files (.sln) are provided for each supported version of Visual Studio, using the format:

*_vs<version>.sln - for Visual Studio <version>

Each individual sample has its own set of solution files in its directory:

To build/examine all the samples at once, use the complete solution files; to build/examine a single sample, use that sample's individual solution files.

Note: Some samples require that the Microsoft DirectX SDK (June 2010 or newer) be installed and that the VC++ directory paths are properly set up (Tools > Options...). Check the DirectX Dependencies section for details.

Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

$ cd <sample_dir>
$ make

The samples' makefiles can take advantage of certain options:

  • TARGET_ARCH= - cross-compile targeting a specific architecture. Allowed architectures are x86_64, ppc64le, aarch64. By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.
    $ make TARGET_ARCH=x86_64
    $ make TARGET_ARCH=ppc64le
    $ make TARGET_ARCH=aarch64
    See here for more details.

  • dbg=1 - build with debug symbols

    $ make dbg=1
    
  • SMS="A B ..." - override the SM architectures for which the sample will be built, where "A B ..." is a space-delimited list of SM architectures. For example, to generate SASS for SM 80 and SM 86, use SMS="80 86".

    $ make SMS="80 86"
    
  • HOST_COMPILER=<host_compiler> - override the default g++ host compiler. See the Linux Installation Guide for a list of supported host compilers.

    $ make HOST_COMPILER=g++

References (for more details)