cudaTensorCoreGemm - CUDA Tensor Core GEMM

Description

CUDA sample demonstrating a GEMM computation using the Warp Matrix Multiply and Accumulate (WMMA) API introduced in CUDA 9.

This sample demonstrates the use of the new CUDA WMMA API employing the Tensor Cores introduced in the Volta chip family for faster matrix operations.

In addition, it demonstrates the use of the new CUDA function attribute cudaFuncAttributeMaxDynamicSharedMemorySize, which allows the application to reserve more dynamic shared memory per block than is available by default.
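The core pattern is sketched below. This is a minimal, illustrative toy rather than the sample's actual GEMM kernel: a single warp multiplies one 16x16x16 tile with half-precision inputs and a float accumulator, and the kernel is opted in to extra dynamic shared memory even though the toy does not use it. The kernel name, tile setup, and the 64 KB figure are assumptions for illustration only; WMMA requires compute capability 7.0 or higher (e.g. nvcc -arch=sm_70).

    // wmma_sketch.cu -- illustrative sketch, not the sample itself.
    // Build with: nvcc -arch=sm_70 wmma_sketch.cu
    #include <cuda_fp16.h>
    #include <mma.h>
    #include <cstdio>

    using namespace nvcuda;

    // One warp computes a single 16x16x16 tile: D = A * B + C.
    // The real sample tiles a much larger GEMM and stages operands in shared memory.
    __global__ void wmma_tile(const half *a, const half *b, const float *c, float *d) {
      wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
      wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
      wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

      wmma::load_matrix_sync(a_frag, a, 16);              // leading dimension 16
      wmma::load_matrix_sync(b_frag, b, 16);
      wmma::load_matrix_sync(acc_frag, c, 16, wmma::mem_row_major);

      wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag); // runs on Tensor Cores

      wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
    }

    int main() {
      half *a, *b;
      float *c, *d;
      cudaMallocManaged(&a, 16 * 16 * sizeof(half));
      cudaMallocManaged(&b, 16 * 16 * sizeof(half));
      cudaMallocManaged(&c, 16 * 16 * sizeof(float));
      cudaMallocManaged(&d, 16 * 16 * sizeof(float));
      for (int i = 0; i < 16 * 16; ++i) {
        a[i] = __float2half(1.0f);
        b[i] = __float2half(1.0f);
        c[i] = 0.0f;
      }

      // Opt the kernel in to more dynamic shared memory than the 48 KB default.
      // This toy kernel does not actually touch it; the sample's GEMM kernel
      // uses the extra space to stage tiles of A, B, and C.
      int shmem_bytes = 64 * 1024;                        // illustrative size
      cudaFuncSetAttribute(wmma_tile, cudaFuncAttributeMaxDynamicSharedMemorySize,
                           shmem_bytes);

      wmma_tile<<<1, 32, shmem_bytes>>>(a, b, c, d);      // one warp
      cudaDeviceSynchronize();

      printf("d[0] = %f (expected 16.0)\n", d[0]);        // dot product of 16 ones
      cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(d);
      return 0;
    }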

Key Concepts

Matrix Multiply, WMMA, Tensor Cores

Supported SM Architectures

SM 7.0, SM 7.5

Supported OSes

Linux, Windows

Supported CPU Architectures

x86_64, ppc64le

CUDA APIs involved

CUDA Runtime API

cudaMallocManaged, cudaDeviceSynchronize, cudaFuncSetAttribute, cudaEventCreate, cudaEventRecord, cudaEventSynchronize, cudaEventElapsedTime, cudaFree
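
The event-based timing pattern the sample uses to report kernel performance looks roughly like the sketch below. The dummy_kernel, its launch configuration, and the buffer size are placeholders; the sample times its GEMM kernel the same way.

    // event_timing.cu -- sketch of the CUDA event timing pattern; the kernel is a stand-in.
    #include <cstdio>

    __global__ void dummy_kernel(float *x, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) x[i] = 2.0f * x[i] + 1.0f;
    }

    int main() {
      const int n = 1 << 20;
      float *x;
      cudaMallocManaged(&x, n * sizeof(float));   // unified memory, as in the sample
      for (int i = 0; i < n; ++i) x[i] = 1.0f;

      cudaEvent_t start, stop;
      cudaEventCreate(&start);
      cudaEventCreate(&stop);

      cudaEventRecord(start);
      dummy_kernel<<<(n + 255) / 256, 256>>>(x, n);
      cudaEventRecord(stop);

      cudaEventSynchronize(stop);                 // block until the kernel has finished
      float ms = 0.0f;
      cudaEventElapsedTime(&ms, start, stop);     // elapsed GPU time in milliseconds
      printf("kernel time: %.3f ms\n", ms);

      cudaEventDestroy(start);
      cudaEventDestroy(stop);
      cudaFree(x);
      return 0;
    }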

Prerequisites

Download and install the CUDA Toolkit 10.0 for your corresponding platform.

Build and Run

Windows

The Windows samples are built using the Visual Studio IDE. Solution files (.sln) are provided for each supported version of Visual Studio, using the format:

*_vs<version>.sln - for Visual Studio <version>

Each individual sample has its own set of solution files in its directory.

To build/examine all the samples at once, the complete solution files should be used. To build/examine a single sample, the individual sample solution files should be used.

Note: Some samples require that the Microsoft DirectX SDK (June 2010 or newer) be installed and that the VC++ directory paths are properly set up (Tools > Options...). Check the DirectX Dependencies section for details.

Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

$ cd <sample_dir>
$ make

The samples' makefiles can take advantage of certain options:

  • TARGET_ARCH=<arch> - cross-compile targeting a specific architecture. Allowed architectures are x86_64 and ppc64le. By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.
    $ make TARGET_ARCH=x86_64
    $ make TARGET_ARCH=ppc64le
    See here for more details.

  • dbg=1 - build with debug symbols

    $ make dbg=1
    
  • SMS="A B ..." - override the SM architectures for which the sample will be built, where "A B ..." is a space-delimited list of SM architectures. For example, to generate SASS for SM 50 and SM 60, use SMS="50 60".

    $ make SMS="50 60"
    
  • HOST_COMPILER=<host_compiler> - override the default g++ host compiler. See the Linux Installation Guide for a list of supported host compilers.

    $ make HOST_COMPILER=g++

References (for more details)