# dmmaTensorCoreGemm - Double Precision Tensor Core GEMM

## Description

This CUDA sample demonstrates double-precision GEMM using the double-precision Warp Matrix Multiply-Accumulate (WMMA) API, introduced in CUDA 11 for the Tensor Cores of the NVIDIA Ampere GPU architecture, to accelerate matrix operations. The sample also uses the asynchronous copy feature, exposed through the CUDA pipeline interface, for global-to-shared-memory loads, which improves kernel performance and reduces register pressure. In addition, it demonstrates how to use the cooperative groups asynchronous copy interface to perform group-wide global-to-shared-memory loads. A minimal single-tile sketch of these APIs is included at the end of this README.

## Key Concepts

Matrix Multiply, WMMA, Tensor Cores

## Supported SM Architectures

[SM 8.0](https://developer.nvidia.com/cuda-gpus) [SM 8.6](https://developer.nvidia.com/cuda-gpus)

## Supported OSes

Linux, Windows

## Supported CPU Architectures

x86_64, ppc64le, aarch64

## CUDA APIs involved

### [CUDA Runtime API](http://docs.nvidia.com/cuda/cuda-runtime-api/index.html)

cudaMallocManaged, cudaDeviceSynchronize, cudaFuncSetAttribute, cudaEventCreate, cudaEventRecord, cudaEventSynchronize, cudaEventElapsedTime, cudaFree

## Prerequisites

Download and install the [CUDA Toolkit 11.1](https://developer.nvidia.com/cuda-downloads) for your corresponding platform.

## Build and Run

### Windows

The Windows samples are built using the Visual Studio IDE. Solution files (.sln) are provided for each supported version of Visual Studio, using the format:

```
*_vs<version>.sln - for Visual Studio <version>
```

Each individual sample has its own set of solution files in its directory:

To build/examine all the samples at once, the complete solution files should be used. To build/examine a single sample, the individual sample solution files should be used.

> **Note:** Some samples require that the Microsoft DirectX SDK (June 2010 or newer) be installed and that the VC++ directory paths are properly set up (**Tools > Options...**). Check the DirectX Dependencies section for details.

### Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

```
$ cd <sample_dir>
$ make
```

The samples makefiles can take advantage of certain options:

* **TARGET_ARCH=<arch>** - cross-compile targeting a specific architecture. Allowed architectures are x86_64, ppc64le, aarch64. By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.
  ```
  $ make TARGET_ARCH=x86_64
  $ make TARGET_ARCH=ppc64le
  $ make TARGET_ARCH=aarch64
  ```
  See [here](http://docs.nvidia.com/cuda/cuda-samples/index.html#cross-samples) for more details.

* **dbg=1** - build with debug symbols

  ```
  $ make dbg=1
  ```

* **SMS="A B ..."** - override the SM architectures for which the sample will be built, where `"A B ..."` is a space-delimited list of SM architectures. For example, to generate SASS for SM 80 and SM 86, use `SMS="80 86"`.

  ```
  $ make SMS="80 86"
  ```

* **HOST_COMPILER=<HOST_COMPILER>** - override the default g++ host compiler. See the [Linux Installation Guide](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements) for a list of supported host compilers.

  ```
  $ make HOST_COMPILER=g++
  ```
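As a rough illustration of the techniques described above, the sketch below shows a double-precision WMMA multiply-accumulate on a single 8x8 tile whose inputs are staged in shared memory with a group-wide `cooperative_groups::memcpy_async`. It is a minimal, hypothetical example, not the sample's optimized kernel: the kernel name `dmma_tile`, the buffer names, and the build command are assumptions, and it omits the larger-matrix tiling and the CUDA-pipeline-based asynchronous copies used by the full sample.

```
// dmma_sketch.cu - illustrative sketch only, not part of the sample's sources.
// Build (assumption): nvcc -arch=sm_80 -o dmma_sketch dmma_sketch.cu
#include <cstdio>
#include <mma.h>
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
#include <cuda_runtime.h>

using namespace nvcuda;
namespace cg = cooperative_groups;

// Double-precision WMMA tile shape on SM 8.0: M = 8, N = 8, K = 4.
constexpr int M = 8, N = 8, K = 4;

// One warp computes a single 8x8 double-precision tile of C = A * B.
__global__ void dmma_tile(const double *a, const double *b, double *c) {
  // Stage the A (8x4, row-major) and B (4x8, col-major) tiles in shared memory
  // with a group-wide asynchronous copy; the size argument is in bytes.
  // WMMA loads expect 256-bit (32-byte) aligned pointers.
  __shared__ alignas(32) double s_a[M * K];
  __shared__ alignas(32) double s_b[K * N];

  cg::thread_block block = cg::this_thread_block();
  cg::memcpy_async(block, s_a, a, sizeof(double) * M * K);
  cg::memcpy_async(block, s_b, b, sizeof(double) * K * N);
  cg::wait(block);  // wait until both copies have landed in shared memory

  // Per-warp WMMA fragments for A, B, and the double-precision accumulator.
  wmma::fragment<wmma::matrix_a, M, N, K, double, wmma::row_major> a_frag;
  wmma::fragment<wmma::matrix_b, M, N, K, double, wmma::col_major> b_frag;
  wmma::fragment<wmma::accumulator, M, N, K, double> acc_frag;

  wmma::fill_fragment(acc_frag, 0.0);
  wmma::load_matrix_sync(a_frag, s_a, K);  // leading dimension of row-major A
  wmma::load_matrix_sync(b_frag, s_b, K);  // leading dimension of col-major B
  wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);

  // Write the 8x8 result tile back to global memory in row-major order.
  wmma::store_matrix_sync(c, acc_frag, N, wmma::mem_row_major);
}

int main() {
  double *a, *b, *c;
  cudaMallocManaged(&a, M * K * sizeof(double));
  cudaMallocManaged(&b, K * N * sizeof(double));
  cudaMallocManaged(&c, M * N * sizeof(double));
  for (int i = 0; i < M * K; ++i) a[i] = 1.0;
  for (int i = 0; i < K * N; ++i) b[i] = 2.0;

  dmma_tile<<<1, 32>>>(a, b, c);  // one warp is enough for a single tile
  cudaDeviceSynchronize();

  // Every element of C should equal K * 1.0 * 2.0 = 8.0.
  printf("c[0] = %f\n", c[0]);

  cudaFree(a);
  cudaFree(b);
  cudaFree(c);
  return 0;
}
```

Compiling the sketch requires targeting SM 8.0 or later (for example with `nvcc -arch=sm_80`), since double-precision WMMA is only available on Ampere-class Tensor Cores.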