# reduction - CUDA Parallel Reduction

## Description

A parallel sum reduction that computes the sum of a large array of values. This sample demonstrates several important optimization strategies for Data-Parallel Algorithms like reduction. A minimal sketch of the basic shared-memory technique is included at the end of this README.

## Key Concepts

Data-Parallel Algorithms, Performance Strategies

## Supported SM Architectures

[SM 3.0 ](https://developer.nvidia.com/cuda-gpus) [SM 3.5 ](https://developer.nvidia.com/cuda-gpus) [SM 3.7 ](https://developer.nvidia.com/cuda-gpus) [SM 5.0 ](https://developer.nvidia.com/cuda-gpus) [SM 5.2 ](https://developer.nvidia.com/cuda-gpus) [SM 6.0 ](https://developer.nvidia.com/cuda-gpus) [SM 6.1 ](https://developer.nvidia.com/cuda-gpus) [SM 7.0 ](https://developer.nvidia.com/cuda-gpus) [SM 7.2 ](https://developer.nvidia.com/cuda-gpus) [SM 7.5 ](https://developer.nvidia.com/cuda-gpus)

## Supported OSes

Linux, Windows, MacOSX

## Supported CPU Architecture

x86_64, ppc64le, armv7l

## CUDA APIs involved

## Prerequisites

Download and install the [CUDA Toolkit 10.1](https://developer.nvidia.com/cuda-downloads) for your corresponding platform.

## Build and Run

### Windows

The Windows samples are built using the Visual Studio IDE. Solution files (.sln) are provided for each supported version of Visual Studio, using the format:

```
*_vs<version>.sln - for Visual Studio <version>
```

Each individual sample has its own set of solution files in its directory.

To build/examine all the samples at once, the complete solution files should be used. To build/examine a single sample, the individual sample solution files should be used.

> **Note:** Some samples require that the Microsoft DirectX SDK (June 2010 or newer) be installed and that the VC++ directory paths are properly set up (**Tools > Options...**). Check the DirectX Dependencies section for details.

### Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

```
$ cd <sample_dir>
$ make
```

The samples' makefiles can take advantage of certain options:

* **TARGET_ARCH=<arch>** - cross-compile targeting a specific architecture. Allowed architectures are x86_64, ppc64le, armv7l. By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.
`$ make TARGET_ARCH=x86_64`
`$ make TARGET_ARCH=ppc64le`
`$ make TARGET_ARCH=armv7l`
See [here](http://docs.nvidia.com/cuda/cuda-samples/index.html#cross-samples) for more details.

* **dbg=1** - build with debug symbols
    ```
    $ make dbg=1
    ```
* **SMS="A B ..."** - override the SM architectures for which the sample will be built, where `"A B ..."` is a space-delimited list of SM architectures. For example, to generate SASS for SM 50 and SM 60, use `SMS="50 60"`.
    ```
    $ make SMS="50 60"
    ```
* **HOST_COMPILER=<host_compiler>** - override the default g++ host compiler. See the [Linux Installation Guide](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements) for a list of supported host compilers.
    ```
    $ make HOST_COMPILER=g++
    ```

### Mac

The Mac samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

```
$ cd <sample_dir>
$ make
```

The samples' makefiles can take advantage of certain options:

* **dbg=1** - build with debug symbols
    ```
    $ make dbg=1
    ```
* **SMS="A B ..."** - override the SM architectures for which the sample will be built, where `"A B ..."` is a space-delimited list of SM architectures. For example, to generate SASS for SM 50 and SM 60, use `SMS="50 60"`.
    ```
    $ make SMS="50 60"
    ```
* **HOST_COMPILER=<host_compiler>** - override the default clang host compiler. See the [Mac Installation Guide](http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html#system-requirements) for a list of supported host compilers.
    ```
    $ make HOST_COMPILER=clang
    ```

## References (for more details)
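## Reduction Kernel Sketch

The sample's own kernels implement a series of progressively optimized reduction variants; the code below is only a minimal, illustrative sketch of the basic shared-memory tree reduction that those optimizations build on. The kernel name `reduceSum`, the block size, and the host-side driver are assumptions for illustration and are not taken from the sample's sources.

```
// Minimal shared-memory sum reduction sketch (illustrative only; the sample
// itself ships several progressively optimized kernel variants).
#include <cstdio>
#include <cuda_runtime.h>

// Each block reduces BLOCK_SIZE elements to one partial sum in shared memory
// using sequential addressing, then writes that partial sum to g_odata.
template <unsigned int BLOCK_SIZE>
__global__ void reduceSum(const float *g_idata, float *g_odata, unsigned int n)
{
    __shared__ float sdata[BLOCK_SIZE];

    unsigned int tid = threadIdx.x;
    unsigned int i   = blockIdx.x * BLOCK_SIZE + threadIdx.x;

    // Load one element per thread (0 for out-of-range threads).
    sdata[tid] = (i < n) ? g_idata[i] : 0.0f;
    __syncthreads();

    // Tree reduction in shared memory: halve the active threads each step.
    for (unsigned int s = BLOCK_SIZE / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum.
    if (tid == 0)
        g_odata[blockIdx.x] = sdata[0];
}

int main()
{
    const unsigned int N     = 1 << 20;
    const unsigned int BLOCK = 256;
    const unsigned int GRID  = (N + BLOCK - 1) / BLOCK;

    // Error checking is omitted here for brevity.
    float *d_in, *d_partial;
    cudaMalloc(&d_in, N * sizeof(float));
    cudaMalloc(&d_partial, GRID * sizeof(float));

    // Fill the input with ones so the expected sum is N.
    float *h_in = new float[N];
    for (unsigned int i = 0; i < N; ++i) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);

    // First pass: one partial sum per block.
    reduceSum<BLOCK><<<GRID, BLOCK>>>(d_in, d_partial, N);

    // Finish the small array of partial sums on the host for simplicity.
    float *h_partial = new float[GRID];
    cudaMemcpy(h_partial, d_partial, GRID * sizeof(float), cudaMemcpyDeviceToHost);

    double sum = 0.0;
    for (unsigned int i = 0; i < GRID; ++i) sum += h_partial[i];
    printf("sum = %.0f (expected %u)\n", sum, N);

    delete[] h_in;
    delete[] h_partial;
    cudaFree(d_in);
    cudaFree(d_partial);
    return 0;
}
```

In this sketch the final pass over the per-block partial sums is done on the host; a full implementation would typically finish the reduction on the GPU with one or more additional kernel launches instead.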