High-performance Tensor Contractions for GPUs - Laboratoire Interdisciplinaire des Sciences du Numérique
Conference paper, 2016

High-performance Tensor Contractions for GPUs


We present a computational framework for high-performance tensor contractions on GPUs. High performance is difficult to obtain with existing libraries, especially for many independent contractions where each contraction is very small, e.g., sub-vector/warp in size. However, by using our framework to batch contractions and exploit application-specific knowledge, we demonstrate close to peak performance. In particular, to accelerate large-scale tensor-formulated high-order finite element method (FEM) simulations, which are the main focus and motivation for this work, we represent contractions as tensor index reorderings plus matrix-matrix multiplications (GEMMs). This is a key factor in achieving algorithmically many-fold acceleration, compared with not doing so, due to the possible reuse of data loaded in fast memory. In addition to using this contextual knowledge, we design tensor data structures, tensor algebra interfaces, and new tensor contraction algorithms and implementations that achieve 90+% of a theoretically derived peak on GPUs. For example, on a K40c GPU, for contractions resulting in GEMMs on square matrices of size 8, we are 2.8× faster than CUBLAS, and 8.5× faster than MKL on 16 cores of Intel Xeon E5-2670 (Sandy Bridge) 2.60 GHz CPUs. Finally, we apply autotuning and code generation techniques to simplify tuning and provide an architecture-aware, user-friendly interface.
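The central idea of the abstract, casting a tensor contraction as an index reordering plus a single GEMM, can be sketched in a few lines of numpy. This is an illustrative example, not the paper's actual framework or API; the contraction, index names, and sizes (8, matching the GEMM size quoted above) are chosen here for demonstration.

```python
import numpy as np

# Contraction C[i,j,k] = sum_l A[i,l] * B[l,j,k], expressed as a reordering
# (flattening B's free indices j,k into one dimension) followed by one GEMM.
rng = np.random.default_rng(0)
I, L, J, K = 8, 8, 8, 8          # small sizes, as in the batched-GEMM setting
A = rng.standard_normal((I, L))
B = rng.standard_normal((L, J, K))

# Reordering + GEMM: reshape B to (L, J*K), multiply, reshape the result back.
C_gemm = (A @ B.reshape(L, J * K)).reshape(I, J, K)

# Direct contraction via einsum, used here only to verify the GEMM formulation.
C_ref = np.einsum('il,ljk->ijk', A, B)
assert np.allclose(C_gemm, C_ref)
```

In the paper's FEM setting, many such small contractions are performed independently, which is why batching them into grouped GEMMs, rather than launching each tiny multiplication separately, is what recovers near-peak throughput.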
Dates and versions

hal-01409251 , version 1 (05-12-2016)



Ahmad Abdelfattah, Marc Baboulin, Veselin Dobrev, Jack J. Dongarra, Christopher Earl, et al. High-performance Tensor Contractions for GPUs. International Conference on Computational Science 2016 (ICCS 2016), Jun 2016, San Diego, CA, United States. pp. 108-118, ⟨10.1016/j.procs.2016.05.302⟩. ⟨hal-01409251⟩