Review of LLVM Compiler Architecture Enhancements for CUDA

Authors

  • Munesh Singh Chauhan

Keywords

Graphical Processing Unit, Compute Unified Device Architecture, LLVM Compiler

Abstract

Heterogeneous platforms are becoming increasingly omnipresent owing to the availability of multicore processors at commodity prices. To benefit from the immense parallel capability of these processors, more and more applications are being developed for, or ported to, the CUDA framework that programs these cores. It therefore becomes imperative to devise new compiler paradigms that cater to varied languages and provide easy, flexible multicore programming. LLVM compilers have traditionally been the bedrock of such endeavors. New runtime processes are being researched to make the CUDA platform more amenable to supporting the widest possible set of language architectures. An analysis is made of the advantages and the accompanying pitfalls of various linking techniques.

References

Chris Lattner and Vikram Adve, LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation, Proceedings of the International Symposium on Code Generation and Optimization (CGO), 2004. University of Illinois at Urbana-Champaign.

Yuan Lin, Building GPU Compilers with libNVVM, GPU Technology Conference.

The LLVM Compiler Infrastructure, http://llvm.org.

Xiang Zhang and Aaron Brewbaker, How to Design a Language Integrated Compiler with LLVM, GPU Technology Conference (GTC) 2014, San Jose, California, March 27, 2014.

Dmitry Mikushin, Nikolay Likhogrud, and Eddy Z. Zhang, KERNELGEN – The Design and Implementation of a Next Generation Compiler Platform for Accelerating Numerical Models on GPUs, 2014 IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW).

Adam DeConinck (HPC Systems Engineer, NVIDIA), Introduction to CUDA Toolkit for Building Applications.

How to Cite

Chauhan, M. S. (2016). Review of LLVM Compiler Architecture Enhancements for CUDA. Asian Journal of Computer and Information Systems, 4(1). Retrieved from https://www.ajouronline.com/index.php/AJCIS/article/view/3623