Writing your own code

Programming Resources

Introduction to various programming languages and libraries

Table of Contents: Python · R · Julia · Chapel · MPI · OpenMP · Debugging and memory debugging · MATLAB


Python

We give many Python-based workshops – you can find quite a few of them in the visualization section.

“Working with the Python DASK library”

Webinar (2019-Oct-16) by Chris Want
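
As a taste of what this webinar covers, here is a minimal sketch of lazy task-graph computing with `dask.delayed` (it assumes the dask library is installed; the `square` function is just an illustration, not from the webinar):

```python
# Minimal dask.delayed sketch: build a task graph lazily, then execute it.
from dask import delayed

@delayed
def square(x):
    # Each call records a task in the graph instead of running immediately.
    return x * x

# Four independent squares feeding one sum: a small directed acyclic graph.
total = delayed(sum)([square(i) for i in range(4)])

# Nothing has run yet; .compute() executes the graph, potentially in parallel.
print(total.compute())  # 0 + 1 + 4 + 9 = 14
```

Because the graph is built before execution, Dask can schedule independent tasks across threads, processes, or a cluster without changes to the code above.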

“Working with multidimensional datasets in xarray”

Webinar (2020-Sep-30) by Alex Razoumov
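
The core idea of xarray, covered in this webinar, is attaching dimension names and coordinate labels to NumPy arrays, so operations refer to dimensions by name rather than by axis number. A small sketch (assumes xarray and numpy are installed; the station/time data is invented for illustration):

```python
# Labelled multidimensional data with xarray: dimensions have names,
# coordinates have labels.
import numpy as np
import xarray as xr

temps = xr.DataArray(
    np.array([[10.0, 12.0, 14.0],
              [11.0, 13.0, 15.0]]),
    dims=("station", "time"),
    coords={"station": ["a", "b"], "time": [0, 1, 2]},
)

# Reduce over a named dimension instead of a positional axis.
mean_over_time = temps.mean(dim="time")

# Select by coordinate label instead of integer index.
print(float(mean_over_time.sel(station="a")))  # 12.0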


R

“Introduction to high-performance research computing in R”

Webinar (2023-Jan-31) by Marie-Hélène Burle

The programming language R is not known for its speed. However, with some code optimization, R can be used for relatively heavy computations. Additional speedup can be achieved through various parallel techniques, both with multi-threading and distributed computing. This workshop introduces you to working with R from the command line on the Alliance clusters with a focus on performance. We discuss code profiling and benchmarking, various packages for parallelization, and calling C++ from inside R to speed up your calculations.


Julia

(You can also browse some of our Julia programming materials here.)

“Julia at full tilt: profiling and optimizations”

Webinar (2024-Apr-30) by Paul Schrimpf

“ThreadsX.jl: easier multithreading in Julia”

Webinar (2022-Feb-02) by Alex Razoumov

“Easier parallel Julia workflow with Dagger.jl”

Webinar (2021-Oct-27) by Marie-Hélène Burle

Designed specifically for HPC and inspired by the Python library Dask, Dagger is a distributed framework with a scheduler built on top of Distributed.jl for efficient parallel and out-of-core execution of tasks represented by a directed acyclic graph (DAG). Dagger supports computing with multiple threads, multiple processes, and on GPUs. Checkpoints are easy to create if you need to interrupt and resume computations. Finally, Dagger provides some debugging and runtime profiling tools.

“Parallel programming in Julia”

Webinar (2021-Mar-17) by Alex Razoumov and Marie-Hélène Burle

In this webinar, we start with a quick review of Julia’s multi-threading features but focus primarily on the Distributed standard library and its large set of tools. We demonstrate parallelization using three problems: a slowly converging series, a Julia set, and an N-body solver. We run the examples on a multi-core laptop and an HPC cluster.
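
The series example is a classic map-reduce: split the terms across workers, sum each chunk, then combine the partial sums. The webinar does this with Julia's Distributed.jl; as a language-neutral sketch of the same pattern, here is a Python version (using the Basel series for illustration, with threads standing in for worker processes; CPU-bound Python code would need processes to bypass the GIL):

```python
# Map-reduce parallelization of a series sum: each worker sums one chunk
# of terms, and the partial results are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(lo, hi):
    # Partial sum of sum(1/k^2) for k in [lo, hi).
    return sum(1.0 / (k * k) for k in range(lo, hi))

n, nworkers = 100_000, 4
# Split k = 1..n into nworkers contiguous, non-overlapping chunks.
bounds = [(1 + i * n // nworkers, 1 + (i + 1) * n // nworkers)
          for i in range(nworkers)]

with ThreadPoolExecutor(max_workers=nworkers) as pool:
    partials = list(pool.map(lambda b: chunk_sum(*b), bounds))

total = sum(partials)  # approaches pi^2/6 ≈ 1.6449 as n grows
```

The same split/sum/combine structure maps directly onto Distributed.jl's `pmap` and `@distributed (+)` reductions.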

“High-performance research computing with Julia”

Webinar (2020-Mar-04) by Marie-Hélène Burle


Chapel

“Intro to Parallel Programming in Chapel”

Webinar series by Alex Razoumov

In this three-part online webinar series, we introduce the main concepts of the Chapel parallel programming language. Chapel is a relatively new language for both shared- and distributed-memory programming, with easy-to-use, high-level features that make it ideal for novice HPC users learning parallel programming.

Unlike other high-level data-processing languages and workflows, the primary application of Chapel is numerical modelling and simulation codes, so this workshop is ideal for anyone who wants to learn how to write efficient large-scale numerical codes.

Part 1: Basic language features (2018-Feb-28)
Part 2: Task parallelism in Chapel (2018-Mar-07)
Part 3: Data parallelism in Chapel (2018-Mar-14)

“Computing graphs on an HPC cluster: working with distributed unstructured data in Chapel”

Webinar (2019-Apr-17) by Alex Razoumov

“Working with data files and external C libraries in Chapel”

Webinar (2020-Mar-18) by Alex Razoumov

HPC Carpentry Course

As part of their contribution to HPC Carpentry, WestGrid staff authored a Parallel programming in Chapel course. The materials and exercises in this course can be delivered as a full-day workshop. If you have questions about the materials, please contact Alex Razoumov - alex.razoumov@westgrid.ca.


MPI

“A Brief Introduction to the Boost MPI Library”

Webinar (2018-May-09) by Patrick Mann


OpenMP

“Intro to Parallel Programming for Shared Memory Machines”

Webinar (2019-Oct) by Ali Kerrache

This online workshop explores how to use OpenMP to speed up serial jobs on multi-core machines. We review how to add OpenMP constructs to a serial program so that it runs on multiple cores. Viewers are led through a series of hands-on, interactive examples focused on multi-threaded parallel programming.

The topics covered include:

  • Basic OpenMP operations
  • Loops
  • Reduction variables

View the slides

Debugging and memory debugging

“Memory debugging with Valgrind”

Webinar (2019-Feb-20) by Tyson Whitehead


MATLAB

“Data Analytics and Machine Learning with MATLAB”

Webinar (2018-Oct-31) by Sumit Tandon