std::linalg: Linear Algebra Coming to Standard C++ - Mark Hoemmen - CppCon 2023

https://cppcon.org/
---

std::linalg: Linear Algebra Coming to Standard C++ - Mark Hoemmen - CppCon 2023
https://github.com/CppCon/CppCon2023

Many fields depend on linear algebra computations, which include matrix-matrix and matrix-vector multiplies, triangular solves, dot products, and norms. It is hard to implement these fast and accurately for all kinds of number types and data layouts. Wouldn't it be nice if C++ had a built-in library for doing that? Wouldn't it be even nicer if this library used C++ idioms, instead of what developers have to do now: write nonportable, unsafe, verbose code to call into an optimized Fortran or C library?
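For context (not from the talk itself), the sketch below shows roughly what computing y = A*x through the C BLAS interface (CBLAS) looks like today. The cblas_dgemv call and its enums are standard CBLAS; the wrapper function around it is only illustrative. Raw pointers, leading dimensions, and strides must all be passed by hand, and nothing checks that they are consistent.

// Illustrative sketch: y = A * x via CBLAS, the status quo the abstract describes.
#include <cblas.h>

// Hypothetical wrapper: A is an n-by-n row-major matrix stored contiguously.
void apply(const double* A, const double* x, double* y, int n) {
  cblas_dgemv(CblasRowMajor, CblasNoTrans,
              n, n,    // matrix dimensions M, N
              1.0,     // alpha
              A, n,    // matrix data and leading dimension
              x, 1,    // input vector and stride
              0.0,     // beta (overwrite y)
              y, 1);   // output vector and stride
}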

The std::linalg library does just that. It uses the new C++23 feature mdspan to represent matrices and vectors. The library builds on the long history and solid theoretical foundation of the BLAS (Basic Linear Algebra Subprograms), a standard Fortran and C interface with many optimized implementations. The C++ Standard Committee is currently reviewing std::linalg for C++26. The library already has two implementations that work with C++17 or newer compilers and can take advantage of vendor-specific optimizations. Developers will see how std::linalg can make their C++ safer and more concise without sacrificing performance for use cases that existing BLAS libraries already optimize, while opening up new use cases and potential optimizations.
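As a taste of the API, here is a minimal sketch of the same y = A*x computation using mdspan views and std::linalg::matrix_vector_product. It assumes the C++26 header and namespace placement proposed in P1673; the stdBLAS reference implementation currently uses <experimental/linalg> and namespace std::experimental::linalg instead.

// Minimal sketch of y = A * x with std::linalg, assuming the proposed
// C++26 <linalg> header.
#include <linalg>
#include <mdspan>
#include <vector>

int main() {
  constexpr std::size_t n = 3;
  std::vector<double> A_storage(n * n, 1.0);  // 3x3 matrix of ones
  std::vector<double> x_storage(n, 2.0);      // input vector
  std::vector<double> y_storage(n, 0.0);      // output vector

  // Non-owning multidimensional views over the flat storage.
  std::mdspan A(A_storage.data(), n, n);
  std::mdspan x(x_storage.data(), n);
  std::mdspan y(y_storage.data(), n);

  // y = A * x: extents, layout, and element type travel with the views,
  // so there are no hand-written leading dimensions or strides.
  std::linalg::matrix_vector_product(A, x, y);
}

Because mdspan carries its extents and layout as part of its type, the same call site works for row-major, column-major, or strided data without changing the arguments.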
---

Mark Hoemmen

Mark Hoemmen is a C++ software developer with a background in parallel computing and numerical linear algebra. He joined NVIDIA in spring 2022, and works remotely from Albuquerque, New Mexico, USA. He contributes to various open-source C++ libraries, including CUTLASS, a CUDA C++ library implementing high-performance matrix-matrix multiplication (GEMM) and related computations.

Mark finished his PhD on "communication-avoiding" linear algebra algorithms in 2010. After that, he worked for ten years at Sandia National Laboratories, where he did research on communication-avoiding and fault-tolerant algorithms (not at the same time, thankfully) and contributed to several scientific computing software projects. He then spent two years at a private company, Stellar Science, before moving to NVIDIA.

Mark's preferred programming language is C++. He has been writing it professionally for 23 years and has been contributing to the C++ Standard (WG21) process since 2017. Mark is the main author of the C++ Standard Library proposal P1673, a linear algebra library based on the BLAS (Basic Linear Algebra Subprograms). He is also a coauthor of mdspan (which is in C++23), mdarray, and other related proposals. After C++, he feels most comfortable working in Python, C, Fortran, and MATLAB. Mark is familiar with several shared- and distributed-memory parallel programming models, and the interactions between them (e.g., CUDA and MPI).
---

Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
YouTube Channel Managed by Digital Medium Ltd: https://events.digital-medium.co.uk
---

Registration for CppCon: https://cppcon.org/registration/

#cppcon #cppprogramming #cpp
