From bf2ea1d8d446352c11f335eb7c2ca1f00e23d5dc Mon Sep 17 00:00:00 2001
From: Max Andriychuk
Date: Mon, 14 Jul 2025 15:48:53 +0200
Subject: [PATCH 1/2] Add Max's introductory blogpost

---
 _posts/2025-14-07-activty-analysis-cuda.md | 39 ++++++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 _posts/2025-14-07-activty-analysis-cuda.md

diff --git a/_posts/2025-14-07-activty-analysis-cuda.md b/_posts/2025-14-07-activty-analysis-cuda.md
new file mode 100644
index 0000000..97170eb
--- /dev/null
+++ b/_posts/2025-14-07-activty-analysis-cuda.md
@@ -0,0 +1,39 @@
+---
+title: "Activity analysis for reverse-mode differentiation of (CUDA) GPU kernels"
+layout: post
+excerpt: "A GSoC 2025 contributor project aiming to implement Activity Analysis for (CUDA) GPU kernels"
+sitemap: false
+author: Maksym Andriichuk
+permalink: blogs/2025_maksym_andriichuk_introduction_blog/
+banner_image: /images/blog/gsoc-banner.png
+date: 2025-07-14
+tags: gsoc c++ clang root auto-differentiation
+---
+
+### Introduction
+Hi! I’m Maksym Andriichuk, a third-year Mathematics student at JMU Wuerzburg. I am excited to be a part of the Clad team for this year's Google Summer of Code.
+
+### Project description
+My project focuses on removing atomic operations when differentiating CUDA kernels. Due to how reverse-mode differentiation works in Clad, writes to GPU global memory inside the gradinet of a kernel may race, so atomic operations are used instead of plain updates. However, in some cases we can guarantee that no data race occurs, which enables us to drop the atomic operations and drastically speed up the execution of the gradient.
+
+### Project goals
+The main goals of this project are:
+
+- Implement a mechanism to check whether data races occur in various scenarios.
+
+- Compare Clad with other tools on benchmarks uncluding RSBench and LULESH.
+
+### Implementation strategy
+- Solve minor CUDA-related issues to get familiar with the codebase.
+
+- Implement a series of visitors to distinguish between the different types of scenarious where atomic operations could be dropped.
+
+- Use the existing benchmarks to measure the speedup from the implemented analysis.
+
+### Conclusion
+
+By integrating such an analysis for (CUDA) GPU kernels, we aim to speed up the execution of the gradient by removing atomic operations where posiible. To declare success, we will compare Clad to other AD tools on different benchmarks. I am excited to be a part of the Clad team this summer and cannot wait to share my progress.
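+
+### A minimal example
+
+To make the problem from the project description concrete, here is a small hand-written sketch of an element-wise kernel together with a reverse pass for it. The kernel names and signatures are invented for illustration, and this is not the code Clad actually generates; the point is only that one of the two gradient updates is provably race-free while the other is not.
+
+```cuda
+// Forward kernel: y[i] = a * x[i].
+__global__ void scale(const double *x, double *y, double a, int n) {
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i < n) y[i] = a * x[i];
+}
+
+// Illustrative reverse pass: accumulates into x_grad and a_grad,
+// given the incoming y_grad.
+__global__ void scale_grad(const double *x, double a, const double *y_grad,
+                           double *x_grad, double *a_grad, int n) {
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i >= n) return;
+  // Each thread updates its own x_grad[i], so no two threads can touch
+  // the same location and the atomic update can be dropped:
+  x_grad[i] += a * y_grad[i];
+  // Every thread accumulates into the single scalar *a_grad, so this
+  // update genuinely needs the atomic (atomicAdd on double requires
+  // compute capability 6.0 or newer):
+  atomicAdd(a_grad, x[i] * y_grad[i]);
+}
+```
+
+An analysis that proves each thread writes a distinct `x_grad[i]` is what lets the first update stay a plain `+=`, while the accumulation into `*a_grad` must keep its `atomicAdd`.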
+
+### Related Links
+
+- [My GitHub profile](https://github.com/ovdiiuv)
\ No newline at end of file

From 34ce279fd50a4af3e93825daba83319f1074e6e2 Mon Sep 17 00:00:00 2001
From: Max Andriychuk
Date: Mon, 14 Jul 2025 21:51:40 +0200
Subject: [PATCH 2/2] Fix spelling

---
 .github/actions/spelling/allow/terms.txt                  | 1 +
 ...lysis-cuda.md => 2025-14-07-activity-analysis-cuda.md} | 8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)
 rename _posts/{2025-14-07-activty-analysis-cuda.md => 2025-14-07-activity-analysis-cuda.md} (87%)

diff --git a/.github/actions/spelling/allow/terms.txt b/.github/actions/spelling/allow/terms.txt
index a580bbf..8f93d2d 100644
--- a/.github/actions/spelling/allow/terms.txt
+++ b/.github/actions/spelling/allow/terms.txt
@@ -17,6 +17,7 @@ ICHEP
 IIT
 JIT'd
 Jacobians
+JMU
 Jurgaityt
 LHC
 LLMs
diff --git a/_posts/2025-14-07-activty-analysis-cuda.md b/_posts/2025-14-07-activity-analysis-cuda.md
similarity index 87%
rename from _posts/2025-14-07-activty-analysis-cuda.md
rename to _posts/2025-14-07-activity-analysis-cuda.md
index 97170eb..c6bf5dd 100644
--- a/_posts/2025-14-07-activty-analysis-cuda.md
+++ b/_posts/2025-14-07-activity-analysis-cuda.md
@@ -14,25 +14,25 @@ tags: gsoc c++ clang root auto-differentiation
 Hi! I’m Maksym Andriichuk, a third-year Mathematics student at JMU Wuerzburg. I am excited to be a part of the Clad team for this year's Google Summer of Code.
 
 ### Project description
-My project focuses on removing atomic operations when differentiating CUDA kernels. Due to how reverse-mode differentiation works in Clad, writes to GPU global memory inside the gradinet of a kernel may race, so atomic operations are used instead of plain updates. However, in some cases we can guarantee that no data race occurs, which enables us to drop the atomic operations and drastically speed up the execution of the gradient.
+My project focuses on removing atomic operations when differentiating CUDA kernels. Due to how reverse-mode differentiation works in Clad, writes to GPU global memory inside the gradient of a kernel may race, so atomic operations are used instead of plain updates. However, in some cases we can guarantee that no data race occurs, which enables us to drop the atomic operations and drastically speed up the execution of the gradient.
 
 ### Project goals
 The main goals of this project are:
 
 - Implement a mechanism to check whether data races occur in various scenarios.
 
-- Compare Clad with other tools on benchmarks uncluding RSBench and LULESH.
+- Compare Clad with other tools on benchmarks including RSBench and LULESH.
 
 ### Implementation strategy
 - Solve minor CUDA-related issues to get familiar with the codebase.
 
-- Implement a series of visitors to distinguish between the different types of scenarious where atomic operations could be dropped.
+- Implement a series of visitors to distinguish between the different types of scenarios where atomic operations could be dropped.
 
 - Use the existing benchmarks to measure the speedup from the implemented analysis.
 
 ### Conclusion
 
-By integrating such an analysis for (CUDA) GPU kernels, we aim to speed up the execution of the gradient by removing atomic operations where posiible. To declare success, we will compare Clad to other AD tools on different benchmarks. I am excited to be a part of the Clad team this summer and cannot wait to share my progress.
+By integrating such an analysis for (CUDA) GPU kernels, we aim to speed up the execution of the gradient by removing atomic operations where possible. To declare success, we will compare Clad to other AD tools on different benchmarks. I am excited to be a part of the Clad team this summer and cannot wait to share my progress.
 
 ### Related Links