Andrew Moa Blog Site

Matrix multiplication operation (Ⅲ) - using MPI parallel acceleration

MPI is a message-passing standard and is currently the most widely used parallel programming interface on high-performance computing clusters. MPI communicates through messages between processes and can use cores across multiple nodes for parallel computing, which OpenMP cannot do. MPI has several implementations on different platforms, such as MS-MPI and Intel MPI under Windows, and OpenMPI and MPICH under Linux.
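To make the programming model concrete, below is a minimal MPI skeleton in C; it is an illustrative sketch, not code from this post. Each process is launched separately, queries its rank and the total process count, and can then exchange messages with the others.

/* Minimal MPI skeleton (illustrative sketch, not the post's benchmark code).
 * Build:  mpicc hello_mpi.c -o hello_mpi
 * Run:    mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* set up the MPI environment      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (0..size-1)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes       */

    printf("Process %d of %d is running\n", rank, size);

    MPI_Finalize();                         /* shut down MPI cleanly           */
    return 0;
}

Compile with an MPI wrapper such as mpicc and launch with mpirun or mpiexec; each MPI process runs the same program and is distinguished only by its rank.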

1. MPI parallel acceleration of the loop calculation

1.1 C Implementation

An MPI program must initialize the MPI environment and set up its message-passing machinery; the arrays to be computed must also be partitioned and distributed to the different processes. With OpenMP and other parallel libraries these operations are handled internally, and the programmer does not need to care how they are implemented. With MPI, however, the programmer must manually manage the global storage and each process's local storage and control every message exchange, which undoubtedly adds extra learning cost.
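As an illustration of this manual partitioning, the sketch below is written under assumptions not taken from the post: square N x N matrices in row-major storage, with N divisible by the process count. Rank 0 broadcasts the full matrix B, scatters row blocks of A, each process multiplies its local block, and the partial results are gathered back.

/* Sketch: dividing the matrix product C = A*B across MPI processes.
 * Assumptions (not from the original post): square N x N matrices,
 * N divisible by the number of processes, row-major storage.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 512

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                       /* rows of A handled per process */
    double *a = NULL, *c = NULL;
    double *b       = malloc(N * N * sizeof(double));
    double *a_local = malloc(rows * N * sizeof(double));
    double *c_local = malloc(rows * N * sizeof(double));

    if (rank == 0) {                           /* rank 0 owns the full matrices */
        a = malloc(N * N * sizeof(double));
        c = malloc(N * N * sizeof(double));
        for (int i = 0; i < N * N; i++) { a[i] = 1.0; b[i] = 2.0; }
    }

    /* Broadcast B to every process, scatter row blocks of A. */
    MPI_Bcast(b, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Scatter(a, rows * N, MPI_DOUBLE, a_local, rows * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Each process multiplies only its own block of rows. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a_local[i * N + k] * b[k * N + j];
            c_local[i * N + j] = sum;
        }

    /* Gather the partial results back onto rank 0. */
    MPI_Gather(c_local, rows * N, MPI_DOUBLE, c, rows * N, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("c[0] = %f (expected %f)\n", c[0], 2.0 * N);

    free(b); free(a_local); free(c_local);
    if (rank == 0) { free(a); free(c); }
    MPI_Finalize();
    return 0;
}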

34 minutes to read
Andrew Moa

Matrix multiplication operation (I) - using OpenMP to speed up loop calculation

Speaking of matrices, anyone with a science or engineering background will recall the dread of linear algebra classes. Matrix multiplication is indispensable in all kinds of industrial and scientific numerical computation and appears in various benchmarking programs; the time spent on matrix multiplication is an important indicator of a computer's floating-point performance. The purpose of this article is to use matrix multiplication to examine the performance differences among various implementations and across computing platforms, providing a reference for high-performance computing development.
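A matrix product computed with the classic triple loop, parallelized with an OpenMP pragma, is the kind of kernel this series measures. The sketch below shows the idea; the matrix size, initialization values, and timing output are illustrative assumptions rather than the article's exact code.

/* Sketch of a naive triple-loop matrix product, parallelized over the
 * outer loop with OpenMP (illustrative, not the article's exact code).
 * Build: gcc -O2 -fopenmp matmul_omp.c -o matmul_omp
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024

int main(void)
{
    double *a = malloc(N * N * sizeof(double));
    double *b = malloc(N * N * sizeof(double));
    double *c = malloc(N * N * sizeof(double));
    for (int i = 0; i < N * N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    double t0 = omp_get_wtime();
#pragma omp parallel for                    /* distribute rows of C over threads */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i * N + k] * b[k * N + j];
            c[i * N + j] = sum;
        }
    double t1 = omp_get_wtime();

    /* 2*N^3 floating-point operations in (t1 - t0) seconds */
    printf("%.3f s, %.2f GFLOPS\n", t1 - t0, 2.0 * N * N * N / (t1 - t0) / 1e9);

    free(a); free(b); free(c);
    return 0;
}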
42 minutes to read
Andrew Moa

Using VSCode to develop a STAR-CCM+ user library: building a dynamic link library with Fortran

Although the official STAR-CCM+ documentation explicitly states that Fortran is not supported under Windows [1], in practice a user library compiled from Fortran under Windows can be loaded and run normally in STAR-CCM+, as long as the compiler supports it.

1. Build CMake Project

First, following the tutorial case in the official documentation [2], we build a CMake project. The project structure is as follows:

STARCCM_FORTRAN_SAMPLE
│   CMakeLists.txt             # CMake configuration file
│   README.md                  # Description document, not required
├───.vscode
│       launch.json            # Generated automatically when debugging starts, not required
│       settings.json          # Defines CMake-related variables
└───src
        initVelocity.f
        StarReal.f.in
        sutherlandViscosity.f
        uflib.f
        zeroGradT.f

The main content of the CMake configuration file CMakeLists.txt is as follows:

7 minutes to read
Andrew Moa