1D problems in CUDA and HPC

I'm looking for some 1D problems in CUDA and HPC, e.g. Black Scholes.
By 1D problems, I mean problems in which all the work is done on 1D arrays. Although matrix multiplication can be expressed in this way, I want problems in which the basic problem is just 1D.
I am trying to develop a 1D library for CUDA and need some benchmark problems to test it. I realize that a lot of real-world problems are expressed as 2D; still, I would really like to see some real-world 1D problems.
Thanks.
EDIT: Thanks for all the answers. It would be great if the answers contained more HPC problems, e.g. Black-Scholes, rather than just generic algorithms.
Thanks.

A common problem in parallel programming is the prefix sum (scan): you are given an array of numbers, and every element must store the sum of all preceding elements (plus itself or not; I prefer the inclusive variant).
It is a fairly simple problem, but since it is often repeated many times inside more complex algorithms, having an efficient implementation is crucial.
Another common problem is sorting.
There are already some papers on that topic.
I think it is a good problem to start with, to solve bigger problems on top of it.
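If it helps to have a concrete starting point to benchmark against, here is a minimal sketch of an inclusive scan using Thrust (which ships with the CUDA toolkit); the library call stands in for a hand-written kernel:
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <stdio.h>
int main()
{
    // --- Inclusive prefix sum of {1, 2, 3, 4} yields {1, 3, 6, 10}
    thrust::device_vector<int> d_data(4);
    d_data[0] = 1; d_data[1] = 2; d_data[2] = 3; d_data[3] = 4;
    // --- In-place scan on the device
    thrust::inclusive_scan(d_data.begin(), d_data.end(), d_data.begin());
    for (int i = 0; i < 4; i++) printf("%d ", (int)d_data[i]);
    printf("\n");
    return 0;
}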

A simple problem you can use in 1 to 3 dimensions is the heat equation. There are several different numerical methods for solving it, and some of them can be implemented in parallel.
A method that works at least with OpenMP and MPI is the finite difference method. I suppose that if you combine it with a clever stencil, you should be able to implement it efficiently in CUDA C.

A classical 1D example is provided by the heat equation.
Below, I'm posting a concrete, fully worked CPU/GPU example on this topic exploiting the Jacobi solution scheme. Note that two time-stepping kernels are provided, one not using shared memory and one using it.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <thrust/device_vector.h>
#include "Utilities.cuh"
#define BLOCKSIZE 512
/****************************/
/* CPU CALCULATION FUNCTION */
/****************************/
void HeatEquation1DCPU(float * __restrict__ h_T, int *Niter, const float T0, const float Q_N_1, const float dx, const float k, const float rho,
const float cp, const float alpha, const float dt, const float maxErr, const int maxIterNumber, const int N)
{
float *h_DeltaT = (float *)malloc(N * sizeof(float));
// --- Enforcing boundary condition at the left end.
*h_T = T0;
h_DeltaT[0] = 0.f;
float current_max;
do {
// --- Internal region between the two boundaries.
for (int i = 1; i < N - 1; i++) h_DeltaT[i] = dt * alpha * ((h_T[i - 1] + h_T[i + 1] - 2.f * h_T[i]) / (dx * dx));
// --- Enforcing boundary condition at the right end.
h_DeltaT[N - 1] = dt * 2.f * ((k * ((h_T[N - 2] - h_T[N - 1]) / dx) + Q_N_1) / (dx * rho * cp));
// --- Update the temperature and find the maximum DeltaT over all nodes
current_max = h_DeltaT[0]; // --- Remember: h_DeltaT[0] = 0
for (int i = 1; i < N; i++)
{
h_T[i] = h_T[i] + h_DeltaT[i]; // --- h_T[0] keeps its boundary value since h_DeltaT[0] = 0
current_max = fabs(h_DeltaT[i]) > current_max ? fabs(h_DeltaT[i]) : current_max;
}
// --- Increase iteration counter
(*Niter)++;
} while (*Niter < maxIterNumber && current_max > maxErr);
free(h_DeltaT); // --- h_DeltaT was allocated with malloc
}
/**************************/
/* GPU CALCULATION KERNEL */
/**************************/
__global__ void HeatEquation1DGPU_IterationKernel(float * __restrict__ d_T, float * __restrict__ d_DeltaT, const float T0, const float Q_N_1, const float dx, const float k, const float rho,
const float cp, const float alpha, const float dt, const float maxErr, const int maxIterNumber, const int N)
{
const int tid = blockIdx.x * blockDim.x + threadIdx.x;
if (tid < N) {
// --- Internal region between the two boundaries.
if ((tid > 0) && (tid < N - 1) ) d_DeltaT[tid] = dt * alpha *((d_T[tid - 1] + d_T[tid + 1] - 2.f * d_T[tid]) / (dx * dx));
// --- Enforcing boundary condition at the left end.
if (tid == 0) d_DeltaT[0] = 0.f;
// --- Enforcing boundary condition at the right end.
if (tid == N - 1) d_DeltaT[tid] = dt * 2.f * ((k * ((d_T[tid - 1] - d_T[tid]) / dx) + Q_N_1) / (dx * rho * cp));
// --- Update the temperature
d_T[tid] = d_T[tid] + d_DeltaT[tid];
d_DeltaT[tid] = fabsf(d_DeltaT[tid]);
}
}
__global__ void HeatEquation1DGPU_IterationSharedKernel(float * __restrict__ d_T, float * __restrict__ d_DeltaT, const float T0, const float Q_N_1, const float dx, const float k, const float rho,
const float cp, const float alpha, const float dt, const float maxErr, const int maxIterNumber, const int N)
{
const int tid = blockIdx.x * blockDim.x + threadIdx.x;
// --- Shared memory holds the BLOCKSIZE block values plus 2 (left and right) halo cells.
__shared__ float d_T_shared[BLOCKSIZE + 2]; // --- Need to know BLOCKSIZE beforehand
if (tid < N) {
// --- Load data from global memory to shared memory locations 1, 2, ..., BLOCKSIZE
d_T_shared[threadIdx.x + 1] = d_T[tid];
// --- Left halo cell
if ((threadIdx.x == 0) && (tid > 0)) { d_T_shared[0] = d_T[tid - 1]; }
// --- Right halo cell
if ((threadIdx.x == blockDim.x - 1) && (tid < N - 1)) { d_T_shared[threadIdx.x + 2] = d_T[tid + 1]; }
__syncthreads();
// --- Internal region between the two boundaries.
if ((tid > 0) && (tid < N - 1) ) d_DeltaT[tid] = dt * alpha *((d_T_shared[threadIdx.x] + d_T_shared[threadIdx.x + 2] - 2.f * d_T_shared[threadIdx.x + 1]) / (dx * dx));
// --- Enforcing boundary condition at the left end.
if (tid == 0) d_DeltaT[0] = 0.f;
// --- Enforcing boundary condition at the right end.
if (tid == N - 1) d_DeltaT[tid] = dt * 2.f * ((k * ((d_T_shared[threadIdx.x] - d_T_shared[threadIdx.x + 1]) / dx) + Q_N_1) / (dx * rho * cp));
// --- Update the temperature
d_T[tid] = d_T[tid] + d_DeltaT[tid];
d_DeltaT[tid] = fabsf(d_DeltaT[tid]);
}
}
/****************************/
/* GPU CALCULATION FUNCTION */
/****************************/
void HeatEquation1DGPU(float * __restrict__ d_T, int *Niter, const float T0, const float Q_N_1, const float dx, const float k, const float rho,
const float cp, const float alpha, const float dt, const float maxErr, const int maxIterNumber, const int N)
{
// --- Absolute values of DeltaT
float *d_DeltaT; gpuErrchk(cudaMalloc(&d_DeltaT, N * sizeof(float)));
// --- Enforcing boundary condition at the left end.
gpuErrchk(cudaMemcpy(d_T, &T0, sizeof(float), cudaMemcpyHostToDevice));
float current_max = 0.f;
do {
//HeatEquation1DGPU_IterationKernel<<<iDivUp(N, BLOCKSIZE), BLOCKSIZE>>>(d_T, d_DeltaT, T0, Q_N_1, dx, k, rho, cp, alpha, dt, maxErr, maxIterNumber, N);
HeatEquation1DGPU_IterationSharedKernel<<<iDivUp(N, BLOCKSIZE), BLOCKSIZE>>>(d_T, d_DeltaT, T0, Q_N_1, dx, k, rho, cp, alpha, dt, maxErr, maxIterNumber, N);
thrust::device_ptr<float> d = thrust::device_pointer_cast(d_DeltaT);
current_max = thrust::reduce(d, d + N, 0.f, thrust::maximum<float>()); // --- Reset the initial value to 0.f each iteration so convergence can be detected
// --- Increase iteration counter
(*Niter)++;
} while (*Niter < maxIterNumber && current_max > maxErr);
gpuErrchk(cudaFree(d_DeltaT));
}
/********/
/* MAIN */
/********/
int main()
{
// --- See https://en.wikipedia.org/wiki/Thermal_diffusivity
// --- Parameters of the problem
const float k = 0.19f; // --- Thermal conductivity [W / (m * K)]
const float rho = 930.f; // --- Density [kg / m^3]
const float cp = 1340.f; // --- Specific heat capacity [J / (kg * K)]
const float alpha = k / (rho * cp); // --- Thermal diffusivity [m^2 / s]
const float length = 1.6f; // --- Total length of the domain [m]
const int N = 64 * BLOCKSIZE; // --- Number of grid points
const float dx = (length / (float)(N - 1));// --- Discretization step [m]
const float dt = (float)(dx * dx / (4.f * alpha)); // --- Time step [s]
const float T0 = 0.f; // --- Temperature at the first end of the domain [C]
const float Q_N_1 = 10.f; // --- Heat flux at the second end of the domain [W / m^2]
const float maxErr = 1.0e-5f; // --- Maximum admitted DeltaT
const int maxIterNumber = 10.0 / dt; // --- Number of overall time steps
/********************/
/* GPU CALCULATIONS */
/********************/
float *h_T_final_device = (float *)malloc(N * sizeof(float)); // --- Final "host-side" result of GPU calculations
int Niter_GPU = 0; // --- Iteration counter for GPU calculations
// --- Device temperature allocation and initialization
float *d_T; gpuErrchk(cudaMalloc(&d_T, N * sizeof(float)));
gpuErrchk(cudaMemset(d_T, 0, N * sizeof(float)));
// --- GPU calculations
HeatEquation1DGPU(d_T, &Niter_GPU, T0, Q_N_1, dx, k, rho, cp, alpha, dt, maxErr, maxIterNumber, N);
// --- Transfer the GPU calculation results from device to host
gpuErrchk(cudaMemcpy(h_T_final_device, d_T, N * sizeof(float), cudaMemcpyDeviceToHost));
/********************/
/* CPU CALCULATIONS */
/********************/
// --- Host temperature allocation and initialization
float *h_T_final_host = (float *)malloc(N * sizeof(float));
memset(h_T_final_host, 0, N * sizeof(float));
int Niter_CPU = 0;
HeatEquation1DCPU(h_T_final_host, &Niter_CPU, T0, Q_N_1, dx, k, rho, cp, alpha, dt, maxErr, maxIterNumber, N);
/************************/
/* CHECKING THE RESULTS */
/************************/
for (int i = 0; i < N; i++) {
printf("Node = %i; T_host = %3.10f; T_device = %3.10f\n", i, h_T_final_host[i], h_T_final_device[i]);
if (h_T_final_host[i] != h_T_final_device[i]) {
printf("Error at i = %i; T_host = %f; T_device = %f\n", i, h_T_final_host[i], h_T_final_device[i]);
return 0;
}
}
printf("Test passed!\n");
free(h_T_final_device); // --- Allocated with malloc
free(h_T_final_host);
gpuErrchk(cudaFree(d_T));
return 0;
}

Reduction (finding the min, max, or sum of an array) and sorting are the best examples of 1D problems. There can be many variants of these algorithms, like sorting on structures, etc.
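For quick reference results to test a custom implementation against, here is a minimal sketch using Thrust (the numerical values are just example data):
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/sort.h>
#include <thrust/functional.h>
#include <stdio.h>
int main()
{
    thrust::device_vector<float> d_data(5);
    d_data[0] = 3.f; d_data[1] = 1.f; d_data[2] = 4.f; d_data[3] = 1.f; d_data[4] = 5.f;
    // --- Reductions: sum and maximum (0.f is a valid init here since the data are non-negative)
    float sum = thrust::reduce(d_data.begin(), d_data.end(), 0.f, thrust::plus<float>());
    float mx  = thrust::reduce(d_data.begin(), d_data.end(), 0.f, thrust::maximum<float>());
    // --- In-place sort on the device
    thrust::sort(d_data.begin(), d_data.end());
    printf("sum = %f, max = %f, min = %f\n", sum, mx, (float)d_data[0]);
    return 0;
}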

Related

Solve the Poisson equation using FFT with CUDA

I'm following a tutorial on using the cuFFT library here: http://gpgpu.org/static/sc2007/SC07_CUDA_3_Libraries.pdf
After following its code line by line, I'm getting really strange results.
I have input data that is an NxN array of floats. The program does a FFT forward transform, solves Poisson's equation, and then does an inverse FFT. The input data (and output data) is referred to as a square image with sidelength N. When I comment out solve_poisson <<<dimGrid, dimBlock>>> (r_complex_d, kx_d, ky_d, N);, it correctly forward transforms the data and then performs an inverse transform, which causes the output data to be the same as the input data. This is supposed to happen.
Here is the output without calling the solve_poisson method.
0 r_initial: 0.00125126 r: 0.00125132
1 r_initial: 0.563585 r: 0.563585
2 r_initial: 0.193304 r: 0.193304
3 r_initial: 0.80874 r: 0.80874
4 r_initial: 0.585009 r: 0.585009
5 r_initial: 0.479873 r: 0.479873
6 r_initial: 0.350291 r: 0.350291
7 r_initial: 0.895962 r: 0.895962
8 r_initial: 0.82284 r: 0.82284
9 r_initial: 0.746605 r: 0.746605
10 r_initial: 0.174108 r: 0.174108
11 r_initial: 0.858943 r: 0.858943
12 r_initial: 0.710501 r: 0.710502
13 r_initial: 0.513535 r: 0.513535
14 r_initial: 0.303995 r: 0.303995
15 r_initial: 0.0149846 r: 0.0149846
Press any key to continue . . .
However, when I uncomment the solve_poisson call, the output data is inf or nan, which leads me to believe that the scale variable was somehow close to zero in the solve_poisson method.
So I changed float scale = -(kx[idx] * kx[idx] + ky[idy] * ky[idy]); to float scale = -(kx[idx] * kx[idx] + ky[idy] * ky[idy]) + 0.00001f. This change is not in the original tutorial. The results computed here are not supposed to have extreme positive or negative values.
0 r_initial: 0.00125126 r: -11448.1
1 r_initial: 0.563585 r: 11449.3
2 r_initial: 0.193304 r: -11448.3
3 r_initial: 0.80874 r: 11449.2
4 r_initial: 0.585009 r: 11449.4
5 r_initial: 0.479873 r: -11448.4
6 r_initial: 0.350291 r: 11449.5
7 r_initial: 0.895962 r: -11448.6
8 r_initial: 0.82284 r: -11448.5
9 r_initial: 0.746605 r: 11449.4
10 r_initial: 0.174108 r: -11448.3
11 r_initial: 0.858943 r: 11449.3
12 r_initial: 0.710501 r: 11449.2
13 r_initial: 0.513535 r: -11448.4
14 r_initial: 0.303995 r: 11449.3
15 r_initial: 0.0149846 r: -11448.1
Press any key to continue . . .
In the tutorial, a sample calculation on slide 43 on page 22 is computed=0.975879 reference=0.975882, yet my results are completely different and really large.
The following code is what I used.
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
#include <cufft.h>
#include <stdlib.h>
#include <iostream>
#define N 4 //4 X 4 // N is the sidelength of the image -> 16 pixels in entire image
#define block_size_x 2
#define block_size_y 2
__global__ void real2complex(cufftComplex *c, float *a, int n);
__global__ void complex2real_scaled(float *a, cufftComplex *c, float scale, int n);
__global__ void solve_poisson(cufftComplex *c, float *kx, float *ky, int n);
int main()
{
float *kx, *ky, *r;
kx = (float *)malloc(sizeof(float) * N);
ky = (float *)malloc(sizeof(float) * N);
r = (float *)malloc(sizeof(float) * N * N);
float *kx_d, *ky_d, *r_d;
cufftComplex *r_complex_d;
cudaMalloc((void **)&kx_d, sizeof(float) * N);
cudaMalloc((void **)&ky_d, sizeof(float) * N);
cudaMalloc((void **)&r_d, sizeof(float) * N * N);
cudaMalloc((void **)&r_complex_d, sizeof(cufftComplex) * N * N);
for (int y = 0; y < N; y++)
for (int x = 0; x < N; x++)
r[x + y * N] = rand() / (float)RAND_MAX;
//r[x + y * N] = sin(exp(-((x - N / 2.0f) * (x - N / 2.0f) + (N / 2.0f - y) * (N / 2.0f - y)) / (20 * 20))) * 255 / sin(1); //Here is sample data that will high values at the center of the image and low values as you go farther and farther away from the center.
float* r_inital = (float *)malloc(sizeof(float) * N * N);
for (int i = 0; i < N * N; i++)
r_inital[i] = r[i];
for (int i = 0; i < N; i++)
{
kx[i] = i - N / 2.0f; //centers kx values to be at center of image
ky[i] = N / 2.0f - i; //centers ky values to be at center of image
}
cudaMemcpy(kx_d, kx, sizeof(float) * N, cudaMemcpyHostToDevice);
cudaMemcpy(ky_d, ky, sizeof(float) * N, cudaMemcpyHostToDevice);
cudaMemcpy(r_d, r, sizeof(float) * N * N, cudaMemcpyHostToDevice);
cufftHandle plan;
cufftPlan2d(&plan, N, N, CUFFT_C2C);
/* Compute the execution configuration, block_size_x*block_size_y = number of threads */
dim3 dimBlock(block_size_x, block_size_y);
dim3 dimGrid(N / dimBlock.x, N / dimBlock.y);
/* Handle N not multiple of block_size_x or block_size_y */
if (N % block_size_x != 0) dimGrid.x += 1;
if (N % block_size_y != 0) dimGrid.y += 1;
real2complex << < dimGrid, dimBlock >> > (r_complex_d, r_d, N);
cufftExecC2C(plan, r_complex_d, r_complex_d, CUFFT_FORWARD);
solve_poisson << <dimGrid, dimBlock >> > (r_complex_d, kx_d, ky_d, N);
cufftExecC2C(plan, r_complex_d, r_complex_d, CUFFT_INVERSE);
float scale = 1.0f / (N * N);
complex2real_scaled << <dimGrid, dimBlock >> > (r_d, r_complex_d, scale, N);
cudaMemcpy(r, r_d, sizeof(float) * N * N, cudaMemcpyDeviceToHost);
for (int i = 0; i < N * N; i++)
std::cout << i << "\tr_initial: " << r_inital[i] << "\tr: " << r[i] << std::endl;
system("pause");
/* Destroy plan and clean up memory on device*/
free(kx);
free(ky);
free(r);
free(r_inital);
cufftDestroy(plan);
cudaFree(r_complex_d);
cudaFree(kx_d);
cudaFree(ky_d); // --- ky_d and r_d were also allocated on the device
cudaFree(r_d);
return 0;
}
__global__ void real2complex(cufftComplex *c, float *a, int n)
{
/* compute idx and idy, the location of the element in the original NxN array */
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int idy = blockIdx.y * blockDim.y + threadIdx.y;
if (idx < n && idy < n)
{
int index = idx + idy * n;
c[index].x = a[index];
c[index].y = 0.0f;
}
}
__global__ void complex2real_scaled(float *a, cufftComplex *c, float scale, int n)
{
/* compute idx and idy, the location of the element in the original NxN array */
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int idy = blockIdx.y * blockDim.y + threadIdx.y;
if (idx < n && idy < n)
{
int index = idx + idy * n;
a[index] = scale * c[index].x;
}
}
__global__ void solve_poisson(cufftComplex *c, float *kx, float *ky, int n)
{
/* compute idx and idy, the location of the element in the original NxN array */
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int idy = blockIdx.y * blockDim.y + threadIdx.y;
if (idx < n && idy < n)
{
int index = idx + idy * n;
float scale = -(kx[idx] * kx[idx] + ky[idy] * ky[idy]) + 0.00001f;
if (idx == 0 && idy == 0) scale = 1.0f;
scale = 1.0f / scale;
c[index].x *= scale;
c[index].y *= scale;
}
}
Is there anything I messed up on? I would really appreciate if anyone could help me out.
Although the poster has found the mistake by himself, I want to share my own implementation of the 2D Poisson equation solver.
The implementation differs slightly from the one the poster linked to.
The theory is reported at Solve Poisson Equation Using FFT.
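For reference, the spectral relation being exploited is the following: Fourier transforming the Poisson equation turns derivatives into multiplications by the wavenumbers, so that
$$\nabla^2 u = f \;\;\Longrightarrow\;\; -(k_x^2 + k_y^2)\,\hat{u}(k_x, k_y) = \hat{f}(k_x, k_y) \;\;\Longrightarrow\;\; \hat{u}(k_x, k_y) = -\frac{\hat{f}(k_x, k_y)}{k_x^2 + k_y^2},$$
with the (0, 0) wavenumber handled separately to avoid a division by zero (the den(1, 1) = 1 trick in the Matlab code below).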
MATLAB VERSION
I first report the Matlab version for reference:
clear all
close all
clc
M = 64; % --- Number of Fourier harmonics along x (should be a multiple of 2)
N = 32; % --- Number of Fourier harmonics along y (should be a multiple of 2)
Lx = 3; % --- Domain size along x
Ly = 1.5; % --- Domain size along y
sigma = 0.1; % --- Characteristic width of f (make << 1)
% --- Wavenumbers
kx = (2 * pi / Lx) * [0 : (M / 2 - 1) (- M / 2) : (-1)]; % --- Wavenumbers along x
ky = (2 * pi / Ly) * [0 : (N / 2 - 1) (- N / 2) : (-1)]; % --- Wavenumbers along y
[Kx, Ky] = meshgrid(kx, ky);
% --- Right-hand side of differential equation
hx = Lx / M; % --- Grid spacing along x
hy = Ly / N; % --- Grid spacing along y
x = (0 : (M - 1)) * hx;
y = (0 : (N - 1)) * hy;
[X, Y] = meshgrid(x, y);
rSquared = (X - 0.5 * Lx).^2 + (Y - 0.5 * Ly).^2;
sigmaSquared = sigma^2;
f = exp(-rSquared / (2 * sigmaSquared)) .* (rSquared - 2 * sigmaSquared) / (sigmaSquared^2);
fHat = fft2(f);
% --- Denominator of the unknown spectrum
den = -(Kx.^2 + Ky.^2);
den(1, 1) = 1; % --- Avoid division by zero at wavenumber (0, 0)
% --- Unknown determination
uHat = ifft2(fHat ./ den);
% uHat(1, 1) = 0; % --- Force the unknown spectrum at (0, 0) to be zero
u = real(uHat);
u = u - u(1,1); % --- Force arbitrary constant to be zero by forcing u(1, 1) = 0
% --- Plots
uRef = exp(-rSquared / (2 * sigmaSquared));
err = 100 * sqrt(sum(sum(abs(u - uRef).^2)) / sum(sum(abs(uRef).^2)));
errMax = norm(u(:)-uRef(:),inf)
fprintf('Percentage root mean square error = %f\n', err);
fprintf('Maximum error = %f\n', errMax);
surf(X, Y, u)
xlabel('x')
ylabel('y')
zlabel('u')
title('Solution of 2D Poisson equation by spectral method')
CUDA VERSION
Here is the corresponding CUDA version:
#include <stdio.h>
#include <fstream>
#include <iomanip>
// --- Greek pi
#define _USE_MATH_DEFINES
#include <math.h>
#include <cufft.h>
#define BLOCKSIZEX 16
#define BLOCKSIZEY 16
#define prec_save 10
/*******************/
/* iDivUp FUNCTION */
/*******************/
int iDivUp(int a, int b){ return ((a % b) != 0) ? (a / b + 1) : (a / b); }
/********************/
/* CUDA ERROR CHECK */
/********************/
// --- Credit to http://stackoverflow.com/questions/14038589/what-is-the-canonical-way-to-check-for-errors-using-the-cuda-runtime-api
void gpuAssert(cudaError_t code, const char *file, int line, bool abort = true)
{
if (code != cudaSuccess)
{
fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) { exit(code); }
}
}
void gpuErrchk(cudaError_t ans) { gpuAssert((ans), __FILE__, __LINE__); }
/**************************************************/
/* COMPUTE RIGHT HAND SIDE OF 2D POISSON EQUATION */
/**************************************************/
__global__ void computeRHS(const float * __restrict__ d_x, const float * __restrict__ d_y,
float2 * __restrict__ d_r, const float Lx, const float Ly, const float sigma,
const int M, const int N) {
const int tidx = threadIdx.x + blockIdx.x * blockDim.x;
const int tidy = threadIdx.y + blockIdx.y * blockDim.y;
if ((tidx >= M) || (tidy >= N)) return;
const float sigmaSquared = sigma * sigma;
const float rSquared = (d_x[tidx] - 0.5f * Lx) * (d_x[tidx] - 0.5f * Lx) +
(d_y[tidy] - 0.5f * Ly) * (d_y[tidy] - 0.5f * Ly);
d_r[tidy * M + tidx].x = expf(-rSquared / (2.f * sigmaSquared)) * (rSquared - 2.f * sigmaSquared) / (sigmaSquared * sigmaSquared);
d_r[tidy * M + tidx].y = 0.f;
}
/****************************************************/
/* SOLVE 2D POISSON EQUATION IN THE SPECTRAL DOMAIN */
/****************************************************/
__global__ void solvePoisson(const float * __restrict__ d_kx, const float * __restrict__ d_ky,
float2 * __restrict__ d_r, const int M, const int N)
{
const int tidx = threadIdx.x + blockIdx.x * blockDim.x;
const int tidy = threadIdx.y + blockIdx.y * blockDim.y;
if ((tidx >= M) || (tidy >= N)) return;
float scale = -(d_kx[tidx] * d_kx[tidx] + d_ky[tidy] * d_ky[tidy]);
if (tidx == 0 && tidy == 0) scale = 1.f;
scale = 1.f / scale;
d_r[M * tidy + tidx].x *= scale;
d_r[M * tidy + tidx].y *= scale;
}
/****************************************************************************/
/* SOLVE 2D POISSON EQUATION IN THE SPECTRAL DOMAIN - SHARED MEMORY VERSION */
/****************************************************************************/
__global__ void solvePoissonShared(const float * __restrict__ d_kx, const float * __restrict__ d_ky,
float2 * __restrict__ d_r, const int M, const int N)
{
const int tidx = threadIdx.x + blockIdx.x * blockDim.x;
const int tidy = threadIdx.y + blockIdx.y * blockDim.y;
// --- Use shared memory to minimize multiple accesses to the same spectral coordinate values.
// --- The loads and the barrier are placed before the bounds check so that all threads of the block reach __syncthreads().
__shared__ float kx_s[BLOCKSIZEX], ky_s[BLOCKSIZEY];
if (tidx < M) kx_s[threadIdx.x] = d_kx[tidx];
if (tidy < N) ky_s[threadIdx.y] = d_ky[tidy];
__syncthreads();
if ((tidx >= M) || (tidy >= N)) return;
float scale = -(kx_s[threadIdx.x] * kx_s[threadIdx.x] + ky_s[threadIdx.y] * ky_s[threadIdx.y]);
if (tidx == 0 && tidy == 0) scale = 1.f;
scale = 1.f / scale;
d_r[M * tidy + tidx].x *= scale;
d_r[M * tidy + tidx].y *= scale;
}
/******************************/
/* COMPLEX2REAL SCALED KERNEL */
/******************************/
__global__ void complex2RealScaled(float2 * __restrict__ d_r, float * __restrict__ d_result, const int M, const int N, float scale)
{
const int tidx = threadIdx.x + blockIdx.x * blockDim.x;
const int tidy = threadIdx.y + blockIdx.y * blockDim.y;
if ((tidx >= M) || (tidy >= N)) return;
d_result[tidy * M + tidx] = scale * (d_r[tidy * M + tidx].x - d_r[0].x);
}
/******************************************/
/* COMPLEX2REAL SCALED KERNEL - OPTIMIZED */
/******************************************/
__global__ void complex2RealScaledOptimized(float2 * __restrict__ d_r, float * __restrict__ d_result, const int M, const int N, float scale)
{
const int tidx = threadIdx.x + blockIdx.x * blockDim.x;
const int tidy = threadIdx.y + blockIdx.y * blockDim.y;
__shared__ float d_r_0[1];
if (threadIdx.x == 0 && threadIdx.y == 0) d_r_0[0] = d_r[0].x;
// --- Barrier before the bounds check so that all threads of the block reach __syncthreads() and see d_r_0
__syncthreads();
if ((tidx >= M) || (tidy >= N)) return;
volatile float2 c;
c.x = d_r[tidy * M + tidx].x;
c.y = d_r[tidy * M + tidx].y;
d_result[tidy * M + tidx] = scale * (c.x - d_r_0[0]);
}
/**************************************/
/* SAVE FLOAT2 ARRAY FROM GPU TO FILE */
/**************************************/
void saveGPUcomplextxt(const float2 * d_in, const char *filename, const int M) {
float2 *h_in = (float2 *)malloc(M * sizeof(float2));
gpuErrchk(cudaMemcpy(h_in, d_in, M * sizeof(float2), cudaMemcpyDeviceToHost));
std::ofstream outfile;
outfile.open(filename);
for (int i = 0; i < M; i++) {
//printf("%f %f\n", h_in[i].x, h_in[i].y);
outfile << std::setprecision(prec_save) << h_in[i].x << "\n"; outfile << std::setprecision(prec_save) << h_in[i].y << "\n";
}
outfile.close();
}
/*************************************/
/* SAVE FLOAT ARRAY FROM GPU TO FILE */
/*************************************/
template <class T>
void saveGPUrealtxt(const T * d_in, const char *filename, const int M) {
T *h_in = (T *)malloc(M * sizeof(T));
gpuErrchk(cudaMemcpy(h_in, d_in, M * sizeof(T), cudaMemcpyDeviceToHost));
std::ofstream outfile;
outfile.open(filename);
for (int i = 0; i < M; i++) outfile << std::setprecision(prec_save) << h_in[i] << "\n";
outfile.close();
}
/********/
/* MAIN */
/********/
int main()
{
const int M = 64; // --- Number of Fourier harmonics along x (should be a multiple of 2)
const int N = 32; // --- Number of Fourier harmonics along y(should be a multiple of 2)
const float Lx = 3.f; // --- Domain size along x
const float Ly = 1.5f; // --- Domain size along y
const float sigma = 0.1f; // --- Characteristic width of f(make << 1)
// --- Wavenumbers on the host
float *h_kx = (float *)malloc(M * sizeof(float));
float *h_ky = (float *)malloc(N * sizeof(float));
for (int k = 0; k < M / 2; k++) h_kx[k] = (2.f * M_PI / Lx) * k;
for (int k = -M / 2; k < 0; k++) h_kx[k + M] = (2.f * M_PI / Lx) * k;
for (int k = 0; k < N / 2; k++) h_ky[k] = (2.f * M_PI / Ly) * k;
for (int k = -N / 2; k < 0; k++) h_ky[k + N] = (2.f * M_PI / Ly) * k;
// --- Wavenumbers on the device
float *d_kx; gpuErrchk(cudaMalloc(&d_kx, M * sizeof(float)));
float *d_ky; gpuErrchk(cudaMalloc(&d_ky, N * sizeof(float)));
gpuErrchk(cudaMemcpy(d_kx, h_kx, M * sizeof(float), cudaMemcpyHostToDevice));
gpuErrchk(cudaMemcpy(d_ky, h_ky, N * sizeof(float), cudaMemcpyHostToDevice));
// --- Domain discretization on the host
float *h_x = (float *)malloc(M * sizeof(float));
float *h_y = (float *)malloc(N * sizeof(float));
for (int k = 0; k < M; k++) h_x[k] = Lx / (float)M * k;
for (int k = 0; k < N; k++) h_y[k] = Ly / (float)N * k;
// --- Domain discretization on the device
float *d_x; gpuErrchk(cudaMalloc(&d_x, M * sizeof(float)));
float *d_y; gpuErrchk(cudaMalloc(&d_y, N * sizeof(float)));
gpuErrchk(cudaMemcpy(d_x, h_x, M * sizeof(float), cudaMemcpyHostToDevice));
gpuErrchk(cudaMemcpy(d_y, h_y, N * sizeof(float), cudaMemcpyHostToDevice));
// --- Compute right-hand side of the equation on the device
float2 *d_r; gpuErrchk(cudaMalloc(&d_r, M * N * sizeof(float2)));
dim3 dimBlock(BLOCKSIZEX, BLOCKSIZEY);
dim3 dimGrid(iDivUp(M, BLOCKSIZEX), iDivUp(N, BLOCKSIZEY));
computeRHS << <dimGrid, dimBlock >> >(d_x, d_y, d_r, Lx, Ly, sigma, M, N);
gpuErrchk(cudaPeekAtLastError());
gpuErrchk(cudaDeviceSynchronize());
// --- Create plan for CUDA FFT
cufftHandle plan;
cufftPlan2d(&plan, N, M, CUFFT_C2C);
// --- Compute in place forward FFT of right-hand side
cufftExecC2C(plan, d_r, d_r, CUFFT_FORWARD);
// --- Solve Poisson equation in Fourier space
//solvePoisson << <dimGrid, dimBlock >> > (d_kx, d_ky, d_r, M, N);
solvePoissonShared << <dimGrid, dimBlock >> > (d_kx, d_ky, d_r, M, N);
// --- Compute in place inverse FFT
cufftExecC2C(plan, d_r, d_r, CUFFT_INVERSE);
//saveGPUcomplextxt(d_r, "D:\\Project\\poisson2DFFT\\poisson2DFFT\\d_r.txt", M * N);
// --- With cuFFT, an FFT followed by an IFFT will return the same array times the size of the transform
// --- Accordingly, we need to scale the result.
const float scale = 1.f / ((float)M * (float)N);
float *d_result; gpuErrchk(cudaMalloc(&d_result, M * N * sizeof(float)));
//complex2RealScaled << <dimGrid, dimBlock >> > (d_r, d_result, M, N, scale);
complex2RealScaledOptimized << <dimGrid, dimBlock >> > (d_r, d_result, M, N, scale);
//saveGPUrealtxt(d_result, "D:\\Project\\poisson2DFFT\\poisson2DFFT\\d_result.txt", M * N);
// --- Transfer data from device to host
float *h_result = (float *)malloc(M * N * sizeof(float));
gpuErrchk(cudaMemcpy(h_result, d_result, M * N * sizeof(float), cudaMemcpyDeviceToHost));
return 0;
}
As shown in the tutorial, the Matlab implementation on slide 33 on page 17 bases the Poisson calculations on the top left corner of the screen as the origin. The x and y data values are then x = (0:(N-1))*h; and y = (0:(N-1))*h;, which is why the meshgrid created from these x and y values starts from 0 and increases in both directions, as shown on the graph's x and y axes on slide 31 on page 16. In this case, where the image length was 1 (I refer to the input data of the NxN float array, or the meshgrid, as an image), the center of the image is actually (0.5, 0.5). I wanted to translate these points so that the center point would instead be (0, 0), following a typical representation of the Cartesian plane.
So in my code, instead of the Matlab code of
x = (0:(N-1))*h;
y = (0:(N-1))*h;
which could be implemented as
for (int i = 0; i < N; i++)
{
kx[i] = i;
ky[i] = i;
}
I replaced it with
for (int i = 0; i < N; i++)
{
kx[i] = i - N / 2.0f; //centers kx values to be at center of image
ky[i] = N / 2.0f - i; //centers ky values to be at center of image
}
However, I had forgotten to change the Poisson calculation so that it recognizes the center of the image as the origin instead of the top left corner. So, as Mr. Robert Crovella said, I would have to change this line: if (idx == 0 && idy == 0) scale = 1.0f; to this: if (idx == 2 && idy == 2) scale = 1.0f; for the case where the image length, or N, is 4.
To generalize this for any image length, this line of code could then be changed to if (idx == n/2 && idy == n/2) scale = 1.0f;, as in the sketch below.
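Putting the pieces together, here is a sketch of the corrected solve_poisson kernel (with the artificial + 0.00001f term removed, since the zero wavenumber is now handled explicitly):
__global__ void solve_poisson(cufftComplex *c, float *kx, float *ky, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int idy = blockIdx.y * blockDim.y + threadIdx.y;
    if (idx < n && idy < n)
    {
        int index = idx + idy * n;
        float scale = -(kx[idx] * kx[idx] + ky[idy] * ky[idy]);
        // --- With the shifted wavenumbers kx[i] = i - n / 2.0f, the zero wavenumber
        // --- sits at (n / 2, n / 2) rather than at (0, 0)
        if (idx == n / 2 && idy == n / 2) scale = 1.0f;
        scale = 1.0f / scale;
        c[index].x *= scale;
        c[index].y *= scale;
    }
}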

Solving dense linear systems AX = B with CUDA

Can I use the new cuSOLVER library (CUDA 7) to solve linear systems of the form
AX = B
where A, X and B are NxN dense matrices ?
Yes.
Approach nr. 1
In the framework of cuSOLVER, you can use the QR decomposition; see QR decomposition to solve linear systems in CUDA.
Approach nr. 2
Alternatively, you can calculate the matrix inverse by the successive invocation of
cublas<t>getrfBatched()
which calculates the LU decomposition of a matrix, and
cublas<t>getriBatched()
which calculates the inverse of the matrix starting from its LU decomposition.
Approach nr. 3
A final possibility is using
cublas<t>getrfBatched()
followed by a twofold invocation of
cublas<t>trsm()
which solves upper or lower triangular linear systems.
As pointed out by Robert Crovella, the best choice may vary with the size and the type of the involved matrices.
Code for approach nr. 1
Please, see QR decomposition to solve linear systems in CUDA.
Code for approaches nr. 2 and nr. 3
Below, I'm reporting a worked example of the implementation of approaches nr. 2 and 3. Hankel matrices are used to feed the approaches with well-conditioned, invertible matrices. Note that approach nr. 3 requires permuting (rearranging) the system coefficients vector according to the pivot array obtained following the invocation of cublas<t>getrfBatched(). This permutation can be conveniently done on the CPU.
#include <stdio.h>
#include <fstream>
#include <iomanip>
#include <stdlib.h> /* srand, rand */
#include <time.h> /* time */
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "cublas_v2.h"
#include "Utilities.cuh"
#include "TimingGPU.cuh"
#define prec_save 10
#define BLOCKSIZE 256
#define BLOCKSIZEX 16
#define BLOCKSIZEY 16
/************************************/
/* SAVE REAL ARRAY FROM CPU TO FILE */
/************************************/
template <class T>
void saveCPUrealtxt(const T * h_in, const char *filename, const int M) {
std::ofstream outfile;
outfile.open(filename);
for (int i = 0; i < M; i++) outfile << std::setprecision(prec_save) << h_in[i] << "\n";
outfile.close();
}
/************************************/
/* SAVE REAL ARRAY FROM GPU TO FILE */
/************************************/
template <class T>
void saveGPUrealtxt(const T * d_in, const char *filename, const int M) {
T *h_in = (T *)malloc(M * sizeof(T));
gpuErrchk(cudaMemcpy(h_in, d_in, M * sizeof(T), cudaMemcpyDeviceToHost));
std::ofstream outfile;
outfile.open(filename);
for (int i = 0; i < M; i++) outfile << std::setprecision(prec_save) << h_in[i] << "\n";
outfile.close();
}
/***************************************************/
/* FUNCTION TO SET THE VALUES OF THE HANKEL MATRIX */
/***************************************************/
// --- https://en.wikipedia.org/wiki/Hankel_matrix
void setHankelMatrix(double * __restrict h_A, const int N) {
double *h_atemp = (double *)malloc((2 * N - 1) * sizeof(double));
// --- Initialize random seed
srand(time(NULL));
// --- Generate random numbers
for (int k = 0; k < 2 * N - 1; k++) h_atemp[k] = rand();
// --- Fill the Hankel matrix. The Hankel matrix is symmetric, so filling by row or column is equivalent.
for (int i = 0; i < N; i++)
for (int j = 0; j < N; j++)
h_A[i * N + j] = h_atemp[(i + 1) + (j + 1) - 2];
free(h_atemp);
}
/***********************************************/
/* FUNCTION TO COMPUTE THE COEFFICIENTS VECTOR */
/***********************************************/
void computeCoefficientsVector(const double * __restrict h_A, const double * __restrict h_xref,
double * __restrict h_y, const int N) {
for (int k = 0; k < N; k++) h_y[k] = 0.f;
for (int m = 0; m < N; m++)
for (int n = 0; n < N; n++)
h_y[m] = h_y[m] + h_A[n * N + m] * h_xref[n];
}
/************************************/
/* COEFFICIENT REARRANGING FUNCTION */
/************************************/
void rearrange(double *vec, int *pivotArray, int N){
for (int i = 0; i < N; i++) {
double temp = vec[i];
vec[i] = vec[pivotArray[i] - 1];
vec[pivotArray[i] - 1] = temp;
}
}
/********/
/* MAIN */
/********/
int main() {
const unsigned int N = 1000;
const unsigned int Nmatrices = 1;
// --- CUBLAS initialization
cublasHandle_t cublas_handle;
cublasSafeCall(cublasCreate(&cublas_handle));
TimingGPU timerLU, timerApproach1, timerApproach2;
double timingLU, timingApproach1, timingApproach2;
/***********************/
/* SETTING THE PROBLEM */
/***********************/
// --- Matrices to be inverted (only one in this example)
double *h_A = (double *)malloc(N * N * Nmatrices * sizeof(double));
// --- Setting the Hankel matrix
setHankelMatrix(h_A, N);
// --- Defining the solution
double *h_xref = (double *)malloc(N * sizeof(double));
for (int k = 0; k < N; k++) h_xref[k] = 1.f;
// --- Coefficient vectors (only one in this example)
double *h_y = (double *)malloc(N * sizeof(double));
computeCoefficientsVector(h_A, h_xref, h_y, N);
// --- Result (only one in this example)
double *h_x = (double *)malloc(N * sizeof(double));
// --- Allocate device space for the input matrices
double *d_A; gpuErrchk(cudaMalloc(&d_A, N * N * Nmatrices * sizeof(double)));
double *d_y; gpuErrchk(cudaMalloc(&d_y, N * sizeof(double)));
double *d_x; gpuErrchk(cudaMalloc(&d_x, N * sizeof(double)));
// --- Move the relevant matrices from host to device
gpuErrchk(cudaMemcpy(d_A, h_A, N * N * Nmatrices * sizeof(double), cudaMemcpyHostToDevice));
gpuErrchk(cudaMemcpy(d_y, h_y, N * sizeof(double), cudaMemcpyHostToDevice));
/**********************************/
/* COMPUTING THE LU DECOMPOSITION */
/**********************************/
timerLU.StartCounter();
// --- Creating the array of pointers needed as input/output to the batched getrf
double **h_inout_pointers = (double **)malloc(Nmatrices * sizeof(double *));
for (int i = 0; i < Nmatrices; i++) h_inout_pointers[i] = d_A + i * N * N;
double **d_inout_pointers;
gpuErrchk(cudaMalloc(&d_inout_pointers, Nmatrices * sizeof(double *)));
gpuErrchk(cudaMemcpy(d_inout_pointers, h_inout_pointers, Nmatrices * sizeof(double *), cudaMemcpyHostToDevice));
free(h_inout_pointers);
int *d_pivotArray; gpuErrchk(cudaMalloc(&d_pivotArray, N * Nmatrices * sizeof(int)));
int *d_InfoArray; gpuErrchk(cudaMalloc(&d_InfoArray, Nmatrices * sizeof(int)));
int *h_InfoArray = (int *)malloc(Nmatrices * sizeof(int));
cublasSafeCall(cublasDgetrfBatched(cublas_handle, N, d_inout_pointers, N, d_pivotArray, d_InfoArray, Nmatrices));
//cublasSafeCall(cublasDgetrfBatched(cublas_handle, N, d_inout_pointers, N, NULL, d_InfoArray, Nmatrices));
gpuErrchk(cudaMemcpy(h_InfoArray, d_InfoArray, Nmatrices * sizeof(int), cudaMemcpyDeviceToHost));
for (int i = 0; i < Nmatrices; i++)
if (h_InfoArray[i] != 0) {
fprintf(stderr, "Factorization of matrix %d Failed: Matrix may be singular\n", i);
cudaDeviceReset();
exit(EXIT_FAILURE);
}
timingLU = timerLU.GetCounter();
printf("Timing LU decomposition %f [ms]\n", timingLU);
/*********************************/
/* CHECKING THE LU DECOMPOSITION */
/*********************************/
saveCPUrealtxt(h_A, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\A.txt", N * N);
saveCPUrealtxt(h_y, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\y.txt", N);
saveGPUrealtxt(d_A, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\Adecomposed.txt", N * N);
saveGPUrealtxt(d_pivotArray, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\pivotArray.txt", N);
/******************************************************************************/
/* APPROACH NR.1: COMPUTE THE INVERSE OF A STARTING FROM ITS LU DECOMPOSITION */
/******************************************************************************/
timerApproach1.StartCounter();
// --- Allocate device space for the inverted matrices
double *d_Ainv; gpuErrchk(cudaMalloc(&d_Ainv, N * N * Nmatrices * sizeof(double)));
// --- Creating the array of pointers needed as output to the batched getri
double **h_out_pointers = (double **)malloc(Nmatrices * sizeof(double *));
for (int i = 0; i < Nmatrices; i++) h_out_pointers[i] = (double *)((char*)d_Ainv + i * ((size_t)N * N) * sizeof(double));
double **d_out_pointers;
gpuErrchk(cudaMalloc(&d_out_pointers, Nmatrices*sizeof(double *)));
gpuErrchk(cudaMemcpy(d_out_pointers, h_out_pointers, Nmatrices*sizeof(double *), cudaMemcpyHostToDevice));
free(h_out_pointers);
cublasSafeCall(cublasDgetriBatched(cublas_handle, N, (const double **)d_inout_pointers, N, d_pivotArray, d_out_pointers, N, d_InfoArray, Nmatrices));
gpuErrchk(cudaMemcpy(h_InfoArray, d_InfoArray, Nmatrices * sizeof(int), cudaMemcpyDeviceToHost));
for (int i = 0; i < Nmatrices; i++)
if (h_InfoArray[i] != 0) {
fprintf(stderr, "Inversion of matrix %d Failed: Matrix may be singular\n", i);
cudaDeviceReset();
exit(EXIT_FAILURE);
}
double alpha1 = 1.f;
double beta1 = 0.f;
cublasSafeCall(cublasDgemv(cublas_handle, CUBLAS_OP_N, N, N, &alpha1, d_Ainv, N, d_y, 1, &beta1, d_x, 1));
timingApproach1 = timingLU + timerApproach1.GetCounter();
printf("Timing approach 1 %f [ms]\n", timingApproach1);
/**************************/
/* CHECKING APPROACH NR.1 */
/**************************/
saveGPUrealtxt(d_x, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\xApproach1.txt", N);
/*************************************************************/
/* APPROACH NR.2: INVERT UPPER AND LOWER TRIANGULAR MATRICES */
/*************************************************************/
timerApproach2.StartCounter();
double *d_P; gpuErrchk(cudaMalloc(&d_P, N * N * sizeof(double)));
gpuErrchk(cudaMemcpy(h_y, d_y, N * Nmatrices * sizeof(double), cudaMemcpyDeviceToHost)); // --- sizeof(double), not sizeof(int)
int *h_pivotArray = (int *)malloc(N * Nmatrices*sizeof(int));
gpuErrchk(cudaMemcpy(h_pivotArray, d_pivotArray, N * Nmatrices * sizeof(int), cudaMemcpyDeviceToHost));
rearrange(h_y, h_pivotArray, N);
gpuErrchk(cudaMemcpy(d_y, h_y, N * Nmatrices * sizeof(double), cudaMemcpyHostToDevice));
// --- Now P*A=L*U
// Linear system A*x=y => P.'*L*U*x=y => L*U*x=P*y
// --- 1st phase - solve Ly = b
const double alpha = 1.f;
// --- Function solves the triangular linear system with multiple right hand sides, function overrides b as a result
// --- Lower triangular part
cublasSafeCall(cublasDtrsm(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_LOWER, CUBLAS_OP_N, CUBLAS_DIAG_UNIT, N, 1, &alpha, d_A, N, d_y, N));
// --- Upper triangular part
cublasSafeCall(cublasDtrsm(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_UPPER, CUBLAS_OP_N, CUBLAS_DIAG_NON_UNIT, N, 1, &alpha, d_A, N, d_y, N));
timingApproach2 = timingLU + timerApproach2.GetCounter();
printf("Timing approach 2 %f [ms]\n", timingApproach2);
/**************************/
/* CHECKING APPROACH NR.2 */
/**************************/
saveGPUrealtxt(d_y, "D:\\Project\\solveSquareLinearSystemCUDA\\solveSquareLinearSystemCUDA\\xApproach2.txt", N);
return 0;
}
The Utilities.cu and Utilities.cuh files needed to run such an example are maintained at this github page. The TimingGPU.cu and TimingGPU.cuh files are maintained at this github page.
Some useful references on the third approach:
NAG Fortran Library Routine Document
Scientific Computing Software Library (SCSL) User’s Guide
https://www.cs.drexel.edu/~jjohnson/2010-11/summer/cs680/programs/lapack/Danh/verify_sequential.c
EDIT
Timings (in ms) for approaches nr. 2 and 3 (tests performed on a GTX960 card, cc. 5.2).
N       LU decomposition    Approach nr. 2    Approach nr. 3
100           1.08               2.75              1.28
500          45.4              161                45.7
1000        302               1053               303
As the timings show, approach nr. 3 is the most convenient, and its cost is essentially the cost of computing the LU factorization. Furthermore:
Solving linear systems by LU decomposition is faster than using QR decomposition (see QR decomposition to solve linear systems in CUDA);
LU decomposition is limited to square linear systems, while QR decomposition helps in case of non-square linear systems.
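Since the question explicitly asks about cuSOLVER, here is also a minimal sketch of the dense LU route with cusolverDn (my own addition, not part of the timed approaches above; d_A is assumed column-major and already on the device together with d_y, and error checking of the cuSOLVER status codes is omitted for brevity):
#include <stdio.h>
#include <cuda_runtime.h>
#include <cusolverDn.h>
// --- Solves A * x = y for a dense N x N system via LU with partial pivoting.
// --- d_A is overwritten by its LU factors, d_y by the solution.
void solveWithCusolver(double *d_A, double *d_y, const int N)
{
    cusolverDnHandle_t solver_handle;
    cusolverDnCreate(&solver_handle);
    // --- Workspace query and allocation
    int lwork = 0;
    cusolverDnDgetrf_bufferSize(solver_handle, N, N, d_A, N, &lwork);
    double *d_work; cudaMalloc(&d_work, lwork * sizeof(double));
    int *d_pivot; cudaMalloc(&d_pivot, N * sizeof(int));
    int *d_info; cudaMalloc(&d_info, sizeof(int));
    // --- LU factorization: P * A = L * U
    cusolverDnDgetrf(solver_handle, N, N, d_A, N, d_work, d_pivot, d_info);
    // --- Solve with the LU factors; the pivots are applied internally,
    // --- so no host-side rearrangement of the right-hand side is needed
    cusolverDnDgetrs(solver_handle, CUBLAS_OP_N, N, 1, d_A, N, d_pivot, d_y, N, d_info);
    int h_info = 0;
    cudaMemcpy(&h_info, d_info, sizeof(int), cudaMemcpyDeviceToHost);
    if (h_info != 0) fprintf(stderr, "getrf/getrs failed: info = %d\n", h_info);
    cudaFree(d_work); cudaFree(d_pivot); cudaFree(d_info);
    cusolverDnDestroy(solver_handle);
}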
The below Matlab code can be used for checking the results
clear all
close all
clc
warning off
N = 1000;
% --- Setting the problem solution
x = ones(N, 1);
%%%%%%%%%%%%%%%%%%%%%
% NxN HANKEL MATRIX %
%%%%%%%%%%%%%%%%%%%%%
% --- https://en.wikipedia.org/wiki/Hankel_matrix
load A.txt
load y.txt
A = reshape(A, N, N);
yMatlab = A * x;
fprintf('Percentage rms between coefficients vectors in Matlab and CUDA %f\n', 100 * sqrt(sum(sum(abs(yMatlab - y).^2)) / sum(sum(abs(yMatlab).^2))));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% COMPUTATION OF THE LU DECOMPOSITION %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[Lmatlab, Umatlab] = lu(A);
load Adecomposed.txt
Adecomposed = reshape(Adecomposed, N, N);
L = eye(N);
for k = 1 : N
L(k + 1 : N, k) = Adecomposed(k + 1 : N, k);
end
U = zeros(N);
for k = 1 : N
U(k, k : N) = Adecomposed(k, k : N);
end
load pivotArray.txt
Pj = eye(N);
for j = 1 : N
tempVector = Pj(j, :);
Pj(j, :) = Pj(pivotArray(j), :);
Pj(pivotArray(j), :) = tempVector;
end
fprintf('Percentage rms between Pj * A and L * U in CUDA %f\n', 100 * sqrt(sum(sum(abs(Pj * A - L * U).^2)) / sum(sum(abs(Pj * A).^2))));
xprime = inv(Lmatlab) * yMatlab;
xMatlab = inv(Umatlab) * xprime;
fprintf('Percentage rms between reference solution and solution in Matlab %f\n', 100 * sqrt(sum(sum(abs(xMatlab - x).^2)) / sum(sum(abs(x).^2))));
load xApproach1.txt
fprintf('Percentage rms between reference solution and solution in CUDA for approach nr.1 %f\n', 100 * sqrt(sum(sum(abs(xApproach1 - x).^2)) / sum(sum(abs(x).^2))));
load xApproach2.txt
fprintf('Percentage rms between reference solution and solution in CUDA for approach nr.2 %f\n', 100 * sqrt(sum(sum(abs(xApproach2 - x).^2)) / sum(sum(abs(x).^2))));

Matrix-vector multiplication in CUDA: benchmarking & performance

I'm updating my question with some new benchmarking results (I also reformulated the question to be more specific and I updated the code)...
I implemented a kernel for matrix-vector multiplication in CUDA C following the CUDA C Programming Guide, using shared memory. Let me first present some benchmarking results, which I obtained on a Jetson TK1 (GPU: Tegra K1, compute capability 3.2), against cuBLAS (plot omitted).
Here I guess cuBLAS does some magic, since it seems that its execution is not affected by the number of columns of A, which, in turn, implies that there is some sort of parallelisation along the columns of A.
Now, here is the source code of my kernel and a host function to call it (file: mv.cuh):
#include <cuda_runtime.h>
#define BLOCK_SIZE 16
/* Set to __restrict__ */
#define RESTRICT
/**
* Performs matrix-vector multiplication on the device.
*
* @param dA Address of matrix `A` on the device
* @param dx Address of vector `x` on the device
* @param dy Address of result y = A*x
* @param nRows Number of rows of `A`
* @param nx Size of `x` (number of columns of `A`)
*
* @tparam T Data type
*
*/
template<typename T>
__global__ void matvec_kernel(
const T * RESTRICT dA,
const T * RESTRICT dx,
T * RESTRICT dy,
const unsigned int nRows,
const unsigned int nx);
/**
* Host-side wrapper for #matvec_kernel.
*
* @param dA Address of matrix `A` on the device
* @param dx Address of vector `x` on the device
* @param dy Address of result y = A*x
* @param nRows Number of rows of `A`
* @param nx Size of `x` (number of columns of `A`)
*
* @tparam T Data type for `A` and `x`
*/
template<typename T>
__host__ void matvec(
const T * RESTRICT dA,
const T * RESTRICT dx,
T * RESTRICT dy,
const unsigned int nRows,
const unsigned int nx);
/* -------------------------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------------------------- */
template<typename T>
__global__ void matvec_kernel(const T * RESTRICT dA, const T * RESTRICT dx,
T * RESTRICT dy,
const unsigned int nRows, const unsigned int nx)
{
unsigned int bid = blockIdx.x;
unsigned int row = threadIdx.x;
const unsigned int block_size = blockDim.x;
const unsigned int num_hor_blocks = ((nx + block_size - 1)/ block_size);
unsigned int n_star;
unsigned int idx_x;
unsigned int idx_Asub;
unsigned int idx_y;
const T * Asub;
const T * xsub;
/* Only `x` is copied to shared memory */
__shared__ T x_shared[BLOCK_SIZE];
idx_y = bid * block_size;
T * y_sub = dy + idx_y;
T y_val = 0.0;
#pragma unroll
for (unsigned int m = 0; m < num_hor_blocks; ++m)
{
idx_Asub = block_size * (bid + m * nRows);
idx_x = m * block_size;
Asub = dA + idx_Asub;
xsub = dx + idx_x;
if (idx_x + row < nx) {
x_shared[row] = xsub[row];
}
__syncthreads();
/* If the tiling is exact */
if ( (nRows % block_size == 0 && nx % block_size == 0 ) ||
(m != block_size - 1 || bid != gridDim.x - 1)) {
y_val += Asub[row] * x_shared[0];
y_val += Asub[row + nRows] * x_shared[1];
y_val += Asub[row + 2 * nRows] * x_shared[2];
y_val += Asub[row + 3 * nRows] * x_shared[3];
y_val += Asub[row + 4 * nRows] * x_shared[4];
y_val += Asub[row + 5 * nRows] * x_shared[5];
y_val += Asub[row + 6 * nRows] * x_shared[6];
y_val += Asub[row + 7 * nRows] * x_shared[7];
y_val += Asub[row + 8 * nRows] * x_shared[8];
y_val += Asub[row + 9 * nRows] * x_shared[9];
y_val += Asub[row + 10 * nRows] * x_shared[10];
y_val += Asub[row + 11 * nRows] * x_shared[11];
y_val += Asub[row + 12 * nRows] * x_shared[12];
y_val += Asub[row + 13 * nRows] * x_shared[13];
y_val += Asub[row + 14 * nRows] * x_shared[14];
y_val += Asub[row + 15 * nRows] * x_shared[15];
} else {
n_star = min(BLOCK_SIZE, nx - idx_x);
#pragma unroll
for (unsigned int e = 0; e < n_star; ++e) {
y_val += Asub[row + e * nRows] * x_shared[e];
}
}
__syncthreads();
}
if (row + idx_y < nRows)
y_sub[row] = y_val;
}
template<typename T>
__host__ void matvec(
const T * RESTRICT dA,
const T * RESTRICT dx,
T * RESTRICT dy,
const unsigned int nRows,
const unsigned int nx)
{
dim3 dim_grid( (nRows + BLOCK_SIZE -1)/ BLOCK_SIZE );
dim3 dim_block(BLOCK_SIZE);
matvec_kernel<T> <<<dim_grid, dim_block>>>(dA, dx, dy, nRows, nx);
}
I'm using this to time my execution (file: cuda_timer.cuh):
#include <cuda_runtime.h>
#include "error_handles.cuh"
static cudaEvent_t start;
static cudaEvent_t stop;
static short timer_running = 0;
static short tic_called = 0;
/**
* Sets up the timer.
*
* Must be called before any invocation to
* tic() or toc(), preferably at the beginning of your
* application.
*/
void start_tictoc();
/**
* Starts the timer.
*
* Use `toc()` to get the elapsed time; `tic()` must
* be called before a `toc()`.
*/
void tic();
/**
* Returns the elapsed time between its invocation
* and a previous invocation of `tic()`. Returns `-1`
* and prints a warning message if `tic()` was not
* previously called. Returns `-2` and prints an error
* message if `start_tictoc()` has not been called.
*
* @return Elapsed time between `tic()` and `toc()` in milliseconds
* with a resolution of `0.5` microseconds.
*/
float toc();
/**
* This function should be called when the
* timer will not be used any more. It destroys
* the events used to time CUDA kernels. If the timer
* is not running, this function does nothing and
* prints a warning message.
*/
void stop_tictoc();
void start_tictoc() {
_CUDA(cudaEventCreate(&start));
_CUDA(cudaEventCreate(&stop));
timer_running = 1;
}
void tic() {
if (timer_running) {
_CUDA(cudaEventRecord(start, 0));
tic_called = 1;
} else {
printf("WARNING: tic() called without a timer running!\n");
}
}
float toc() {
float elapsed_time;
if (tic_called == 0) {
printf("WARNING: toc() called without a previous tic()!\n");
return -1;
}
if (timer_running == 1) {
// _CUDA(cudaDeviceSynchronize()); // Removed! (See discussion below)
_CUDA(cudaEventRecord(stop, 0));
_CUDA(cudaEventSynchronize(stop));
_CUDA(cudaEventElapsedTime(&elapsed_time, start, stop));
tic_called = 0;
return elapsed_time;
} else {
printf("WARNING: toc() called without a timer running!\n");
return -2;
}
}
void stop_tictoc()
{
if (timer_running == 1){
_CUDA(cudaEventDestroy(start));
_CUDA(cudaEventDestroy(stop));
timer_running = 0;
} else{
printf("WARNING: stop_tictoc() called without a timer running!\n");
}
}
and my main file (main.cu) is the following:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <assert.h>
#include "cublas_v2.h"
#include <math.h>
#include <curand.h>
#include <stdbool.h>
#include "mv.cuh"
#include "cuda_timer.cuh"
#include "error_handles.cuh"
typedef float real_t;
#define _CUDA(x) do { if((x)!=cudaSuccess) { \
printf("Error at %s:%d\n",__FILE__,__LINE__);\
exit(EXIT_FAILURE);}} while(0)
#define _CUBLAS(x) do { if((x) != CUBLAS_STATUS_SUCCESS) { \
printf("Error at %s:%d\n",__FILE__,__LINE__);\
exit(EXIT_FAILURE);}} while(0)
#define _CURAND(x) do { if((x) != CURAND_STATUS_SUCCESS) { \
printf("Error at %s:%d\n",__FILE__,__LINE__);\
exit(EXIT_FAILURE);}} while(0)
#define TEST_COLUMNS 1
#define TEST_ROWS 0
/**
* If `TEST_WRT_` is set to `TEST_COLUMNS`, then a benchmark
* will be performed with respect to columns (with a fixed
* number of rows). If it is set to `TEST_ROWS`, then a benchmark will
* run with respect to rows (fixed number of columns).
*/
#define TEST_WRT_ TEST_ROWS
#define CONSTANT_COLS 300
#define CONSTANT_ROWS 256
/**
* In order to estimate the execution time, every
* kernel is run `RUNS` times and the average is taken.
*/
#define RUNS 50
void compare_results(real_t *dev_y_cublas, real_t * dev_y,unsigned int nrows)
{
real_t * hst_y_cublas;
real_t * hst_y;
const size_t s = nrows * sizeof(real_t);
hst_y_cublas = (real_t*) malloc(s);
hst_y = (real_t*) malloc(s);
_CUDA(cudaMemcpy(hst_y, dev_y, s, cudaMemcpyDeviceToHost));
_CUDA(cudaMemcpy(hst_y_cublas, dev_y_cublas, s, cudaMemcpyDeviceToHost));
for (unsigned int i = 0; i < nrows; ++i) {
if (fabsf(hst_y_cublas[i] - hst_y[i]) > 0.001) {
printf("ERROR ------ %f\n", fabsf(hst_y_cublas[i] - hst_y[i]));
exit(EXIT_FAILURE);
}
}
if (hst_y_cublas) free(hst_y_cublas);
if (hst_y) free(hst_y);
}
void do_benchmark() {
curandGenerator_t gen;
real_t *dev_rand_data = NULL; // Random data will be allocated here!
real_t *dev_y = NULL;
real_t *dev_y_cublas = NULL;
real_t t;
real_t t_cublas;
const size_t n_rows_max = 1500;
const size_t n_cols_max = 300;
const size_t ntot = n_cols_max * (1 + n_rows_max);
const size_t size_tot = sizeof(real_t) * ntot;
float alpha = 1.0, beta = 0.0; // beta was initially set to 1.0 by mistake
cublasHandle_t handle;
_CUBLAS(cublasCreate(&handle));
start_tictoc();
_CUDA(cudaMalloc((void** )&dev_rand_data, size_tot));
_CUDA(cudaMalloc((void** )&dev_y, n_rows_max * sizeof(real_t)));
_CUDA(cudaMalloc((void** )&dev_y_cublas, n_rows_max * sizeof(real_t)));
_CURAND(curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT));
_CURAND(curandSetPseudoRandomGeneratorSeed(gen, 1234ULL));
tic();
_CURAND(curandGenerateUniform(gen, dev_rand_data, ntot));
t = toc();
printf("RNG in %f ms\n", t);
_CURAND(curandDestroyGenerator(gen));
size_t ncols = CONSTANT_COLS;
size_t nrows = CONSTANT_ROWS;
size_t runs = RUNS;
cudaMemset(dev_y_cublas, 0, n_rows_max * sizeof(real_t));
matvec<real_t>(dev_rand_data + ncols, dev_rand_data, dev_y, nrows, ncols);
_CUBLAS(cublasSgemv(handle, CUBLAS_OP_N, nrows, ncols, &alpha, dev_rand_data + ncols,
nrows, dev_rand_data, 1, &beta, dev_y_cublas, 1));
/* Compare results */
compare_results(dev_y_cublas,dev_y, nrows);
FILE * pFile;
char filename[50];
#if (TEST_WRT_ == TEST_COLUMNS)
sprintf(filename, "times_rows%lu_cols.txt", nrows);
#else
sprintf(filename, "times_cols%lu_rows.txt", ncols);
#endif
printf("Logging to : '%s'\n", filename);
pFile = fopen(filename, "w");
if (pFile == NULL) {
perror("Error opening file.");
exit(79);
}
#if (TEST_WRT_ == TEST_COLUMNS)
fprintf(pFile, "0, %lu, 0, 0\n", nrows);
for (ncols = 32; ncols < n_cols_max; ncols += 32) {
#else
fprintf(pFile, "1, %lu, 0, 0\n", ncols);
for (nrows = 32; nrows < n_rows_max; nrows += 32) {
#endif
tic();
for (short i = 0; i < runs; i++) {
matvec<real_t>(dev_rand_data + ncols, dev_rand_data, dev_y, nrows,
ncols);
}
t = toc() / runs;
tic();
for (short i = 0; i < runs; i++) {
_CUBLAS(cublasSgemv(handle, CUBLAS_OP_N, nrows, ncols, &alpha, dev_rand_data + ncols,
nrows, dev_rand_data, 1, &beta, dev_y_cublas, 1));
}
t_cublas = toc() / runs;
#if (TEST_WRT_ == TEST_COLUMNS)
fprintf(pFile, "%lu, %f, %f\n", ncols, t, t_cublas);
#else
fprintf(pFile, "%lu, %f, %f\n", nrows, t, t_cublas);
#endif
}
_CUBLAS(cublasDestroy(handle));
fclose(pFile);
if (dev_rand_data != NULL)
_CUDA(cudaFree(dev_rand_data));
stop_tictoc();
}
int main(void)
{
do_benchmark();
return EXIT_SUCCESS;
}
Finally, this is a MATLAB script I'm using to plot the execution times:
fetch_this = 'times_cols512_rows.txt';
username = 'ubuntu';
target_hostname = 'jetson';
% Do not modify below this line
eval_this=['! scp ' username '@' target_hostname ':~/mv/Debug/' fetch_this ' .'];
eval(eval_this)
set(0, 'DefaultAxesFontSize', 14);
r = csvread(fetch_this);
r_header = r(1,:);
plot(r(2:end,1), r(2:end,2)*1000, '-');
hold on
plot(r(2:end,1), r(2:end,3)*1000, '-r');
grid on;
fig_title = 'Matvec on Tegra K1 - %d %s';
if (r_header(1)==1),
xlabel('Number of rows');
title(sprintf(fig_title, r_header(2),'columns'));
else
xlabel('Number of columns');
title(sprintf(fig_title, r_header(2),'rows'));
end
ylabel('Computation time [us]');
legend('Kernel', 'cuBLAS');
axis tight
I am concerned about the performance and the scalability of my kernel, so first I would like to know how to improve the scalability with respect to the number of rows of matrix A. Second, I know that it is not very good practice to have branch divergence (and my code has), but I'm feeling I want some hints to improve it.
UPDATE :
Thanks to all your comments and suggestions, I reached the conclusion that cudaDeviceSynchronize() caused, in the first place, some peculiarities with my timing, so my initial measurements were inaccurate. Row-major ordering leads to worse results. The size of the blocks is an important tuning parameter, and changing it from 16 to 32 or 64 improves the execution time. Further benchmarking is necessary to choose the block size. To this end, one may use the following API for the kernel:
template<typename T, const uint_t blk>
__global__ void matvec_kernel(const T * RESTRICT dA, const T * RESTRICT dx,
T * RESTRICT dy, const uint_t nRows, const uint_t nx);
and call it like this from the host:
template<typename T>
__host__ void matvec(const T * RESTRICT dA, const T * RESTRICT dx,
T * RESTRICT dy, const uint_t nRows, const uint_t nx) {
uint_t blk_size_opt = 64;
/* Add code to decide the value of `blk_size_opt` */
if (blk_size_opt == 32) {
matvec_engine<T, 32>(dA, dx, dy, nRows, nx);
} else if (blk_size_opt == 64) {
matvec_engine<T, 64>(dA, dx, dy, nRows, nx);
} else if (blk_size_opt == 128) {
matvec_engine<T, 128>(dA, dx, dy, nRows, nx);
} else if (blk_size_opt == 256) {
matvec_engine<T, 256>(dA, dx, dy, nRows, nx);
}
}
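For reference, matvec_engine, which is not shown in the update, could be a thin wrapper along these lines (a sketch assuming the uint_t and RESTRICT definitions from mv.cuh above):
template<typename T, const uint_t blk>
__host__ void matvec_engine(const T * RESTRICT dA, const T * RESTRICT dx,
    T * RESTRICT dy, const uint_t nRows, const uint_t nx) {
    // --- One thread per row of A; blk threads per block
    dim3 dim_grid((nRows + blk - 1) / blk);
    dim3 dim_block(blk);
    matvec_kernel<T, blk><<<dim_grid, dim_block>>>(dA, dx, dy, nRows, nx);
}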
Let me provide some benchmarking results: first a comparison with cublasSgemv, and then the effect of the block size on the execution time (plots omitted).
First, let me write down the full working Matrix-Vector multiplication kernel employing shared memory:
template<typename T>
__global__ void matvec_kernel(const T * __restrict__ dA, const T * __restrict__ dx, T * __restrict__ dy, const unsigned int nRows, const unsigned int nCols)
{
    const unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    __shared__ T x_shared[BLOCK_SIZE];

    T y_val = 0.0;

    #pragma unroll
    for (unsigned int m = 0; m < ((nCols + BLOCK_SIZE - 1) / BLOCK_SIZE); ++m)
    {
        // --- Collaboratively load a tile of the vector x into shared memory.
        if ((m * BLOCK_SIZE + threadIdx.x) < nCols) x_shared[threadIdx.x] = dx[threadIdx.x + m * BLOCK_SIZE];
        else                                        x_shared[threadIdx.x] = 0.f;

        __syncthreads();

        // --- Guard avoids out-of-bounds reads of dA when nRows is not a multiple of BLOCK_SIZE.
        if (tid < nRows) {
            #pragma unroll
            for (unsigned int e = 0; e < BLOCK_SIZE; ++e) {
                // --- Column-major ordering - faster
                y_val += dA[tid + (e + BLOCK_SIZE * m) * nRows] * x_shared[e];
                // --- Row-major ordering - slower
                //y_val += dA[tid * nCols + (e + BLOCK_SIZE * m)] * x_shared[e];
            }
        }

        __syncthreads();
    }

    if (tid < nRows) dy[tid] = y_val;
}
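For completeness, a typical launch of this kernel might look as follows (a sketch; BLOCK_SIZE is assumed to be a compile-time constant such as 64, and dA, dx, dy are assumed to be device pointers):
// --- Hypothetical launch: one thread per row of A, rounded up to whole blocks.
dim3 dim_grid((nRows + BLOCK_SIZE - 1) / BLOCK_SIZE);
dim3 dim_block(BLOCK_SIZE);
matvec_kernel<float><<<dim_grid, dim_block>>>(dA, dx, dy, nRows, nCols);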
Unless otherwise specified, all the tests have been done on a GT540M card.
A first parameter to be optimized is BLOCK_SIZE. Changing BLOCK_SIZE changes the algorithm's performance, as witnessed by the following graph:
The following graph compares row-major ordering vs. column-major ordering; the latter is faster:
Another optimization you may wish to try is exploiting more Instruction Level Parallelism (ILP); the modified kernel below employs ILP = 2:
template<typename T>
__global__ void matvec_kernel_ILP2(const T * __restrict__ dA, const T * __restrict__ dx, T * __restrict__ dy, const unsigned int nRows, const unsigned int nCols)
{
    const unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    __shared__ T x_shared[BLOCK_SIZE];

    // --- Two independent accumulators: each thread computes two rows.
    T y_val1 = 0.0;
    T y_val2 = 0.0;

    #pragma unroll
    for (unsigned int m = 0; m < ((nCols + BLOCK_SIZE - 1) / BLOCK_SIZE); ++m)
    {
        if ((m * BLOCK_SIZE + threadIdx.x) < nCols) x_shared[threadIdx.x] = dx[threadIdx.x + m * BLOCK_SIZE];
        else                                        x_shared[threadIdx.x] = 0.f;

        __syncthreads();

        #pragma unroll
        for (unsigned int e = 0; e < BLOCK_SIZE; ++e) {
            // --- Guards avoid out-of-bounds reads of dA in the last blocks.
            if (tid < nRows)
                y_val1 += dA[tid + (e + BLOCK_SIZE * m) * nRows] * x_shared[e];
            if ((tid + gridDim.x * BLOCK_SIZE) < nRows)
                y_val2 += dA[tid + gridDim.x * BLOCK_SIZE + (e + BLOCK_SIZE * m) * nRows] * x_shared[e];
        }

        __syncthreads();
    }

    if (tid < nRows)                            dy[tid] = y_val1;
    if ((tid + gridDim.x * BLOCK_SIZE) < nRows) dy[tid + gridDim.x * BLOCK_SIZE] = y_val2;
}
This kernel should be launched with half the number of threads, for example:
dim3 dim_grid((nRows / 2 + BLOCK_SIZE - 1) / BLOCK_SIZE);
dim3 dim_block(BLOCK_SIZE);
matvec_kernel_ILP2<T><<<dim_grid, dim_block>>>(dA, dx, dy, nRows, nCols);
Finally, since you are using a device with compute capability 3.2, you can try using shuffle operations. I'm providing here the kernel using shuffle operations instead of shared memory. In this case, you should set BLOCK_SIZE = 32:
template<typename T>
__global__ void matvec_kernel_shfl(const T * __restrict__ dA, const T * __restrict__ dx, T * __restrict__ dy, const unsigned int nRows, const unsigned int nCols)
{
    const unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    T x_shfl_src, x_shfl_dest;

    T y_val = 0.0;

    #pragma unroll
    for (unsigned int m = 0; m < ((nCols + BLOCK_SIZE - 1) / BLOCK_SIZE); ++m)
    {
        // --- Each lane keeps one element of the current tile of x in a register.
        if ((m * BLOCK_SIZE + threadIdx.x) < nCols) x_shfl_src = dx[threadIdx.x + m * BLOCK_SIZE];
        else                                        x_shfl_src = 0.f;

        __syncthreads();

        // #pragma unroll
        for (int e = 0; e < 32; ++e) {
            // --- All lanes must execute the shuffle, which broadcasts lane e's element of x.
            x_shfl_dest = __shfl(x_shfl_src, e);
            // --- Column-major ordering - faster
            if (tid < nRows) y_val += dA[tid + (e + BLOCK_SIZE * m) * nRows] * x_shfl_dest;
            // --- Row-major ordering - slower
            //y_val += dA[tid * nCols + (e + BLOCK_SIZE * m)] * x_shfl_dest;
        }

        __syncthreads();
    }

    if (tid < nRows) dy[tid] = y_val;
}
Shuffle operations improve the performance over shared memory for BLOCK_SIZE = 32 on a Kepler K20c as shown by the graph below:
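As a side note, on CUDA 9 and later __shfl is deprecated in favor of __shfl_sync, which takes an explicit mask of participating lanes; the broadcast in the inner loop would then read (a sketch):
// --- CUDA 9+ replacement for the deprecated __shfl; 0xFFFFFFFF selects the full warp.
x_shfl_dest = __shfl_sync(0xFFFFFFFF, x_shfl_src, e);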
Looking at your code, I think that the way you traverse the elements of A may be the problem:
for (unsigned int e = 0; e < n_star; ++e) {
    y_val += Asub[row + e * nRows] * x_shared[e];
}
So, when nRows becomes large, you actually read from global memory (where A is stored) with a large stride. In particular, this happens in every block: threads inside the same block read from global memory in a non-consecutive fashion. This can be improved if you store the values of A row-by-row from the beginning (i.e., using row-major order). This is just a guess; I would have written a comment, but that requires a higher reputation on Stack Overflow...

Instructions Per Cycle (IPC) and Instruction Level Parallelism (ILP) in CUDA

I observe that IPC drops as ILP goes up for 32-bit int operations when trying to speed up my cryptographic kernel. The kernel consists of fairly unrolled loops of a long sequence of ADD and XOR operations, which should have a throughput of 160 ops per 192 cores per cycle on Kepler (GTX Titan/780).
IPC for my kernel hits an upper bound of 3.28, and using ILP even drops it. Apparently ILP fails to help achieve my goal of fully utilizing the pipeline, so I ran some small experiments. I put the code for ILP 4 at the end.
Profiler Measurements
Results are measured on GTX Titan.
cubin outputs are examined to make sure no instructions are eliminated during optimization.
Executed IPC is almost the same as issued IPC, so I just list one of them.
ADD instructions (XORs have identical behavior):

                       | ILP 1  | ILP 2  | ILP 4  | ILP 8
-----------------------|--------|--------|--------|-------
IPC                    | 4.00   | 3.32   | 2.72   | 3.44
Issue slot utilization | 99.17% | 59.34% | 48.61% | 61.71%
I expected ILP 2, 4 and 8 to give better performance, but they do not.
Recall that the integer throughput is 160. The 4 warp schedulers per SM should dual-issue up to 5 instructions per cycle, so the IPC should go up towards 5. How can I explain what I observed? Why is the issue slot 99% utilized when IPC = 4?
Float / Int ADD instruction mix
If I modify the code for ILP 4 to do two int ADDs and two float ADDs:
IPC: 5.1
Issue slot utilization: 99.12%
Strangely enough, it seems that the warp scheduler does a better job issuing floating-point operations.
Discussion
Available literature suggests that using ILP helps reach peak performance for floating-point operations. Why doesn't ILP apply to integers? How can I achieve this for integer operations?
My kernel theoretically should do 2.25 integer operations per candidate. This is consistent with what I observed in cuobjdump. There are 2^48 candidates, so the minimum runtime on GTX Titan should be 2.25 * 2^48 / (2688 * 160/192) / 876 MHz = 322.75 s. Is this estimation reasonable?
The measured runtime for my kernel is 523 s. This implies that the integer throughput is only about 160 * 3.28 (measured IPC) / 5 (max IPC).
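As a quick sanity check of that arithmetic, here is a throwaway host-side snippet (mine, using only the figures quoted in the question):
// --- Back-of-the-envelope check of the minimum-runtime estimate above.
#include <stdio.h>
#include <math.h>
int main(void)
{
    double ops       = 2.25 * pow(2.0, 48.0);     // --- Total integer ops for 2^48 candidates
    double int_units = 2688.0 * 160.0 / 192.0;    // --- Effective integer ALUs on GTX Titan
    double clock_hz  = 876.0e6;                   // --- Core clock
    printf("minimum runtime = %.2f s\n", ops / (int_units * clock_hz));  // --- Prints ~322.75 s
    return 0;
}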
ILP test code
// --- ILP 4: four independent add chains per thread.
__device__ int x[10];

__global__ void test(int flag = 0)
{
    int a = x[0], b = x[1], c = x[2], d = x[3];
    int _a = x[4], _b = x[5], _c = x[6], _d = x[7];

    #pragma unroll 128
    for (int i = 0; i < 51200; ++i)
    {
        asm volatile("add.u32 %0, %0, %1;": "+r"(a): "r"(_a));
        asm volatile("add.u32 %0, %0, %1;": "+r"(b): "r"(_b));
        asm volatile("add.u32 %0, %0, %1;": "+r"(c): "r"(_c));
        asm volatile("add.u32 %0, %0, %1;": "+r"(d): "r"(_d));
    }

    int v = a + b + c + d;

    // --- Dead-code guard: never true at runtime, but keeps the adds alive.
    if (flag * v == 1)
        x[0] = v;
}
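For reference, an ILP 1 counterpart (my sketch, not part of the original post) keeps a single dependent add chain, so every add must wait for the previous one; the ILP 2 and ILP 8 variants simply scale the number of independent accumulator chains:
// --- Hypothetical ILP 1 variant of the test above: one dependent add chain.
__global__ void test_ilp1(int flag = 0)
{
    int a = x[0], _a = x[4];

    #pragma unroll 128
    for (int i = 0; i < 51200; ++i)
        asm volatile("add.u32 %0, %0, %1;": "+r"(a): "r"(_a));

    // --- Same dead-code guard as above: keeps the adds from being optimized away.
    if (flag * a == 1)
        x[0] = a;
}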
Code fragment for 4 candidates
Each candidate takes 9 / 4 = 2.25 ops. Cuobjdump also verifies this.
d ^= d2(1, 3); // d2 is located in constant memory
s ^= d;
t ^= d2(1, 16);
u ^= d2(1, 17);
v ^= some_const;
flag_s = min(flag_s, s); // int min has throughput of 160
flag_t = flag_t || (s == t); // setp.or should be the same
flag_u = flag_u || (s == u);
flag_v = flag_v || (s == v);
I'm providing an answer to remove this question from the unanswered list.
I do not observe any change in the executed Instructions Per Cycle (IPC) with Instruction Level Parallelism. Overall, it is difficult to argue about the reason for the effect observed by the OP without any further information beyond that provided by the OP himself (for instance, the launch configuration).
In the code below, I'm considering an example using floats, although I have tested the same code with ints without changing the conceptual results. The code implements cyclical Multiply Add (MAD) operations with ILP=1, ILP=2 and ILP=4.
The executed IPC has been the following
ILP | IPC   | FLOPs
 1  | 3.924 | 67108864
 2  | 4.323 | 67108864
 4  | 4.016 | 67108864
for N=8192. The code has been compiled with CUDA 8.0 and run on an NVIDIA GT920M. As can be seen, the IPC remains almost constant for the different values of ILP considered. The number of Floating Point Operations (FLOPs) estimated by the code, assuming 2 FLOPs per MAD, coincides with that measured by the Visual Profiler.
THE CODE
#include<stdio.h>
#define N_ITERATIONS 8192
#include "Utilities.cuh"
#include "TimingGPU.cuh"
#define BLOCKSIZE 512
//#define DEBUG
/********************************************************/
/* KERNEL0 - NO INSTRUCTION LEVEL PARALLELISM (ILP = 1) */
/********************************************************/
__global__ void kernel0(float * __restrict__ d_a, const float * __restrict__ d_b, const float * __restrict__ d_c, const int N) {
const int tid = threadIdx.x + blockIdx.x * blockDim.x;
if (tid < N) {
float a = d_a[tid];
float b = d_b[tid];
float c = d_c[tid];
for (unsigned int i = 0; i < N_ITERATIONS; i++) {
a = a * b + c;
}
d_a[tid] = a;
}
}
/*****************************************************/
/* KERNEL1 - INSTRUCTION LEVEL PARALLELISM (ILP = 2) */
/*****************************************************/
__global__ void kernel1(float * __restrict__ d_a, const float * __restrict__ d_b, const float * __restrict__ d_c, const int N) {
const int tid = threadIdx.x + blockIdx.x * blockDim.x;
if (tid < N / 2) {
float a1 = d_a[tid];
float b1 = d_b[tid];
float c1 = d_c[tid];
float a2 = d_a[tid + N / 2];
float b2 = d_b[tid + N / 2];
float c2 = d_c[tid + N / 2];
for (unsigned int i = 0; i < N_ITERATIONS; i++) {
a1 = a1 * b1 + c1;
a2 = a2 * b2 + c2;
}
d_a[tid] = a1;
d_a[tid + N / 2] = a2;
}
}
/*****************************************************/
/* KERNEL2 - INSTRUCTION LEVEL PARALLELISM (ILP = 4) */
/*****************************************************/
__global__ void kernel2(float * __restrict__ d_a, const float * __restrict__ d_b, const float * __restrict__ d_c, const int N) {
const int tid = threadIdx.x + blockIdx.x * blockDim.x;
if (tid < N / 4) {
float a1 = d_a[tid];
float b1 = d_b[tid];
float c1 = d_c[tid];
float a2 = d_a[tid + N / 4];
float b2 = d_b[tid + N / 4];
float c2 = d_c[tid + N / 4];
float a3 = d_a[tid + N / 2];
float b3 = d_b[tid + N / 2];
float c3 = d_c[tid + N / 2];
float a4 = d_a[tid + 3 * N / 4];
float b4 = d_b[tid + 3 * N / 4];
float c4 = d_c[tid + 3 * N / 4];
for (unsigned int i = 0; i < N_ITERATIONS; i++) {
a1 = a1 * b1 + c1;
a2 = a2 * b2 + c2;
a3 = a3 * b3 + c3;
a4 = a4 * b4 + c4;
}
d_a[tid] = a1;
d_a[tid + N / 4] = a2;
d_a[tid + N / 2] = a3;
d_a[tid + 3 * N / 4] = a4;
}
}
/********/
/* MAIN */
/********/
int main() {
//const int N = 8192 * 64;
const int N = 8192;
//const int N = 1024;
TimingGPU timerGPU;
float *h_a = (float*)malloc(N*sizeof(float));
float *h_a_result_host = (float*)malloc(N*sizeof(float));
float *h_a_result_device = (float*)malloc(N*sizeof(float));
float *h_b = (float*)malloc(N*sizeof(float));
float *h_c = (float*)malloc(N*sizeof(float));
for (int i = 0; i<N; i++) {
h_a[i] = 2.;
h_b[i] = 1.;
h_c[i] = 2.;
h_a_result_host[i] = h_a[i];
for (unsigned int k = 0; k < N_ITERATIONS; k++) {
h_a_result_host[i] = h_a_result_host[i] * h_b[i] + h_c[i];
}
}
float *d_a; gpuErrchk(cudaMalloc((void**)&d_a, N*sizeof(float)));
float *d_b; gpuErrchk(cudaMalloc((void**)&d_b, N*sizeof(float)));
float *d_c; gpuErrchk(cudaMalloc((void**)&d_c, N*sizeof(float)));
gpuErrchk(cudaMemcpy(d_a, h_a, N*sizeof(float), cudaMemcpyHostToDevice));
gpuErrchk(cudaMemcpy(d_b, h_b, N*sizeof(float), cudaMemcpyHostToDevice));
gpuErrchk(cudaMemcpy(d_c, h_c, N*sizeof(float), cudaMemcpyHostToDevice));
/***********/
/* KERNEL0 */
/***********/
timerGPU.StartCounter();
kernel0<<<iDivUp(N, BLOCKSIZE), BLOCKSIZE>>>(d_a, d_b, d_c, N);
#ifdef DEBUG
gpuErrchk(cudaPeekAtLastError());
gpuErrchk(cudaDeviceSynchronize());
#endif
// --- Remember: timing is in ms
printf("Number of operations = %f; GFlops = %f\n", (float)N*(float)N_ITERATIONS, (1.e-6)*((float)N*(float)N_ITERATIONS) / timerGPU.GetCounter());
gpuErrchk(cudaMemcpy(h_a_result_device, d_a, N*sizeof(float), cudaMemcpyDeviceToHost));
for (int i = 0; i<N; i++) if (h_a_result_device[i] != h_a_result_host[i]) { printf("Error at i=%i! Host = %f; Device = %f\n", i, h_a_result_host[i], h_a_result_device[i]); return 1; }
/***********/
/* KERNEL1 */
/***********/
gpuErrchk(cudaMemcpy(d_a, h_a, N*sizeof(float), cudaMemcpyHostToDevice));
timerGPU.StartCounter();
kernel1<<<iDivUp(N / 2, BLOCKSIZE), BLOCKSIZE>>>(d_a, d_b, d_c, N);
#ifdef DEBUG
gpuErrchk(cudaPeekAtLastError());
gpuErrchk(cudaDeviceSynchronize());
#endif
// --- Remember: timing is in ms
printf("Number of operations = %f; GFlops = %f\n", (float)N*(float)N_ITERATIONS, (1.e-6)*((float)N*(float)N_ITERATIONS) / timerGPU.GetCounter());
gpuErrchk(cudaMemcpy(h_a_result_device, d_a, N*sizeof(float), cudaMemcpyDeviceToHost));
for (int i = 0; i<N; i++) if (h_a_result_device[i] != h_a_result_host[i]) { printf("Error at i=%i! Host = %f; Device = %f\n", i, h_a_result_host[i], h_a_result_device[i]); return 1; }
/***********/
/* KERNEL2 */
/***********/
gpuErrchk(cudaMemcpy(d_a, h_a, N*sizeof(float), cudaMemcpyHostToDevice));
timerGPU.StartCounter();
kernel2<<<iDivUp(N / 4, BLOCKSIZE), BLOCKSIZE>>>(d_a, d_b, d_c, N);
#ifdef DEBUG
gpuErrchk(cudaPeekAtLastError());
gpuErrchk(cudaDeviceSynchronize());
#endif
// --- Remember: timing is in ms
printf("Number of operations = %f; GFlops = %f\n", (float)N*(float)N_ITERATIONS, (1.e-6)*((float)N*(float)N_ITERATIONS) / timerGPU.GetCounter());
gpuErrchk(cudaMemcpy(h_a_result_device, d_a, N*sizeof(float), cudaMemcpyDeviceToHost));
for (int i = 0; i<N; i++) if (h_a_result_device[i] != h_a_result_host[i]) { printf("Error at i=%i! Host = %f; Device = %f\n", i, h_a_result_host[i], h_a_result_device[i]); return 1; }
cudaDeviceReset();
return 0;
}

Unable to execute device kernel in CUDA

I am trying to call a device function from within a global kernel. My global kernel performs a matrix multiplication, and my device function finds the maximum value and its index in each column of the product matrix. Following is the code:
__device__ void MaxFunction(float* Pd, float* max)
{
int x = (threadIdx.x + blockIdx.x * blockDim.x);
int y = (threadIdx.y + blockIdx.y * blockDim.y);
int k = 0;
int temp = 0; int temp_idx = 0;
for (k = 0; k < wB; ++k) {
if(Pd[x*wB + y] > temp){
temp = Pd[x*wB + y];
temp_idx = x*wB + y;
}
max[y*2 + 0] = temp;
max[y*2 + 1] = temp_idx;
}
}
__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, float* max)
{
// declare cache in the shared memory
__shared__ float Mds[blockD][blockD];
__shared__ float Nds[blockD][blockD];
float Pvalue = 0;
// Loop over the Md and Nd block dimension required to compute the Pd element
for (int m = (wA * blockD * blockIdx.y), n = (blockD * blockIdx.x);
m < ((wA * blockD * blockIdx.y)+wA-1);
m += blockD, n += (blockD*hB)){
// collaboratively loading of Md and Nd blocks into shared memory
Mds[threadIdx.y][threadIdx.x] = Md[m + wA * threadIdx.y + threadIdx.x];
Nds[threadIdx.y][threadIdx.x] = Nd[n + wA * threadIdx.y + threadIdx.x];
__syncthreads();
// keep track of the running sum
for (int k = 0; k < blockD; k++)
Pvalue += Mds[threadIdx.y][k] * Nds[k][threadIdx.x];
__syncthreads();
}
// write back to the global memory
int p = hB * blockD * blockIdx.y + blockD * blockIdx.x;
Pd[p + hB * threadIdx.y + threadIdx.x] = Pvalue;
__syncthreads();
MaxFunction(Pd, max);
}
The main code:
#include<stdio.h>
#include "cuda.h"
#include<stdlib.h>
#define blockD 32
const int wA = 128;
const int hA = 1024;
const int wB = 128;
const int hB = wA;
int main(void){
void MatrixMultiplication(float *, float *, float *, float *);
int size_A = wA * hA * sizeof(float);
int size_B = wB * hB * sizeof(float);
int size_C = wB * hA * sizeof(float);
int size_max = 2 * wB * sizeof(float);
float *M, *N, *P, *C;
// allocate memory on the CPU
M = (float*)malloc(size_A);
N = (float*)malloc(size_B);
P = (float*)malloc(size_max);
C = (float*)malloc(size_C);
// initialize the matrices
for (int y=0; y < hA; y++) {
for (int x=0; x < wA; x++){
M[y*wA + x] = x;
}
}
for (int y=0; y<hB; y++) {
for (int x=0; x<wB; x++){
N[y*wB + x] = x;
}
}
MatrixMultiplication(M, N, P, C);
//Write
FILE *f1;
int i, j;
f1 = fopen("max_val.txt","w");
for(i=0; i < (wB * 2); i+=2){
fprintf(f1,"%d\t%d\n",int(P[i]),int(P[i+1]));
}
fclose(f1);
f1 = fopen("Prod_mat.txt","w");
for(i=0; i < 2; i++){
for(j=0; j < wB; j++){
fprintf(f1,"%d\t",int(C[i*wB + j]));
}
fprintf(f1,"\n");
}
fclose(f1);
free( M );
free( N );
free( P );
free( C );
cudaDeviceReset();
return 0;
}
void MatrixMultiplication(float *M, float *N, float *P, float *C) {
int size_A = wA * hA * sizeof(float);
int size_B = wB * hB * sizeof(float);
int size_C = wB * hA * sizeof(float);
int size_max = 2 * wB * sizeof(float);
float *Md, *Nd, *Pd, *max;
// allocate memory on the GPU
cudaMalloc((void**)&Md, size_A);
cudaMalloc((void**)&Nd, size_B);
cudaMalloc((void**)&Pd, size_C);
cudaMalloc((void**)&max, size_max);
// transfer M and N to device memory
cudaMemcpy(Md, M, size_A, cudaMemcpyHostToDevice);
cudaMemcpy(Nd, N, size_B, cudaMemcpyHostToDevice);
// kernel invocation code
dim3 dimBlock(blockD, blockD);
dim3 dimGrid(wA/blockD, hB/blockD);
//Execute Kernel
MatrixMulKernel<<<dimGrid, dimBlock>>>( Md, Nd, Pd, max);
// transfer P from device
cudaMemcpy(P, max, size_max, cudaMemcpyDeviceToHost);
cudaMemcpy(C, Pd, size_C, cudaMemcpyDeviceToHost);
cudaFree(Md);
cudaFree(Nd);
cudaFree(Pd);
cudaFree(max);
}
The matrix multiplication result is fine (verified using MATLAB), but I am not able to get the max values and their corresponding indices. I would appreciate it if anyone could kindly point out what I am doing wrong. The max variable contains only garbage when I run the above code.
Apparently you are attempting to find the maximum value in each column, as well as the offset to that value.
But all of your threads that share the same y are hammering on the same location for the max value (max[y*2 + 0]). This isn't recommended, as there is no way to sort out the resulting race condition. You should use atomic operations, or other methods (e.g. a reduction), to handle multiple threads updating a single max value this way.
Since you need to update two values atomically (the max value and its location), it's not a simple matter of replacing your plain access with a standard atomic function. However, since you are dealing with two adjacent 32-bit quantities, you may be interested in my answer here.
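To sketch the idea behind that approach (this is my illustration, not the linked answer's code): pack the value into the high 32 bits of a 64-bit word and the index into the low 32 bits, so that a single 64-bit atomicMax updates both consistently. This assumes non-negative float values, whose bit patterns preserve ordering when compared as unsigned integers, and requires compute capability 3.5 or newer for the 64-bit atomicMax:
// --- Hypothetical sketch: atomically track (max value, index) as one 64-bit word.
__device__ void atomicMaxValIdx(unsigned long long *packed, float val, unsigned int idx)
{
    // --- The value sits in the high bits, so it dominates the 64-bit comparison;
    //     the index rides along in the low bits.
    unsigned long long candidate =
        ((unsigned long long)(unsigned int)__float_as_int(val) << 32) | idx;
    atomicMax(packed, candidate);
}
// --- Unpacking afterwards:
//     float        best_val = __int_as_float((int)(packed >> 32));
//     unsigned int best_idx = (unsigned int)(packed & 0xFFFFFFFFu);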
By the way, I think MATLAB's native matrix multiply on gpuArray should be faster than any matrix multiply code you write. But it would require the Parallel Computing Toolbox.