Anatomy of a DPC++ Program

We start with a DPC++ example application to illustrate basic DPC++ concepts, then break the DPC++ programming model down into four areas.

The following example uses the oneAPI programming model to add two vectors. When compiled and executed, the sample program computes the 10-element vector add in parallel on the accelerator, assuming the accelerator has multiple compute elements capable of executing in parallel. The sample illustrates the models that software developers need to employ in their programs. We identify sections of the code by line number and discuss their role, highlighting their relation to the programming and execution models.

Note

This sample code is intended to illustrate the models that comprise the oneAPI program model; it is not intended to be a typical program.

 1  #include <CL/sycl.hpp>
 2  using namespace sycl;
 3
 4  const int SIZE = 10;
 5
 6  void show_platforms() {
 7    auto platforms = platform::get_platforms();
 8
 9    for (auto &platform : platforms) {
10      std::cout << "Platform: "
11                << platform.get_info<info::platform::name>()
12                << std::endl;
13
14      auto devices = platform.get_devices();
15      for (auto &device : devices) {
16        std::cout << "  Device: "
17                  << device.get_info<info::device::name>()
18                  << std::endl;
19      }
20    }
21  }
22
23  void vec_add(int *a, int *b, int *c) {
24    range<1> a_size{SIZE};
25
26    buffer<int> a_buf(a, a_size);
27    buffer<int> b_buf(b, a_size);
28    buffer<int> c_buf(c, a_size);
29
30    queue q;
31
32    q.submit([&](handler &h) {
33        auto c_res = c_buf.get_access<access::mode::write>(h);
34        auto a_in = a_buf.get_access<access::mode::read>(h);
35        auto b_in = b_buf.get_access<access::mode::read>(h);
36
37        h.parallel_for(a_size,
38                       [=](id<1> idx) {
39                         c_res[idx] = a_in[idx] + b_in[idx];
40                       });
41      });
42  }
43
44  int main() {
45    int a[SIZE], b[SIZE], c[SIZE];
46
47    for (int i = 0; i < SIZE; ++i) {
48      a[i] = i;
49      b[i] = i;
50      c[i] = i;
51    }
52
53    show_platforms();
54    vec_add(a, b, c);
55
56    for (int i = 0; i < SIZE; i++) std::cout << c[i] << std::endl;
57    return 0;
58  }

Running the program produces the following output:

Platform: Intel(R) FPGA Emulation Platform for OpenCL(TM)
  Device: Intel(R) FPGA Emulation Device
Platform: Intel(R) OpenCL
  Device: Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz
Platform: Intel(R) CPU Runtime for OpenCL(TM) Applications
  Device: Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz
Platform: SYCL host platform
  Device: SYCL host device
0
2
4
6
8
10
12
14
16
18

DPC++ is single source, which means the host code and the device code can be placed in the same file and compiled with a single invocation of the compiler. Therefore, when examining a DPC++ program, the first step is to understand the delineation between host code and device code. DPC++ programs have three different scopes.

Application scope includes all program lines not in command group scope. The application scope is responsible for creating DPC++ queues that are connected to devices (line 30), allocating data that can be accessed from the device (lines 26-28), and submitting tasks to the queues (line 32).

Kernel scope is the body of the lambda function found on lines 38-40. Each invocation of the kernel function adds a single pair of elements from the a and b vectors and stores the result in c.

Command group scope can be found on lines 32-41. A command group contains a single kernel function together with code that coordinates the passing of data and control between the host and the device. Lines 33-35 create accessors, which enable the kernel to access the data in the buffers created on lines 26-28. The parallel_for on line 37 launches an instance of the kernel at every point of an index space and passes the coordinates of that point to the kernel function. The index space is defined on line 24; it is a one-dimensional space that ranges from 0 to SIZE - 1, that is, 0 to 9.
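To make the three scopes concrete, the following skeleton (a minimal sketch, not part of the sample above; the kernel body is intentionally empty) marks where each scope begins and ends.

#include <CL/sycl.hpp>
using namespace sycl;

int main() {                                // Application scope: ordinary host code.
  queue q;
  q.submit([&](handler &h) {                // Command group scope begins here.
    h.parallel_for(range<1>{8},
                   [=](id<1> idx) {         // Kernel scope: executes on the device.
                     (void)idx;             // A real kernel body would go here.
                   });
  });                                       // Command group scope ends here.
  q.wait();                                 // Back in application scope.
  return 0;
}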

Now we walk through the program line by line. Every DPC++ program must include sycl.hpp (line 1). All types live in the sycl namespace, and the using namespace directive on line 2 is a convenience that lets the rest of the program omit the sycl:: prefix.

Lines 6-21 illustrate the use of the platform model by enumerating all the platforms available on the system and the devices contained in each platform.
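Beyond enumeration, the platform model also lets a program bind a queue to a particular device rather than accepting the default. The sketch below is illustrative and uses the pre-SYCL-2020 selector classes that match the style of this example; cpu_selector or gpu_selector could replace default_selector to restrict the choice.

#include <CL/sycl.hpp>
using namespace sycl;

int main() {
  // default_selector picks the most capable device the runtime finds;
  // cpu_selector{} or gpu_selector{} would narrow the search.
  queue q{default_selector{}};

  std::cout << "Queue bound to: "
            << q.get_device().get_info<info::device::name>()
            << std::endl;
  return 0;
}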

Lines 26-28 and 33-35 show the role of the memory model. Device and host do not access the same memory by default; the memory model defines the rules for access. Line 45 allocates memory on the host for the vectors, and lines 26-28 wrap buffers around that memory. Kernels read and write buffer data through the accessors created on lines 33-35. The accessor on line 33 gives the kernel write access to c_buf under the name c_res, and the accessors on lines 34 and 35 give it read access to the other two buffers.
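Host code can also read buffer data while a buffer is still alive by creating a host accessor, which blocks until any kernels writing the buffer have completed. The sketch below is a minimal illustration in the same style as the sample; the names N, out_buf, and host_view are ours, not part of the example.

#include <CL/sycl.hpp>
using namespace sycl;

int main() {
  constexpr int N = 10;
  buffer<int> out_buf{range<1>{N}};   // Buffer with no host backing memory.

  queue q;
  q.submit([&](handler &h) {
    auto out = out_buf.get_access<access::mode::write>(h);
    h.parallel_for(range<1>{N}, [=](id<1> idx) {
      out[idx] = static_cast<int>(2 * idx[0]);
    });
  });

  // A host accessor (no handler argument) waits for the kernel above to
  // finish and then exposes the buffer's contents to host code.
  auto host_view = out_buf.get_access<access::mode::read>();
  for (int i = 0; i < N; ++i) std::cout << host_view[i] << std::endl;
  return 0;
}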

Lines 30-32 demonstrate the use of the application execution model. A command group is submitted to a queue on line 32. The SYCL runtime launches the kernel function in the command group on the device connected to the queue once the requirements of the command group are met. For this example, the accessors create the requirement that the buffers be accessible on the device, so submitting the command group triggers copying of the data contained in the buffers from host memory to the device. The runtime launches the kernel when the data movement completes.
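The same requirement mechanism orders multiple command groups: when a second command group uses a buffer that a first command group writes, the runtime runs the second only after the first completes. A sketch of that pattern, with illustrative names (buf, data) not taken from the sample:

#include <CL/sycl.hpp>
using namespace sycl;

int main() {
  constexpr int N = 10;
  int data[N] = {0};
  {
    buffer<int> buf(data, range<1>{N});
    queue q;

    // First command group: writes buf.
    q.submit([&](handler &h) {
      auto w = buf.get_access<access::mode::write>(h);
      h.parallel_for(range<1>{N}, [=](id<1> i) {
        w[i] = static_cast<int>(i[0]);
      });
    });

    // Second command group: its read_write requirement on buf makes the
    // runtime schedule it after the first command group has finished.
    q.submit([&](handler &h) {
      auto rw = buf.get_access<access::mode::read_write>(h);
      h.parallel_for(range<1>{N}, [=](id<1> i) { rw[i] += 1; });
    });
  } // The buffers go out of scope here and data is written back to the host.

  for (int i = 0; i < N; ++i) std::cout << data[i] << std::endl;
  return 0;
}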

Lines 38-40 illustrate the kernel execution model. The parallel_for launches an instance of the kernel function for every point in the index space denoted by a_size, and the instances are distributed among the processing elements of the device.
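The index space does not have to be one-dimensional. The sketch below launches the same kind of data-parallel kernel over a two-dimensional range; the names ROWS, COLS, and m_buf are illustrative. An nd_range could be used instead when explicit work-group sizes matter, but that is beyond this example.

#include <CL/sycl.hpp>
using namespace sycl;

int main() {
  constexpr int ROWS = 4, COLS = 8;
  buffer<int, 2> m_buf{range<2>{ROWS, COLS}};

  queue q;
  q.submit([&](handler &h) {
    auto m = m_buf.get_access<access::mode::write>(h);
    // One kernel instance per (row, column) point in the 2D index space.
    h.parallel_for(range<2>{ROWS, COLS}, [=](id<2> idx) {
      m[idx] = static_cast<int>(idx[0] * COLS + idx[1]);
    });
  });

  auto host_m = m_buf.get_access<access::mode::read>();
  std::cout << "m[3][7] = " << host_m[id<2>{3, 7}] << std::endl;
  return 0;
}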

DPC++ uses C++ scopes and object lifetimes to concisely express synchronization. The vectors start in host memory. When the host memory for the vectors is passed to the buffer constructors on lines 26-28, the buffers take ownership of that memory, and any use of the original host memory while the buffers exist is undefined. When a buffer's destructor runs at the end of the containing scope, the runtime ensures that the kernels accessing the buffer have finished and synchronizes the data back to the original host memory.
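In the sample, that synchronization point is the end of vec_add: the buffers go out of scope when the function returns, so main can safely read c afterwards. If the host needed the results before the end of the function, the usual idiom is to limit the buffers' lifetime with an explicit block, as in this self-contained sketch (vec_add_blocked is our illustrative variant; it repeats the sample's SIZE and headers):

#include <CL/sycl.hpp>
using namespace sycl;

const int SIZE = 10;

void vec_add_blocked(int *a, int *b, int *c) {
  range<1> a_size{SIZE};
  {
    buffer<int> a_buf(a, a_size), b_buf(b, a_size), c_buf(c, a_size);
    queue q;

    q.submit([&](handler &h) {
      auto c_res = c_buf.get_access<access::mode::write>(h);
      auto a_in = a_buf.get_access<access::mode::read>(h);
      auto b_in = b_buf.get_access<access::mode::read>(h);

      h.parallel_for(a_size, [=](id<1> idx) {
        c_res[idx] = a_in[idx] + b_in[idx];
      });
    });
  } // Buffers destroyed: the kernel has completed and c holds the results.

  // The original host arrays may be used again from this point on.
  std::cout << "c[0] = " << c[0] << std::endl;
}

int main() {
  int a[SIZE], b[SIZE], c[SIZE];
  for (int i = 0; i < SIZE; ++i) { a[i] = i; b[i] = i; c[i] = 0; }
  vec_add_blocked(a, b, c);
  return 0;
}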

The next sections discuss these four models in more detail: the platform model, the execution model, the memory model, and the kernel model.