Quick Start#
Before You Begin#
oneDAL is located in the <install_dir>/dal directory, where <install_dir>
is the directory in which Intel® oneAPI Base Toolkit was installed.
The current version of oneDAL with SYCL support is available for Linux* and Windows* 64-bit operating systems. The
prebuilt oneDAL libraries can be found in the <install_dir>/dal/<version>/redist
directory.
The following dependencies are required in order to build the oneDAL examples with SYCL extensions:
Intel® oneAPI DPC++/C++ Compiler 2021.1 release or later (for SYCL support)
OpenCL™ runtime 1.2 or later (to run the SYCL runtime)
GNU* Make on Linux*, nmake on Windows*
End-to-end Example#
Below you can find a typical usage workflow for a oneDAL algorithm on GPU. The example uses the Principal Component Analysis (PCA) algorithm.
The following steps show how to:
Read the data from a CSV file
Run the training and inference operations for PCA
Access intermediate results obtained at the training stage
Include the following header, which makes all oneDAL declarations available:
#include "oneapi/dal.hpp" /* Standard library headers required by this example */ #include <cassert> #include <iostream>
Create a SYCL* queue with the desired device selector. In this case, the GPU selector is used:
const auto queue = sycl::queue{ sycl::gpu_selector_v };
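If the target machine does not always have a GPU, you may want to fall back to another device. The snippet below is a minimal sketch of one way to do this, assuming that constructing a queue with sycl::gpu_selector_v throws sycl::exception when no GPU device is found; the make_queue helper is illustrative and not part of oneDAL:

#include <sycl/sycl.hpp>

/* A minimal sketch: prefer a GPU, otherwise fall back to the default device.
   make_queue is a hypothetical helper, not a oneDAL or SYCL API. */
sycl::queue make_queue() {
    try {
        return sycl::queue{ sycl::gpu_selector_v };
    }
    catch (const sycl::exception&) {
        /* No GPU device was found; let the runtime pick a default device */
        return sycl::queue{ sycl::default_selector_v };
    }
}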
Since all oneDAL declarations are in the oneapi::dal namespace, import all declarations from the oneapi namespace to use dal instead of oneapi::dal for brevity:
using namespace oneapi;
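If importing everything from the oneapi namespace is broader than you want, a namespace alias is a common alternative (a sketch, not required by oneDAL):

/* Alternative: alias only the oneDAL namespace instead of importing all of oneapi */
namespace dal = oneapi::dal;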
Use a CSV data source to read the data from the CSV file into a table:
const auto data = dal::read<dal::table>(queue, dal::csv::data_source{"data.csv"});
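As a quick sanity check (a sketch, not part of the workflow itself), you can print the dimensions of the loaded table before training:

/* Sanity check: print the dimensions of the loaded table */
std::cout << "Rows: " << data.get_row_count()
          << ", Columns: " << data.get_column_count() << std::endl;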
Create a PCA descriptor, configure its parameters, and run the training algorithm on the data loaded from the CSV file:
const auto pca_desc = dal::pca::descriptor<float>{}
                          .set_component_count(3)
                          .set_deterministic(true);

const dal::pca::train_result train_res = dal::train(queue, pca_desc, data);
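The descriptor is also templated on the computational method. As an optional sketch, and assuming the dal::pca::method::svd method is available in your oneDAL version, a descriptor selecting it explicitly could look like this:

/* Sketch: a PCA descriptor that selects the SVD method explicitly
   (dal::pca::method::svd is assumed to be available in your oneDAL version) */
const auto pca_desc_svd = dal::pca::descriptor<float, dal::pca::method::svd>{}
                              .set_component_count(3);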
Print the learned eigenvectors:
const dal::table eigenvectors = train_res.get_eigenvectors();
const auto acc = dal::row_accessor<const float>{ eigenvectors };

for (std::int64_t i = 0; i < eigenvectors.get_row_count(); i++) {
    /* Get the i-th row from the table; the returned array stores a pointer to USM */
    const dal::array<float> eigenvector = acc.pull(queue, { i, i + 1 });
    assert(eigenvector.get_count() == eigenvectors.get_column_count());

    std::cout << i << "-th eigenvector: ";
    for (std::int64_t j = 0; j < eigenvector.get_count(); j++) {
        std::cout << eigenvector[j] << " ";
    }
    std::cout << std::endl;
}
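The train result also exposes other quantities from the decomposition. As a hedged sketch, assuming your oneDAL version provides train_res.get_eigenvalues(), the eigenvalues can be printed with the same row accessor pattern:

/* Sketch: print the eigenvalues (get_eigenvalues() is assumed to be available
   on the PCA train result in your oneDAL version) */
const dal::table eigenvalues = train_res.get_eigenvalues();
const auto eigval_acc = dal::row_accessor<const float>{ eigenvalues };
const dal::array<float> eigval_row = eigval_acc.pull(queue, { 0, 1 });
std::cout << "Eigenvalues: ";
for (std::int64_t j = 0; j < eigval_row.get_count(); j++) {
    std::cout << eigval_row[j] << " ";
}
std::cout << std::endl;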
Use the trained model for inference to reduce dimensionality of the data:
const dal::pca::model model = train_res.get_model();

const dal::table data_transformed =
    dal::infer(queue, pca_desc, model, data).get_transformed_data();

assert(data_transformed.get_column_count() == 3);
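To inspect the result, the transformed table can be printed with the same row accessor pattern used for the eigenvectors above (a sketch; only the first few rows are shown):

/* Sketch: print the first few rows of the transformed data */
const auto transformed_acc = dal::row_accessor<const float>{ data_transformed };
std::int64_t rows_to_print = data_transformed.get_row_count();
if (rows_to_print > 5) {
    rows_to_print = 5;
}
for (std::int64_t i = 0; i < rows_to_print; i++) {
    const dal::array<float> row = transformed_acc.pull(queue, { i, i + 1 });
    for (std::int64_t j = 0; j < row.get_count(); j++) {
        std::cout << row[j] << " ";
    }
    std::cout << std::endl;
}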
Build and Run Examples#
Perform the following steps to build and run examples demonstrating the
basic usage scenarios of oneDAL with DPC++. Go to
<install_dir>/dal/<version> and then set up the environment as shown in the example below:
Note
All content below that starts with # is considered a comment and
should not be run with the code.
Set up the required environment for oneDAL (variables such as CPLUS_INCLUDE_PATH, LIBRARY_PATH, and LD_LIBRARY_PATH):
On Linux, there are two possible ways to set up the required environment: via the vars.sh script or via modulefiles.
To set up the oneDAL environment via the vars.sh script, run source ./env/vars.sh.
To set up the oneDAL environment via the setvars.sh script, run source ./setvars.sh.
To set up the oneDAL environment via modulefiles:
Initialize modules:
source $MODULESHOME/init/bash
Note
Refer to Environment Modules documentation for details.
Provide modules with a path to the modulefiles directory:
module use ./modulefiles
Run the module:
module load dal
On Windows, to set up the oneDAL environment, run call /env/vars.bat or call setvars.bat.
Copy ./examples/oneapi/dpc to a writable directory if necessary (since it creates temporary files):
cp -r ./examples/oneapi/dpc ${WRITABLE_DIR}
Set up the compiler environment for Intel® oneAPI DPC++/C++ Compiler. See Get Started with Intel® oneAPI DPC++/C++ Compiler for details.
Set up the oneMKL environment in case of static linking:
On Linux: source mkl/latest/env/vars.sh
On Windows: call mkl/latest/env/vars.bat
Build and run examples:
Note
You need to have write permissions to the examples folder to build examples, and execute permissions to run them. Otherwise, you need to copy the examples/oneapi/dpc and examples/oneapi/data folders to a directory with the right permissions. These two folders must be kept at the same directory level relative to each other. If you want to build examples from /examples/oneapi/cpp, you can set CC and CXX to other compilers as well.
On Linux:
# Navigate to the examples directory and build the examples
cd /examples/oneapi/dpc
export CC=icx
export CXX=icpx   # or export CXX=icx
cmake -G "Unix Makefiles" -DEXAMPLES_LIST=svm_two_class_thunder   # Generates makefiles for all SVM examples matching the passed name
make   # Compiles and runs the generated SVM examples
cmake -G "Unix Makefiles" -DONEDAL_LINK=static   # Generates makefiles for the static version
make   # Compiles and runs all the examples
On Windows:
# Navigate to the examples directory and build the examples
cd /examples/oneapi/dpc
set CC=icx
set CXX=icx
cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DEXAMPLES_LIST=svm_two_class_thunder   # Generates makefiles for all SVM examples matching the passed name
nmake   # Compiles and runs the generated SVM examples
cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DONEDAL_LINK=static   # Generates makefiles for the static version
nmake   # Compiles and runs all the examples
The resulting example binaries and log files are written into the _cmake_results directory.
Note
You should run the examples from the examples/oneapi/dpc folder, not from the _cmake_results folder. Most examples require the data to be stored in the examples/oneapi/data folder and reference it via a relative path that starts from the examples/oneapi/dpc folder.
You can build traditional C++ examples located in the examples/oneapi/cpp folder in a similar way.
See also