Intel® Extension for Scikit-learn Ridge Regression for the Higgs dataset
[1]:
from timeit import default_timer as timer
from sklearn import metrics
from sklearn.model_selection import train_test_split
import warnings
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import LabelEncoder
from IPython.display import HTML
warnings.filterwarnings("ignore")
Download the data
[2]:
dataset = fetch_openml(data_id=45570, as_frame=True)
Preprocessing
Let’s encode categorical features with LabelEncoder
[3]:
# Access the data and target
x = dataset.data
y = dataset.target
for col in ['jet 3 b-tag', 'jet 1 b-tag', 'jet 4 b-tag', 'jet 2 b-tag']:
    le = LabelEncoder().fit(x[col])
    x[col] = le.transform(x[col])
[4]:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=0)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
[4]:
((9900000, 28), (1100000, 28), (9900000,), (1100000,))
[5]:
from sklearn.preprocessing import MinMaxScaler, StandardScaler
scaler_x = MinMaxScaler()
scaler_y = StandardScaler()
[6]:
y_train = y_train.to_numpy().reshape(-1, 1)
y_test = y_test.to_numpy().reshape(-1, 1)
scaler_x.fit(x_train)
x_train = scaler_x.transform(x_train)
x_test = scaler_x.transform(x_test)
scaler_y.fit(y_train)
y_train = scaler_y.transform(y_train).ravel()
y_test = scaler_y.transform(y_test).ravel()
Patch original Scikit-learn with Intel® Extension for Scikit-learn
Intel® Extension for Scikit-learn (previously known as daal4py) contains drop-in replacement functionality for the stock Scikit-learn package. You can take advantage of the performance optimizations of Intel® Extension for Scikit-learn by adding just two lines of code before the usual Scikit-learn imports:
[7]:
from sklearnex import patch_sklearn
patch_sklearn()
Intel(R) Extension for Scikit-learn* enabled (https://github.com/uxlfoundation/scikit-learn-intelex)
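As a side note, not part of the original cells: patching does not have to be global. Below is a hedged sketch, assuming the documented sklearnex API, of two alternatives: patching only selected estimators from Python, or enabling the acceleration for an unmodified script from the command line.
# Sketch (assumes the documented sklearnex API): patch only the estimators you need.
from sklearnex import patch_sklearn, unpatch_sklearn

patch_sklearn(["Ridge"])   # accelerate Ridge only, leave the rest of scikit-learn untouched
# ... train and predict as usual ...
unpatch_sklearn()          # revert to stock scikit-learn

# The extension can also be enabled without changing the source of a script:
#     python -m sklearnex my_script.py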
[8]:
from sklearn.linear_model import Ridge

params = {
    "alpha": 0.3,
    "fit_intercept": False,
    "random_state": 0,
    "copy_X": False,
}
start = timer()
model = Ridge(**params).fit(x_train, y_train)
train_patched = timer() - start
f"Intel® Extension for Scikit-learn time: {train_patched:.2f} s"
[8]:
'Intel® Extension for Scikit-learn time: 0.46 s'
[9]:
y_predict = model.predict(x_test)
mse_metric_opt = metrics.mean_squared_error(y_test, y_predict)
f"Patched Scikit-learn MSE: {mse_metric_opt}"
[9]:
'Patched Scikit-learn MSE: 0.9023910206230603'
Train the same algorithm with original Scikit-learn
To cancel the optimizations, we use unpatch_sklearn and reimport the Ridge class, since the name imported earlier still refers to the patched implementation (a short sketch after the next cell illustrates this).
[11]:
from sklearnex import unpatch_sklearn
unpatch_sklearn()
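A small sketch, not part of the original notebook, of why the reimport in the next cell is needed: a name already bound in the session keeps pointing at whichever implementation was active when it was imported, and the class's __module__ attribute shows which one that is. The exact module path of the patched class is an implementation detail, so it is only printed here, not asserted.
# Sketch: check which implementation a (re)imported estimator resolves to.
from sklearnex import patch_sklearn, unpatch_sklearn

patch_sklearn()
from sklearn.linear_model import Ridge
print(Ridge.__module__)   # module of the accelerated class (implementation detail)

unpatch_sklearn()
from sklearn.linear_model import Ridge   # reimport picks up the restored stock class
print(Ridge.__module__)   # stock scikit-learn, e.g. 'sklearn.linear_model._ridge'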
[12]:
from sklearn.linear_model import Ridge
start = timer()
model = Ridge(**params).fit(x_train, y_train)
train_unpatched = timer() - start
f"Original Scikit-learn time: {train_unpatched:.2f} s"
[12]:
'Original Scikit-learn time: 4.45 s'
[13]:
y_predict = model.predict(x_test)
mse_metric_original = metrics.mean_squared_error(y_test, y_predict)
f"Original Scikit-learn MSE: {mse_metric_original}"
[13]:
'Original Scikit-learn MSE: 0.9023910206230603'
[14]:
HTML(
    f"<h3>Compare MSE metric of patched Scikit-learn and original</h3>"
    f"MSE metric of patched Scikit-learn: {mse_metric_opt} <br>"
    f"MSE metric of unpatched Scikit-learn: {mse_metric_original} <br>"
    f"Metrics ratio: {mse_metric_opt/mse_metric_original} <br>"
    f"<h3>With Scikit-learn-intelex patching you can:</h3>"
    f"<ul>"
    f"<li>Use your Scikit-learn code for training and prediction with minimal changes (a couple of lines of code);</li>"
    f"<li>Get comparable model quality</li>"
    f"<li>Get a <strong>{(train_unpatched/train_patched):.1f}x</strong> speedup.</li>"
    f"</ul>"
)
[14]:
Compare MSE metric of patched Scikit-learn and original
MSE metric of patched Scikit-learn: 0.9023910206230603
MSE metric of unpatched Scikit-learn: 0.9023910206230603
Metrics ratio: 1.0
With Scikit-learn-intelex patching you can:
- Use your Scikit-learn code for training and prediction with minimal changes (a couple of lines of code);
- Get comparable model quality
- Get a 9.7x speedup.