
Using Intel® Math Kernel Library with MathWorks* MATLAB* on Intel® Xeon Phi™ Coprocessor System


Overview

This guide is intended to help developers use the latest version of Intel® Math Kernel Library (Intel® MKL) with MathWorks* MATLAB* on Intel® Xeon Phi™ Coprocessor System.

Intel MKL is a computational math library designed to accelerate application performance and reduce development time. It includes highly optimized and threaded dense and sparse Linear Algebra routines, Fast Fourier transforms (FFT) routines, Vector Math routines, and Statistical functions for Intel processors and coprocessors.

MATLAB is an interactive software program that performs mathematical computations and visualization. Internally, MATLAB uses Intel MKL Basic Linear Algebra Subroutines (BLAS) and Linear Algebra Package (LAPACK) routines to perform the underlying computations when running on Intel processors.

Intel MKL now includes an Automatic Offload (AO) feature that enables computationally intensive Intel MKL functions to offload part of their workload to attached Intel Xeon Phi coprocessors automatically and transparently.

As a result, MATLAB performance can benefit from Intel Xeon Phi coprocessors via the Intel MKL AO feature when problem sizes are large enough to amortize the cost of transferring data to the coprocessors. The article describes how to enable Intel MKL AO when Intel Xeon Phi coprocessors are present within a MATLAB computing environment.

Prerequisite

Prior to getting started, obtain access to the following software and hardware:

  1. The latest version of Intel MKL or Intel® Composer XE (which includes the Intel® C/C++ Compiler and Intel MKL), available from https://registrationcenter.intel.com/regcenter/register.aspx; alternatively, register at https://software.intel.com/en-us/ to get a free 30-day evaluation copy
  2. The latest version of MATLAB, available from http://www.mathworks.com/products/matlab/
  3. An Intel Xeon Phi coprocessor development system, as described at https://software.intel.com/en-us/mic-developer

The 64-bit versions of Intel MKL and MATLAB should be installed on at least the development system. This article was created based on MATLAB R2014a and Intel MKL for Windows* 11.1 update 1 and update 2 on the following system+:

Host machine: Intel® Xeon® CPU E5-2697 v2, 2 Twelve-Core CPUs (30MB LLC, 2.7GHz), 128GB of RAM; OS: Windows Server 2008 R2 Enterprise

Coprocessors: 2 Intel® Xeon Phi™ Coprocessors 7120A, each with 61 cores (30.5MB total cache, 1.2GHz), 16GB GDDR5 Memory

Software: Intel® Math Kernel Library (Intel® MKL) 11.1 update 1 and update 2, Intel® Manycore Platform Software Stack (MPSS) 3.2.27270.1

+ Intel MKL 11.1 update 1 was upgraded to update 2 while the article was being drafted, so both versions were tested.

Below is an outline of the steps performed. Here is the link to the whole article.

Steps

Step 1: Determine which version of Intel MKL is used within MATLAB via the MATLAB command “version -blas”

  • Intel MKL version 11.0.5 is used within MATLAB R2014a
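
For reference, the check looks like this in the MATLAB Command Window (the exact version string varies with the MATLAB release; R2014a reports Intel MKL 11.0.5):

    % Report the BLAS and LAPACK libraries that MATLAB has loaded
    version -blas
    version -lapack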

Step 2: Check if the Intel MKL version inside of MATLAB supports Intel Xeon Phi coprocessors

  • Intel MKL has supported Intel Xeon Phi coprocessors since release 11.0 for Linux* OS, and since release 11.1 for Windows* OS.

Step 3: Upgrade Intel MKL version in MATLAB

  • Use mkl_rt.dll (see the sketch below)
  • Create a custom dynamic library (optional; click the Download button)
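
One way to apply the mkl_rt.dll approach is to point MATLAB at the newer single dynamic library through the BLAS_VERSION and LAPACK_VERSION environment variables before MATLAB is started. The sketch below is an illustration only; the paths are examples and must be adjusted to match your Intel MKL installation:

    rem Example paths only - adjust to your Intel MKL installation
    set MKL_REDIST=C:\Program Files (x86)\Intel\Composer XE\redist\intel64\mkl
    set BLAS_VERSION=%MKL_REDIST%\mkl_rt.dll
    set LAPACK_VERSION=%MKL_REDIST%\mkl_rt.dll
    rem Make sure the dependent Intel MKL DLLs can be resolved at run time
    set PATH=%MKL_REDIST%;%PATH%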

Step 4: Enable Intel MKL Automatic Offload (AO) in MATLAB via MKL_MIC_ENABLE 

  • Set MKL_MIC_MAX_MEMORY=16G; set MKL_MIC_ENABLE=1
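
For example, from a Windows command prompt (MATLAB inherits the settings only when launched from the same window; it is assumed here that the MATLAB bin directory is on PATH):

    rem Enable Intel MKL Automatic Offload and cap the coprocessor memory AO may use
    set MKL_MIC_ENABLE=1
    set MKL_MIC_MAX_MEMORY=16G
    rem Start MATLAB from this command prompt so it inherits the environment
    matlab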

Step 5: Verify the Intel MKL version and ensure that AO is enabled on the Intel Xeon Phi coprocessors

  • Run the version -blas, version -lapack, and getenv('MKL_MIC_ENABLE') commands and check the output (see the example below)
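
In the MATLAB Command Window, the checks look like this; the BLAS and LAPACK strings should now report the newer Intel MKL version, and getenv should return '1':

    % Confirm which MKL build MATLAB loaded and that Automatic Offload is enabled
    version -blas
    version -lapack
    getenv('MKL_MIC_ENABLE')   % returns '1' when AO is enabled in the environment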

Step 6: Compare performance 

  • Accelerate the commonly used matrix multiplication A*B in MATLAB (see the timing sketch below)
  • Accelerate the BLAS function dgemm() in MATLAB (optional; click the Download button to get the matrixMultiplyM.c file)
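
A minimal timing sketch for the A*B comparison is shown below; the matrix size is illustrative, and AO only pays off when the problem is large enough to amortize the data transfer. Run the same script once with MKL_MIC_ENABLE unset (or 0) and once with it set to 1, and compare the times:

    % Time C = A*B for a large double-precision matrix multiplication
    n = 10000;                     % example size; adjust to fit your memory
    A = rand(n);
    B = rand(n);
    tic;  C = A*B;  t = toc;
    fprintf('n = %d: A*B took %.2f seconds\n', n, t);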

Summary

Intel MKL provides an Automatic Offload (AO) feature for the Intel Xeon Phi coprocessor. With AO, certain Intel MKL functions can transfer part of the computation to the Intel Xeon Phi coprocessor automatically. When problem sizes are large enough to amortize the cost of data transfer, the performance of these functions can benefit from using both the host CPU and the Intel Xeon Phi coprocessor for computation. Because offloading happens transparently with AO, third-party software that uses Intel MKL functions can benefit from this feature automatically, making it easy for such software to run faster on systems with Intel Xeon Phi coprocessors.

The article describes how to enable Intel MKL AO for MathWorks MATLAB on an Intel Xeon Phi coprocessor system. The general steps are as follows:

  1. Source the environment using compilervars.sh or mklvars.sh intel64 (on Windows, the corresponding compilervars.bat or mklvars.bat intel64)
  2. Upgrade the Intel MKL version in MATLAB to the latest version supporting Intel Xeon Phi coprocessors
  3. Set MKL_MIC_MAX_MEMORY=16G; set MKL_MIC_ENABLE=1
  4. Run MATLAB

A simple test shows that, on one system with two Intel Xeon Phi coprocessors, the commonly used matrix multiplication within MATLAB (C=A*B) achieves a 2.6x speedup when Intel MKL AO is enabled, compared to doing the same computation on the CPUs only.

