Channel: Intel® Software - Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library

MKL lapack slow on Xeon Phi KNL


I'm comparing a Xeon Phi Knights Landing (64-core) against an Intel i7-6900K, side by side, for speed. Both run Python 3 with the latest NumPy (1.11.1) linked against the latest MKL (11.3.3) libraries, installed via Anaconda.

The operation in question is a call to numpy.linalg.lstsq, which in turn calls LAPACK. With MKL_NUM_THREADS=1 and matrix dimensions ranging from 100 to 1,000, the i7 is about 5x faster. Increasing the thread count also scales better on the i7; with MKL_NUM_THREADS unset, the gap grows to roughly 8x in the i7's favor.
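For reference, a minimal sketch of the timing harness (an assumption on my part; the original script wasn't posted here, and the overdetermined 2n x n shape and best-of-3 timing are choices I made for illustration):

```python
# Pin MKL to one thread *before* NumPy loads it (assumption: single-thread run).
import os
os.environ.setdefault("MKL_NUM_THREADS", "1")

import time
import numpy as np

def time_lstsq(n, repeats=3):
    """Best-of-`repeats` wall time for an overdetermined (2n x n) least-squares solve."""
    rng = np.random.RandomState(0)
    a = rng.rand(2 * n, n)
    b = rng.rand(2 * n)
    best = float("inf")
    x = None
    for _ in range(repeats):
        t0 = time.perf_counter()
        # rcond=-1 keeps the NumPy 1.11-era default behavior explicit.
        x, residuals, rank, sv = np.linalg.lstsq(a, b, rcond=-1)
        best = min(best, time.perf_counter() - t0)
    return best, x

if __name__ == "__main__":
    for n in (100, 300, 1000):
        t, _ = time_lstsq(n)
        print("n=%4d  best of 3: %.4f s" % (n, t))
```

Run once per machine with the same MKL_NUM_THREADS setting and compare the printed times directly.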

This surprises me, because in matrix-multiply speed tests (using Theano's check_blas.py) the KNL is roughly comparable to, or even faster than, the i7 on a per-core basis, and can be 10x faster with no thread limit.
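The matrix-multiply comparison can be reproduced without Theano with a plain NumPy GEMM benchmark along the same lines as check_blas.py (a sketch, not the actual script; the matrix size and repeat count are assumptions):

```python
import time
import numpy as np

def gemm_gflops(n=2000, repeats=3):
    """Best-case GFLOP/s for an n x n double-precision matrix multiply (DGEMM via MKL)."""
    rng = np.random.RandomState(0)
    a = rng.rand(n, n)
    b = rng.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a.dot(b)  # dispatches to MKL's dgemm when NumPy is MKL-linked
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n ** 3 / best / 1e9  # one n x n GEMM does ~2*n^3 flops

if __name__ == "__main__":
    print("DGEMM: %.1f GFLOP/s" % gemm_gflops())
```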

I'm not fully knowledgeable about least-squares solver internals, but the majority (about 4/5) of the SIMD uop counts during the lstsq routine are reported as vectorized (measured with perf stat -e r20C2,r40C2 ...). Maybe it's not really utilizing the full register width (8 doubles per AVX-512 vector) most of the time? Could that alone explain the difference? The matrix multiply, by contrast, reports an overwhelming majority of vectorized operations.
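For anyone wanting to repeat the measurement, something like the following works; the raw codes r20C2 and r40C2 are, as I understand it, the KNL events UOPS_RETIRED.PACKED_SIMD and UOPS_RETIRED.SCALAR_SIMD, and bench_lstsq.py is a placeholder name for the benchmark script:

```shell
# Count packed vs. scalar SIMD uops retired by the benchmark (CSV output).
perf stat -x, -e r20C2,r40C2 -o counts.csv python3 bench_lstsq.py

# Fraction of retired SIMD uops that were packed (i.e. vectorized):
awk -F, '/r20C2/ {p=$1} /r40C2/ {s=$1} \
    END {printf "packed fraction: %.2f\n", p/(p+s)}' counts.csv
```

Note that a packed uop on KNL may still be operating on less than the full 512-bit width, so even a high packed fraction doesn't guarantee 8-wide double-precision work.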

Is there any hope of improvement?  

Happy to provide more numbers or test scripts.

Thanks,

Adam

Thread Topic: Question
