Channel: Intel® Software - Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library

Visual Studio 2015, Parallel Studio 2017 Update 2: link does not include the FFTW etc. libraries


I have just repaired my VS 2015 installation after some weird problem (maybe caused by the PS 2017 Update 2 install).

After that I had to completely uninstall and reinstall all my PS products, because there was no other way to get them to reintegrate into VS 2015 (changing components via Control Panel > Uninstall, for example, did not do it).

With this done, the Intel Performance Libraries entries appear in the VS pulldowns again, and the "Use MKL" option now lets C++ find the include files. The link, on the other hand, is a dismal failure. It doesn't work, and there is no obvious way to determine why: for example, the "whole command line" property on the project's linker property page does not include any MKL libraries, even when linking does work.

To say that I find the integration of PS into VS a source of frustration rather than help is an understatement. It is poorly documented and a continuous source of problems.

1>findBestPoints.obj : error LNK2019: unresolved external symbol LAPACKE_dlasrt referenced in function "public: virtual void __cdecl FindBestPoints_Impl_Sort::prepareToPop(void)" (?prepareToPop@FindBestPoints_Impl_Sort@@UEAAXXZ)
1>map3d_optimizer_old.obj : error LNK2001: unresolved external symbol LAPACKE_dlasrt
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_init_threads referenced in function "public: void __cdecl FFTWBase::init(double *,double (*)[2],unsigned __int64,unsigned __int64,unsigned __int64,int)" (?init@FFTWBase@@QEAAXPEANPEAY01N_K22H@Z)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_plan_with_nthreads referenced in function "public: void __cdecl FFTWBase::init(double *,double (*)[2],unsigned __int64,unsigned __int64,unsigned __int64,int)" (?init@FFTWBase@@QEAAXPEANPEAY01N_K22H@Z)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_cleanup_threads referenced in function "public: void __cdecl FFTWBase::fini(void)" (?fini@FFTWBase@@QEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_plan_dft_r2c_2d referenced in function "protected: void __cdecl FFTWBase::create_plan(void)" (?create_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_plan_dft_c2r_2d referenced in function "protected: void __cdecl FFTWBase::create_plan(void)" (?create_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_plan_dft_r2c_3d referenced in function "protected: void __cdecl FFTWBase::create_plan(void)" (?create_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_plan_dft_c2r_3d referenced in function "protected: void __cdecl FFTWBase::create_plan(void)" (?create_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_destroy_plan referenced in function "protected: void __cdecl FFTWBase::delete_plan(void)" (?delete_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_cleanup referenced in function "protected: void __cdecl FFTWBase::delete_plan(void)" (?delete_plan@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftw_execute referenced in function "protected: void __cdecl FFTWBase::forward_execute(void)" (?forward_execute@FFTWBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_init_threads referenced in function "public: void __cdecl FFTWFBase::init(float *,float (*)[2],unsigned __int64,unsigned __int64,unsigned __int64,int)" (?init@FFTWFBase@@QEAAXPEAMPEAY01M_K22H@Z)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_plan_with_nthreads referenced in function "public: void __cdecl FFTWFBase::init(float *,float (*)[2],unsigned __int64,unsigned __int64,unsigned __int64,int)" (?init@FFTWFBase@@QEAAXPEAMPEAY01M_K22H@Z)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_cleanup_threads referenced in function "public: void __cdecl FFTWFBase::fini(void)" (?fini@FFTWFBase@@QEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_plan_dft_r2c_2d referenced in function "protected: void __cdecl FFTWFBase::create_plan(void)" (?create_plan@FFTWFBase@@IEAAXXZ)
1>spider.obj : error LNK2001: unresolved external symbol fftwf_plan_dft_r2c_2d
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_plan_dft_r2c_2d
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_plan_dft_c2r_2d referenced in function "protected: void __cdecl FFTWFBase::create_plan(void)" (?create_plan@FFTWFBase@@IEAAXXZ)
1>spider.obj : error LNK2001: unresolved external symbol fftwf_plan_dft_c2r_2d
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_plan_dft_c2r_2d
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_plan_dft_r2c_3d referenced in function "protected: void __cdecl FFTWFBase::create_plan(void)" (?create_plan@FFTWFBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_plan_dft_c2r_3d referenced in function "protected: void __cdecl FFTWFBase::create_plan(void)" (?create_plan@FFTWFBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_destroy_plan referenced in function "protected: void __cdecl FFTWFBase::delete_plan(void)" (?delete_plan@FFTWFBase@@IEAAXXZ)
1>spider.obj : error LNK2001: unresolved external symbol fftwf_destroy_plan
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_destroy_plan
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_cleanup referenced in function "protected: void __cdecl FFTWFBase::delete_plan(void)" (?delete_plan@FFTWFBase@@IEAAXXZ)
1>fft_base.obj : error LNK2019: unresolved external symbol fftwf_execute referenced in function "protected: void __cdecl FFTWFBase::forward_execute(void)" (?forward_execute@FFTWFBase@@IEAAXXZ)
1>spider.obj : error LNK2001: unresolved external symbol fftwf_execute
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_execute
1>fft_fftw3.obj : error LNK2019: unresolved external symbol fftw_malloc referenced in function "public: __cdecl FFTWTransformer::FFTWTransformer(unsigned __int64,unsigned __int64,unsigned __int64)" (??0FFTWTransformer@@QEAA@_K00@Z)
1>fft_fftw3.obj : error LNK2019: unresolved external symbol fftw_free referenced in function "public: __cdecl FFTWTransformer::~FFTWTransformer(void)" (??1FFTWTransformer@@QEAA@XZ)
1>fft_fftw3.obj : error LNK2019: unresolved external symbol fftwf_malloc referenced in function "public: __cdecl FFTWFTransformer::FFTWFTransformer(unsigned __int64,unsigned __int64,unsigned __int64)" (??0FFTWFTransformer@@QEAA@_K00@Z)
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_malloc
1>fft_fftw3.obj : error LNK2019: unresolved external symbol fftwf_free referenced in function "public: __cdecl FFTWFTransformer::~FFTWFTransformer(void)" (??1FFTWFTransformer@@QEAA@XZ)
1>spider_old.obj : error LNK2001: unresolved external symbol fftwf_free
1>initialize.obj : error LNK2019: unresolved external symbol LAPACKE_dgels referenced in function "void __cdecl mkl_solveNotdetermined(double *,int,int,double *,int)" (?mkl_solveNotdetermined@@YAXPEANHH0H@Z)
1>pca_optimizer.obj : error LNK2001: unresolved external symbol LAPACKE_dgels
1>initialize.obj : error LNK2019: unresolved external symbol LAPACKE_sgels referenced in function "void __cdecl mkl_solveNotdetermined(float *,int,int,float *,int)" (?mkl_solveNotdetermined@@YAXPEAMHH0H@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol vsldSSNewTask referenced in function "private: int __cdecl Pca::mkl_cov(double *,int,int,double *,double *)" (?mkl_cov@Pca@@AEAAHPEANHH00@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol vsldSSEditCovCor referenced in function "private: int __cdecl Pca::mkl_cov(double *,int,int,double *,double *)" (?mkl_cov@Pca@@AEAAHPEANHH00@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol vsldSSCompute referenced in function "private: int __cdecl Pca::mkl_cov(double *,int,int,double *,double *)" (?mkl_cov@Pca@@AEAAHPEANHH00@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol vslSSDeleteTask referenced in function "private: int __cdecl Pca::mkl_cov(double *,int,int,double *,double *)" (?mkl_cov@Pca@@AEAAHPEANHH00@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol vsldSSEditTask referenced in function "private: double __cdecl Pca::mkl_meanVec(double *,int)" (?mkl_meanVec@Pca@@AEAANPEANH@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol LAPACKE_dsytrd referenced in function "private: double * __cdecl Pca::mkl_eig(double *,int)" (?mkl_eig@Pca@@AEAAPEANPEANH@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol LAPACKE_dorgtr referenced in function "private: double * __cdecl Pca::mkl_eig(double *,int)" (?mkl_eig@Pca@@AEAAPEANPEANH@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol LAPACKE_dsteqr referenced in function "private: double * __cdecl Pca::mkl_eig(double *,int)" (?mkl_eig@Pca@@AEAAPEANPEANH@Z)
1>pca_optimizer.obj : error LNK2019: unresolved external symbol cblas_dgemm referenced in function "private: double * __cdecl Pca::mkl_multiplyMat(double *,int,int,double *,int,int)" (?mkl_multiplyMat@Pca@@AEAAPEANPEANHH0HH@Z)
1>C:\local\ipccsb\ROME1.1\Windows\rome_map3d\x64\Debug\rome_map3d.exe : fatal error LNK1120: 36 unresolved externals
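 
For reference, a workaround when the integration does not inject the link inputs is to add the MKL libraries explicitly under Project Properties > Linker > Input > Additional Dependencies. A minimal sketch for a 64-bit threaded build (library names taken from the standard MKL link line; adjust to your version and threading choice):

mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib

with Additional Library Directories pointing at the mkl\lib\intel64 folder (and compiler\lib\intel64 for libiomp5md) of the installation. The fftw_*/fftwf_* symbols above come from MKL's built-in FFTW3 interface, so with recent MKL versions no separate FFTW wrapper library should be needed.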
 

Thread Topic: Bug Report

Why does MPI impact the speed of MKL's DFT?


My code:

// -*- C++ -*-

# include <cmath>
# include <ctime>
# include <cstring>
# include <cstdio>

# include <immintrin.h>  // for _mm_malloc / _mm_free
# include "mkl.h"

int main (int argc, char * argv[])
{
  MKL_LONG D[2] = {SIZE, SIZE};
  MKL_LONG C = COUNT;
  MKL_LONG ST[3] = {0, (D[1] * sizeof(double) + 63) / 64 * (64 / sizeof(double)), 1};
  MKL_LONG DI = D[0] * ST[1];
  MKL_LONG SI = D[0] * ST[1];
  double SC = 1.0 / std::sqrt((double)SI);
  struct timespec BE, EN;

  double*const Efft_r = (double*)_mm_malloc(sizeof(double) * SI * C * 2, 64);
  memset(Efft_r, 0, sizeof(double) * SI  * C * 2);
  double*const Efft_i = Efft_r + SI * C;

  Efft_r[0] = 1.0;

  clock_gettime (CLOCK_REALTIME, &BE);
  for (int i=0; i<LOOP; ++i)
    {
      MKL_LONG status;
      DFTI_DESCRIPTOR_HANDLE hand;
      DftiCreateDescriptor(&hand, DFTI_DOUBLE, DFTI_COMPLEX, 2, D);
      DftiSetValue(hand, DFTI_INPUT_STRIDES, ST);
      DftiSetValue(hand, DFTI_OUTPUT_STRIDES, ST);
      DftiSetValue(hand, DFTI_NUMBER_OF_TRANSFORMS, C);
      DftiSetValue(hand, DFTI_INPUT_DISTANCE, DI);
      DftiSetValue(hand, DFTI_COMPLEX_STORAGE, DFTI_REAL_REAL);
      DftiSetValue(hand, DFTI_FORWARD_SCALE, SC);
      DftiSetValue(hand, DFTI_BACKWARD_SCALE, SC);
      DftiSetValue(hand, DFTI_THREAD_LIMIT, 1);
      DftiSetValue(hand, DFTI_NUMBER_OF_USER_THREADS, 1);
      DftiCommitDescriptor(hand);
      __assume_aligned(Efft_r, 64);
      __assume_aligned(Efft_i, 64);
      DftiComputeForward(hand, Efft_r, Efft_i);
      DftiFreeDescriptor(&hand);
    }
  clock_gettime (CLOCK_REALTIME, &EN);
  // MKL_LONG is 64-bit here, so cast for the %lld format specifiers
  printf("DFTI_COMPLEX_STORAGE: DFTI_REAL_REAL\nLOOP:   \t%d\nSIZE:   \t%lld X %lld\nSTRIDES:\t%lld %lld %lld\nNUMBER: \t%lld\nDISTANCE:\t%lld\n\t\t\t\t%.9fs\n",
	 LOOP,
	 (long long)D[0], (long long)D[1],
	 (long long)ST[0], (long long)ST[1], (long long)ST[2],
	 (long long)C,
	 (long long)DI,
	 double(EN.tv_sec-BE.tv_sec)+double(EN.tv_nsec-BE.tv_nsec)/1e9);
  _mm_free(Efft_r);

  return 0;
}

This code was compiled with icpc using the flags "-mkl -DSIZE=4096 -DLOOP=1 -DCOUNT=3".

When I run this program without MPI, the output is below:

$ ./a.out
DFTI_COMPLEX_STORAGE: DFTI_REAL_REAL
LOOP:   	1
SIZE:   	4096 X 4096
STRIDES:	0 4096 1
NUMBER: 	3
DISTANCE:	16777216
				0.322017125s

When I run the same program with MPI, the output is below:

$ mpirun -n 1 ./a.out
DFTI_COMPLEX_STORAGE: DFTI_REAL_REAL
LOOP:   	1
SIZE:   	4096 X 4096
STRIDES:	0 4096 1
NUMBER: 	3
DISTANCE:	16777216
				1.606980538s

The program without MPI runs much faster than with MPI. I have tried different values of SIZE, but the results are similar.

I do not know why. If I must use MPI, is there any way to keep MKL's speed?

Linear System Solving for Tiny Matrices


We are looking to solve linear systems with tiny complex matrices (around 10x10) very fast.

We would like to use iterative methods, because we have to solve a sequence of about 100 linear systems and the solution of each system is a very good initial guess for the next one.

We have verified the direct solution method in the latest MKL 2017 Update 2, but the speed is not sufficient. I believe iterative methods could be the right solution.
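
For context, the direct path we tested looks roughly like this (a minimal sketch via the LAPACKE interface; matrix contents and sizes are placeholders):

#include <mkl.h>
#include <mkl_lapacke.h>
#include <vector>
#include <complex>

int main() {
    const lapack_int n = 10, nrhs = 1;
    // Hypothetical 10x10 complex system A*x = b, column-major; values are placeholders.
    std::vector<std::complex<double>> A(n * n, std::complex<double>(1.0, 0.0));
    std::vector<std::complex<double>> b(n, std::complex<double>(1.0, 0.0));
    std::vector<lapack_int> ipiv(n);
    for (lapack_int i = 0; i < n; ++i)
        A[i + n * i] = std::complex<double>(10.0, 0.0);  // make A diagonally dominant
    // One LU factorization plus solve; b is overwritten with the solution.
    lapack_int info = LAPACKE_zgesv(LAPACK_COL_MAJOR, n, nrhs,
        reinterpret_cast<lapack_complex_double*>(A.data()), n, ipiv.data(),
        reinterpret_cast<lapack_complex_double*>(b.data()), n);
    return info == 0 ? 0 : 1;
}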

What do you suggest?

Thank you

Gianluca

Intel MKL library works fine on Windows but causes a "segmentation fault" on Unix after the mkl_dcsrcoo call


I have attached a sample code below:

main:

=====
int main(int argc, char **argv)
{
    Mat test = makeTestMat(3, 3);
    wlsFilter(test, test);
    return 0;
}

wlsFilter:
=======
void wlsFilter(Mat& src, Mat& dst, float lambda, float alpha)
{

    float eps = 2.2204e-11; 
    float smallNum = 0.0001;

    Mat L;
    cv::log(src + eps, L);

    int r = src.rows;
    int c = src.cols;
    int k = r*c;

    Mat dy;
    diffy(L, dy);
    dy = F(dy, lambda, alpha, smallNum);
    copyMakeBorder(dy, dy, 0, 1, 0, 0, BORDER_CONSTANT, Scalar::all(0));
    dy = dy.t();
    dy = dy.reshape(1, k);

    Mat dx;
    diffx(L, dx);
    dx = F(dx, lambda, alpha, smallNum);
    copyMakeBorder(dx, dx, 0, 0, 0, 1, BORDER_CONSTANT, Scalar::all(0));
    dx = dx.t();
    dx = dx.reshape(1, k);

    Mat B = Mat(k, 2, CV_32FC1);
    dx.copyTo(B.col(0));
    dy.copyTo(B.col(1));

    Mat e = dx.clone();
    Mat w;
    copyMakeBorder(dx, w, r, 0, 0, 0, BORDER_CONSTANT, Scalar::all(0));
    w = w.rowRange(0, w.rows - r);

    Mat s = dy.clone();
    Mat n;
    copyMakeBorder(dy, n, 1, 0, 0, 0, BORDER_CONSTANT, Scalar::all(0));
    n = n.rowRange(0, n.rows - 1);

    Mat D = 1 - (e + w + s + n);

    solveSparse_MKL(src, B, D, r, dst);

}

solveSparse_MKL:
================
void solveSparse_MKL(Mat& img, Mat& B, Mat& D, int r, Mat &dst)
{
    MKL_INT* i_csr = 0;
    MKL_INT* j_csr = 0;
    double* a_csr = 0;

    _DOUBLE_PRECISION_t* rhs = 0;

    MKL_INT nNonZeros = spDiag2_MKL(B, -r, -1, D, i_csr, j_csr, a_csr);
.
.
.
.
}

spDiag2_MKL:
============
int spDiag2_MKL(Mat& B, int d1, int d2, Mat& D, MKL_INT*& i_csr, MKL_INT*& j_csr, double*& a_csr)
{
    MKL_INT* rowind = 0;
    MKL_INT* colind = 0;
    double* acoo = 0;

    MKL_INT nnz = 0;

    Vec2i off1, off2;
    if (d1 > 0) { off1 = Vec2i(0, d1); }
    else { off1 = Vec2i(-d1, 0); }

    if (d2 > 0) { off2 = Vec2i(0, d2); }
    else { off2 = Vec2i(-d2, 0); }

    for (int i = 0; i < B.rows; ++i)
    {
        int i1 = i + off1[0];
        int j1 = i + off1[1];

        int i2 = i + off2[0];
        int j2 = i + off2[1];

        if (i1 < B.rows && j1 < B.rows)
        {
            nnz++;
        }
        if (i2 < B.rows && j2 < B.rows)
        {
            nnz++;
        }
        ++nnz;
    }
    rowind = new MKL_INT[nnz];
    colind = new MKL_INT[nnz];
    acoo = new double[nnz];
    MKL_INT ind = 0;
    for (int i = 0; i < B.rows; ++i)
    {
        int i1 = i + off1[0];
        int j1 = i + off1[1];

        int i2 = i + off2[0];
        int j2 = i + off2[1];

        if (i1 < B.rows && j1 < B.rows)
        {
            if (j1 > i1)
            {
                rowind[ind] = i1 + 1;
                colind[ind] = j1 + 1;
                acoo[ind] = B.at<float>(i, 0);
                ++ind;
            }
            else
            {
                rowind[ind] = j1 + 1;
                colind[ind] = i1 + 1;
                acoo[ind] = B.at<float>(i, 0);
                ++ind;
            }
        }
        if (i2 < B.rows && j2 < B.rows)
        {
            if (j2 > i2)
            {
                rowind[ind] = i2 + 1;
                colind[ind] = j2 + 1;
                acoo[ind] = B.at<float>(i, 1);
                ++ind;
            }
            else
            {
                rowind[ind] = j2 + 1;
                colind[ind] = i2 + 1;
                acoo[ind] = B.at<float>(i, 1);
                ++ind;
            }
        }
        rowind[ind] = i + 1;
        colind[ind] = i + 1;
        acoo[ind] = D.at<float>(i);
        ++ind;
    }
    MKL_INT m = B.rows;
    MKL_INT n = B.rows;

    a_csr = new double[nnz];
    i_csr = new MKL_INT[m + 1]; // m+1
    j_csr = new MKL_INT[nnz];

    MKL_INT info;
    MKL_INT job[8] = { 2, // COO to CSR
        1, // 1 based indexing in CSR rows
        1, // 1 based indexing in CSR cols
        0, //
        nnz, // number of the non-zero elements
        0, // job indicator
        0,
        0
    };

    
    mkl_dcsrcoo(job, &m, a_csr, j_csr, i_csr, &nnz, acoo, rowind, colind, &info);
.
.
.
.
}

 

 

Description:

In the above code, at the call "mkl_dcsrcoo(job, &m, a_csr, j_csr, i_csr, &nnz, acoo, rowind, colind, &info);":

The values of m, nnz, acoo, rowind and colind are the same on both Windows and Unix, but the resulting a_csr, j_csr and i_csr values differ.

Can you please tell me why they differ?

Before calling mkl_dcsrcoo the values are the same, but after calling it the values differ, so I suspect the issue occurs inside the mkl_dcsrcoo routine. Please suggest any solutions.

Thanks in advance.

 

Regards,

CIBIN

 

Thread Topic: Question

zgetrf error handling regression


Hello,

We are in the process of trying to upgrade our MKL from 11.2.2.1 to 2017 update 2. We noticed a change in how a singular matrix is handled by zgetrf. In 11.2.2.1, zgetrf would return (via info) a positive number indicating the problem pivot point. Now 2017.2 throws a floating point division by zero exception and we do not know the problem pivot number. Was this an intentional change? If so, how do we find out the problem pivot number?

A simple test case is a 4x4 complex matrix represented by:

The 16 array entries, as (re, im) pairs:

[ 0]  (0.0, 1000000.0)
[ 1]  (0.0, 1000000.0)
[ 2]  (0.0, 0.0)
[ 3]  (0.0, 0.0)
[ 4]  (0.0, 1000000.0)
[ 5]  (0.0, 1000000.0)
[ 6]  (0.0, 0.0)
[ 7]  (0.0, 0.0)
[ 8]  (0.0, 0.0)
[ 9]  (0.0, 0.0)
[10]  (1000000.0005243536, -16.191775445209515)
[11]  (-1000000.0, 0.0)
[12]  (0.0, 0.0)
[13]  (0.0, 0.0)
[14]  (-1000000.0, 0.0)
[15]  (1000000.0005243536, -16.191775445209515)
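
A minimal repro sketch with these values (assuming the LAPACKE interface and column-major order; the first two columns are identical, so the matrix is exactly singular):

#include <mkl.h>
#include <mkl_lapacke.h>
#include <cstdio>

int main() {
    // The 16 values above in array order, as {re, im} pairs.
    MKL_Complex16 a[16] = {
        {0.0, 1e6}, {0.0, 1e6}, {0.0, 0.0}, {0.0, 0.0},
        {0.0, 1e6}, {0.0, 1e6}, {0.0, 0.0}, {0.0, 0.0},
        {0.0, 0.0}, {0.0, 0.0}, {1000000.0005243536, -16.191775445209515}, {-1e6, 0.0},
        {0.0, 0.0}, {0.0, 0.0}, {-1e6, 0.0}, {1000000.0005243536, -16.191775445209515}
    };
    lapack_int ipiv[4];
    // With 11.2.2.1 this returned info > 0 (the problem pivot); with 2017.2 an
    // FP division-by-zero exception is raised before info is ever seen.
    lapack_int info = LAPACKE_zgetrf(LAPACK_COL_MAJOR, 4, 4, a, 4, ipiv);
    printf("info = %d\n", (int)info);
    return 0;
}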

Exception info:

First-chance exception at 0x00007FFBFAFEC926 (mkl_avx2.dll) in blah.exe: 0xC000008E: Floating-point division by zero (parameters: 0x0000000000000000). In our exe, we translate select structured exceptions like this to C++ exceptions so we can deal with computation errors at a higher level.

 

Thanks,

Paul

 

No support for random number generator routines?


I am referring to the two routines:

call random_number() and call random_seed()

Those are not included in any of the categories, and your A-to-Z index is not available.

Are they supported elsewhere?

 

Using the free version of MKL and then purchasing it


Hi,

I hope I chose the right place for my question. The question is about the licence for MKL.

We tried the free version of MKL 2017 Update 2 and enjoyed it, so we started the purchasing procedure. That takes time in a company, but we already have binaries and can build our software, so it would be great to finish the testing and the purchase at the same time.

We use MKL from Intel Parallel Studio XE Composer Edition for C++ for Windows (static linking) and Linux (dynamic linking). Should we rebuild everything with the purchased version of MKL, or is it enough to just install the licence key?

Sorry if this information is published somewhere; I didn't find it.

Kind regards,
Oleg

Thread Topic: Question

Undefined reference to many functions in the MKL library


I'm using Eclipse with the MKL library from the Academic Research Performance Libraries from Intel (Linux*). I've added the library to the project properties path, but I'm still getting undefined references to many functions declared in certain headers (mkl_lapacke.h and mkl_cblas.h). What's wrong? What am I missing?
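
For reference, a typical dynamic link line for the GNU toolchain on Linux (from the standard MKL link recommendations; adjust MKLROOT and the threading layer to your install) is:

$ g++ myprog.o -L${MKLROOT}/lib/intel64 -Wl,--no-as-needed -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lpthread -lm -ldl

In Eclipse, the same names go under Project Properties > C/C++ Build > Settings > Linker > Libraries (without the lib prefix or .so suffix), plus the library search path. Adding only the include path fixes compilation but not the undefined references; those come from the link step.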


A problem with installation


Hi everybody!

I tried to install l_mkl_2017.2.174 (Manual RPM) on Ubuntu 17.04 but I did not succeed.

 

~/Downloads/l_mkl_2017.2.174/rpm$ rpm -ivh --nodeps --ignorearch --force-debian *.rpm
warning: intel-comp-l-all-vars-174-17.0.2-174.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 1911e097: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:intel-tbb-libs-174-2017.4-174    ################################# [  1%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-tbb-libs-174-2017.4-174.noarch: install failed
   2:intel-openmp-l-all-174-17.0.2-174################################# [  3%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/compiler: cpio: mkdir
error: intel-openmp-l-all-174-17.0.2-174.i486: install failed
   3:intel-mkl-ps-common-174-2017.2-17################################# [  4%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-common-174-2017.2-174.i486: install failed
   4:intel-openmp-l-ps-libs-174-17.0.2################################# [  5%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/compiler/lib/intel64_lin_mic: cpio: mkdir
error: intel-openmp-l-ps-libs-174-17.0.2-174.x86_64: install failed
   5:intel-mkl-ps-ss-tbb-174-2017.2-17################################# [  7%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-ss-tbb-174-2017.2-174.i486: install failed
   6:intel-mkl-ps-mic-174-2017.2-174  ################################# [  8%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/mkl: cpio: mkdir
error: intel-mkl-ps-mic-174-2017.2-174.x86_64: install failed
   7:intel-mkl-ps-doc-2017.2-174      ################################# [ 10%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-doc-2017.2-174.noarch: install failed
   8:intel-mkl-common-174-2017.2-174  ################################# [ 11%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-common-174-2017.2-174.noarch: install failed
   9:intel-mkl-ps-common-174-2017.2-17################################# [ 12%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-common-174-2017.2-174.noarch: install failed
  10:intel-mkl-ps-doc-jp-2017.2-174   ################################# [ 14%]
error: unpacking of archive failed on file /opt/intel/documentation_2017/ja: cpio: mkdir
error: intel-mkl-ps-doc-jp-2017.2-174.noarch: install failed
  11:intel-mkl-ps-tbb-mic-174-2017.2-1################################# [ 15%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-tbb-mic-174-2017.2-174.x86_64: install failed
  12:intel-openmp-l-ps-libs-jp-174-17.################################# [ 16%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/compiler: cpio: mkdir
error: intel-openmp-l-ps-libs-jp-174-17.0.2-174.x86_64: install failed
  13:intel-mkl-ps-common-c-174-2017.2-################################# [ 18%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-common-c-174-2017.2-174.noarch: install failed
  14:intel-mkl-ps-common-jp-174-2017.2################################# [ 19%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/licensing/mkl/ja: cpio: mkdir
error: intel-mkl-ps-common-jp-174-2017.2-174.noarch: install failed
  15:intel-openmp-l-ps-libs-jp-174-17.################################# [ 21%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-openmp-l-ps-libs-jp-174-17.0.2-174.i486: install failed
  16:intel-mkl-ps-ss-tbb-174-2017.2-17################################# [ 22%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-ps-ss-tbb-174-2017.2-174.x86_64: install failed
  17:intel-psxe-common-doc-2017.2-050 ################################# [ 23%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-psxe-common-doc-2017.2-050.noarch: install failed
  18:intel-psxe-common-050-2017.2-050 ################################# [ 25%]
error: unpacking of archive failed on file /opt/intel/parallel_studio_xe_2017.2.050: cpio: mkdir
error: intel-psxe-common-050-2017.2-050.noarch: install failed
  19:intel-openmp-l-all-174-17.0.2-174################################# [ 26%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-openmp-l-all-174-17.0.2-174.x86_64: install failed
  20:intel-mkl-rt-174-2017.2-174      ################################# [ 27%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-rt-174-2017.2-174.x86_64: install failed
  21:intel-mkl-rt-174-2017.2-174      ################################# [ 29%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-rt-174-2017.2-174.i486: install failed
  22:intel-mkl-psxe-050-2017.2-050    ################################# [ 30%]
error: unpacking of archive failed on file /opt/intel/parallel_studio_xe_2017.2.050/licensing: cpio: mkdir
error: intel-mkl-psxe-050-2017.2-050.noarch: install failed
  23:intel-mkl-ps-tbb-mic-rt-174-2017.################################# [ 32%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-tbb-mic-rt-174-2017.2-174.x86_64: install failed
  24:intel-mkl-ps-ss-tbb-rt-174-2017.2################################# [ 33%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-ss-tbb-rt-174-2017.2-174.x86_64: install failed
  25:intel-mkl-ps-ss-tbb-rt-174-2017.2################################# [ 34%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-ss-tbb-rt-174-2017.2-174.i486: install failed
  26:intel-mkl-ps-rt-jp-174-2017.2-174################################# [ 36%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-rt-jp-174-2017.2-174.x86_64: install failed
  27:intel-mkl-ps-rt-jp-174-2017.2-174################################# [ 37%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-rt-jp-174-2017.2-174.i486: install failed
  28:intel-mkl-ps-pgi-rt-174-2017.2-17################################# [ 38%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-pgi-rt-174-2017.2-174.x86_64: install failed
  29:intel-mkl-ps-pgi-f-174-2017.2-174################################# [ 40%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-pgi-f-174-2017.2-174.x86_64: install failed
  30:intel-mkl-ps-pgi-c-174-2017.2-174################################# [ 41%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-pgi-c-174-2017.2-174.x86_64: install failed
  31:intel-mkl-ps-pgi-174-2017.2-174  ################################# [ 42%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-pgi-174-2017.2-174.x86_64: install failed
  32:intel-mkl-ps-mic-rt-jp-174-2017.2################################# [ 44%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-mic-rt-jp-174-2017.2-174.x86_64: install failed
  33:intel-mkl-ps-mic-rt-174-2017.2-17################################# [ 45%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-mic-rt-174-2017.2-174.x86_64: install failed
  34:intel-mkl-ps-mic-f-174-2017.2-174################################# [ 47%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-mic-f-174-2017.2-174.x86_64: install failed
  35:intel-mkl-ps-mic-cluster-rt-174-2################################# [ 48%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-mic-cluster-rt-174-2017.2-174.x86_64: install failed
  36:intel-mkl-ps-mic-cluster-174-2017################################# [ 49%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-mic-cluster-174-2017.2-174.x86_64: install failed
  37:intel-mkl-ps-mic-c-174-2017.2-174################################# [ 51%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-mic-c-174-2017.2-174.x86_64: install failed
  38:intel-mkl-ps-gnu-f-rt-174-2017.2-################################# [ 52%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-gnu-f-rt-174-2017.2-174.x86_64: install failed
  39:intel-mkl-ps-gnu-f-rt-174-2017.2-################################# [ 53%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-gnu-f-rt-174-2017.2-174.i486: install failed
  40:intel-mkl-ps-gnu-f-174-2017.2-174################################# [ 55%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-gnu-f-174-2017.2-174.x86_64: install failed
  41:intel-mkl-ps-gnu-f-174-2017.2-174################################# [ 56%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-gnu-f-174-2017.2-174.i486: install failed
  42:intel-mkl-ps-f95-mic-174-2017.2-1################################# [ 58%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-f95-mic-174-2017.2-174.x86_64: install failed
  43:intel-mkl-ps-f95-common-174-2017.################################# [ 59%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-f95-common-174-2017.2-174.noarch: install failed
  44:intel-mkl-ps-f95-174-2017.2-174  ################################# [ 60%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174: cpio: mkdir
error: intel-mkl-ps-f95-174-2017.2-174.x86_64: install failed
  45:intel-mkl-ps-f95-174-2017.2-174  ################################# [ 62%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-f95-174-2017.2-174.i486: install failed
  46:intel-mkl-ps-doc-f-jp-2017.2-174 ################################# [ 63%]
error: unpacking of archive failed on file /opt/intel/documentation_2017/ja: cpio: mkdir
error: intel-mkl-ps-doc-f-jp-2017.2-174.noarch: install failed
  47:intel-mkl-ps-doc-f-2017.2-174    ################################# [ 64%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-doc-f-2017.2-174.noarch: install failed
  48:intel-mkl-ps-doc-c-jp-2017.2-174 ################################# [ 66%]
error: unpacking of archive failed on file /opt/intel/documentation_2017/ja/mkl: cpio: mkdir
error: intel-mkl-ps-doc-c-jp-2017.2-174.noarch: install failed
  49:intel-mkl-ps-common-f-64bit-174-2################################# [ 67%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-common-f-64bit-174-2017.2-174.x86_64: install failed
  50:intel-mkl-ps-common-f-174-2017.2-################################# [ 68%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-ps-common-f-174-2017.2-174.noarch: install failed
  51:intel-mkl-ps-common-f-174-2017.2-################################# [ 70%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-common-f-174-2017.2-174.i486: install failed
  52:intel-mkl-ps-common-64bit-174-201################################# [ 71%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-ps-common-64bit-174-2017.2-174.x86_64: install failed
  53:intel-mkl-ps-cluster-rt-174-2017.################################# [ 73%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-cluster-rt-174-2017.2-174.x86_64: install failed
  54:intel-mkl-ps-cluster-f-174-2017.2################################# [ 74%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-ps-cluster-f-174-2017.2-174.noarch: install failed
  55:intel-mkl-ps-cluster-c-174-2017.2################################# [ 75%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-cluster-c-174-2017.2-174.noarch: install failed
  56:intel-mkl-ps-cluster-64bit-174-20################################# [ 77%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-ps-cluster-64bit-174-2017.2-174.x86_64: install failed
  57:intel-mkl-ps-cluster-174-2017.2-1################################# [ 78%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-ps-cluster-174-2017.2-174.noarch: install failed
  58:intel-mkl-gnu-rt-174-2017.2-174  ################################# [ 79%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-gnu-rt-174-2017.2-174.x86_64: install failed
  59:intel-mkl-gnu-rt-174-2017.2-174  ################################# [ 81%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-gnu-rt-174-2017.2-174.i486: install failed
  60:intel-mkl-gnu-c-174-2017.2-174   ################################# [ 82%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-gnu-c-174-2017.2-174.x86_64: install failed
  61:intel-mkl-gnu-c-174-2017.2-174   ################################# [ 84%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-gnu-c-174-2017.2-174.i486: install failed
  62:intel-mkl-gnu-174-2017.2-174     ################################# [ 85%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux: cpio: mkdir
error: intel-mkl-gnu-174-2017.2-174.x86_64: install failed
  63:intel-mkl-gnu-174-2017.2-174     ################################# [ 86%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-gnu-174-2017.2-174.i486: install failed
  64:intel-mkl-doc-c-2017.2-174       ################################# [ 88%]
error: unpacking of archive failed on file /opt/intel/documentation_2017: cpio: mkdir
error: intel-mkl-doc-c-2017.2-174.noarch: install failed
  65:intel-mkl-doc-2017.2-174         ################################# [ 89%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-doc-2017.2-174.noarch: install failed
  66:intel-mkl-common-eula-174-2017.2-################################# [ 90%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/licensing: cpio: mkdir
error: intel-mkl-common-eula-174-2017.2-174.noarch: install failed
  67:intel-mkl-common-c-64bit-174-2017################################# [ 92%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-common-c-64bit-174-2017.2-174.x86_64: install failed
  68:intel-mkl-common-c-174-2017.2-174################################# [ 93%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/mkl/examples/examples_core_c.tgz;58fe1f5a: cpio: open
error: intel-mkl-common-c-174-2017.2-174.noarch: install failed
  69:intel-mkl-common-c-174-2017.2-174################################# [ 95%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-common-c-174-2017.2-174.i486: install failed
  70:intel-mkl-174-2017.2-174         ################################# [ 96%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/linux/mkl/lib: cpio: mkdir
error: intel-mkl-174-2017.2-174.x86_64: install failed
  71:intel-mkl-174-2017.2-174         ################################# [ 97%]
error: unpacking of archive failed on file /opt/intel: cpio: mkdir
error: intel-mkl-174-2017.2-174.i486: install failed
  72:intel-compxe-pset-050-2017.2-050 ################################# [ 99%]
error: unpacking of archive failed: cpio: mkdir
error: intel-compxe-pset-050-2017.2-050.noarch: install failed
  73:intel-comp-l-all-vars-174-17.0.2-################################# [100%]
error: unpacking of archive failed on file /opt/intel/compilers_and_libraries_2017.2.174/licensing/ja: cpio: mkdir
error: intel-comp-l-all-vars-174-17.0.2-174.noarch: install failed

 

 

machine information: 4.10.0-19-generic #21-Ubuntu SMP Thu Apr 6 17:04:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

distribution: Ubuntu 17.04

MKL package name: l_mkl_2017.2.174
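
A hedged reading of the log above: every failure is "cpio: mkdir" on a path under /opt/intel, which usually means the directories could not be created at all, for example when rpm is run without root privileges. Assuming sudo is available, something like this would be the first thing to try:

$ sudo mkdir -p /opt/intel
$ sudo rpm -ivh --nodeps --ignorearch --force-debian *.rpm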

 

 

PARDISO


I have been using PARDISO for a while and it is really good.

I have the solver linked to an old Fortran program that generates a standard stiffness matrix for a beam.

I was looking at the output for a timber beam modelled as 4 uniform lengths: I get the first four modes at the same value, but if I perturb the lengths slightly I get 4 different values that are closer to the measured values, provided I play with the constants a bit.

Interesting problem. I suppose I should print out the structure's matrix and look for some form of symmetry that is producing the first answer.

Any thoughts would be appreciated.

John

BLACS examples are not working


Hi, I want to use BLACS, so I tested the HELLO example code from here:

http://www.netlib.org/blacs/BLACS/Examples.html#HELLO

It seems easy, but it is not working, and my attempts to find out why have failed.

(Actually, I also tried the C++ version, and it has the same problem; the C++ code is this:

https://andyspiros.wordpress.com/2011/07/08/an-example-of-blacs-with-c/ )

My Fortran code is this:

      PROGRAM HELLO
*     -- BLACS example code --
*     Written by Clint Whaley 7/26/94
*     Performs a simple check-in type hello world
*     ..
*     .. External Functions ..
      INTEGER BLACS_PNUM
      EXTERNAL BLACS_PNUM
*     ..
*     .. Variable Declaration ..
      INTEGER CONTXT, IAM, NPROCS, NPROW, NPCOL, MYPROW, MYPCOL
      INTEGER ICALLER, I, J, HISROW, HISCOL
*    
*     Determine my process number and the number of processes in
*     machine
*    
      WRITE(*,*) '!'
      CALL BLACS_PINFO(IAM, NPROCS)
*    
*     If in PVM, create virtual machine if it doesn't exist
*    
      IF (NPROCS .LT. 1) THEN
         IF (IAM .EQ. 0) THEN
            WRITE(*, 1000)
            READ(*, 2000) NPROCS
         END IF
         CALL BLACS_SETUP(IAM, NPROCS)
      END IF
*    
      WRITE(*,*) '@'
*     Set up process grid that is as close to square as possible
*    
      NPROW = INT( SQRT( REAL(NPROCS) ) )
      NPCOL = NPROCS / NPROW
*    
*     Get default system context, and define grid

*    
      CALL BLACS_GET(0, 0, CONTXT)
      CALL BLACS_GRIDINIT(CONTXT, 'Row', NPROW, NPCOL)
      CALL BLACS_GRIDINFO(CONTXT, NPROW, NPCOL, MYPROW, MYPCOL)
*    
      WRITE(*,*) '#'
*     If I'm not in grid, go to end of program
*    
      IF ( (MYPROW.GE.NPROW) .OR. (MYPCOL.GE.NPCOL) ) GOTO 30

*    
*     Get my process ID from my grid coordinates
*    
      ICALLER = BLACS_PNUM(CONTXT, MYPROW, MYPCOL)
*    
*     If I am process {0,0}, receive check-in messages from
*     all nodes
*    
      WRITE(*,*) '$'
      IF ( (MYPROW.EQ.0) .AND. (MYPCOL.EQ.0) ) THEN
           
         WRITE(*,*) ''

         DO 20 I = 0, NPROW-1
            DO 10 J = 0, NPCOL-1
     
               IF ( (I.NE.0) .OR. (J.NE.0) ) THEN
                  CALL IGERV2D(CONTXT, 1, 1, ICALLER, 1, I, J)
               END IF
*    
*              Make sure ICALLER is where we think in process grid

*    
              CALL BLACS_PCOORD(CONTXT, ICALLER, HISROW, HISCOL)
              IF ( (HISROW.NE.I) .OR. (HISCOL.NE.J) ) THEN
                 WRITE(*,*) 'Grid error!  Halting . . .'

                 STOP
              END IF
              WRITE(*, 3000) I, J, ICALLER
10         CONTINUE
20      CONTINUE
        WRITE(*,*) ''
        WRITE(*,*) 'All processes checked in.  Run finished.'
*    
*     All processes but {0,0} send process ID as a check-in
*    
      ELSE
         CALL IGESD2D(CONTXT, 1, 1, ICALLER, 1, 0, 0)
      END IF

30    CONTINUE

      CALL BLACS_EXIT(0)

1000  FORMAT('How many processes in machine?')
2000  FORMAT(I)
3000  FORMAT('Process {',i2,',',i2,'} (node number =',I,
     $       ') has checked in.')
 
      STOP
      END

 

The compile command is this:

$ mpiifort hello.f -mkl -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_ilp64

$ mpirun -n 8 ./a.out
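
Worth noting in the compile line above: it mixes -lmkl_scalapack_lp64 (LP64) with -lmkl_blacs_intelmpi_ilp64 (ILP64). A consistent LP64 line would look like this (a sketch; check the MKL link line advisor for the exact set for your setup):

$ mpiifort hello.f -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm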

 

The error is this:

[blacs_example]$ mpirun -n 8 ./a.out
 !
 !
 !
 !
 !
 !
 !
 !
 @
 @
 @
 @
 @
 @
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x2e, group=0x7fff39f2d2e0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x88c9f740, group=0x7ffe88c9f3e0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x259c5904, group=0x7fff259c55e0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x3ff, group=0x7ffc216c03e0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x0, group=0x7fffdec349e0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x18bdf740, group=0x7ffe18bdf3e0) failed
PMPI_Comm_group(133): Invalid communicator
 @
 @
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x3f, group=0x7ffe09bceae0) failed
PMPI_Comm_group(133): Invalid communicator
Fatal error in PMPI_Comm_group: Invalid communicator, error stack:
PMPI_Comm_group(179): MPI_Comm_group(comm=0x0, group=0x7ffd1e8069e0) failed
PMPI_Comm_group(133): Invalid communicator

 

Please tell me why these are not working.

Thank you.

Add libmkl_avx2.so to solve Intel MKL FATAL ERROR


I compiled numpy with MKL and everything went fine, but I have come across a peculiar problem. I have three cases:

  •  case_1 (perinfoMKL1): I add only the `mkl_rt` lib in the site.cfg file
  •  case_2 (perinfoMKL2): I add the `mkl_intel_lp64, mkl_intel_thread, mkl_core, iomp5, mkl_rt` libs in the site.cfg file
  •  case_3 (perinfoMKL3): I add `mkl_intel_lp64, mkl_intel_thread, mkl_core, iomp5, mkl_rt, mkl_avx2` in the site.cfg file

In every case the build and install steps complete without problems. But when I run case_2, an error occurs: Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so. I had set LD_LIBRARY_PATH. After I add the mkl_avx2 lib and rebuild (case_3), the test passes.

Using LD_DEBUG, I ran LD_DEBUG="files libs" LD_DEBUG_OUTPUT=log ./test.py to trace the dynamic loader. I see "relocation dependency" entries in the output, so I guess the problem is related to that, but I am not clear on the underlying mechanics; could someone explain it? I am a little confused by it.

My understanding: going by what "relocation dependency" seems to mean, I guess that when group_1 (mkl_intel_lp64, mkl_intel_thread, mkl_core, iomp5) and group_2 (mkl_rt) are linked in together, the executable resolves symbols against a different symbol table than that of one specific group of libs, and some references end up bound to the wrong table, so when the libraries are relocated a symbol cannot be found. When I add the new lib and rebuild, the previously failing symbols resolve successfully.

But according to the Intel Forums, someone says that it's a bug, so I am confused again :-(  [https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/...

Environment:
  HP computer; CPU: Intel Core i5, 4 cores; memory: 8 GB; OS: CentOS 7; miniconda3 (Python 3.6); numpy 1.13 with MKL (I put the MKL configuration into the site.cfg file), compiled from source.

The following are my steps and points to note:

  1. I set LD_LIBRARY_PATH. My setting: LD_LIBRARY_PATH="/tmp/mkl-nfs/lib" (to keep my environment clean and tidy).
  2. The test code (test.py) is the same for every case; only the Python interpreter path is adjusted per environment.
  3. Compiling the source code succeeds for every case.

 

$ cat test.py
#!/home/yancy/miniconda3/envs/perinfoMKL1/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import time

start_time = time.time()
a = 10 ** 4
A = np.random.random((a, a))
B = np.random.random((a, a))
C = A.dot(B)
print("Time: ", time.time() - start_time)
(perinfoMKL1)$ tail numpy/site.cfg
#[fftw]
#libraries = fftw3
#
# For djbfft, numpy.distutils will look for either djbfft.a or libdjbfft.a .
#[djbfft]
#include_dirs = /usr/local/djbfft/include
#library_dirs = /usr/local/djbfft/lib
[mkl]
library_dirs = /tmp/mkl-nfs/lib
mkl_libs = mkl_rt


# No error, execute successfully
(perinfoMKL1) $ ./test.py
Time:  35.454288959503174

(perinfoMKL1)$ grep "libmkl_rt.so" log.27250
     27250:    file=libmkl_rt.so [0];  needed by /home/yancy/miniconda3/envs/perinfoMKL1/lib/python3.6/site-packages/numpy-1.13.0.dev0+4408f74-py3.6-linux-x86_64.egg/numpy/core/multiarray.cpython-36m-x86_64-linux-gnu.so [0]
     27250:    find library=libmkl_rt.so [0]; searching
     27250:      trying file=/home/yancy/miniconda3/envs/perinfoMKL1/lib/tls/x86_64/libmkl_rt.so
     27250:      trying file=/home/yancy/miniconda3/envs/perinfoMKL1/lib/tls/libmkl_rt.so
     27250:      trying file=/home/yancy/miniconda3/envs/perinfoMKL1/lib/x86_64/libmkl_rt.so
     27250:      trying file=/home/yancy/miniconda3/envs/perinfoMKL1/lib/libmkl_rt.so
     27250:      trying file=/home/yancy/miniconda3/envs/perinfoMKL1/bin/../lib/libmkl_rt.so
     27250:      trying file=/tmp/mkl-nfs/lib/libmkl_rt.so
     27250:    file=libmkl_rt.so [0];  generating link map
     27250:    calling init: /tmp/mkl-nfs/lib/libmkl_rt.so
     27250:    file=/tmp/mkl-nfs/lib/libmkl_core.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_rt.so [0]
     27250:    file=/tmp/mkl-nfs/lib/libiomp5.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_rt.so [0]
     27250:    file=/tmp/mkl-nfs/lib/libmkl_intel_thread.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_rt.so [0]
     27250:    file=/tmp/mkl-nfs/lib/libmkl_intel_lp64.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_rt.so [0]
     27250:    calling fini: /tmp/mkl-nfs/lib/libmkl_rt.so [0]
(perinfoMKL1)]$ grep "libmkl_avx2.so" log.27250
     27250:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_core.so [0]
     27250:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  generating link map
     27250:    file=/tmp/mkl-nfs/lib/libmkl_core.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_avx2.so [0] (relocation dependency)
     27250:    calling init: /tmp/mkl-nfs/lib/libmkl_avx2.so
     27250:    opening file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0]; direct_opencount=1
     27250:    file=/tmp/mkl-nfs/lib/libmkl_intel_thread.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_avx2.so [0] (relocation dependency)
     27250:    calling fini: /tmp/mkl-nfs/lib/libmkl_avx2.so [0]

 

 

 

(perinfoMKL2)$ tail numpy/site.cfg
#[fftw]
#libraries = fftw3
#
# For djbfft, numpy.distutils will look for either djbfft.a or libdjbfft.a .
#[djbfft]
#include_dirs = /usr/local/djbfft/include
#library_dirs = /usr/local/djbfft/lib
[mkl]
library_dirs = /tmp/mkl-nfs/lib
mkl_libs = mkl_intel_lp64, mkl_intel_thread, mkl_core, iomp5, mkl_rt
# Error occurs even though LD_LIBRARY_PATH is set.
(perinfoMKL2)$ ./test.py
Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
(perinfoMKL2)$ grep "libmkl_rt.so" test.log.26855
     26855:    file=libmkl_rt.so [0];  needed by /home/yancy/miniconda3/envs/perinfoMKL2/lib/python3.6/site-packages/numpy-1.13.0.dev0+4408f74-py3.6-linux-x86_64.egg/numpy/core/multiarray.cpython-36m-x86_64-linux-gnu.so [0]
     26855:    find library=libmkl_rt.so [0]; searching
     26855:      trying file=/home/yancy/miniconda3/envs/perinfoMKL2/lib/libmkl_rt.so
     26855:      trying file=/home/yancy/miniconda3/envs/perinfoMKL2/bin/../lib/libmkl_rt.so
     26855:      trying file=/tmp/mkl-nfs/lib/libmkl_rt.so
     26855:    file=libmkl_rt.so [0];  generating link map
     26855:    file=/tmp/mkl-nfs/lib/libmkl_intel_lp64.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_rt.so [0] (relocation dependency)
     26855:    file=/tmp/mkl-nfs/lib/libmkl_core.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_rt.so [0] (relocation dependency)
     26855:    calling init: /tmp/mkl-nfs/lib/libmkl_rt.so
     26855:    calling fini: /tmp/mkl-nfs/lib/libmkl_rt.so [0]
(perinfoMKL2) $ grep "libmkl_avx2.so" test.log.26855
     26855:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_core.so [0]
     26855:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  generating link map
     26855:    /tmp/mkl-nfs/lib/libmkl_avx2.so: error: symbol lookup error: undefined symbol: mkl_dft_fft_fix_twiddle_table_32f (fatal)
     26855:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  destroying link map
     26855:    file=/home/yancy/miniconda3/envs/perinfoMKL2/bin/libmkl_avx2.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_core.so [0]
     26855:    file=libmkl_avx2.so [0];  dynamically loaded by /tmp/mkl-nfs/lib/libmkl_core.so [0]
     26855:    find library=libmkl_avx2.so [0]; searching
     26855:      trying file=/home/yancy/miniconda3/envs/perinfoMKL2/lib/libmkl_avx2.so
     26855:      trying file=/home/yancy/miniconda3/envs/perinfoMKL2/bin/../lib/libmkl_avx2.so
     26855:      trying file=/tmp/mkl-nfs/lib/libmkl_avx2.so
     26855:    file=libmkl_avx2.so [0];  generating link map
     26855:    /tmp/mkl-nfs/lib/libmkl_avx2.so: error: symbol lookup error: undefined symbol: mkl_dft_fft_fix_twiddle_table_32f (fatal)
     26855:    file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0];  destroying link map
(perinfoMKL3)$ tail numpy/site.cfg

#[fftw]
#libraries = fftw3
#
# For djbfft, numpy.distutils will look for either djbfft.a or libdjbfft.a .
#[djbfft]
#include_dirs = /usr/local/djbfft/include
#library_dirs = /usr/local/djbfft/lib
[mkl]
library_dirs = /tmp/mkl-nfs/lib
mkl_libs = mkl_intel_lp64, mkl_intel_thread, mkl_core, iomp5, mkl_rt, mkl_avx2

# No error when I add mkl_avx2
(perinfoMKL3)$ ./test.py
Time:  33.996660232543945

(perinfoMKL3) $ grep "libmkl_rt.so" log.27384
     27384:    file=libmkl_rt.so [0];  needed by /home/yancy/miniconda3/envs/perinfoMKL3/lib/python3.6/site-packages/numpy-1.13.0.dev0+4408f74-py3.6-linux-x86_64.egg/numpy/core/multiarray.cpython-36m-x86_64-linux-gnu.so [0]
     27384:    find library=libmkl_rt.so [0]; searching
     27384:      trying file=/home/yancy/miniconda3/envs/perinfoMKL3/lib/libmkl_rt.so
     27384:      trying file=/home/yancy/miniconda3/envs/perinfoMKL3/bin/../lib/libmkl_rt.so
     27384:      trying file=/tmp/mkl-nfs/lib/libmkl_rt.so
     27384:    file=libmkl_rt.so [0];  generating link map
     27384:    file=/tmp/mkl-nfs/lib/libmkl_intel_lp64.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_rt.so [0] (relocation dependency)
     27384:    file=/tmp/mkl-nfs/lib/libmkl_core.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_rt.so [0] (relocation dependency)
     27384:    calling init: /tmp/mkl-nfs/lib/libmkl_rt.so
     27384:    calling fini: /tmp/mkl-nfs/lib/libmkl_rt.so [0]
(perinfoMKL3)$ grep "libmkl.avx2.so" log.27384
     27384:    file=libmkl_avx2.so [0];  needed by /home/yancy/miniconda3/envs/perinfoMKL3/lib/python3.6/site-packages/numpy-1.13.0.dev0+4408f74-py3.6-linux-x86_64.egg/numpy/core/multiarray.cpython-36m-x86_64-linux-gnu.so [0]
     27384:    find library=libmkl_avx2.so [0]; searching
     27384:      trying file=/home/yancy/miniconda3/envs/perinfoMKL3/lib/libmkl_avx2.so
     27384:      trying file=/home/yancy/miniconda3/envs/perinfoMKL3/bin/../lib/libmkl_avx2.so
     27384:      trying file=/tmp/mkl-nfs/lib/libmkl_avx2.so
     27384:    file=libmkl_avx2.so [0];  generating link map
     27384:    file=/tmp/mkl-nfs/lib/libmkl_core.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_avx2.so [0] (relocation dependency)
     27384:    file=/tmp/mkl-nfs/lib/libmkl_intel_lp64.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_avx2.so [0] (relocation dependency)
     27384:    file=/tmp/mkl-nfs/lib/libmkl_intel_thread.so [0];  needed by /tmp/mkl-nfs/lib/libmkl_avx2.so [0] (relocation dependency)
     27384:    calling init: /tmp/mkl-nfs/lib/libmkl_avx2.so
     27384:    opening file=/tmp/mkl-nfs/lib/libmkl_avx2.so [0]; direct_opencount=1
     27384:    calling fini: /tmp/mkl-nfs/lib/libmkl_avx2.so [0]

 

I cannot find mklvars.bat


Hi again.

I installed Intel MKL and I want to check my installation, but I did not find mklvars.bat, so I could not set the environment variables.

Please help me.

Thanks again.
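
For what it's worth, in a default Parallel Studio 2017 layout the script typically sits at (path is an assumption; search the install tree for mklvars.bat if it differs):

C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2017\windows\mkl\bin\mklvars.bat

It takes the target architecture as an argument, e.g. "mklvars.bat intel64".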

Very large error in calculations


I am getting a very large error from this code and I thought I could use your kind attention.

This function calculates the determinant of a matrix:

real*8 function mklDet(A)
    real*8, intent(in), dimension(:,:) :: A         !< input matrix A
    real*8, dimension(size(A,1),size(A,2)) :: LU    !! local copy; DGETRF overwrites its input
    integer, dimension(size(A,1)) :: ipiv           !! pivot indices (must be an integer array)
    integer N, info, i                              !> dimension, status

    N = size(A, 1)
    LU = A

    ! DGETRF computes an LU factorization of a general M-by-N matrix A
    ! using partial pivoting with row interchanges.
    call DGETRF(n, n, LU, n, ipiv, info)

    !< This part of code by adopted from intel software forum topic 309460
    mklDet = 0.d0
    if (info > 0) then
        return
    else if (info < 0) then
        print *, 'Fatal error calculating determinant!'
    end if
    mklDet = 1.d0
    do i = 1, n
        if (ipiv(i).ne.i) then
            mklDet = -mklDet * LU(i,i)
        else
            mklDet = mklDet * LU(i,i)
        endif
    end do
    !>
end function mklDet   

When I try to calculate the determinant of a matrix with linearly dependent rows (i.e. the determinant is 0.0):

print *, 'zero determinant' , mklDet(reshape((/ 6.d10, 3.d10, 5.d10, 4.d10, 4.d10, 7.d10, 12.d10, 6.d10, 10.d10 /),(/3, 3/)))

As you can see, the outcome is a very large number:

 zero determinant -5.329070518200751E+015

but if the numbers are not large:

print *, 'zero determinant' , mklDet(reshape((/ 6.d0, 3.d0, 5.d0, 4.d0, 4.d0, 7.d0, 12.d0, 6.d0, 10.d0 /),(/3, 3/)))

The outcome is much closer to 0.0 (I think because every entry was divided by 10^10, the determinant should scale by 10^-30):

 zero determinant  1.065814103640150E-014
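
(For comparison, scaling a 3x3 matrix by c scales the determinant by c^3: det(c*A) = c^3 * det(A). Indeed, -5.329E+15 * (1E-10)^3 = -5.3E-15, the same order of magnitude as the small-matrix result, so the two runs are consistent up to rounding.)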

How do I work around this problem? Is it possible to calculate the determinant more accurately?
 

Thread Topic: How-To

MKL FFT 2D


Hello,

I am trying to compute the solution to the Laplace differential equation using a 2D FFT and MKL. 

Are there sample codes to compute the solution to 2nd order differential equations using forward and inverse FFT?

The issue appears to be in the step where I multiply the transformed values by the wavenumbers.
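
For concreteness, this is the spectral step in question, sketched for a Poisson solve on a 2*pi-periodic n x n grid (illustrative only; assumes the usual FFT ordering of wavenumbers and row-major complex data):

#include <complex>

// For laplacian(u) = f on a 2*pi-periodic grid, the transforms satisfy
// -(kx^2 + ky^2) * uhat = fhat, so each mode of fhat is divided by
// -(kx^2 + ky^2); the k = 0 (mean) mode is undetermined and is pinned to 0.
void apply_inverse_laplacian(std::complex<double>* fhat, int n) {
    for (int i = 0; i < n; ++i) {
        int kx = (i <= n / 2) ? i : i - n;   // standard FFT wavenumber ordering
        for (int j = 0; j < n; ++j) {
            int ky = (j <= n / 2) ? j : j - n;
            double k2 = double(kx) * kx + double(ky) * ky;
            if (k2 == 0.0)
                fhat[i * n + j] = 0.0;       // pin the mean mode
            else
                fhat[i * n + j] /= -k2;
        }
    }
}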

Thanks

Thread Topic: How-To

Using the community license of Intel MKL for multiple users


The community license is a "Named-User" license.

My understanding is that a separate copy of the library has to be installed for each end user. However, this poses a technical difficulty for us. Security is a major concern: the server will be isolated from the internet except for limited access for certain business activities, and each end user will not be able to transfer data in or out of the server; only the administrator can do that.

Is it OK for the root user to install a single copy of the library system-wide (/opt/intel is the default installation directory), while each end user still registers and activates a license, which will be put into each user's home directory?

 

Thread Topic: Question

Why does MKL accelerate matrix operations and FFTs?


Can anyone tell me?

Or tell me where to find the answer?

Thread Topic: Question

Why is the parallel version of zheevd slower than the sequential one?


I wrote a piece of code to test the speed of the zheevd function (below). Then I used the Intel link line advisor to compile it. These are the times I measured:

sequential: 20.15s

TBB: 30.64s

OpenMP: 30.5s

Why is the sequential code faster than either of the parallel versions? Is it possible to speed up zheevd through parallelization?

 

program STB
    !use mkl_service
    implicit none
    integer(4)      :: num, i
    call test_herm()
contains
    Subroutine  test_herm()
        Implicit None
        integer(4), parameter         :: N =  4000, LDA =  N, LWMAX =  100000
        integer(4)                    :: info, LWORK, LIWORK, LRWORK, i,j
        real(8)                       :: r,c

        integer(4), dimension(LWMAX)  :: IWORK
        real(8), dimension(N)         :: W
        real(8), dimension(LWMAX)     :: RWORK
        ! ZHEEVD takes COMPLEX*16 (double complex) arrays, i.e. kind 8, not kind 16
        complex(8), dimension(LDA, N) :: A
        complex(8), dimension(LWMAX)  :: WORK

        call mkl_set_num_threads(4)
        call random_seed()
        do i =  1,N
            do j =  1,i-1
                call random_number(r)
                call random_number(c)
                A(i,j) = cmplx(r,c,kind=8)   ! keep double precision
                A(j,i) = conjg(A(i,j))
            enddo
        enddo

        do i =  1,N
            call random_number(r)
            A(i,i) = cmplx(r,0.d0,kind=8)    ! real diagonal
        enddo

        LWORK  = LWMAX
        LIWORK = LWMAX
        LRWORK = LWMAX

        !call zheevd('N', 'L', N, A, LDA, W, WORK, LWORK, RWORK, &
                     !LRWORK, IWORK, LIWORK, info)
        CALL ZHEEVD( 'N', 'Lower', N, A, LDA, W, WORK, LWORK, RWORK,&
                  LRWORK, IWORK, LIWORK, INFO )

        write (*,*) "Info: ", info
        write (*,*) "Lwork: ", LWORk
        write (*,*) "Liwork: ", LIWORK
        write (*,*) "LRWORK: ", LRWORK
        !write (*,*) W
    End Subroutine test_herm
end program STB

 

Thread Topic: Question

Quad precision for MKL functions


Is it possible to use quad precision for MKL functions and maintain the accuracy?

Thread Topic: Question

Threaded band-triangular solvers in MKL


We have an application where we repeatedly need to solve systems of equations where the coefficient matrix is in triangular band form and is symmetric positive definite. Thus we factorize once with DPBTRF and solve with DPBTRS. The problem is that DPBTRS does not seem to get any benefit at all from running on multiple threads. The factorization scales nicely, but as we call DPBTRS thousands of times, the solve becomes the main bottleneck in the code, and it does not scale at all with the number of threads used. We do this within an ARPACK iteration scheme, so we can only solve for a single right-hand side at a time, and we typically solve for hundreds of eigenpairs.

A typical example in our case has a matrix size of 200 000 and a bandwidth of 2500. This is quite narrow, so we do understand that there is not much room for parallelism here. However, in order to understand our situation, we have experimented with different matrix sizes and bandwidths, but we see no parallel improvements whatsoever. According to the MKL Developer Guide, PBTRS should be threaded. We also wrote our own test code for comparison. With a simple textbook parallelization of the forward and backward substitution in PBTRS we achieved a speedup for bandwidths of 4000 and larger, whereas MKL gives no speedup for any bandwidth we tried (in our tests, the largest bandwidth we could fit in memory was 64000). We have also tried other banded solvers with similar results.
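
For reference, the factor-once / solve-many pattern in question, sketched through the LAPACKE banded routines (sizes and storage are illustrative; ab is the usual band storage with ldab = kd + 1):

#include <mkl.h>
#include <mkl_lapacke.h>
#include <vector>

// Factor a symmetric positive definite band matrix once (DPBTRF), then call
// the triangular band solver (DPBTRS) many times with a single right-hand side.
int factor_then_solve(std::vector<double>& ab, lapack_int n, lapack_int kd,
                      std::vector<double>& rhs) {
    const lapack_int ldab = kd + 1;
    // Band Cholesky factorization: this step scales well with threads for us.
    lapack_int info = LAPACKE_dpbtrf(LAPACK_COL_MAJOR, 'U', n, kd, ab.data(), ldab);
    if (info != 0) return info;
    // Forward/backward band substitution: the step that shows no speedup.
    return LAPACKE_dpbtrs(LAPACK_COL_MAJOR, 'U', n, kd, 1,
                          ab.data(), ldab, rhs.data(), n);
}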

We have run our tests on a dual-socket Intel Xeon E5-2640 system with a total of 12 cores and 96 GB of memory. We have tried Composer XE 2016 and 2017 with similar results.

My question is: When can we expect parallel improvements in banded solvers? 

Thread Topic: Question