Channel: Intel® Software - Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library

Should we generate random numbers (using VSL RNG) on the fly or prior to the loop?


Hello,

I am currently learning how to use random-number functions, and am using the MKL VSL RNG.

I have made this simple code which compares the efficiency of generating all random numbers at once versus generating them on the fly. The code runs in parallel, where I use VSL_BRNG_WH+rank to create a different generator for each MPI process.

For generating nmax=1e8 numbers I get the following:

time = 0.35 seconds when generating all numbers at once (nn=1 setting in the code)

time = 16 seconds when generating on the fly (nn=2 setting in the code)

 

Is this expected behaviour? Is it generally much faster to generate all the numbers at once before entering a loop than to generate them one at a time inside it?

 

include 'mkl_vsl.f90'

program rnd_test

      use MKL_VSL
      use MKL_VSL_TYPE
      use mpi

      implicit none
      real(kind=8) t1, t2                ! timers
      real(kind=8) s                     ! average
      real(kind=8) a, sigma              ! parameters of the normal distribution
      real(kind=8), allocatable :: r(:)  ! buffer for random numbers

      TYPE (VSL_STREAM_STATE) :: stream

      integer errcode
      integer i, nloop, nn
      integer brng, method, seed, ierr, size, rank
      integer(kind=8) :: nmax

      call mpi_init(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      s     = 0.0
      a     = 5.0
      sigma = 2.0

      nmax = 1e8

!-----------------------------------------------------------------------
      nn = 2   ! 1: all at once, >1: on the fly
!-----------------------------------------------------------------------

      if (nn > 1) then
         nloop = nmax      ! one number per call, nmax calls
         nn    = 1
      else
         nloop = 1         ! all nmax numbers in a single call
         nn    = nmax
      endif

      allocate(r(nn))

      method = VSL_RNG_METHOD_GAUSSIAN_ICDF
      seed   = 777
      brng   = VSL_BRNG_WH + rank   ! a different Wichmann-Hill generator per MPI rank

!     ***** Initializing *****
      errcode = vslnewstream( stream, brng, seed )

      t1 = mpi_wtime()

!     ***** Generating *****
      do i = 1, nloop
         errcode = vdrnggaussian( method, stream, nn, r, a, sigma )
!        s = s + sum(r)
      end do

      t2 = mpi_wtime()

!     s = s / 10000.0

      print*, "time: ", t2 - t1
      call mpi_barrier(MPI_COMM_WORLD, ierr)

!     ***** Deinitialize *****
      errcode = vsldeletestream( stream )

end program
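
For comparison, here is a minimal sketch of the usual middle ground: generating in fixed-size blocks, so the per-call overhead is amortized without holding all 1e8 numbers in memory. It uses MKL's C VSL interface with the same ICDF Gaussian method; the block size and seed here are arbitrary choices, not values from the original code.

#include <stdio.h>
#include "mkl_vsl.h"

#define BLOCK 4096            /* numbers generated per call (arbitrary choice) */

int main(void)
{
    const long long nmax = 100000000;          /* 1e8 numbers in total */
    double r[BLOCK];
    double a = 5.0, sigma = 2.0, s = 0.0;
    VSLStreamStatePtr stream;
    int errcode;

    errcode = vslNewStream(&stream, VSL_BRNG_WH, 777);

    for (long long done = 0; done < nmax; done += BLOCK) {
        int n = (nmax - done < BLOCK) ? (int)(nmax - done) : BLOCK;
        errcode = vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream, n, r, a, sigma);
        for (int i = 0; i < n; i++) s += r[i];  /* consume the block */
    }

    printf("mean = %f (last errcode = %d)\n", s / (double)nmax, errcode);
    vslDeleteStream(&stream);
    return 0;
}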

 

best

Ali

 


Problem running PARDISO example


Hi--I am trying to use the PARDISO solver on macOS and get an error when I try to compile and run the pardiso_unsym_f.f file. I'm compiling like this:

gfortran pardiso_unsym_f.f -o pardiso_unsym_fexec  ${MKLROOT}/lib/libmkl_intel_ilp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a -liomp5 -lpthread -lm -ldl

and I get this error when I run it:

LMC-062490:Fortran jpolk$ ./pardiso_unsym_fexec
 Reordering completed ...
 The following ERROR was detected:           -1
STOP 1

I would appreciate any help!

Dynamically link without using CPU-specific DLLs


Hello.

How can I compile a program to link MKL dynamically so that it only uses the basic DLLs (mkl_intel_thread.dll, mkl_core.dll, libiomp5md.dll) and won't try to load different ones like mkl_avx.dll, mkl_avx2.dll, etc. on different processors?

Or is there an environment variable or something to do that?

 

complex-valued scalar products


Is there any problem calling the BLAS functions zdotc and zdotu in recent versions of MKL?

When calling these functions from C code using a recent version of MKL, the code crashes or returns wrong values. It works properly when using soft-coded Fortran BLAS. I am using Linux, and I tried three different versions of MKL (2018.5.274, 2019.1.144 and 2019.3.199).

 

Here is how I call the MKL

cc -O -fPIC -fopenmp -m64 -mcmodel=medium -I $MKLROOT/include test.c -L $MKLROOT/lib -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lm

 

Here is a sample C-code

#include <stdio.h>
#include <stdlib.h>

typedef struct { double r, i; } doublecomplex;

doublecomplex zdotc_(int *,doublecomplex *,int *, doublecomplex *,int *),
                         zdotu_(int *,doublecomplex *,int *, doublecomplex *,int *);

#define N 4

int main(int argc, char **argv)
{
  doublecomplex *v, *w, val;
  int i=N,j=1,k=2,l,m;
  v=(doublecomplex *)malloc((size_t)N  *sizeof(doublecomplex));
  w=(doublecomplex *)malloc((size_t)2*N*sizeof(doublecomplex));

  for (l=0; l<N; l++) {
      v[l].r  = 1.0; v[l].i  =(double)l;

      w[l].r  =-1.0; w[l].i  = 1.0;
      w[N+l].r= 1.0; w[N+l].i= 0.0;
  }

  val=zdotc_(&i, v,&j, w,&k);
  printf("val=(%8.1le,%8.1le)\n",val.r,val.i);

  val=zdotu_(&i, v,&j, w,&k);
  printf("val=(%8.1le,%8.1le)\n",val.r,val.i);

  free(v);
  free(w);
 
  return 0;
}
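
For reference, the CBLAS interface returns the complex result through a pointer argument rather than as a function return value, which sidesteps calling-convention differences for complex-valued BLAS functions. A minimal sketch using MKL's mkl.h and MKL_Complex16, with the same data and increments as above:

#include <stdio.h>
#include <stdlib.h>
#include "mkl.h"

#define N 4

int main(void)
{
  MKL_Complex16 *v = malloc((size_t)N   * sizeof(*v));
  MKL_Complex16 *w = malloc((size_t)2*N * sizeof(*w));
  MKL_Complex16 dotc, dotu;

  for (int l = 0; l < N; l++) {
      v[l].real   =  1.0; v[l].imag   = (double)l;
      w[l].real   = -1.0; w[l].imag   = 1.0;
      w[N+l].real =  1.0; w[N+l].imag = 0.0;
  }

  /* result is written through the last pointer argument */
  cblas_zdotc_sub(N, v, 1, w, 2, &dotc);
  cblas_zdotu_sub(N, v, 1, w, 2, &dotu);

  printf("conj dot = (%8.1le,%8.1le)\n", dotc.real, dotc.imag);
  printf("     dot = (%8.1le,%8.1le)\n", dotu.real, dotu.imag);

  free(v);
  free(w);
  return 0;
}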

 

FEAST multiplicity of eigenvalues


Hi,

I was wondering how the FEAST algorithm determines the multiplicity of the eigenvalues in the case of repeated eigenvalues. More precisely, I would like to know whether there is a special tolerance for when two or more eigenvalues are treated as repeated.

The reason behind the question is that I need the derivatives of the eigenvalues, and repeated eigenvalues require special treatment, since the eigenvectors of a repeated eigenvalue can be linearly combined in infinitely many ways.

Best,

Anna

 

Compilation with MKL_VSL gives compiler error


Hello

 

This is probably quite simple. When I use MKL modules like MKL_VSL within my own module, I get a compiler error saying "Error in opening the compiled module file. Check INCLUDE path".

However, I do have -I$(MKLROOT)/include when compiling my code. Am I missing something? I read somewhere that I might need an "include 'mkl_vsl.fi'". However, if I put this into my own Fortran module I get other clashes, as I am then trying to include a module inside a module.

 

 

Thanks

 

best

Ali

Intel® MKL version 2019 Update 5 is now available


Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance.

Intel MKL 2019 Update 5 packages are now ready for download.

Intel MKL is available as part of the Intel® Parallel Studio XE and Intel® System Studio. Please visit the Intel® Math Kernel Library Product Page.

For what's new in Intel MKL 2019 and in MKL 2019 Update 5, follow this link - https://software.intel.com/en-us/articles/intel-math-kernel-library-rele...

and here is the link to the MKL 2019 Bug Fix list - https://software.intel.com/en-us/articles/intel-math-kernel-library-2019...

How to use the pardiso for partial solution with correct iparm(31) value?


Dear All

I am testing the example given in the MKL manual for pardiso with partial solution in Fortran.

As stated for iparm(31), when its value is 3, only selected components of the solution vectors should be computed. However, I have tried many times, with different perm values, different iparm(31) values (1, 2, or 3), and different parameter locations, and it always gives the whole solution.

For my real problem (with thousands of DOFs), only a few elements of the rhs are nonzero and only a few selected components of the solution are needed, so I need to get the partial solution working to cut the time cost.
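
For reference, a minimal sketch of how I understand the mechanics (using MKL's C interface and a made-up 5x5 SPD matrix, not the attached Fortran code): perm flags the wanted solution components, and iparm(31)=3 plus the same perm array is passed from the analysis phase onwards. Worth checking against the manual.

#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    /* made-up 5x5 SPD tridiagonal matrix, upper triangle, CSR, 1-based indices */
    MKL_INT n = 5;
    MKL_INT ia[6] = { 1, 3, 5, 7, 9, 10 };
    MKL_INT ja[9] = { 1, 2,   2, 3,   3, 4,   4, 5,   5 };
    double  a[9]  = { 2.0, -1.0,   2.0, -1.0,   2.0, -1.0,   2.0, -1.0,   2.0 };
    double  b[5]  = { 1.0, 0.0, 0.0, 0.0, 1.0 };
    double  x[5]  = { 0.0 };

    void   *pt[64]    = { 0 };
    MKL_INT iparm[64] = { 0 };
    MKL_INT perm[5]   = { 1, 0, 0, 0, 1 };  /* ask only for x(1) and x(5) */
    MKL_INT maxfct = 1, mnum = 1, mtype = 2, nrhs = 1, msglvl = 0, error = 0;
    MKL_INT phase;

    iparm[0]  = 1;   /* do not use the solver defaults                            */
    iparm[1]  = 2;   /* METIS fill-in reducing ordering                           */
    iparm[9]  = 8;   /* pivot perturbation 1e-8                                   */
    iparm[30] = 3;   /* iparm(31)=3: compute only the components flagged in perm  */
    /* iparm[34] = 0 keeps 1-based (Fortran-style) indexing of ia/ja              */

    /* perm and iparm(31) are set before the analysis phase and kept unchanged */
    phase = 11;   /* analysis */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);

    phase = 22;   /* numerical factorization */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);

    phase = 33;   /* solve: only the flagged components of x are meaningful */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);

    printf("error = %d, x(1) = %f, x(5) = %f\n", (int)error, x[0], x[4]);

    phase = -1;   /* release internal memory */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);
    return 0;
}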

I hope you can help me. Thanks a lot.

The attachment is my fortran code.

Jim Cheng

Attachment: PardisoT.f90 (5.28 KB)

access denied: cannot open mkl documentation pages for c

Linker conflict between libmmt and libucrt


The basic problem is addressed in this thread:

https://software.intel.com/en-us/forums/intel-c-compiler/topic/622985

I am seeing this problem when using MKL 2019 update 4. The conflict is in linking __ldexp. If I ignore libmmt, then I get an unresolved symbol __pow8i8.

The solution given in the thread at that link is to make sure that the floating-point model setting is the same for both compilers, but since I didn't build MKL I can't control that.

I am using Visual C++ 2015 for the rest of the code.

Is this a known problem? Is there a work-around? Thanks!

-John Weeks

How to get good performance when building Netlib HPL from source code


My KNL platform is based on Intel(R) Xeon Phi(TM) CPU 7250 @ 1.40GHz, 1 node, 68 cores, 96 GB memory.

Firstly, I checked the performance of the Intel Distribution for LINPACK Benchmark on 1 node, located at ./benchmarks/mp_linpack/, and I got good performance of about 1700 GFlops for the case N=40000, NB=336, P=1, Q=1 with "mpirun -np 1 ./xhpl".

Secondly, in HPL 2.3 with the same input values the performance is really bad, only 723 GFlops. If I run with N=100000 it reaches about 942 GFlops, but that is still low compared with the LINPACK benchmark.

Another thing: when I run micprun, it reports an error (files attached).

Is this a problem in the Make.Intel64 file?

What should I do to get a better result with HPL 2.3?

Thanks a lot.

SHELL        = /bin/sh
#
CD           = cd
CP           = cp
LN_S         = ln -fs
MKDIR        = mkdir -p
RM           = /bin/rm -f
TOUCH        = touch
#
# ----------------------------------------------------------------------
# - Platform identifier ------------------------------------------------
# ----------------------------------------------------------------------
#
#ARCH         = Linux_Intel64
ARCH          = $(arch)
#
# ----------------------------------------------------------------------
# - HPL Directory Structure / HPL library ------------------------------
# ----------------------------------------------------------------------
#
#TOPdir       = $(HOME)/hpl
TOPdir       = /home/tuyen1/HPL/hpl-2.3/install_hpl
INCdir       = $(TOPdir)/include
BINdir       = $(TOPdir)/bin/$(ARCH)
LIBdir       = $(TOPdir)/lib/$(ARCH)
#
HPLlib       = $(LIBdir)/libhpl.a
#
# ----------------------------------------------------------------------
# - Message Passing library (MPI) --------------------------------------
# ----------------------------------------------------------------------
# MPinc tells the  C  compiler where to find the Message Passing library
# header files,  MPlib  is defined  to be the name of  the library to be
# used. The variable MPdir is only used for defining MPinc and MPlib.
#
# MPdir        = /opt/intel/mpi/4.1.0
# MPinc        = -I$(MPdir)/include64
# MPlib        = $(MPdir)/lib64/libmpi.a
MPdir          =/opt/intel/compilers_and_libraries_2018.5.274/linux/mpi
MPinc        = -I$(MPdir)/include64
MPlib        = $(MPdir)/lib64/libmpi.a
# ----------------------------------------------------------------------
# - Linear Algebra library (BLAS or VSIPL) -----------------------------
# ----------------------------------------------------------------------
# LAinc tells the  C  compiler where to find the Linear Algebra  library
# header files,  LAlib  is defined  to be the name of  the library to be
# used. The variable LAdir is only used for defining LAinc and LAlib.
#
LAdir        = /opt/intel/compilers_and_libraries_2018.5.274/linux/mkl
ifndef  LAinc
LAinc        = $(LAdir)/include
endif
ifndef  LAlib
LAlib        = -L$(LAdir)/lib/intel64 \
               -Wl,--start-group \
                $(LAdir)/lib/intel64/libmkl_intel_lp64.a \
                $(LAdir)/lib/intel64/libmkl_intel_thread.a \
                $(LAdir)/lib/intel64/libmkl_core.a \
                -Wl,--end-group -lpthread -ldl
 endif
 #
 # ----------------------------------------------------------------------
 # - F77 / C interface --------------------------------------------------
 # ----------------------------------------------------------------------
 # You can skip this section  if and only if  you are not planning to use
 # a  BLAS  library featuring a Fortran 77 interface.  Otherwise,  it  is
 # necessary  to  fill out the  F2CDEFS  variable  with  the  appropriate
 # options.  **One and only one**  option should be chosen in **each** of
 # the 3 following categories:
 #
 # 1) name space (How C calls a Fortran 77 routine)
 #
 # -DAdd_              : all lower case and a suffixed underscore  (Suns,
 #                       Intel, ...),                           [default]
 # -DNoChange          : all lower case (IBM RS6000),
 # -DUpCase            : all upper case (Cray),
 # -DAdd__             : the FORTRAN compiler in use is f2c.
 #
 # 2) C and Fortran 77 integer mapping
 #
 # -DF77_INTEGER=int   : Fortran 77 INTEGER is a C int,         [default]
 # -DF77_INTEGER=long  : Fortran 77 INTEGER is a C long,
 # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short.
 #
 # 3) Fortran 77 string handling
 #
 # -DStringSunStyle    : The string address is passed at the string loca-
 #                       tion on the stack, and the string length is then
 #                       passed as  an  F77_INTEGER  after  all  explicit
 #                       stack arguments,                       [default]
 # -DStringStructPtr   : The address  of  a  structure  is  passed  by  a
 #                       Fortran 77  string,  and the structure is of the
 #                       form: struct {char *cp; F77_INTEGER len;},
 # -DStringStructVal   : A structure is passed by value for each  Fortran
 #                       77 string,  and  the  structure is  of the form:
 #                       struct {char *cp; F77_INTEGER len;},
 # -DStringCrayStyle   : Special option for  Cray  machines,  which  uses
 #                       Cray  fcd  (fortran  character  descriptor)  for
 #                       interoperation.
 #
 F2CDEFS      = -DAdd__ -DF77_INTEGER=int -DStringSunStyle
#
# ----------------------------------------------------------------------
# - HPL includes / libraries / specifics -------------------------------
# ----------------------------------------------------------------------
#
HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) -I$(LAinc) $(MPinc)
HPL_LIBS     = $(HPLlib) $(LAlib) $(MPlib)
#
# - Compile time options -----------------------------------------------
#
# -DHPL_COPY_L           force the copy of the panel L before bcast;
# -DHPL_CALL_CBLAS       call the cblas interface;
# -DHPL_CALL_VSIPL       call the vsip  library;
# -DHPL_DETAILED_TIMING  enable detailed timers;
#
# By default HPL will:
#    *) not copy L before broadcast,
#    *) call the BLAS Fortran 77 interface,
#    *) not display detailed timing information.
#
#HPL_OPTS     = -DHPL_DETAILED_TIMING -DHPL_PROGRESS_REPORT
HPL_OPTS     = -DASYOUGO -DHYBRID
#
# ----------------------------------------------------------------------
#
HPL_DEFS     = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES)
#
# ----------------------------------------------------------------------
# - Compilers / linkers - Optimization flags ---------------------------
# ----------------------------------------------------------------------
#
CC       = mpiicc
CCNOOPT  = $(HPL_DEFS) -O0 -w -nocompchk
OMP_DEFS = -qopenmp
#CCFLAGS  = $(HPL_DEFS) -O3 -w -ansi-alias -i-static -z noexecstack -z relro -z now -nocompchk -Wall
CCFLAGS  = $(HPL_DEFS) -O3 -w -ansi-alias -i-static -z noexecstack -z relro -z now -nocompchk
#
#
# On some platforms,  it is necessary  to use the Fortran linker to find
# the Fortran internals used in the BLAS library.
#
LINKER       = $(CC)
LINKFLAGS    = $(CCFLAGS) $(OMP_DEFS) -mt_mpi -qopenmp -nocompchk
#
ARCHIVER     = ar
ARFLAGS      = r
RANLIB       = echo

 

Attachment: error_micprun.txt (3.03 KB)

mkl_sparse_s_spmmd is slower than tensorflow tf.sparse_matmul


I have two sparse matrices, both of them about 55% sparse. I use the mkl_sparse_s_spmmd function, pack it into a .so dynamic library, and import the .so in a Python program. I compared the .so against tf.sparse_matmul on the same dataset and found that tf.sparse_matmul performs better than the .so.

I compiled MKL-enabled TensorFlow and tested in that TensorFlow.

Seg. fault using Data Fitting Akima spline


Hi,

I am encountering a problem with the Akima spline that causes my code to seg. fault.

Debugging the code with optimisations and debug symbols, I can repeat the problem. It occurs when dfdInterpolate1D is called with a value that is ~7e-18 within the upper boundary of the spline.

I realise that at double precision this value is essentially on the upper boundary, and it is a relatively rare occurrence in my code, but sadly it does occasionally occur.

Is there some underlying floating-point tolerance within the interpolation routine that might be causing this issue?

At the moment I can put in some boundary checking to catch values within tolerance to work around the problem, but ideally I would like to avoid this as it will slow down the code!

Thanks,

Ewan

Intel MKL FFT module access violation


Hi,

 

I am currently using the MKL FFT module for our application on Windows 10 and Linux. I frequently get an access violation exception on Windows 10 when the program reaches this line:

status = DftiCreateDescriptor(&descriptor, DFTI_SINGLE, DFTI_COMPLEX, 2, dim).

The complete function is:

    DFTI_DESCRIPTOR_HANDLE descriptor = NULL;
    MKL_LONG status;
    MKL_LONG dim[2]{ height, width };

    status = DftiCreateDescriptor(&descriptor, DFTI_SINGLE, DFTI_COMPLEX, 2, dim);
    status = DftiSetValue(descriptor, DFTI_PLACEMENT, DFTI_NOT_INPLACE); // Out-of-place FFT
    status = DftiCommitDescriptor(descriptor);
    status = DftiComputeForward(descriptor, in.data(), out.data());
    status = DftiFreeDescriptor(&descriptor); // Free the descriptor

The full message thrown at this point is:

Exception thrown at [some address] in fft.exe: 0xC0000005: Access violation reading location 0x0000000000000018.
The two addresses 0x05 and 0x18 are constant in both our application and the small test.

The problem is triggered randomly. For example, with a set of, say, 1000 images, the problem happens while processing the 200th image in one run and the 300th in another.

The configuration for the problem is: x64, debug or release, OpenMP multi-threading. If I disable OpenMP, there is no problem.

Does anyone know how to deal with it? Thank you!
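
One pattern worth trying when the transforms are issued from an OpenMP parallel region (this is an assumption about the cause, not a confirmed fix) is to give each thread its own descriptor and buffers rather than sharing them across threads. A minimal sketch in C; the image size and count are made-up values:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#include "mkl_dfti.h"

#define HEIGHT 64
#define WIDTH  64
#define NIMG   8

int main(void)
{
    MKL_LONG dims[2] = { HEIGHT, WIDTH };

    #pragma omp parallel
    {
        /* one descriptor and one pair of buffers per thread */
        DFTI_DESCRIPTOR_HANDLE desc = NULL;
        MKL_Complex8 *in  = malloc(sizeof(MKL_Complex8) * HEIGHT * WIDTH);
        MKL_Complex8 *out = malloc(sizeof(MKL_Complex8) * HEIGHT * WIDTH);
        MKL_LONG status;

        status = DftiCreateDescriptor(&desc, DFTI_SINGLE, DFTI_COMPLEX, 2, dims);
        status = DftiSetValue(desc, DFTI_PLACEMENT, DFTI_NOT_INPLACE);
        status = DftiCommitDescriptor(desc);

        #pragma omp for
        for (int img = 0; img < NIMG; img++) {
            for (int i = 0; i < HEIGHT * WIDTH; i++) {   /* fake image data */
                in[i].real = (float)(i + img);
                in[i].imag = 0.0f;
            }
            status = DftiComputeForward(desc, in, out);
        }

        if (status != DFTI_NO_ERROR)
            printf("thread %d: status %ld\n", omp_get_thread_num(), (long)status);

        DftiFreeDescriptor(&desc);
        free(in);
        free(out);
    }
    return 0;
}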

 

Best,

Jingchun

Attachment: MKL_ERR.zip (197.55 MB)

LAPACK Linear Equation Routines for non square matrix


Hi, I am trying to use LAPACKE functions to solve a system of linear equations.

I use 2 functions :

LAPACKE_sgetrf , LAPACKE_sgetrs

The input matrix A is MxN. According to your documentation I first call LAPACKE_sgetrf, which supports MxN matrices, and then I should call LAPACKE_sgetrs. But the LAPACKE_sgetrs description only covers square matrices.

 

Please explain whether there is a solution for an MxN matrix. If yes, how do I do it?

P.S.

 

my Code:

Matrix A_ has 110 rows and 4 cols (x, y, z, w).

B is -1;

        lapack_int info = LAPACKE_sgetrf(LAPACK_ROW_MAJOR, A_.rows, A_.cols, (float*)A_.data, A_.cols, &ipiv[0]);

        info = LAPACKE_sgetrs(LAPACK_ROW_MAJOR, 'N', A_.cols, 1, (float*)A_.data, A_.cols, &ipiv[0], (float*)&b[0], 1);

Values return from LAPACKE_sgetrs are garbage.
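
Since getrf/getrs factor and solve square systems, an overdetermined MxN system is normally treated as a least-squares problem, for example with LAPACKE_sgels. A minimal sketch with a made-up 4x2 system (not the 110x4 data above):

#include <stdio.h>
#include "mkl_lapacke.h"

int main(void)
{
    /* overdetermined system: 4 equations, 2 unknowns, row-major storage */
    lapack_int m = 4, n = 2, nrhs = 1;
    float a[4 * 2] = {
        1.0f, 1.0f,
        1.0f, 2.0f,
        1.0f, 3.0f,
        1.0f, 4.0f
    };
    /* right-hand side; on exit the first n entries hold the least-squares solution */
    float b[4] = { 6.0f, 5.0f, 7.0f, 10.0f };

    lapack_int info = LAPACKE_sgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs,
                                    a, n,      /* lda = n for row-major  */
                                    b, nrhs);  /* ldb = nrhs for row-major */

    printf("info = %d, x = [%f, %f]\n", (int)info, b[0], b[1]);
    return 0;
}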

Thanks

 


How to get the number of nonzeros (nnz) after pardiso factorization


Hi all,

Is there a way to directly obtain the number of nonzeros after the PARDISO factorization? By setting msglvl = 1, solver statistics including nnz are printed to the screen.

pardiso (pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, nrhs, iparm, msglvl, b, x, error)

But can we get the value of nnz directly in code?
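
One possibility (my reading of the iparm documentation, worth double-checking): pass a negative value in iparm(18) on entry, and after the factorization it holds the number of nonzeros in the factors. A minimal C sketch with a made-up 3x3 diagonal matrix:

#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    /* trivial 3x3 diagonal SPD matrix in CSR, 1-based indices */
    MKL_INT n = 3;
    MKL_INT ia[4] = { 1, 2, 3, 4 };
    MKL_INT ja[3] = { 1, 2, 3 };
    double  a[3]  = { 1.0, 2.0, 3.0 };
    double  b[3]  = { 1.0, 1.0, 1.0 }, x[3] = { 0.0 };

    void   *pt[64]    = { 0 };
    MKL_INT iparm[64] = { 0 };
    MKL_INT perm[3]   = { 0 };
    MKL_INT maxfct = 1, mnum = 1, mtype = 2, nrhs = 1, msglvl = 0, error = 0;
    MKL_INT phase;

    iparm[0]  = 1;    /* do not use the solver defaults                     */
    iparm[9]  = 8;    /* pivot perturbation 1e-8                            */
    iparm[17] = -1;   /* iparm(18) < 0 on entry: report nnz of the factors  */

    phase = 12;       /* analysis + numerical factorization */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);

    printf("error = %d, nnz in factors (iparm(18)) = %d\n",
           (int)error, (int)iparm[17]);

    phase = -1;       /* release internal memory */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, perm,
            &nrhs, iparm, &msglvl, b, x, &error);
    return 0;
}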

Thanks,

Hainan

Need of iterative solver for complex matrices


I want to solve a complex linear system and would like to use an iterative solver (conjugate gradient) for it. I have very big sparse matrices, but I could not find an iterative complex solver in MKL; maybe I am not looking properly. If Intel does have iterative solvers for complex matrices, can you provide example code or some hints on how to use them?
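
As far as I know, MKL's RCI iterative solvers (dcg, dfgmres) are real-valued only, so a common workaround is to rewrite the complex system A z = c as an equivalent real system of twice the size, with blocks [Re(A) -Im(A); Im(A) Re(A)] acting on [Re(z); Im(z)], and feed that to a real solver. A small sketch of the rewriting itself (toy dense 2x2 data, no solver call):

#include <stdio.h>

#define N 2

int main(void)
{
    /* complex system A z = c written as a 2N x 2N real system:
       [ Ar  -Ai ] [ zr ]   [ cr ]
       [ Ai   Ar ] [ zi ] = [ ci ]                               */
    double Ar[N][N] = { {  4.0, 1.0 }, {  1.0, 3.0 } };
    double Ai[N][N] = { {  0.0, 2.0 }, { -2.0, 0.0 } };
    double B[2*N][2*N];

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            B[i][j]     =  Ar[i][j];
            B[i][j+N]   = -Ai[i][j];
            B[i+N][j]   =  Ai[i][j];
            B[i+N][j+N] =  Ar[i][j];
        }
    }

    /* print the assembled real-equivalent matrix */
    for (int i = 0; i < 2*N; i++) {
        for (int j = 0; j < 2*N; j++) printf("%6.1f ", B[i][j]);
        printf("\n");
    }
    return 0;
}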

C++ Nonsymmetric Complex Eigenvalue Problems


Hello!

I'm trying to compute only one eigenvector, the one corresponding to the largest eigenvalue, and I'm using the set of methods described here.

Compared to the cgeev method I receive the same eigenvalues but different eigenvector values. Is this expected? What should I do to get the same eigenvector?
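
Differing eigenvectors are generally expected: an eigenvector is only determined up to a nonzero complex scale factor, so two routines can legitimately return different vectors for the same eigenvalue. One way to compare them is to normalize each vector to unit norm with its largest-magnitude component made real and positive; a small sketch in plain C99 (no MKL calls):

#include <stdio.h>
#include <complex.h>
#include <math.h>

/* scale nonzero v (length n) to unit 2-norm with its largest-magnitude entry real and positive */
static void normalize_phase(double complex *v, int n)
{
    int imax = 0;
    double norm = 0.0;
    for (int i = 0; i < n; i++) {
        norm += creal(v[i] * conj(v[i]));
        if (cabs(v[i]) > cabs(v[imax])) imax = i;
    }
    double complex scale = cabs(v[imax]) / (v[imax] * sqrt(norm));
    for (int i = 0; i < n; i++) v[i] *= scale;
}

int main(void)
{
    /* the same eigenvector reported with two different arbitrary scales/phases */
    double complex a[2] = { 1.0 + 1.0*I, 2.0 };
    double complex b[2] = { (1.0 + 1.0*I) * (0.3 - 0.4*I), 2.0 * (0.3 - 0.4*I) };

    normalize_phase(a, 2);
    normalize_phase(b, 2);   /* after normalization, a and b agree */

    for (int i = 0; i < 2; i++)
        printf("a[%d]=(%.4f,%.4f)  b[%d]=(%.4f,%.4f)\n",
               i, creal(a[i]), cimag(a[i]), i, creal(b[i]), cimag(b[i]));
    return 0;
}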

Thank you!

APT Repository not working (signatures invalid)


I'm using the steps from here and getting a GPG error: installing-intel-free-libs-and-python-apt-repo

Here are the steps I'm doing:

  1. docker run -it --rm ubuntu:18.04 bash
  2. apt-get update
  3. apt-get install -y --show-progress curl gnupg
  4. curl https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS... | apt-key add -
  5. echo 'deb https://apt.repos.intel.com/mkl all main'> /etc/apt/sources.list.d/intel-mkl.list
  6. apt-get update

Here is the error (note: it is happening with the TBB repository as well):

root@fc9c5f2a3649:/# apt-get update
Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:2 https://apt.repos.intel.com/mkl all InRelease [4430 B]               
Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease                         
Err:2 https://apt.repos.intel.com/mkl all InRelease                             
  The following signatures were invalid: EXPKEYSIG 1A8497B11911E097 "CN = Intel(R) Software Development Products", O=Intel Corporation
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease                
Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Reading package lists... Done                      
W: GPG error: https://apt.repos.intel.com/mkl all InRelease: The following signatures were invalid: EXPKEYSIG 1A8497B11911E097 "CN = Intel(R) Software Development Products", O=Intel Corporation
E: The repository 'https://apt.repos.intel.com/mkl all InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

gpg keys failure prevent yum install of Intel libraries


To install intel-daal-core-2018.1-163 on CentOS, I used these two repo files:

/etc/yum.repos.d/intel-mkl.repo:

[intel-mkl-core-2018.1-163] 
name='Intel(R) Intel Math Kernel Library' 
baseurl=https://yum.repos.intel.com/mkl 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://yum.repos.intel.com/mkl/setup/PUBLIC_KEY.PUB 
debuglevel=10 
enabled=1

/etc/yum.repos.d/intel-daal.repo:

[intel-daal-core-2018.1-163] 
name='Intel(R) Data Analytics Acceleration Library' 
baseurl=https://yum.repos.intel.com/daal 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://yum.repos.intel.com/daal/setup/PUBLIC_KEY.PUB 
debuglevel=10 
enabled=1

But, from this morning, the yum install command (sudo yum install intel-daal-core-2018.1-163) fails with:

failure: repodata/repomd.xml from intel-mkl-core-2018.1-163: [Errno 256] No more mirrors to try. https://yum.repos.intel.com/mkl/repodata/repomd.xml: [Errno -1] Gpg Keys not imported, cannot verify repomd.xml for repo intel-mkl-core-2018.1-163

However, if I stop checking the gpg keys, namely - I change the repo files to:

[intel-mkl-core-2018.1-163] 
name='Intel(R) Intel Math Kernel Library' 
baseurl=https://yum.repos.intel.com/mkl 
enabled=1 
gpgcheck=1 
repo_gpgcheck=0 
#gpgkey=https://yum.repos.intel.com/mkl/setup/PUBLIC_KEY.PUB 
debuglevel=10 
enabled=1 


[intel-daal-core-2018.1-163] 
name='Intel(R) Data Analytics Acceleration Library' 
baseurl=https://yum.repos.intel.com/daal 
enabled=1 
gpgcheck=0 
repo_gpgcheck=0 
#gpgkey=https://yum.repos.intel.com/daal/setup/PUBLIC_KEY.PUB 
debuglevel=10 
enabled=1

And do:

sudo yum clean all ; sudo rm -rf /var/cache/yum

Then sudo yum install intel-daal-core-2018.1-163 is successful.

Googling didn't turn up any notice from Intel that they changed something.
Does anyone know the reason for the yum GPG-key failure?
