Hi,
I'm evaluating the MKL library with a Phi coprocessor for possible production use at a bioinformatics institute. The production datasets are very large, but I'm starting with test data of about 6GB so that it fits within the Phi coprocessor's 8GB of memory. I've used MKL's Automatic Offload capability (very appealing in terms of rolling this hardware out while minimising code changes) to do a Cholesky factorisation with LAPACKE_dpotrf. Comparing a single Xeon thread with no coprocessor involvement against a single Xeon thread with Automatic Offload to the Phi coprocessor gives roughly a 65% reduction in wallclock time, and that runtime includes the data transfer to the Phi coprocessor. The speedup would probably improve further if I could also compute the inverse on the Phi coprocessor by calling LAPACKE_dpotri on the result of the Cholesky call, but I don't think dpotri is supported for Automatic Offload as yet. The call sequence I have in mind is sketched just below.
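For concreteness, here is a minimal sketch of the factorise-then-invert sequence, assuming column-major storage with the lower triangle, mkl_mic_enable() in place of the MKL_MIC_ENABLE=1 environment variable, and a small placeholder matrix size (the real runs use a ~6GB matrix); the identity fill is only there so the example executes, and error handling is kept to a minimum:

#include <stdio.h>
#include <mkl.h>   /* LAPACKE_dpotrf, LAPACKE_dpotri, mkl_mic_enable, mkl_malloc */

int main(void)
{
    /* Placeholder size for illustration only; the real test matrix is ~6GB. */
    const lapack_int n = 1000;
    double *a = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    if (!a) return 1;

    /* Fill with the identity so the factorisation succeeds; the production
       code would load the real symmetric positive definite matrix here. */
    for (lapack_int i = 0; i < n * n; ++i) a[i] = 0.0;
    for (lapack_int i = 0; i < n; ++i) a[i * n + i] = 1.0;

    mkl_mic_enable();   /* same effect as setting MKL_MIC_ENABLE=1 */

    /* Cholesky factorisation: eligible for Automatic Offload today. */
    lapack_int info = LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', n, a, n);
    if (info != 0) { fprintf(stderr, "dpotrf returned %d\n", (int)info); return 1; }

    /* Inverse from the Cholesky factor: the call I'd also like offloaded. */
    info = LAPACKE_dpotri(LAPACK_COL_MAJOR, 'L', n, a, n);
    if (info != 0) { fprintf(stderr, "dpotri returned %d\n", (int)info); return 1; }

    mkl_free(a);
    return 0;
}

I build this with icc and -mkl; the single-thread host-only baseline is the same code with the offload left disabled (MKL_MIC_ENABLE=0).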
On to my queries:

a/ Perhaps someone from Intel could give some guidance on when the dpotri call might be supported for Automatic Offload? I could have a crack at implementing the function myself, but I'd prefer to use an optimised Intel version.
b/ Is there a list of which MKL calls are currently supported in the Phi environment and which of them are optimised? The release notes give incremental details, so I could piece this together myself, but it would be handy to have it already collated.
c/ As part of this work we noticed, using MKL's automatic parallelisation on just the Xeon cores (2 CPUs, 10 cores per CPU) with no coprocessor involvement (MKL_MIC_ENABLE=0), that runtimes dropped off nicely from 1 to 8 cores, rose at 9 cores relative to 8, dropped nicely again up to 12 cores, increased at 13 cores, and then dropped at 14 cores, after which runtimes stayed about the same as the 14-core case (a sketch of this kind of scaling run is below). For the Xeon-only work we are using
KMP_AFFINITY=verbose,granularity=fine,compact,1,1
I'm a bit perplexed about the runtime increases at 9 and 13 cores.
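To make query c concrete, here is a minimal sketch of the kind of per-thread-count timing loop involved, again with a small placeholder matrix, and with mkl_set_num_threads() and mkl_mic_disable() standing in for MKL_NUM_THREADS and MKL_MIC_ENABLE=0, and dsecnd() for timing; it's illustrative rather than our exact harness:

#include <stdio.h>
#include <string.h>
#include <mkl.h>   /* LAPACKE_dpotrf, mkl_set_num_threads, mkl_mic_disable, dsecnd */

/* Time one Cholesky factorisation at a given MKL thread count. */
static double time_dpotrf(int threads, lapack_int n, double *a)
{
    mkl_set_num_threads(threads);

    /* Reset to an SPD matrix (identity) before each factorisation. */
    memset(a, 0, (size_t)n * n * sizeof(double));
    for (lapack_int i = 0; i < n; ++i) a[i * n + i] = 1.0;

    double t0 = dsecnd();
    LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', n, a, n);
    return dsecnd() - t0;
}

int main(void)
{
    const lapack_int n = 1000;   /* placeholder size */
    double *a = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    if (!a) return 1;

    mkl_mic_disable();   /* host-only run, same effect as MKL_MIC_ENABLE=0 */

    for (int t = 1; t <= 20; ++t)   /* 2 sockets x 10 cores each */
        printf("%2d threads: %.3f s\n", t, time_dpotrf(t, n, a));

    mkl_free(a);
    return 0;
}

Here mkl_set_num_threads() is just the programmatic equivalent of setting MKL_NUM_THREADS for each run.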
Thanks in advance for any help.
David