
Re: [EXT] Re: getfem installation


From: Chen,Jinzhen
Subject: Re: [EXT] Re: getfem installation
Date: Tue, 30 Nov 2021 19:00:31 +0000
User-agent: Microsoft-MacOutlook/16.43.20110804

Dear Dr. Poulios,

 

Thank you so much for your prompt response and information; they are very helpful. I have cc'ed this email to getfem-users@nongnu.org so that I do not need to send to your personal email next time. After reading your reference configure command, I started from scratch but still got compilation errors.

 

I installed METIS and MUMPS on the system from the EPEL repository.

 

[root@ ~]# rpm -qa |grep metis

metis-devel-5.1.0-12.el7.x86_64

metis-5.1.0-12.el7.x86_64

[root@ ~]# rpm -qa |grep MUMPS

MUMPS-srpm-macros-5.3.5-1.el7.noarch

MUMPS-5.3.5-1.el7.x86_64

MUMPS-devel-5.3.5-1.el7.x86_64

MUMPS-common-5.3.5-1.el7.noarch
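
For reference, the exact locations of the headers and libraries these EPEL packages install can be listed with rpm; the grep patterns below are only a sketch and the actual layout should be confirmed on the system:

rpm -ql MUMPS-devel | grep -E '\.h$|\.so'
rpm -ql metis-devel | grep -E '\.h$|\.so'

Note that EPEL may also ship separate MPI builds of MUMPS (e.g. MUMPS-openmpi packages); for a GETFEM_PARA_LEVEL=2 build it is the MPI variant that should be linked, if it is available in the repository.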

 

I used the openmpi/4.1.1, gcc/9.3.0, blas/3.8.0 and zlib/1.2.11 modules for this build. Below is the output of the module list command.

 

[ris_hpc_apps@r1drpswdev3 getfem-5.4.1]$ module list

 

Currently Loaded Modules:

  1) openmpi/4.1.1   2) python/3.7.3-anaconda   3) blas/3.8.0   4) zlib/1.2.11   5) gcc/9.3.0

 

Here is my configure command, run in the getfem-5.4.1 directory:

 

./configure CXX="/risapps/rhel7/openmpi/4.1.1/bin/mpic++" CC="/risapps/rhel7/openmpi/4.1.1/bin/mpicc"  FC="/risapps/rhel7/openmpi/4.1.1/bin/mpifort"  LIBS="-L/risapps/rhel7/gcc/9.3.0/lib64 -L/risapps/rhel7/openmpi/4.1.1/lib -L/risapps/rhel7/blas/3.8.0 -L/risapps/rhel7/zlib/1.2.11/lib  -L/usr/lib64 -lmetis -lzmumps -ldmumps -lcmumps -lsmumps -lmumps_common" CXXFLAGES="-I/risapps/rhel7/gcc/9.3.0/include -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include -I/usr/include -I/usr/include/MUMPS" CPPFLAGES="-I/risapps/rhel7/gcc/9.3.0/include -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include -I/usr/include -I/usr/include/MUMPS" CFLAGS="-I/risapps/rhel7/gcc/9.3.0/include -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include -I/usr/include -I/usr/include/MUMPS" PYTHON="/risapps/rhel7/python/3.7.3/bin/python" PYTHON_VERSION=3.7.3 --with-mumps="-L/usr/lib64"  --with-mumps-include-dir="-I/usr/include/MUMPS" --with-blas="-L/risapps/rhel7/blas/3.8.0" --prefix=/risapps/rhel7/getfem-mpi/5.4.1 --enable-shared --enable-metis --enable-par-mumps -enable-paralevel=2
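
A side note on the command above: configure reads the variables CXXFLAGS and CPPFLAGS, so the CXXFLAGES and CPPFLAGES assignments are silently ignored. A corrected sketch of just those two assignments, keeping the same include paths (whether these paths are the right ones for this cluster is an assumption), would be:

CPPFLAGS="-I/risapps/rhel7/gcc/9.3.0/include -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include -I/usr/include -I/usr/include/MUMPS"
CXXFLAGS="-I/risapps/rhel7/gcc/9.3.0/include -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include -I/usr/include -I/usr/include/MUMPS"

The rest of the invocation would stay unchanged.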

 

The current errors from the make command are the following:

….

libtool: compile:  /risapps/rhel7/openmpi/4.1.1/bin/mpic++ -DHAVE_CONFIG_H -I. -I.. -I../src -I../src -I.. -I/usr/local/include -DGETFEM_PARA_LEVEL=2 -DGMM_USES_MPI -DGMM_USES_BLAS -DGMM_USES_BLAS_INTERFACE -I/usr/include/MUMPS -O3 -std=c++14 -MT dal_bit_vector.lo -MD -MP -MF .deps/dal_bit_vector.Tpo -c dal_bit_vector.cc  -fPIC -DPIC -o .libs/dal_bit_vector.o

In file included from ./gmm/gmm_kernel.h:49,

                 from getfem/bgeot_config.h:50,

                 from getfem/getfem_omp.h:46,

                 from getfem/dal_basic.h:42,

                 from getfem/dal_bit_vector.h:51,

                 from dal_bit_vector.cc:23:

./gmm/gmm_matrix.h:956:32: error: ‘MPI_Datatype’ does not name a type

  956 |   template <typename T> inline MPI_Datatype mpi_type(T)

      |                                ^~~~~~~~~~~~

./gmm/gmm_matrix.h:958:10: error: ‘MPI_Datatype’ does not name a type

  958 |   inline MPI_Datatype mpi_type(double) { return MPI_DOUBLE; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:959:10: error: ‘MPI_Datatype’ does not name a type

  959 |   inline MPI_Datatype mpi_type(float) { return MPI_FLOAT; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:960:10: error: ‘MPI_Datatype’ does not name a type

  960 |   inline MPI_Datatype mpi_type(long double) { return MPI_LONG_DOUBLE; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:962:10: error: ‘MPI_Datatype’ does not name a type

  962 |   inline MPI_Datatype mpi_type(std::complex<float>) { return MPI_COMPLEX; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:963:10: error: ‘MPI_Datatype’ does not name a type

  963 |   inline MPI_Datatype mpi_type(std::complex<double>) { return MPI_DOUBLE_COMPLEX; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:965:10: error: ‘MPI_Datatype’ does not name a type

  965 |   inline MPI_Datatype mpi_type(int) { return MPI_INT; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:966:10: error: ‘MPI_Datatype’ does not name a type

  966 |   inline MPI_Datatype mpi_type(unsigned int) { return MPI_UNSIGNED; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:967:10: error: ‘MPI_Datatype’ does not name a type

  967 |   inline MPI_Datatype mpi_type(long) { return MPI_LONG; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h:968:10: error: ‘MPI_Datatype’ does not name a type

  968 |   inline MPI_Datatype mpi_type(unsigned long) { return MPI_UNSIGNED_LONG; }

      |          ^~~~~~~~~~~~

./gmm/gmm_matrix.h: In function ‘typename gmm::strongest_value_type3<V1, V2, MATSP>::value_type gmm::vect_sp(const gmm::mpi_distributed_matrix<MAT>&, const V1&, const V2&)’:

./gmm/gmm_matrix.h:1012:50: error: ‘MPI_SUM’ was not declared in this scope

1012 |     MPI_Allreduce(&res, &rest, 1, mpi_type(T()), MPI_SUM,MPI_COMM_WORLD);

      |                                                  ^~~~~~~

./gmm/gmm_matrix.h: In function ‘void gmm::mult_add(const gmm::mpi_distributed_matrix<MAT>&, const V1&, V2&)’:

./gmm/gmm_matrix.h:1023:20: error: there are no arguments to ‘MPI_Wtime’ that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must be available [-fpermissive]

1023 |     double t_ref = MPI_Wtime();

      |                    ^~~~~~~~~

./gmm/gmm_matrix.h:1023:20: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)

./gmm/gmm_matrix.h:1026:21: error: there are no arguments to ‘MPI_Wtime’ that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must be available [-fpermissive]

1026 |     double t_ref2 = MPI_Wtime();

      |                     ^~~~~~~~~

./gmm/gmm_matrix.h:1028:19: error: ‘MPI_SUM’ was not declared in this scope

1028 |                   MPI_SUM,MPI_COMM_WORLD);

      |                   ^~~~~~~

./gmm/gmm_matrix.h:1029:18: error: there are no arguments to ‘MPI_Wtime’ that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must be available [-fpermissive]

1029 |     tmult_tot2 = MPI_Wtime()-t_ref2;

      |                  ^~~~~~~~~

./gmm/gmm_matrix.h:1032:17: error: there are no arguments to ‘MPI_Wtime’ that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must be available [-fpermissive]

1032 |     tmult_tot = MPI_Wtime()-t_ref;

      |                 ^~~~~~~~~

make[2]: *** [Makefile:941: dal_bit_vector.lo] Error 1

make[2]: Leaving directory '/risapps/build7/getfem-5.4.1/src'

make[1]: *** [Makefile:577: all-recursive] Error 1

make[1]: Leaving directory '/risapps/build7/getfem-5.4.1'

make: *** [Makefile:466: all] Error 2
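
All of the errors above are about MPI symbols (MPI_Datatype, MPI_SUM, MPI_Wtime) being unknown while compiling with -DGETFEM_PARA_LEVEL=2 -DGMM_USES_MPI, i.e. the declarations from mpi.h are apparently not visible at the point where gmm/gmm_matrix.h needs them. As a quick, independent check that the Open MPI wrapper and headers themselves work, a minimal test program along the following lines could be compiled with the same mpic++ (the file name and contents are just a throwaway sketch):

cat > mpi_check.cc <<'EOF'
#include <mpi.h>
#include <cstdio>
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  MPI_Datatype t = MPI_DOUBLE;   /* one of the types gmm_matrix.h fails to see */
  (void)t;
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  std::printf("rank %d: mpi.h OK, wtime=%f\n", rank, MPI_Wtime());
  MPI_Finalize();
  return 0;
}
EOF
/risapps/rhel7/openmpi/4.1.1/bin/mpic++ mpi_check.cc -o mpi_check && ./mpi_check

Running /risapps/rhel7/openmpi/4.1.1/bin/mpic++ -showme in addition prints the exact include and link flags the wrapper adds, which helps confirm that the intended MPI installation is being picked up.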

 

I really appreciate your help. Thank you again!

 

Best Regards

Jinzhen Chen

 

 

From: Konstantinos Poulios <kopo@mek.dtu.dk>
Date: Tuesday, November 30, 2021 at 1:53 AM
To: "Chen,Jinzhen" <JChen24@mdanderson.org>
Subject: [EXT] Re: getfem installation

 

WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe.

 

Dear Jinzhen Chen,

 

Thanks for your question. Yes, you should be able to compile GetFEM, including the parallel version, on Red Hat. I haven't tried it, but there isn't anything distribution-specific in the GetFEM code.

 

Having said that, the parallel version has not been tested very recently, and it might need some performance fixes from our side to get good scaling for Anne Cecile's problem. In any case, having the parallel version of GetFEM compiled on your cluster is a good starting point for detecting bottlenecks.

 

The tricky part of building GetFEM is normally how to link to MUMPS, METIS and the other dependencies. If you send me the compilation errors that you get, either to this address or to the getfem mailing list getfem-users@nongnu.org, I can try to help you resolve the issues.

 

As an additional reference: some time ago, when I asked our cluster administrators to compile GetFEM, they used the following configure options:

 

$ ../configure CXX=mpicxx CC=mpicc FC=mpifort LIBS="-L/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib -L/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib64 -L/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib64 -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/zlib/1.2.11/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/generic/numactl/2.0.11/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/libxml2/2.9.7/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/hwloc/1.11.9/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openmpi/3.0.0/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/parmetis/4.0.3/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/scalapack/204/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/scotch/6.0.4/gnu-7.3.0/lib -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/mumps/5.1.2/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib64 -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib64 -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/zlib/1.2.11/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/numactl/2.0.11/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/libxml2/2.9.7/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/hwloc/1.11.9/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openmpi/3.0.0/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/parmetis/4.0.3/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/scalapack/204/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/scotch/6.0.4/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/mumps/5.1.2/gnu-7.3.0/lib -lzmumps -ldmumps -lcmumps -lsmumps -lmumps_common -lpord -lesmumps -lscotch -lscotcherr -lparmetis -lmetis -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -lopenblas" --disable-openmp --enable-paralevel --enable-metis --enable-par-mumps --enable-python --disable-boost --disable-matlab --disable-scilab --with-blas="-L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib -lopenblas" --prefix=<change-here-for-your-installation-directory>

 

Best regards

Konstantinos

 

On Tue, 2021-11-30 at 00:24 +0000, Chen,Jinzhen wrote:

Dear Dr Konstantinos Poulios,

 

My name is Jinzhen Chen; I am an HPC administrator at MD Anderson Cancer Center, Houston, USA. I am helping a user (Dr. Lesage, Anne Cecile J) to install GetFEM on our HPC cluster, whose OS is RHEL 7.9. I got the basic build installed; however, when I tried to use the configure option --enable-paralevel=2, based on https://getfem.org/userdoc/parallel.html, I kept getting compilation errors. It looks like some libraries are missing, not visible, or not compatible. I have METIS and MUMPS installed on the system and used the openmpi/4.1.1 module.

 

My question is: is it possible to install the parallel version of GetFEM on RHEL 7? If so, how can I contact a developer or someone else about these issues? I really appreciate your help and look forward to hearing from you.

 

Thank you very much!

 

Regards

Jinzhen  Chen – HPC Team

MD Anderson Cancer Center

inside:Information Services   

inside:HPC Request

Email: jchen24@mdanderson.org | Tel: 713-745-6226

 

 

The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.

