This page was generated from unit-5a.1-mpi/poisson_mpi.ipynb.
5.1 Poisson Equation in Parallel
NGSolve can be executed on a cluster using the MPI message passing interface. You can download poisson_mpi.py and run it as
mpirun -np 4 python3 poisson_mpi.py
This computes and saves a solution, which we can then visualize with drawsolution.py:
netgen drawsolution.py
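Since the script saves the solution for drawsolution.py to pick up, here is a minimal sketch of how such a save step can look; the per-rank pickle files are an assumption for illustration, not the actual contents of poisson_mpi.py:
import pickle
from mpi4py.MPI import COMM_WORLD as comm
# gfu is the GridFunction computed earlier in the script;
# every rank stores its part of the solution in its own file (assumed layout)
pickle.dump (gfu, open(f"solution_rank{comm.rank}.pkl", "wb"))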
For proper parallel execution, Netgen/NGSolve must be configured with -DUSE_MPI=ON. Recent binaries for Linux and Mac should come with MPI support. If you are unsure whether your Netgen/NGSolve supports MPI, look for output like "Including MPI version 3.1" during Netgen startup.
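Independently of the NGSolve build, you can also query which MPI standard your mpi4py installation provides; a small sketch (this checks the MPI library behind mpi4py, not NGSolve itself):
from mpi4py import MPI
print (MPI.Get_version())   # e.g. (3, 1) for MPI standard 3.1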
MPI-parallel execution using ipyparallel
For the MPI Jupyter tutorials we use the ipyparallel module. Please consult https://ipyparallel.readthedocs.io for installation instructions.
We start a cluster of MPI engines:
[1]:
from ipyparallel import Cluster
c = await Cluster(engines="mpi").start_and_connect(n=4, activate=True)
Starting 4 engines with <class 'ipyparallel.cluster.launcher.MPIEngineSetLauncher'>
We use mpi4py (https://mpi4py.readthedocs.io/) for issuing MPI calls from Python. The %%px cell magic executes the cell on all engines in parallel:
[2]:
%%px
from mpi4py.MPI import COMM_WORLD as comm
print (comm.rank, comm.size)
[stdout:0] 0 4
[stdout:2] 2 4
[stdout:1] 1 4
[stdout:3] 3 4
The master process (rank==0) generates the mesh and distributes it within the group of processes defined by the communicator. All other ranks receive their part of the mesh. The function mesh.GetNE(VOL) returns the local number of elements:
[3]:
%%px
from ngsolve import *
from netgen.occ import unit_square
import netgen.meshing   # for netgen.meshing.Mesh.Receive
if comm.rank == 0:
    mesh = Mesh(unit_square.GenerateMesh(maxh=0.1).Distribute(comm))
else:
    mesh = Mesh(netgen.meshing.Mesh.Receive(comm))
print (mesh.GetNE(VOL))
[stdout:3] 78
[stdout:1] 73
[stdout:2] 79
[stdout:0] 0
We can define spaces, bilinear / linear forms, and gridfunctions in the same way as in sequential mode. But now the degrees of freedom are distributed over the cluster, following the distribution of the mesh. The finite element space keeps track of how dofs shared between processes fit together (a small check of the local and global dof counts follows the solver output below).
[4]:
%%px
fes = H1(mesh, order=3, dirichlet=".*")
u,v = fes.TnT()
a = BilinearForm(grad(u)*grad(v)*dx)
pre = Preconditioner(a, "local")   # simple local (Jacobi-type) preconditioner
a.Assemble()
f = LinearForm(1*v*dx).Assemble()
gfu = GridFunction(fes)
from ngsolve.krylovspace import CGSolver
inv = CGSolver(a.mat, pre.mat, printrates=comm.rank==0, maxiter=200, tol=1e-8)   # only rank 0 prints residuals
gfu.vec.data = inv*f.vec
[stdout:0] CG iteration 1, residual = 0.05313636107212468
CG iteration 2, residual = 0.07324483851673459
CG iteration 3, residual = 0.06077337326925863
CG iteration 4, residual = 0.04965675927264464
CG iteration 5, residual = 0.04628399294099898
CG iteration 6, residual = 0.030515633315404585
CG iteration 7, residual = 0.02589831756935774
CG iteration 8, residual = 0.016994550808865805
CG iteration 9, residual = 0.013010287656153169
CG iteration 10, residual = 0.011432088760998948
CG iteration 11, residual = 0.007816924495134887
CG iteration 12, residual = 0.003509806660977249
CG iteration 13, residual = 0.001738713084906281
CG iteration 14, residual = 0.0011502309396829048
CG iteration 15, residual = 0.0008184957095722675
CG iteration 16, residual = 0.0005419630762720759
CG iteration 17, residual = 0.00038670700524591413
CG iteration 18, residual = 0.0002836742157757891
CG iteration 19, residual = 0.000177734361494144
CG iteration 20, residual = 0.00013341157334192933
CG iteration 21, residual = 9.379683067760118e-05
CG iteration 22, residual = 6.550835827505407e-05
CG iteration 23, residual = 5.123579298412052e-05
CG iteration 24, residual = 3.536225147414028e-05
CG iteration 25, residual = 2.2782716580339465e-05
CG iteration 26, residual = 1.5677459278638398e-05
CG iteration 27, residual = 1.1117149587134128e-05
CG iteration 28, residual = 7.709451159725068e-06
CG iteration 29, residual = 5.474421176424823e-06
CG iteration 30, residual = 3.687679665740634e-06
CG iteration 31, residual = 2.083369575095339e-06
CG iteration 32, residual = 1.6564423874702173e-06
CG iteration 33, residual = 1.3736999959995863e-06
CG iteration 34, residual = 8.539631243750829e-07
CG iteration 35, residual = 4.617497527323203e-07
CG iteration 36, residual = 2.950720008146275e-07
CG iteration 37, residual = 1.9900657697864096e-07
CG iteration 38, residual = 1.4268161595527723e-07
CG iteration 39, residual = 9.868137345832683e-08
CG iteration 40, residual = 7.020839769079075e-08
CG iteration 41, residual = 4.5770714733194566e-08
CG iteration 42, residual = 2.9622799205887258e-08
CG iteration 43, residual = 1.9267023819974943e-08
CG iteration 44, residual = 1.3684980860594161e-08
CG iteration 45, residual = 9.117459310028138e-09
CG iteration 46, residual = 5.918193053287846e-09
CG iteration 47, residual = 3.739344480360375e-09
CG iteration 48, residual = 2.413369403666743e-09
CG iteration 49, residual = 1.7270530804851814e-09
CG iteration 50, residual = 1.1180335860328558e-09
CG iteration 51, residual = 7.628606216673648e-10
CG iteration 52, residual = 5.294659181770532e-10
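To see the dof distribution mentioned above, each rank can print its local number of dofs next to the global count. This is a small sketch, not part of the original notebook; it assumes the attributes fes.ndof (dofs stored on this rank) and fes.ndofglobal (each shared dof counted once):
%%px
# local dofs stored on this rank vs. global number of dofs
print (comm.rank, "local ndof =", fes.ndof, "global ndof =", fes.ndofglobal)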
Parallel pickling allows us to serialize the distributed solution and transfer it to the client. From the process with rank=0 we obtain the whole mesh and computed solution; from all other processes we obtain only the local parts of the mesh and solution:
[5]:
gfu = c[:]["gfu"]
We can now draw the whole solution using the master process’s gfu[0].
[6]:
from ngsolve.webgui import Draw
Draw (gfu[0]);
Drawing gfu[n] will draw only the part of the solution that the process with rank=n possesses.
[7]:
Draw (gfu[3]);
We can also visualize the sub-domains obtained by the automatic partitioning, without using any computed solution, as follows.
[8]:
%%px
fesL2 = L2(mesh, order=0)            # piecewise constant space
gfL2 = GridFunction(fesL2)
gfL2.vec.local_vec[:] = comm.rank    # mark each local element with the owning rank
[9]:
gfL2 = c[:]["gfL2"]
Draw (gfL2[0]);