This page was generated from unit-5a.2-pardofs/pardofs.ipynb.

5.2 Parallel dofs and Vector-types

In this tutorial we learn how NGSolve represents distributed finite element spaces.

[1]:
from ipyparallel import Cluster
c = await Cluster(engines="mpi").start_and_connect(n=4, activate=True)
Starting 4 engines with <class 'ipyparallel.cluster.launcher.MPIEngineSetLauncher'>
[2]:
%%px
from ngsolve import *
import netgen.meshing
from netgen.geom2d import unit_square
comm = MPI.COMM_WORLD
if comm.rank == 0:
    mesh = Mesh(unit_square.GenerateMesh(maxh=0.1).Distribute(comm))
else:
    mesh = Mesh(netgen.meshing.Mesh.Receive(comm))

Create the space on the distributed mesh. All processes agree on the global number of dofs. Locally, each rank has access only to the subset of dofs associated with its elements. Some dofs are shared by several ranks:

[3]:
%%px
fes = H1(mesh, order=1)
# fes = L2(mesh, order=0)
print ("global dofs =", fes.ndofglobal, ", local dofs =", fes.ndof, \
       ", sum of local dofs =", comm.allreduce(fes.ndof))
[stdout:1] global dofs = 136 , local dofs = 50 , sum of local dofs = 155

[stdout:3] global dofs = 136 , local dofs = 53 , sum of local dofs = 155

[stdout:0] global dofs = 136 , local dofs = 0 , sum of local dofs = 155

[stdout:2] global dofs = 136 , local dofs = 52 , sum of local dofs = 155

Parallel Dofs

A ParallelDofs object maintains the information about how dofs are connected across the cluster. The ParallelDofs object is generated by the FESpace, which has access to the connectivity of the mesh.
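The Dof2Proc and Proc2Dof views used below are two sides of the same bookkeeping. As a plain-Python sketch (not the NGSolve API; the local data is made up for illustration), either map determines the other:

```python
# dof2proc maps each local dof to the list of *other* ranks sharing it
# (hypothetical data for one rank); proc2dof is its inverse, listing
# the shared dofs per neighbor rank -- mirroring Dof2Proc / Proc2Dof.
dof2proc = {0: [], 1: [3], 2: [], 3: [1], 4: [1, 3]}

proc2dof = {}
for dof, ranks in dof2proc.items():
    for r in ranks:
        proc2dof.setdefault(r, []).append(dof)

print(proc2dof)  # {3: [1, 4], 1: [3, 4]}
```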

[4]:
%%px
pardofs = fes.ParallelDofs()
for k in range(pardofs.ndoflocal):
    print ("dof", k, "is shared with ranks", list(pardofs.Dof2Proc(k)))
[stdout:2] dof 0 is shared with ranks []
dof 1 is shared with ranks [3]
dof 2 is shared with ranks []
dof 3 is shared with ranks []
dof 4 is shared with ranks []
dof 5 is shared with ranks []
dof 6 is shared with ranks []
dof 7 is shared with ranks []
dof 8 is shared with ranks []
dof 9 is shared with ranks []
dof 10 is shared with ranks []
dof 11 is shared with ranks [1]
dof 12 is shared with ranks [3]
dof 13 is shared with ranks []
dof 14 is shared with ranks []
dof 15 is shared with ranks []
dof 16 is shared with ranks []
dof 17 is shared with ranks []
dof 18 is shared with ranks []
dof 19 is shared with ranks []
dof 20 is shared with ranks []
dof 21 is shared with ranks []
dof 22 is shared with ranks [1]
dof 23 is shared with ranks [3]
dof 24 is shared with ranks []
dof 25 is shared with ranks []
dof 26 is shared with ranks []
dof 27 is shared with ranks []
dof 28 is shared with ranks []
dof 29 is shared with ranks []
dof 30 is shared with ranks []
dof 31 is shared with ranks []
dof 32 is shared with ranks []
dof 33 is shared with ranks [1]
dof 34 is shared with ranks []
dof 35 is shared with ranks [3]
dof 36 is shared with ranks []
dof 37 is shared with ranks []
dof 38 is shared with ranks []
dof 39 is shared with ranks []
dof 40 is shared with ranks []
dof 41 is shared with ranks [1]
dof 42 is shared with ranks []
dof 43 is shared with ranks [3]
dof 44 is shared with ranks []
dof 45 is shared with ranks []
dof 46 is shared with ranks []
dof 47 is shared with ranks [1, 3]
dof 48 is shared with ranks [1]
dof 49 is shared with ranks [3]
dof 50 is shared with ranks []
dof 51 is shared with ranks [3]

[stdout:1] dof 0 is shared with ranks []
dof 1 is shared with ranks []
dof 2 is shared with ranks []
dof 3 is shared with ranks []
dof 4 is shared with ranks []
dof 5 is shared with ranks []
dof 6 is shared with ranks []
dof 7 is shared with ranks []
dof 8 is shared with ranks []
dof 9 is shared with ranks []
dof 10 is shared with ranks []
dof 11 is shared with ranks []
dof 12 is shared with ranks [3]
dof 13 is shared with ranks [2]
dof 14 is shared with ranks []
dof 15 is shared with ranks []
dof 16 is shared with ranks []
dof 17 is shared with ranks []
dof 18 is shared with ranks []
dof 19 is shared with ranks []
dof 20 is shared with ranks []
dof 21 is shared with ranks []
dof 22 is shared with ranks []
dof 23 is shared with ranks []
dof 24 is shared with ranks []
dof 25 is shared with ranks []
dof 26 is shared with ranks [3]
dof 27 is shared with ranks [2]
dof 28 is shared with ranks []
dof 29 is shared with ranks []
dof 30 is shared with ranks []
dof 31 is shared with ranks []
dof 32 is shared with ranks []
dof 33 is shared with ranks []
dof 34 is shared with ranks []
dof 35 is shared with ranks [3]
dof 36 is shared with ranks []
dof 37 is shared with ranks [2]
dof 38 is shared with ranks []
dof 39 is shared with ranks []
dof 40 is shared with ranks []
dof 41 is shared with ranks []
dof 42 is shared with ranks [3]
dof 43 is shared with ranks []
dof 44 is shared with ranks [2]
dof 45 is shared with ranks []
dof 46 is shared with ranks []
dof 47 is shared with ranks [3]
dof 48 is shared with ranks [2, 3]
dof 49 is shared with ranks [2]

[stdout:3] dof 0 is shared with ranks []
dof 1 is shared with ranks [1]
dof 2 is shared with ranks []
dof 3 is shared with ranks []
dof 4 is shared with ranks []
dof 5 is shared with ranks []
dof 6 is shared with ranks []
dof 7 is shared with ranks []
dof 8 is shared with ranks []
dof 9 is shared with ranks []
dof 10 is shared with ranks []
dof 11 is shared with ranks []
dof 12 is shared with ranks []
dof 13 is shared with ranks []
dof 14 is shared with ranks [2]
dof 15 is shared with ranks [1]
dof 16 is shared with ranks []
dof 17 is shared with ranks []
dof 18 is shared with ranks []
dof 19 is shared with ranks []
dof 20 is shared with ranks []
dof 21 is shared with ranks []
dof 22 is shared with ranks []
dof 23 is shared with ranks []
dof 24 is shared with ranks []
dof 25 is shared with ranks []
dof 26 is shared with ranks [2]
dof 27 is shared with ranks [1]
dof 28 is shared with ranks []
dof 29 is shared with ranks []
dof 30 is shared with ranks []
dof 31 is shared with ranks []
dof 32 is shared with ranks []
dof 33 is shared with ranks []
dof 34 is shared with ranks []
dof 35 is shared with ranks [2]
dof 36 is shared with ranks []
dof 37 is shared with ranks []
dof 38 is shared with ranks [1]
dof 39 is shared with ranks []
dof 40 is shared with ranks [2]
dof 41 is shared with ranks []
dof 42 is shared with ranks []
dof 43 is shared with ranks []
dof 44 is shared with ranks []
dof 45 is shared with ranks [2]
dof 46 is shared with ranks []
dof 47 is shared with ranks [1]
dof 48 is shared with ranks [1, 2]
dof 49 is shared with ranks []
dof 50 is shared with ranks []
dof 51 is shared with ranks [2]
dof 52 is shared with ranks [2]

[5]:
%%px
print ("I share dofs with ranks:", list(pardofs.ExchangeProcs()))
for k in range(MPI.COMM_WORLD.size):
    print ("with rank", k, "I share dofs", list(pardofs.Proc2Dof(k)))
[stdout:1] I share dofs with ranks: [2, 3]
with rank 0 I share dofs []
with rank 1 I share dofs []
with rank 2 I share dofs [13, 27, 37, 44, 48, 49]
with rank 3 I share dofs [12, 26, 35, 42, 47, 48]

[stdout:3] I share dofs with ranks: [1, 2]
with rank 0 I share dofs []
with rank 1 I share dofs [1, 15, 27, 38, 47, 48]
with rank 2 I share dofs [14, 26, 35, 40, 45, 48, 51, 52]
with rank 3 I share dofs []

[stdout:2] I share dofs with ranks: [1, 3]
with rank 0 I share dofs []
with rank 1 I share dofs [11, 22, 33, 41, 47, 48]
with rank 2 I share dofs []
with rank 3 I share dofs [1, 12, 23, 35, 43, 47, 49, 51]

[stdout:0] I share dofs with ranks: []
with rank 0 I share dofs []
with rank 1 I share dofs []
with rank 2 I share dofs []
with rank 3 I share dofs []

[6]:
%%px
u,v = fes.TnT()
M = BilinearForm(u*v*dx).Assemble().mat
gfu = GridFunction(fes)
gfu.Set (1)
print (gfu.vec)
[stdout:0] CUMULATED



[stdout:1] CUMULATED
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1



[stdout:3] CUMULATED
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1



[stdout:2] CUMULATED
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1
       1



We see that all values are set to 1, i.e. shared dofs carry the same value. We call such a vector 'cumulated'. The matrix M is stored locally assembled, i.e. every rank holds the contributions from its own elements. When we multiply this matrix with a cumulated vector, every rank performs a local matrix-vector product. The resulting vector is stored 'distributed', i.e. the true values are obtained by adding up the rank-local contributions for shared dofs:
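The cumulation of a distributed vector can be sketched without MPI. The two-rank data below is hypothetical; the point is only that the true value of a shared dof is the sum of the rank-local contributions:

```python
# Non-MPI sketch of the cumulate operation. Each "rank" holds a local
# vector plus a local-to-global dof map; global dof 2 is shared by both.
local_vals = [[0.5, 1.0, 0.25], [0.75, 2.0]]   # distributed values per rank
local2global = [[0, 1, 2], [2, 3]]

ndof_global = 4
cumulated = [0.0] * ndof_global
for vals, l2g in zip(local_vals, local2global):
    for v, g in zip(vals, l2g):
        cumulated[g] += v   # shared dofs accumulate across ranks

print(cumulated)  # [0.5, 1.0, 1.0, 2.0]
```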

[7]:
%%px
r = M.CreateColVector()
r.data = M*gfu.vec
print (r)
[stdout:0] DISTRIBUTED



[stdout:1] DISTRIBUTED
 0.00166667
 0.00233608
 0.0057191
 0.00537458
 0.00447726
 0.0044181
 0.00452141
 0.00452951
 0.00436189
 0.00412682
 0.00357163
 0.00391905
 0.00311488
 0.00303581
 0.00542435
 0.00576111
 0.0133877
 0.00962804
 0.00878675
 0.00904405
 0.00922506
 0.00903161
 0.00843651
 0.00782022
 0.00581907
 0.00959581
 0.00402148
 0.00292974
 0.00960968
 0.00802012
 0.00869964
 0.00958709
 0.00968368
 0.00903338
 0.0081606
 0.00417114
 0.00620471
 0.00514753
 0.00891842
 0.0103883
 0.0108419
 0.00893703
 0.00549093
 0.00571893
 0.00367601
 0.0101198
 0.00810662
 0.00580331
 0.00591845
 0.00482175



[stdout:2] DISTRIBUTED
 0.00268259
 0.00284609
 0.00413811
 0.00384412
 0.00382697
 0.00373407
 0.00370452
 0.00403554
 0.00418858
 0.00423023
 0.00424378
 0.0014254
 0.00424688
 0.00836753
 0.00787911
 0.0069851
 0.00709273
 0.0068265
 0.00777643
 0.0083048
 0.0084276
 0.00841204
 0.00558687
 0.00406758
 0.00819117
 0.00845983
 0.00764282
 0.00506157
 0.00877024
 0.00762975
 0.00858808
 0.00849844
 0.00825186
 0.00271474
 0.00745165
 0.00264444
 0.00796605
 0.0108693
 0.00879353
 0.00861227
 0.00896137
 0.00413206
 0.00805524
 0.00473829
 0.00542835
 0.00974423
 0.00862839
 0.00372661
 0.00477272
 0.0026678
 0.0075813
 0.00620893



[stdout:3] DISTRIBUTED
 0.00166667
 0.00136048
 0.00412511
 0.00425568
 0.00425461
 0.00427928
 0.00432722
 0.00430011
 0.00498378
 0.00642268
 0.0040858
 0.00465268
 0.00464484
 0.00442116
 0.00144626
 0.00411515
 0.00844479
 0.00857457
 0.00850766
 0.00872643
 0.00885705
 0.00911097
 0.00666588
 0.00912953
 0.00985853
 0.00905035
 0.00436559
 0.00452767
 0.00915845
 0.00843475
 0.00857833
 0.00949822
 0.00908184
 0.00966484
 0.0124597
 0.00443062
 0.00855678
 0.00776508
 0.006963
 0.00927858
 0.00569493
 0.00926318
 0.00572459
 0.00905361
 0.00771849
 0.00259579
 0.00806979
 0.00544661
 0.00384635
 0.00787028
 0.00991402
 0.00556954
 0.00339452



This cumulated/distributed pair of vectors is perfect for computing inner products. We can compute the inner products of the local vectors and sum up (i.e. reduce, in MPI terms) across all ranks:

[8]:
%%px
print ("global ip =", InnerProduct(gfu.vec, r))
localip = InnerProduct(r.local_vec, gfu.vec.local_vec)
print ("local contribution:", localip)
print ("cumulated:", comm.allreduce(localip, MPI.SUM))
[stdout:0] global ip = 0.9999999999999998
local contribution: 0.0
cumulated: 1.0

[stdout:2] global ip = 0.9999999999999998
local contribution: 0.32166419652519485
cumulated: 1.0

[stdout:3] global ip = 0.9999999999999998
local contribution: 0.34719243453511445
cumulated: 1.0

[stdout:1] global ip = 0.9999999999999998
local contribution: 0.3311433689396907
cumulated: 1.0

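Why this works can be checked with a small non-MPI sketch (hypothetical two-rank data): as long as one factor is cumulated and the other distributed, every shared dof is counted exactly once in the sum of local inner products:

```python
# Hypothetical two-rank setup; global dof 2 is shared by both ranks.
local2global = [[0, 1, 2], [2, 3]]
x_cumulated = [[1.0, 1.0, 1.0], [1.0, 1.0]]      # same value on the shared dof
y_distributed = [[0.5, 1.0, 0.25], [0.75, 2.0]]  # contributions add up

# Sum of rank-local inner products (the "allreduce" step)
local_ips = [sum(a * b for a, b in zip(x, y))
             for x, y in zip(x_cumulated, y_distributed)]
global_ip = sum(local_ips)

# Reference: assemble the true global vector and take the exact inner product
y_global = [0.0] * 4
for vals, l2g in zip(y_distributed, local2global):
    for v, g in zip(vals, l2g):
        y_global[g] += v
exact_ip = sum(y_global)  # x is identically 1 in the global numbering

print(global_ip, exact_ip)  # 4.5 4.5
```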