This page was generated from unit-6.3-plasticity/plasticity.ipynb.

6.3 Elasto-Plasticity

This tutorial considers Hencky-type [1] plasticity with isotropic hardening in 2d (plane strain). It showcases the classes IntegrationRuleSpace, NewtonCF and MinimizationCF.

We begin with a classical formulation with an explicit yield surface that is probably more familiar to engineers. In a second stage, we consider a (constrained) minimization formulation which leads to a more streamlined implementation. Both formulations correspond to the isotropic hardening model discussed in [2].

The essential model ingredients are given in terms of the dissipation potential \(\Phi\) and the stored energy density contribution \(\Psi\), see e.g. [3]. For convenience, we first introduce the strain energy density \(\Psi^\text{e}\)

\begin{align*} \Psi^\text{e}(\epsilon) = \frac{1}{2}\| \epsilon\|^2_{M} &= \frac{E}{2(1+\nu)(1-2\nu)}\left((1-2\nu)\epsilon:\epsilon + \nu \,\text{tr}(\epsilon)^2\right) \\ &= \mu\left(\epsilon:\epsilon\right) + \frac{\lambda}{2}\,\text{tr}(\epsilon)^2 \\ &= \frac{1}{2} \epsilon : \mathbf{C} : \epsilon. \end{align*}

Moreover, we consider the full strain \(\epsilon\) to be the sum of the elastic strain \(\epsilon^\text{e}\) and the plastic strain contribution \(p\). Conversely, the elastic strain is given as \begin{equation} \epsilon^\text{e} = \epsilon - p ,\end{equation} which is the quantity actually employed as the argument of \(\Psi^\text{e}\).

The isotropic hardening variable will be denoted by \(\alpha\) throughout this tutorial. The hardening potential density is simply chosen as \(\Psi^\text{h}(\alpha) = \frac{1}{2}\alpha^2\) such that the total energy density reads \begin{equation} \Psi(\epsilon, p, \alpha) = \Psi^\text{e}(\epsilon-p) + \Psi^\text{h}(\alpha) = \frac{1}{2}(\epsilon - p) : \mathbf{C} : (\epsilon - p) + \frac{1}{2} \alpha^2. \end{equation}

The yield function \(f\) that represents the elastic limit is given as

\begin{equation} f(\sigma, \beta) = \left|\mathrm{dev} \sigma\right| - \sqrt{2/3}\sigma_Y(1 - H \beta), \end{equation}

where we note that the hardening parameter \(H\) enters in \(f\) rather than in \(\Psi\) for consistency with [2]. Deviating from [2], we scale the yield stress \(\sigma_Y\) by \(\sqrt{2/3}\) to obtain the von Mises yield surface radius. We remark that by virtue of \(\mathrm{dev}\sigma\) appearing in \(f\), the plastic strain (rate) \(p\) (\(\dot p\)) will be deviatoric.

The boundary value problem solved in this tutorial is posed in plane strain. However, this does not mean that the plastic strains are plane, and neither are the elastic ones; only their sum has vanishing out-of-plane components. Therefore, for simplicity, the material model is implemented in 3d whereas the displacements are in 2d.

Formulation with explicit yield surface

Based on the principle of maximum dissipation we can abstractly formulate the dissipation potential as

\begin{align*} \Phi(\dot{p}, \dot{\alpha}) = \sup_{\sigma} \sup_{\beta} \inf_{\Lambda \geq 0} \: \sigma : \dot{p} + \beta \dot{\alpha} - \Lambda f(\sigma, \beta) \end{align*}

The stationary conditions of the above functional read as

\begin{align*} &\dot{p} = \Lambda \frac{\partial \left|\mathrm{dev} \sigma\right|}{\partial \sigma} \\ &\dot{\alpha} = \Lambda\,\sqrt{2/3}\sigma_Y H \\ &\Lambda \geq 0 \:\land\: \left(\left|\mathrm{dev} \sigma\right| - \sqrt{2/3}\sigma_Y(1 - H\beta)\right) \leq 0 \:\land\: \Lambda\left(\left|\mathrm{dev} \sigma\right| - \sqrt{2/3}\sigma_Y(1 - H\beta)\right) = 0 \end{align*}

which are regarded as evolution equations for \(p\) and \(\alpha\). The actual values of \(\sigma\) and \(\beta\) are obtained through the constitutive relations provided by the framework of generalized standard materials:

\begin{align*} &\frac{\partial\Psi((\epsilon - p), \alpha)}{\partial p} + \frac{\partial\Phi}{\partial \dot{p}} = 0 \quad \Rightarrow \quad \sigma = \mathbf{C} : (\epsilon - p) \\ &\frac{\partial\Psi((\epsilon - p), \alpha)}{\partial \alpha} + \frac{\partial\Phi}{\partial \dot{\alpha}} = 0 \quad \Rightarrow \quad \beta = - \alpha . \end{align*}

The time-discrete form is simply obtained by replacing the time rates with the approximations \(\dot p \approx \frac{p_{k+1} - p_k}{\Delta t}\) and \(\dot \alpha \approx \frac{\alpha_{k+1} - \alpha_k}{\Delta t}\), whereby the value of \(\Delta t\) has no effect for the present rate-independent model.
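
Substituting these approximations into the stationarity conditions (and using the constitutive relations above, i.e. \(\sigma = \mathbf{C} : (\epsilon - p_{k+1})\) and \(\beta = -\alpha_{k+1}\)) yields, per quadrature point, the nonlinear system that is solved numerically below:

\begin{align*} &\frac{p_{k+1} - p_k}{\Delta t} = \Lambda \frac{\partial \left|\mathrm{dev} \sigma\right|}{\partial \sigma}, \qquad \frac{\alpha_{k+1} - \alpha_k}{\Delta t} = \Lambda\,\sqrt{2/3}\sigma_Y H, \\ &\Lambda \geq 0, \quad f(\sigma, \beta) \leq 0, \quad \Lambda\, f(\sigma, \beta) = 0 . \end{align*}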

Minimization formulation

A semi-implicit time-discrete formulation of the model including hardening reads [2]:
For the given state \((u^k,p^k, \alpha^k)\) find a new solution \((u^{k+1},p^{k+1})\) such that \begin{align*} \text{1.} & \: && \int\Psi(\epsilon(u^{k+1}), p^{k+1},p^{k}, \alpha^{k})+\Phi(p^{k+1},p^k)- f^{k+1}\cdot u^{k+1}\,dx \rightarrow \min \\ \text{2.} & \: && \alpha^{k+1}=\alpha^k+\sqrt{2/3}\,\sigma_Y H \|p^{k+1}-p^k\| , \end{align*} with \begin{align*} &\Psi(\epsilon(u), p, p^k, \alpha^k) = \frac{1}{2} \|\epsilon(u)-p\|_{M}^2 + \frac{1}{2} \left(\alpha^k + \sqrt{2/3}\,\sigma_Y H \|p-p^k\|_{\varepsilon}\right)^2 \quad\text{and}\quad \\ &\Phi(p,p^k) = \sqrt{2/3}\,\sigma_Y \|p-p^k\|_{\varepsilon}, \end{align*}

where \(\Psi\) denotes the augmented stored energy density and \(\|\bullet \|_{\varepsilon}\) indicates a perturbed norm in order to avoid divisions by zero in the evaluation of the time-discrete evolution equation derived from an incremental variational principle.
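
Concretely, the perturbed tensor norm used throughout the implementation (function tensor_norm below) is \begin{equation} \|a\|_{\varepsilon} = \sqrt{a : a + \varepsilon} , \end{equation} where \(\varepsilon\) is a small perturbation parameter (norm_pert).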

Notes on the spatial discretization

The state \(u\) is spatially discretized by continuous Lagrange elements, while the internal states \(p\) and \(\alpha\) as well as the multiplier \(\Lambda\) reside at quadrature points, for which NGSolve has the notion of an integration rule space. Thus, the problem above can be decomposed into a "global" problem for the displacements and "local" problems for the internal variables, where the latter can be solved individually at each quadrature point. However, the local problems are coupled to the displacements, which has to be accounted for by an "algorithmically consistent linearization". This linearization is obtained automatically by setting up an appropriate BilinearForm object that represents both the boundary value problem and the internal evolution problems, as demonstrated for both implementations below.
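
As a minimal, self-contained sketch of the integration rule space concept (hypothetical names fes_qp, gf_qp and dx_qp on a unit-square mesh; the spaces actually used in this tutorial are set up in the implementation below):

from ngsolve import *
from netgen.geom2d import unit_square
from ngsolve.comp import IntegrationRuleSpace

mesh = Mesh(unit_square.GenerateMesh(maxh=0.2))

# a scalar field that lives only at the quadrature points
fes_qp = IntegrationRuleSpace(mesh, order=2)
gf_qp = GridFunction(fes_qp)

# the matching "measure" for this space
dx_qp = dx(intrules=fes_qp.GetIntegrationRules())

# interpolation evaluates the expression pointwise at every quadrature point
gf_qp.Interpolate(x * y)

# integrals of quadrature-point data should use the matching integration rules
print(Integrate(gf_qp * dx_qp, mesh))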

References

1. W. Han and B.D. Reddy: Plasticity: Mathematical Theory and Numerical Analysis, Springer, 1999.
2. C. Carstensen: Domain Decomposition for a Non-smooth Convex Minimization Problem and its Application in Plasticity, 1997.
3. K. Hackl and F.D. Fischer: On the relation between the principle of maximum dissipation and inelastic evolution given by dissipation potentials, Proc. R. Soc. A, 2008.

Implementation

General setup

[1]:
import numpy as np
from ngsolve import *
from ngsolve.comp import IntegrationRuleSpace
from ngsolve.fem import MinimizationCF, NewtonCF
from netgen.geom2d import CSG2d, Circle, Rectangle
from ngsolve.webgui import Draw
SetNumThreads(4)

The problem domain is the classical plate with a hole. Thanks to the symmetry of the problem, we actually use only one quarter of the domain together with additional symmetry boundary conditions. Parameters for the geometry and the material employed are taken from [4].

  4. A. Düster and E. Rank: The p-version of the finite element method compared to an adaptive h-version for the deformation theory of plasticity, CMAME 190 (2001) 1925-1935.

[2]:
# expression (jit) compilation flag
realcompile = False
[3]:
# polynomial order for geometry and displacements
order = 3

geo = CSG2d()
circle = Circle(center=(100,100), radius=10.0, bc="curve").Maxh(1)
rect = Rectangle(pmin=(0,100), pmax=(100,200), bottom="bottom", left="left", top="top", right="right")
geo.Add(rect-circle)
mesh = Mesh(geo.GenerateMesh(maxh=5))
mesh.Curve(order)
Draw(mesh)

# points of interest for post-processing
node_A = mesh(100,200)
node_B = mesh(0,200)

Strain energy density and generic helper functions

Model parameters

[4]:
# Elasticity parameters correspond to those in Ref. 4, i.e. mu = 80193.8 and kappa = 164206

# Young's modulus
Emod = 206900

# Poisson's ratio
nu = 0.29

print("E =", Emod, "\n\nnu =", nu)

mu = Parameter(Emod / (2 * (1 + nu)))
lmbda = Parameter(Emod * nu / ((1 + nu) * (1 - 2 * nu)))

print()
print("mu =", mu.Get())
print("lambda =", lmbda.Get())

# just for reference:
kappa = Emod / (3 * (1 - 2 * nu))

print("kappa =", kappa)


# Hardening parameter
H = Parameter(1.0)

# Yield stress
sigma_Y = Parameter(450)

# Perturbation parameter for tensor norms: this value is decisive for the accuracy of the result!
#  for higher values of H it must be sufficiently small
norm_pert = 1e-16
E = 206900

nu = 0.29

mu = 80193.7984496124
lambda = 110743.81690660758
kappa = 164206.34920634917

Function definitions

[5]:
def strain(u):
    # compute symmetric gradient (here a 2x2 block) and "embed" it in a 3x3 matrix via "ExtendDimension"
    return Sym(Grad(u)).ExtendDimension((3, 3), pos=(0, 0))


def elastic_strain_energy_mu_lambda(eps, mu, lmbda):
    return mu * InnerProduct(eps, eps) + lmbda/2 * Trace(eps)**2


def elastic_strain_energy(eps):
    return elastic_strain_energy_mu_lambda(eps, mu, lmbda)


def tensor_norm(a, pert=0):
    return sqrt(InnerProduct(a, a) + pert)


def dev(a):
    return a - Trace(a) * Id(3) / 3

Common FE spaces

We employ an \(H^1\) space for the displacements and an "integration rule space" (IR space) as the basis space for all internal variables.

[6]:
fes_u = VectorH1(mesh, order=order, dirichletx="right", dirichlety="bottom")
fes_ir = IntegrationRuleSpace(mesh, order=order-1)

Scalar internal states can be defined based on fes_ir as needed.

The "measure" corresponding to fes_ir:

[7]:
# extract integration rules from IntegrationRuleSpace
irs_dx = dx(intrules=fes_ir.GetIntegrationRules())

Load definitions

[8]:
force = 450
# For loadsteps
loadfactor = Parameter(1)

Some data structures for postprocessing

[9]:
# for postprocessing
drawfes = MatrixValued(L2(mesh, order=order-1), dim=3, symmetric=True, deviatoric=False)
pd, qd = drawfes.TnT()
pdraw = GridFunction(drawfes)
ad = BilinearForm(InnerProduct(pd, qd)*irs_dx, symmetric=True).Assemble()
invad = ad.mat.Inverse()

drawafes = L2(mesh, order=order-1)
pda, qda = drawafes.TnT()
adraw = GridFunction(drawafes)
ada = BilinearForm(InnerProduct(pda, qda)*irs_dx, symmetric=True).Assemble()
invada = ada.mat.Inverse()

Implementation with explicit yield surface

In this approach we employ a generic symmetric-matrix-valued space for \(p\).

[10]:
fes_p = MatrixValued(fes_ir, symmetric=True, deviatoric=False, dim=3)

fes_int = fes_p * fes_ir * fes_ir # p x alpha x Lambda

# Trial and test functions
u, u_test = fes_u.TnT()
int_trial, int_test = fes_int.TnT()
p, alpha, Lambda = int_trial

# GridFunction for "external" state
gfu = GridFunction(fes_u)

# GridFunction for internal states
gfint = GridFunction(fes_int)
gfp, gfalpha, gfLambda = gfint.components

# For history states
gfhist = GridFunction(fes_int)
gfp_k, gfalpha_k, gfLambda_k = gfhist.components

# "Trial" states for plastic evolution (these are not trial functions in FE parlance but elastic predictors)
gftrial = GridFunction(fes_int)
gfsigma_trial, gfbeta_trial, _ = gftrial.components

# Time increment
Delta_t = 1.0
# Not really needed for a rate-independent model. Value could be used for scaling the equations though.

# Short-hand for storing history variables. The previous value of Lambda is stored as well,
#  since it is used in the elastic branch of the evolution equations.
def store_internal():
    gfp_k.Interpolate(gfp)
    gfalpha_k.Interpolate(gfalpha)
    gfLambda_k.Interpolate(gfLambda)

Functions for the energy density \(\Psi\), the dissipation potential \(\Phi\) and the evolution equations

[11]:
# Energy density
def Psi(strain, p, alpha):
    strain_energy = elastic_strain_energy(strain-p)
    hardening     = 1/2 * alpha**2
    return strain_energy + hardening


# The yield condition: result > 0 ==> plastic evolution
def yield_condition(sigma, beta, pert=0):
    return tensor_norm(dev(sigma), pert=pert) - sqrt(2/3) * sigma_Y * (1 - H*beta)


# The objective function defining the dissipation potential
def Phi_obj(sigma, beta, p_dot, alpha_dot, Lambda):
    return InnerProduct(sigma, p_dot) + InnerProduct(beta, alpha_dot) - \
            Lambda * yield_condition(sigma, beta, pert=norm_pert)


# The evolution equations
def evolution_eqs(strain, p, alpha, Lambda, evol_tol):
    p_dot = (p - gfp_k) / Delta_t
    alpha_dot = (alpha - gfalpha_k) / Delta_t
    sigma = -Psi(strain, p, alpha).Diff(p)
    sigma = sigma.MakeVariable()
    beta = -Psi(strain, p, alpha).Diff(alpha)
    beta = beta.MakeVariable()
    _Phi_obj = Phi_obj(sigma, beta, p_dot, alpha_dot, Lambda)

    dsigma = _Phi_obj.Diff(sigma)

    # only active when there is hardening
    dbeta = IfPos(H - 1e-16, _Phi_obj.Diff(beta), alpha_dot)

    dLambda = _Phi_obj.Diff(Lambda)

    # Only solve the evolution equations if the elastic trial state is beyond the yield surface.
    # This essentially leads to the fulfillment of Lambda >= 0
    return IfPos(yield_condition(gfsigma_trial, gfbeta_trial) - evol_tol,
                 CF((dsigma, dbeta, dLambda)),
                 CF((p_dot, alpha_dot, Lambda - gfLambda_k)))

Handling of the evolution of internal variables.

The evolution equations are a system of nonlinear equations per quadrature point. NewtonCF solves such equations with given starting points, tolerance and maximum iterations. On failure, it returns NaN.
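
As a minimal, hypothetical illustration of the calling convention (a scalar equation solved pointwise on the IR space fes_ir defined above; this is not part of the plasticity model):

# scalar demo: solve s**3 + s - 2 = 0 at every quadrature point
s = fes_ir.TrialFunction()
gfs = GridFunction(fes_ir)
gfs.vec[:] = 1.0   # starting value at every quadrature point

# the residual, formulated in the trial function of the IR space
eq_demo = s**3 + s - 2

# evaluating NewtonCF solves the equation pointwise; interpolation stores the result
gfs.Interpolate(NewtonCF(eq_demo, gfs, tol=1e-12, maxiter=10))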

[12]:
# Parameters for evolution
tol = Parameter(1e-10)
maxiter = 20
evol_tol = tol

eqs = evolution_eqs(strain(gfu), p, alpha, Lambda, evol_tol).Compile(realcompile=realcompile)
evolution = NewtonCF(eqs, gfint, tol=tol.Get(), maxiter=maxiter)

The "trial state" that indicates whether there will be a plastic evolution.
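
In terms of the history state, the trial quantities evaluate to \(\sigma^\text{trial} = \mathbf{C} : (\epsilon - p_k)\) and \(\beta^\text{trial} = -\alpha_k\); the evolution equations are only solved where \(f(\sigma^\text{trial}, \beta^\text{trial}) > 0\).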

[13]:
Psi_trial = Psi(strain(gfu), *gfhist.components[:2])
def compute_trial_state():
    gftrial.components[0].Interpolate(-Psi_trial.Diff(gfhist.components[0]))
    gftrial.components[1].Interpolate(-Psi_trial.Diff(gfhist.components[1]))
    # no need to interpolate Lambda

The actual solution of the equations is triggered during a projection onto fes_int that sets gfint; for the IR space this projection is again exact (interpolating).

[14]:
# A Linearform for which the evaluation triggers the solution of the evolution equations
testfunc = CoefficientFunction(tuple(fes_int.TestFunction()))
L_evol = LinearForm(fes_int)
L_evol += InnerProduct(CacheCF(evolution), testfunc) * irs_dx
# NOTE: CacheCF(evolution) ensures that the evolution equations are solved only once per evaluation

# Mass-Matrix for the internal variable space as generic interpolation tool
M_evol = BilinearForm(fes_int, symmetric=True, diagonal=True)
for _trial, _test in zip(*fes_int.TnT()):
    M_evol += InnerProduct(_trial,  _test).Compile() * irs_dx

M_evol.Assemble()
M_evol_inv = M_evol.mat.Inverse()


def evolve_internal():
    compute_trial_state()
    L_evol.Assemble()
    gfint.vec.data = M_evol_inv * L_evol.vec

The LinearForm for the global boundary value problem

[15]:
# Create a strain variable for partial differentiation
strain_gf = strain(gfu).MakeVariable()

# stress
sigma = Psi(strain_gf, gfp, gfalpha).Diff(strain_gf)

L = LinearForm(fes_u)
L += InnerProduct(sigma, strain(u_test)).Compile(realcompile=realcompile) * irs_dx
L += -u_test[1] * force * loadfactor * ds("top")

The "elastic" part of the BilinearForm

[16]:
# elasticity tensor
cc = sigma.Diff(strain_gf).Reshape((strain_gf.dim, strain_gf.dim))

# reshape test and trial strains
delta_eps = strain(u_test).Reshape((strain_gf.dim, ))
Delta_eps = strain(u).Reshape((strain_gf.dim, ))

a = BilinearForm(fes_u, symmetric=False)

# "elastic contribution"
a += InnerProduct(delta_eps, cc * Delta_eps).Compile(realcompile=realcompile) * irs_dx

The "algorithmic" part of the BilinearForm

An algorithmically consistent linearization also needs to account for the linearization of internal states.
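
Schematically, collecting the internal states in \(z = (p, \alpha, \Lambda)\) and writing the time-discrete evolution equations as \(R(\epsilon, z) = 0\), the consistent tangent follows from the implicit function theorem as \begin{align*} \frac{\mathrm{d}\sigma}{\mathrm{d}\epsilon} = \frac{\partial\sigma}{\partial\epsilon} + \frac{\partial\sigma}{\partial z}\,\frac{\mathrm{d}z}{\mathrm{d}\epsilon} \qquad\text{with}\qquad \frac{\mathrm{d}z}{\mathrm{d}\epsilon} = -\left(\frac{\partial R}{\partial z}\right)^{-1} \frac{\partial R}{\partial \epsilon} . \end{align*} In the code below, dint_sigma corresponds to \(\partial\sigma/\partial z\), dint_eqs_gf to \(\partial R/\partial z\), dext_eqs_gf to \(\partial R/\partial \epsilon\) and dext_int to \(\mathrm{d}z/\mathrm{d}\epsilon\).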

The first ingredients are the derivatives of the stress with respect to internal states.

[17]:
# Partial derivatives w.r.t. internal states; need to be reshaped in order to put them together
#  in a single expression below.

dim_sigma = sigma.dim
dp_sigma = sigma.Diff(gfp).Reshape((dim_sigma, gfp.dim))
dalpha_sigma = sigma.Diff(gfalpha).Reshape((dim_sigma, 1))
dLambda_sigma = sigma.Diff(gfLambda).Reshape((dim_sigma, 1))

Next, we collect these derivatives in a single object with dimensions (dim_sigma, dim_int). Using the method ExtendDimension(...) we can define individual contributions/blocks.

[18]:
dim_int = gfp.dim + gfalpha.dim + gfLambda.dim

dint_sigma = dp_sigma.ExtendDimension((dim_sigma, dim_int), pos=(0,0)) \
 + dalpha_sigma.ExtendDimension((dim_sigma, dim_int), pos=(0, gfp.dim)) \
 + dLambda_sigma.ExtendDimension((dim_sigma, dim_int), pos=(0, gfp.dim + gfalpha.dim))

In order to obtain the sensitivities of the internal states with respect to the strain, we need to define the partial derivatives of the evolution equations with respect to the "external" (strain) and "internal" (p, alpha, Lambda) states.

[19]:
# instance of evolution equations based only on grid functions for direct evaluation
eqs_gf = evolution_eqs(strain_gf, gfp, gfalpha, gfLambda, evol_tol)

dim_eqs = eqs_gf.dim

# Partial derivative w.r.t. external (strain) state
dext_eqs_gf = eqs_gf.Diff(strain_gf).Reshape((dim_eqs, strain_gf.dim))

# Partial derivatives w.r.t. internal states; need to be reshaped in order to put them together
#  in a single expression below.
dp_eqs_gf = eqs_gf.Diff(gfp).Reshape((dim_eqs, gfp.dim))
dalpha_eqs_gf = eqs_gf.Diff(gfalpha).Reshape((dim_eqs, 1))
dLambda_eqs_gf = eqs_gf.Diff(gfLambda).Reshape((dim_eqs, 1))

dint_eqs_gf = \
   dp_eqs_gf.ExtendDimension((dim_eqs, dim_int), pos=(0,0)) \
 + dalpha_eqs_gf.ExtendDimension((dim_eqs, dim_int), pos=(0, gfp.dim)) \
 + dLambda_eqs_gf.ExtendDimension((dim_eqs, dim_int), pos=(0, gfp.dim + gfalpha.dim))

The matrix represented by dint_eqs_gf is singular: since p is a symmetric tensor, it has a non-trivial embedding into the vector space of its independent components. Therefore, we need to create such an embedding for the full space of internal variables as follows:

[20]:
# VS embedding

# Retrieve non-trivial embedding for p
VS_p = gfp.space.VSEmbedding().NumPy()

# Create a common embedding for all internal states
VS_all = np.zeros((dim_eqs, VS_p.shape[1] + 2))
VS_all[:VS_p.shape[0], :VS_p.shape[1]] = VS_p
VS_all[VS_p.shape[0], VS_p.shape[1]] = 1
VS_all[VS_p.shape[0] + 1, VS_p.shape[1] + 1] = 1

# Create a CoefficientFunction from the array
VS_cf = CF(tuple(VS_all.flatten().tolist()), dims=VS_all.shape)

With this embedding we project dint_eqs_gf to a reduced space and obtain the invertible matrix A_red. After inversion we can compute the sensitivity of the internal states with respect to the external (strain) state.

[21]:
A_red = VS_cf.trans * dint_eqs_gf * VS_cf
A_red_inv = Inv(A_red)
A_inv = VS_cf * A_red_inv * VS_cf.trans

# sensitivity of internal states w.r.t. external states, i.e. the strain
dext_int = -A_inv * dext_eqs_gf

Finally, the complete linearization term is given as

[22]:
linearization_term = delta_eps * (dint_sigma * (dext_int * Delta_eps))
[23]:
a += linearization_term.Compile(realcompile=realcompile) * irs_dx

Solution procedure

The nonlinear PDE is solved with a Newton-Raphson scheme: in each Newton iteration we solve a linearized PDE for the displacements and a nonlinear evolution problem at each quadrature point via ``NewtonCF``.

In order to run the problem, we need to define vectors that store the respective solution data, define load steps and set up a loop that iterates through them.

[24]:
# A set of functions/vectors for keeping data
w = gfu.vec.CreateVector()
rhs = gfu.vec.CreateVector()

# Postprocess internal states
fd = LinearForm(InnerProduct(gfp, qd)*irs_dx)
fda = LinearForm(InnerProduct(gfalpha_k, qda)*irs_dx)

# Load steps (chosen based on experience)
loadsteps = [0.1,0.3,0.5,0.7,0.8,0.9,0.95,1.0]

# Set solution to zero initially
gfu.vec[:] = 0
gfint.vec[:] = 0
gfhist.vec[:] = 0
gftrial.vec[:] = 0


# Iterate through load steps
for ls in loadsteps:
    loadfactor.Set(ls)

    # Update old solution at time t = t_k
    store_internal()

    with TaskManager():
        for i in range(20):

            a.Assemble()
            L.Assemble()

            w.data = a.mat.Inverse(freedofs=fes_u.FreeDofs(False), inverse="umfpack") * L.vec
            gfu.vec.data -= w

            # Solve the evolution equation (quadrature-point-wise)
            evolve_internal()

            if np.isnan(np.max(np.abs(gfp.vec.FV().NumPy()))):
                raise Exception("Evolution solver failed")

            # NOTE: This problem is not a minimization problem.
            # Therefore, we compute || |W| . |R| || instead of || W . R ||.
            err = np.linalg.norm(np.abs(w.FV().NumPy()) * np.abs(L.vec.FV().NumPy()))
            print("step ", i, "err = ", err)

            # Check convergence
            if err < 1e-6: break

    print("force = ", ls * force, ", uy_A =", gfu(node_A)[1], ", ux_B =", gfu(node_B)[0],\
          ", int u2 =", Integrate(gfu[1] * ds("top"),mesh))
step  0 err =  20.2775914005007
step  1 err =  1.0233284517430075e-27
force =  45.0 , uy_A = 0.02095136567063172 , ux_B = 0.007675801310750769 , int u2 = 2.0403433383184253
step  0 err =  81.11036560200291
step  1 err =  5.906104834130272e-27
force =  135.0 , uy_A = 0.06285409701189515 , ux_B = 0.023027403932252312 , int u2 = 6.121030014955275
step  0 err =  81.11036560200297
step  1 err =  0.003960174305532058
step  2 err =  7.775726315947739e-07
force =  225.0 , uy_A = 0.10479020091514707 , ux_B = 0.0383637673731429 , int u2 = 10.203329712864894
step  0 err =  81.21875520986039
step  1 err =  0.018031293471957838
step  2 err =  1.2052318627725052e-05
step  3 err =  2.09026865573613e-12
force =  315.0 , uy_A = 0.14690383135026452 , ux_B = 0.053618592725876844 , int u2 = 14.294380881986799
step  0 err =  20.357843107103378
step  1 err =  0.002591638643587366
step  2 err =  1.0168062431500842e-07
force =  360.0 , uy_A = 0.1680593665656626 , ux_B = 0.061200822069871465 , int u2 = 16.344788243169514
step  0 err =  20.447609365083892
step  1 err =  0.042901021741168145
step  2 err =  1.046139337049393e-05
step  3 err =  1.4672785051149498e-13
force =  405.0 , uy_A = 0.1897152264853908 , ux_B = 0.06857379483715682 , int u2 = 18.417343825144368
step  0 err =  5.200328021981898
step  1 err =  0.007514126754815541
step  2 err =  1.6856790159888142e-06
step  3 err =  1.838759597124022e-14
force =  427.5 , uy_A = 0.20091302005516778 , ux_B = 0.07211240832791124 , int u2 = 19.470064950788473
step  0 err =  5.27813273475888
step  1 err =  0.014611413571473136
step  2 err =  1.0175007935796264e-06
step  3 err =  1.7616119015424333e-10
force =  450.0 , uy_A = 0.21257445342324355 , ux_B = 0.07547311846851285 , int u2 = 20.54493747041257
[25]:
fd.Assemble()
pdraw.vec.data = invad * fd.vec
Draw(Norm(pdraw), mesh, "p")
Draw(gfu,mesh, "u")

if H.Get() > 1e-16:
    fda.Assemble()
    adraw.vec.data = invada * fda.vec
    Draw(Norm(adraw), mesh, "alpha")
[26]:
# confirm that the trace vanishes
p1 = np.array(pdraw(mesh(90, 100, 0))).reshape((3,3))
np.einsum('ij,ij', p1, np.eye(3))
[26]:
-4.065758146820642e-18

Implementation of the minimization formulation

We again start with the (coupled) function spaces. This time, we won’t need a TrialFunction "variable" for \(\alpha\) but only a GridFunction to store its value. Also, there is no need for \(\Lambda\) in this formulation.

Moreover, we use a deviatoric space for \(p\).

[27]:
fes_p = MatrixValued(fes_ir, symmetric=True, deviatoric=True, dim=3)

fes = fes_u * fes_p
u, p = fes.TrialFunction()

# GridFunction for solution
gfsol = GridFunction(fes)
gfu, gfp = gfsol.components

# Save previous solution
gfsol_k = GridFunction(fes)
gfu_k, gfp_k = gfsol_k.components

# alpha_0 = 0
alpha_k = GridFunction(fes_ir)

The energy density \(\Psi\) and the dissipation potential \(\Phi\)

[28]:
# The full energy density (incl. hardening)
def Psi(p, strain, p_k, alpha_k):
    strain_energy = elastic_strain_energy(strain-p)
    alpha = alpha_k + IfPos(H - 1e-16, sqrt(2/3) * sigma_Y * H * tensor_norm(p - p_k, pert=norm_pert), 0)
    hardening    = 1/2 * alpha**2
    return strain_energy + hardening


# Dissipation potential: For rate-independent plasticity (homogeneous of degree 1 in the rate of p)
# this is the dissipation! Also, because of rate independence, the parameter Delta_t is not needed for
# the evolution.
def Phi(p, p_k):
    return sqrt(2/3) * sigma_Y * tensor_norm(p - p_k, pert=norm_pert)

Finally, we have all ingredients for the variational formulation:

Find, for the given state \((u^k,p^k,\alpha^k)\), a solution \((u^{k+1},p^{k+1})\) such that \begin{align*} \int\Psi(\epsilon(u^{k+1}), p^{k+1},p^{k},\alpha^{k}) + \Phi(p^{k+1},p^k) - f^{k+1}\cdot u^{k+1}\,dx \rightarrow \min! \end{align*}

[29]:
realcompile = False

# Possible optimization: use static condensation for p
a = BilinearForm(fes, symmetric=True)
a += Variation( Psi(p, strain(u), gfp_k, alpha_k) * irs_dx ).Compile(realcompile=realcompile)
a += Variation( Phi(p, gfp_k) * irs_dx ).Compile(realcompile=realcompile)
a += Variation( -u[1] * force * loadfactor * ds("top") ).Compile(realcompile=realcompile)

Since Phi(p, gfp_k) is included above, we automatically obtain an "algorithmically consistent linearization", corresponding to the local evolution

[30]:
evolution_objective = (Psi(p, strain(gfu), gfp_k, alpha_k) + Phi(p,gfp_k)).Compile(realcompile=realcompile)

which will be used to solve for p in a nested way, i.e. for each "guess" of \(u^{k+1}\).

In order to run the problem, we need to define vectors that store the respective solution data, define load steps and set up a loop that iterates through them.

The nonlinear PDE is solved with a Newton-Raphson scheme: in each Newton iteration we solve a linearized PDE for the displacements and a nonlinear evolution problem at each quadrature point via ``MinimizationCF``.

[31]:
# a set of vectors for keeping data
vec_k = gfsol.vec.CreateVector()
w      = gfsol.vec.CreateVector()
rhs    = gfsol.vec.CreateVector()

# postprocess internal states
fd = LinearForm(InnerProduct(gfp, qd)*irs_dx)
fda = LinearForm(InnerProduct(gfalpha_k, qda)*irs_dx)

# load steps (chosen based on experience)
loadsteps = [0.1,0.3,0.5,0.7,0.8,0.9,0.95,1.0]

# set solution to zero initially
gfsol.vec[:] = 0
alpha_k.vec[:] = 0

# evolution params (default values of MinimizationCF at the time of writing)
tol = 1e-6
maxiter = 20

# iterate through load steps
for ls in loadsteps:
    loadfactor.Set(ls)

    # update old solution at time t = t_k
    gfsol_k.vec.data = gfsol.vec
    with TaskManager():
        for i in range(20):
            energy_k = a.Energy(gfsol.vec)
            #print ("energy(t_k) = ", energy_k)

            a.AssembleLinearization(gfsol.vec)
            a.Apply(gfsol.vec, rhs)

            # static condensation could be employed to speed up solving
            w.data = a.mat.Inverse(freedofs=fes.FreeDofs(False), inverse="sparsecholesky") * rhs

            # line search (with damping)
            vec_k.data = gfsol.vec
            scale = 1
            while scale > 1e-7:
                gfsol.vec.data -= scale * w
                energy1 = a.Energy(gfsol.vec)

                # Here we solve the evolution equation at each quadrature point through evaluation of
                # the MinimizationCF. This function takes the "objective", the initial guess (most
                # likely the previous solution), tolerances and the maximum number of iterations.
                gfp.Interpolate(MinimizationCF(evolution_objective, gfp, tol=tol, maxiter=maxiter))

                energy2 = a.Energy(gfsol.vec)

                if energy2 < energy_k + 1e-12:
                    break
                scale *= 0.5
                gfsol.vec.data = vec_k
                #print(scale)

            err = sqrt(InnerProduct(w, rhs))
            print("step ", i, "err = ", err)

            # check convergence
            if err < 1e-6: break

    alpha_k.Interpolate(alpha_k + sqrt(2/3) * sigma_Y * H * sqrt(InnerProduct(gfp - gfp_k, gfp - gfp_k)))

    print("force = ", ls * force, ", uy_A =", gfu(node_A)[1], ", ux_B =", gfu(node_B)[0],\
          ", int u2 =", Integrate(gfu[1] * ds("top"),mesh))
step  0 err =  9.582053489082611
step  1 err =  1.5804207355430305e-07
force =  45.0 , uy_A = 0.020951434171478267 , ux_B = 0.007675857648415199 , int u2 = 2.040350007860739
step  0 err =  19.164145962860392
step  1 err =  4.708180561332195e-06
step  2 err =  5.424583975269361e-14
force =  135.0 , uy_A = 0.06285437905979187 , ux_B = 0.023027634266761347 , int u2 = 6.121057429994661
step  0 err =  19.16427071315457
step  1 err =  0.11949149778293307
step  2 err =  0.0015046422348997577
step  3 err =  7.425318751837336e-07
force =  225.0 , uy_A = 0.10479086970074826 , ux_B = 0.03836430738210783 , int u2 = 10.203394561664297
step  0 err =  19.16445776662414
step  1 err =  0.4836909116770408
step  2 err =  0.24453547926991206
step  3 err =  0.12296417201973991
step  4 err =  0.0017841795231259261
step  5 err =  1.3974547630948506e-06
step  6 err =  1.397449432510109e-06
step  7 err =  1.3974467676560743e-06
step  8 err =  1.3974414376467317e-06
step  9 err =  1.3756089534695076e-06
step  10 err =  1.3756063299638138e-06
step  11 err =  1.3326235520407434e-06
step  12 err =  1.2493438187133275e-06
step  13 err =  1.24933428751255e-06
step  14 err =  1.249333096177532e-06
step  15 err =  1.2493140348676776e-06
step  16 err =  1.2102770324884595e-06
step  17 err =  1.2102585669457402e-06
step  18 err =  1.2102562589817864e-06
step  19 err =  1.210251642720302e-06
force =  315.0 , uy_A = 0.14690512878173165 , ux_B = 0.05361962301393339 , int u2 = 14.29450627324556
step  0 err =  9.583034955336203
step  1 err =  0.36517325669899164
step  2 err =  0.0054734900839062475
step  3 err =  5.105355878054722e-06
step  4 err =  2.1350460313475528e-09
force =  360.0 , uy_A = 0.16806150050323018 , ux_B = 0.06120245859365296 , int u2 = 16.344992564416554
step  0 err =  9.583299082880389
step  1 err =  0.6507145690588375
step  2 err =  0.015268431682287201
step  3 err =  8.568336210143623e-06
step  4 err =  1.2392085266584219e-10
force =  405.0 , uy_A = 0.18971829980334978 , ux_B = 0.06857629153637922 , int u2 = 18.417646094574945
step  0 err =  4.7938237203676
step  1 err =  0.5129680502270898
step  2 err =  0.010018538224349248
step  3 err =  4.110438756037797e-05
step  4 err =  1.7798002002377263e-08
force =  427.5 , uy_A = 0.200917205012181 , ux_B = 0.07211590985747644 , int u2 = 19.470484518414832
step  0 err =  4.794121197748083
step  1 err =  0.6860863019349761
step  2 err =  0.016352144142236888
step  3 err =  7.700999025076335e-05
step  4 err =  7.221558400413134e-08
force =  450.0 , uy_A = 0.2125796391685457 , ux_B = 0.07547797856721246 , int u2 = 20.545486437704703
[32]:
fd.Assemble()
pdraw.vec.data = invad * fd.vec
Draw(Norm(pdraw), mesh, "p")
Draw(gfu,mesh, "u")

if H.Get() > 1e-16:
    fda.Assemble()
    adraw.vec.data = invada * fda.vec
    Draw(Norm(adraw), mesh, "alpha")
[33]:
# confirm that the trace vanishes
p1 = np.array(pdraw(mesh(90, 100, 0))).reshape((3,3))
np.einsum('ij,ij', p1, np.eye(3))
[33]:
8.131516293641283e-19

A note on performance

For the (rather small) problem at hand, the minimization implementation is considerably faster than the "classical" implementation with an explicit yield surface, despite the fact that the former solves a larger linear system than the latter. The main bottleneck is the assembly of the consistent linearization term. However, for larger problems the effort for solving the linear system grows faster than that for the assembly, so the picture will change, in particular in three dimensions. For the minimization implementation, the system size can be reduced by static condensation, which corresponds to a "consistent linearization" computed at the level of element matrices rather than individual quadrature points.
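
A possible sketch of enabling static condensation for the minimization implementation (reusing fes, u, p, gfp_k, alpha_k, gfsol, rhs and w from the cells above; harmonic_extension, harmonic_extension_trans and inner_solve are NGSolve's standard condensation helpers, so this is only an outline and not part of the timings above):

# declare the BilinearForm such that local (quadrature-point) dofs are condensed
a = BilinearForm(fes, symmetric=True, condense=True)
a += Variation( Psi(p, strain(u), gfp_k, alpha_k) * irs_dx )
a += Variation( Phi(p, gfp_k) * irs_dx )
a += Variation( -u[1] * force * loadfactor * ds("top") )

# inside the Newton loop, the linear solve is replaced by its condensed counterpart
a.AssembleLinearization(gfsol.vec)
a.Apply(gfsol.vec, rhs)
rhs.data += a.harmonic_extension_trans * rhs
w.data = a.mat.Inverse(freedofs=fes.FreeDofs(coupling=True), inverse="sparsecholesky") * rhs
w.data += a.harmonic_extension * w
w.data += a.inner_solve * rhs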
