How the MLS-MPM quadratic kernel implementation works

A while ago I implemented MLS-MPM… with a flaw: the weights my quadratic kernel produced sometimes exceeded 1. I applied a lazy fix and just scaled everything down. Today a little video got me to read through the Taichi implementation of MLS-MPM, and I noticed that there is some trick used to compute the weights.

What exactly is that trick, and where does it originate? I'm not very knowledgeable in this field, so it might be something obvious; if so, I'm sorry.

Hi @Tea, welcome to the Taichi forum. This is a very good question.

Can you read Chinese? I've answered this question in another post: here.

No, I can't. All I can do is use Google Translate, which may introduce errors…

Hi @Tea, let me try to explain it briefly.

In the MLS-MPM paper, the quadratic kernel is

\mathrm{N}(x)=\begin{cases} \frac{3}{4}-|x|^{2} & 0 \leqslant |x| < \frac{1}{2} \\ \frac{1}{2}\left(\frac{3}{2}-|x|\right)^{2} & \frac{1}{2} \leqslant |x| < \frac{3}{2} \\ 0 & \frac{3}{2} \leqslant |x| \end{cases}
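For concreteness, here is that piecewise definition as a direct Python function (a minimal sketch of my own; the name quadratic_kernel is not from the paper or from Taichi):

```python
def quadratic_kernel(x: float) -> float:
    """Quadratic B-spline kernel N(x) as defined above."""
    ax = abs(x)
    if ax < 0.5:
        return 0.75 - ax * ax         # first branch: 3/4 - |x|^2
    if ax < 1.5:
        return 0.5 * (1.5 - ax) ** 2  # second branch: (1/2)(3/2 - |x|)^2
    return 0.0                        # outside the support
```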

It can be plotted as:

[plot of the quadratic kernel N(x) over its support |x| < 3/2]

The quadratic kernel can be divided into three parts according to its domain of definition.

  1. x \in [0.5, 1.5]
  2. x \in [-0.5, 0.5]
  3. x \in [-1.5, -0.5]

These three parts can all be translated onto the common domain [0.5, 1.5] .

  1. The first part needs no translation: f(x) = N(x) for x \in [0.5, 1.5] ; for example, f(1.0) = N(1.0).
  2. The second part, after translation, is g(x) = N(x - 1) for x \in [0.5, 1.5] ; for example, g(1.0) = N(0.0).
  3. The third part, after translation, is h(x) = N(x - 2) for x \in [0.5, 1.5] ; for example, h(1.0) = N(-1.0). The explicit formulas are expanded below.
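Substituting these shifts into the definition of N gives explicit formulas for the three translated pieces (my own expansion of the step above; x \in [0.5, 1.5) throughout):

f(x) = N(x) = \frac{1}{2}\left(\frac{3}{2} - x\right)^{2} (second branch, since |x| = x \in [0.5, 1.5))

g(x) = N(x - 1) = \frac{3}{4} - (x - 1)^{2} (first branch, since |x - 1| < \frac{1}{2})

h(x) = N(x - 2) = \frac{1}{2}\left(\frac{3}{2} - (2 - x)\right)^{2} = \frac{1}{2}\left(x - \frac{1}{2}\right)^{2} (second branch, since |x - 2| = 2 - x)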

In line 36 of mpm88.py:

```python
w = [0.5 * (1.5 - fx)**2, 0.75 - (fx - 1)**2, 0.5 * (fx - 0.5)**2]
```

The three components of w correspond to the three parts of N(x); here fx plays the role of x above, with fx \in [0.5, 1.5).
Specifically,
w[0] = 0.5 * (1.5 - fx)**2 corresponds to f(fx) = N(fx).
w[1] = 0.75 - (fx - 1)**2 corresponds to g(fx) = N(fx - 1).
w[2] = 0.5 * (fx - 0.5)**2 corresponds to h(fx) = N(fx - 2).
So instead of evaluating the piecewise N three times with branching, the code evaluates each pre-translated piece exactly once.
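As a sanity check, the small sketch below (mine, not from mpm88.py; the test value fx = 0.87 is arbitrary) confirms that these weights match direct evaluation of N at the three node offsets. It reuses the quadratic_kernel function from the earlier sketch.

```python
# quadratic_kernel is the function from the earlier sketch
fx = 0.87  # arbitrary test value in [0.5, 1.5)
w = [0.5 * (1.5 - fx)**2, 0.75 - (fx - 1)**2, 0.5 * (fx - 0.5)**2]
w_direct = [quadratic_kernel(fx - i) for i in range(3)]  # node offsets 0, 1, 2
assert all(abs(a - b) < 1e-12 for a, b in zip(w, w_direct))
```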


Because the kernel function can exceed one! In my experience, some kernels do not obey the normalization rule (integrating from -inf to +inf equals one), but they work just as well.
In fact, I think the kernel is just a scaling function that converts the distance between nearby particles into a coefficient, one that fades out once the particles are too far from each other.

(I am not sure. Maybe I am wrong.)
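For what it's worth, this particular quadratic kernel does satisfy a normalization that matters in practice: for any fx \in [0.5, 1.5), the three weights sum to exactly 1, so the grid weights form a partition of unity (and no single weight can exceed 1, since N peaks at 0.75). A quick numeric check (my own sketch):

```python
import random

# the quadratic B-spline weights always sum to 1 over the three nearest nodes
for _ in range(1000):
    fx = random.uniform(0.5, 1.5)
    w = [0.5 * (1.5 - fx)**2, 0.75 - (fx - 1)**2, 0.5 * (fx - 0.5)**2]
    assert abs(sum(w) - 1.0) < 1e-12
```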


Thanks a lot! I'm sorry for the late reply; I didn't get a notification… This is super helpful though! Thanks!
