Hello community,
As I went through the examples, I found that every kernel’s grad()
is called explicitly. Does this mean that autodiff in Taichi works only inside a single kernel, with no tracing of gradients across different kernels?
Thank you.
Hi @jackal
You can use ti.Tape to autodiff the whole program, e.g. https://github.com/yuanming-hu/difftaichi/blob/master/examples/diffmpm.py#L355
Thank you @yuanming,
I saw this in the docs:
Design decisions
Decouple computation from data structures
Domain-specific compiler optimizations
Megakernels
Two-scale automatic differentiation
Embedding in Python
Does “Two-scale automatic differentiation” here consist of the in-kernel autodiff plus the tape mechanism?
Yes - haven’t got a chance to document this…