Scratchpad Optimization

(Side note: the paper and supplemental material mention a `Cache(…)` function call, but I could not find a similar example among the Python examples on GitHub.)

Good question. The scratchpad optimization was implemented in the legacy CUDA backend: taichi/gpu.cpp at dc162e11988f3b1053c36826f82704b82fce9d0c · taichi-dev/taichi · GitHub

After we switched to LLVM, this functionality has not yet been restored. (We are more than happy to welcome an implementation in LLVM :slight_smile:)
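For readers unfamiliar with the idea: the scratchpad optimization stages a tile of a field (plus its halo) in fast on-chip shared memory before a stencil loop reads it repeatedly. The function name and structure below are only an illustrative sketch of that tiling pattern in plain Python, not the Taichi API:

```python
def blur_with_scratch(field, tile=8):
    """1-D 3-point stencil; each tile is first copied into a local
    'scratch' buffer (standing in for CUDA shared memory), so the
    inner loop reads the fast copy instead of global memory."""
    n = len(field)
    out = [0.0] * n
    for start in range(0, n, tile):
        # Stage the tile plus one halo cell on each side into scratch.
        lo, hi = max(start - 1, 0), min(start + tile + 1, n)
        scratch = field[lo:hi]
        for i in range(start, min(start + tile, n)):
            j = i - lo  # index within the scratch buffer
            left = scratch[j - 1] if i > 0 else scratch[j]
            right = scratch[j + 1] if i < n - 1 else scratch[j]
            out[i] = (left + scratch[j] + right) / 3.0
    return out
```

On a GPU each tile would map to a thread block and `scratch` to shared memory; the payoff is that every field element is loaded from global memory once per tile instead of up to three times.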

Thanks for answering my question. After checking out dc162, compiling, running, and reading some examples, I am confused about a few things.
First, I cannot find any examples using root.hash, so I added it to python_bindings.cpp and ran a very simple example: ti.root.hash(ti.i, 16).place(u). The generated tmp000.cpp only contains "using S2 = hash<S2_ch>". So where does the 16 go, or what is its meaning?
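My current guess (just my reading, not confirmed by the generated code): the 16 is the extent of axis i, i.e. it bounds the addressable index range for address computation, while storage is allocated lazily through a hash table keyed by the index, which is why no capacity appears in the `hash<S2_ch>` type itself. A toy dict-based model of those semantics:

```python
class HashSNode:
    """Toy model of a 1-D hash SNode: the extent bounds the index
    range, but memory is only allocated for cells that are written."""

    def __init__(self, extent):
        self.extent = extent  # the '16' in ti.root.hash(ti.i, 16)
        self.cells = {}       # hash table: index -> value

    def __setitem__(self, i, value):
        if not 0 <= i < self.extent:
            raise IndexError(f"index {i} outside extent {self.extent}")
        self.cells[i] = value  # first write activates the cell

    def __getitem__(self, i):
        return self.cells.get(i, 0.0)  # inactive cells read as zero

    def num_active(self):
        return len(self.cells)
```

Under this reading, `ti.root.hash(ti.i, 16).place(u)` would behave like `HashSNode(16)`: only touched cells consume memory, and untouched cells read back as zero.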
Second, when reading the code in lang/include/struct.h, the SNodeAllocator really confuses me. I see the comment "virtual memory"; maybe it uses the idea from SPGrid, but I cannot figure it out. Could you give some explanation?
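To check my understanding of the SPGrid idea itself: the allocator reserves a huge range of virtual address space up front and relies on the OS to commit physical pages only when they are first touched, so a sparse grid can be addressed as if it were dense. A small sketch of that demand-paging behavior using an anonymous `mmap` (my own illustration, not code from struct.h):

```python
import mmap
import struct

# Reserve 64 MiB of virtual address space. Physical pages are only
# committed by the OS when a page is first written (demand paging),
# so far-apart writes cost two pages, not 64 MiB.
SIZE = 64 * 1024 * 1024
buf = mmap.mmap(-1, SIZE)

def write_f32(index, value):
    """Store a float as if the whole dense array existed."""
    buf.seek(index * 4)
    buf.write(struct.pack("<f", value))

def read_f32(index):
    buf.seek(index * 4)
    return struct.unpack("<f", buf.read(4))[0]

# Touch two far-apart cells; only the pages containing them get backed.
write_f32(0, 1.5)
write_f32(10_000_000, 2.5)
```

Untouched regions read back as zero because anonymous pages are zero-filled, which matches the "inactive cells are zero" convention of sparse grids.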
Third, on the master branch, root.hash fails with an LLVM error. So how does Taichi handle sparsity right now, with bitmasked?
Fourth, I am working on sparse support for a new platform (both software and hardware improvements), and the target scenarios are 3D rendering/vision/simulation and 3D convolution. I think Taichi could be a very good reference. I have also read the sparse support in TACO, Simit, and TVM (which is quite naive), but I am new to 3D vision computing, so I cannot figure out how to start my coding work :joy:. Could you give some advice? It would be very helpful.