When using ti.ndarray, I noticed that memory usage keeps climbing. A minimal reproduction:
import taichi as ti
import numpy as np
ti.init()
from memory_profiler import profile

@profile
def test():
    pos = np.ones((2000000, 3))
    for i in range(100):
        a = ti.ndarray(ti.types.vector(pos.shape[1], float), pos.shape[0])
        a.from_numpy(pos)

test()
The output is:
[Taichi] version 1.6.0, llvm 15.0.1, commit f1c6fbbd, win, python 3.8.0
[Taichi] Starting on arch=x64
Line # Mem usage Increment Occurrences Line Contents
=============================================================
7 86.3 MiB 86.3 MiB 1 @profile
8 def test():
9 132.1 MiB 45.8 MiB 1 pos = np.ones((2000000, 3))
10 2425.5 MiB 0.0 MiB 101 for i in range(100):
11 2402.6 MiB 0.1 MiB 100 a = ti.ndarray(ti.types.vector(pos.shape[1], float), pos.shape[0])
12 2425.5 MiB 2293.3 MiB 100 a.from_numpy(pos)
As shown above, memory grows by 2293 MiB over the for-loop, but the ndarray created in each iteration should be released once it goes out of scope. I think this should be changed — or is it a bug?
By the way, if I create a NumPy array inside the for-loop instead, memory does not accumulate this way once the loop finishes. The result:
Line # Mem usage Increment Occurrences Line Contents
=============================================================
7 86.3 MiB 86.3 MiB 1 @profile
8 def test():
9 #pos = np.ones((2000000, 3))
10 132.1 MiB 0.0 MiB 101 for i in range(100):
11 # a = ti.ndarray(ti.types.vector(pos.shape[1], float), pos.shape[0])
12 # a.from_numpy(pos)
13 132.1 MiB 45.8 MiB 100 pos = np.ones((2000000, 3))