I want to use a pretrained PyTorch model inside a Taichi kernel. When I try to convert the model's input into a tensor, I get the error below. It seems that a variable defined inside a kernel cannot be converted to a tensor, so the model cannot be called there either. Is there any way to work around this?
@ti.kernel
def Func(self):
    a = [1, 2, 3]
    b = torch.tensor(a)
Traceback (most recent call last):
  File "E:\SSF\main.py", line 38, in <module>
    main(running)
  File "E:\SSF\main.py", line 32, in main
    nsf.run()
  File "E:\SSF\MachingLearning\NormalModel\NeuralSSF.py", line 130, in run
    self.Func()
  File "C:\Users\VRG716\.conda\envs\SSF\lib\site-packages\taichi\lang\kernel_impl.py", line 1035, in __call__
    raise type(e)("\n" + str(e)) from None
taichi.lang.exception.TaichiCompilationError:
File "E:\SSF\MachingLearning\NormalModel\NeuralSSF.py", line 75, in Func:
        b = torch.tensor(a)
        ^^^^^^^^^^^^^^^
Traceback (most recent call last):
  File "C:\Users\VRG716\.conda\envs\SSF\lib\site-packages\taichi\lang\ast\ast_transformer_utils.py", line 27, in __call__
    return method(ctx, node)
  File "C:\Users\VRG716\.conda\envs\SSF\lib\site-packages\taichi\lang\ast\ast_transformer.py", line 581, in build_Call
    node.ptr = func(*args, **keywords)
RuntimeError: Could not infer dtype of Expr
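
For context: code inside a @ti.kernel is compiled into Taichi's own IR, so arbitrary Python/PyTorch calls such as torch.tensor cannot execute there, which is why the compiler fails on that line. A common pattern is to keep the kernel purely Taichi-side, write results into a ti.field, and do the tensor conversion plus the model call in ordinary Python scope via the field's to_torch() method. Below is a minimal sketch of that pattern only; the field shape and the "model" variable are placeholders, not part of the original code.

import taichi as ti
import torch

ti.init(arch=ti.cpu)

vals = ti.field(dtype=ti.f32, shape=3)  # placeholder shape for illustration

@ti.kernel
def fill_vals():
    # Only Taichi operations inside the kernel; no torch calls here.
    for i in range(3):
        vals[i] = i + 1.0

fill_vals()

# Back in ordinary Python scope: convert the field to a torch tensor
# and feed it to the pretrained model (placeholder name below).
x = vals.to_torch()
# y = model(x)  # 'model' stands for the pretrained PyTorch network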