Hi,
I am using Taichi together with PyTorch, and one thing I am trying to figure out is how to use different GPUs for torch and for the Taichi CUDA backend.
Setting the global `CUDA_VISIBLE_DEVICES` does not solve my problem: if I want Taichi to specifically use GPU 1 and set `CUDA_VISIBLE_DEVICES=1`, then torch can no longer see the other GPUs.
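Here is a sketch of the kind of workaround I am imagining (untested guess on my part: it assumes `CUDA_VISIBLE_DEVICES` is read each time a library creates its CUDA context, so torch would keep its full device view if it initializes CUDA first, before the variable is narrowed for Taichi):

```python
import os

def init_frameworks(taichi_gpu: str = "1"):
    """Sketch: let torch enumerate all GPUs, then restrict Taichi to one.

    Assumption (unverified): each library reads CUDA_VISIBLE_DEVICES at
    the moment it creates its own CUDA context, so changing the variable
    between the two initializations only affects Taichi.
    """
    import torch

    # Force torch to create its CUDA context now, while all GPUs
    # are still visible to the process.
    if torch.cuda.is_available():
        torch.cuda.init()

    # Narrow visibility before Taichi creates its own context,
    # so the Taichi CUDA backend only sees the chosen GPU.
    os.environ["CUDA_VISIBLE_DEVICES"] = taichi_gpu

    import taichi as ti
    ti.init(arch=ti.cuda)
```

I don't know whether Taichi caches the environment earlier than this, so this may not work at all; if there is an official way to pass a device index to `ti.init`, that would obviously be better.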
Any help or quick workaround would be appreciated.
Thanks!