I want to multiply A(2, 6) with B(6, 1) to obtain a Matrix with shape (2, 1).
If I use `A = ti.Matrix(1, dt=ti.f32, shape=(2, 6), needs_grad=True)`, it initializes a Matrix with shape (2, 6, 1, 1).
If I use `A = ti.Matrix(2, 6, dt=ti.f32, shape=1, needs_grad=True)`, it initializes a Matrix with shape (1, 2, 6).
I want to ask which two dimensions are multiplied when I multiply it with another matrix. I also want to ask what the correct way to initialize the Matrix is.
I'm working on a documentation page to clarify this; it should be online in about 45 minutes.
A quick answer for now:
`A = ti.Matrix(1, 4, dt=ti.f32, shape=(2, 6), needs_grad=True)` allocates a 2x6 tensor of 1x4 matrices. You need to first get a matrix by tensor indexing, e.g. `A[1, 5]`, and then do your matrix multiplication using the `@` operator.
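For instance, a minimal sketch of this pattern (the `B` and `C` tensors here are hypothetical companions to `A`, and exact API details vary between Taichi versions):

```python
import taichi as ti

A = ti.Matrix(1, 4, dt=ti.f32, shape=(2, 6), needs_grad=True)  # 2x6 tensor of 1x4 matrices
B = ti.Matrix(4, 1, dt=ti.f32, shape=(2, 6))                   # 2x6 tensor of 4x1 matrices
C = ti.Matrix(1, 1, dt=ti.f32, shape=(2, 6))                   # 2x6 tensor of 1x1 products

@ti.kernel
def per_element_matmul():
    for i, j in A:  # struct-for over all (i, j) of the 2x6 tensor
        C[i, j] = A[i, j] @ B[i, j]  # index first, then multiply: 1x4 @ 4x1 -> 1x1
```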
It seems to me that the correct way for your case is `A = ti.Matrix(2, 6, dt=ti.f32, shape=(), needs_grad=True)`, which allocates a 0D tensor of 2x6 matrices.
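In other words, something along these lines (a sketch assuming the v0.2.x-era API used in this thread; `B` and `C` are illustrative):

```python
import taichi as ti

A = ti.Matrix(2, 6, dt=ti.f32, shape=(), needs_grad=True)  # 0D tensor holding one 2x6 matrix
B = ti.Matrix(6, 1, dt=ti.f32, shape=())                   # 0D tensor holding one 6x1 matrix
C = ti.Matrix(2, 1, dt=ti.f32, shape=())                   # holds the 2x1 product

@ti.kernel
def matmul():
    # [None] indexes a 0D tensor; @ multiplies the two matrix dimensions: 2x6 @ 6x1 -> 2x1
    C[None] = A[None] @ B[None]
```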
Thank you so much! The documents are very clear now.
Is there a way to create a matrix with dynamic shape? Like a universal data holder?
Do you mean something like `std::vector<Matrix>`?
In C++, yes. In Python, a standard torch tensor can have a dynamic batch size.
Also, another question: how do I pass `data = torch.zeros((2, 6))` to `ti_data_holder = ti.Matrix(2, 6, dt=ti.f32, shape=(), needs_grad=True)`? It says `list index out of range`.
Sorry, v0.2.4 does not support `from`/`to_torch` for 0D tensors yet. I just pushed v0.2.5 with this feature; it will be available in 20 minutes. Example:
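Roughly the following; only `from_torch`/`to_torch` are confirmed above, the rest is an illustrative sketch:

```python
import taichi as ti
import torch

ti_data_holder = ti.Matrix(2, 6, dt=ti.f32, shape=(), needs_grad=True)

data = torch.zeros((2, 6))
ti_data_holder.from_torch(data)  # copy the torch tensor into the 0D tensor of 2x6 matrices

out = ti_data_holder.to_torch()  # copy back out; a (2, 6) torch tensor is expected
```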
For something like `std::vector<Matrix>`, see here: https://github.com/yuanming-hu/taichi/blob/4c12a6b7c6a488478ca620aa5d7f40272eaf0a0b/tests/python/test_dynamic.py#L56
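The rough shape of it, based on the `dynamic` SNode API of that era (the linked test is the authoritative reference; names like `place` and the capacity `1024` here are illustrative):

```python
import taichi as ti

x = ti.var(ti.i32)  # scalar entries to be stored in a growable list

@ti.layout
def place():
    # a dynamic SNode acts like a list that can grow up to a fixed capacity
    ti.root.dynamic(ti.i, 1024).place(x)

@ti.kernel
def append_values():
    for i in range(10):
        ti.append(x.parent(), [], i)  # push i onto the end of the list
```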
Although a dynamic batch size is doable, it would require a deeper understanding of Taichi. I would suggest using a fixed batch size for now.
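That is, something like the following, where `BATCH` is an illustrative fixed capacity:

```python
import taichi as ti

BATCH = 32  # hypothetical fixed batch size

# one 2x6 matrix per batch element, instead of a dynamically sized holder
A = ti.Matrix(2, 6, dt=ti.f32, shape=(BATCH,), needs_grad=True)
```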