Changes since v0.3.0:
- (Dec 24, 2019) v0.3.9 released.
  - `ti.classfunc` decorator for functions within a `data_oriented` class.
  - `[Expr/Vector/Matrix].to_torch` now has an extra argument `device`, which specifies the device placement for the returned torch tensor and should have type `torch.device`. Default = `None`.
  - Cross-device (CPU/GPU) Taichi/PyTorch interaction support when using `to_torch/from_torch`.
  - The number of kernels compiled during external array IO is significantly reduced (from matrix size to 1).

- (Dec 23, 2019) v0.3.8 released.
  - Breaking change: `ti.data_oriented` decorator introduced. Please decorate all your Taichi data-oriented objects using this decorator. To invoke the gradient version of a `classmethod`, for example `A.forward`, simply use `A.forward.grad()` instead of `A.forward(__gradient=True)` (obsolete).
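The `A.forward.grad()` calling convention can be pictured with a toy sketch. This is purely hypothetical plain Python (not Taichi's actual implementation): a small wrapper class, here called `GradMethod`, that exposes the gradient version of a method as a `.grad` attribute on the primal method.

```python
# Hypothetical sketch of the `A.forward` / `A.forward.grad()` pattern.
# Not Taichi's real implementation; it only illustrates the calling
# convention introduced in v0.3.8.
class GradMethod:
    """Wraps a primal function and exposes its gradient as `.grad`."""

    def __init__(self, primal, grad):
        self._primal = primal
        self.grad = grad  # invoked as A.forward.grad()

    def __call__(self, *args, **kwargs):
        return self._primal(*args, **kwargs)


def _forward():
    return "forward pass"


def _backward():
    return "gradient pass"


class A:
    # The primal method carries its gradient version as an attribute.
    forward = GradMethod(_forward, _backward)


print(A.forward())       # forward pass
print(A.forward.grad())  # gradient pass
```

The design keeps one public name per method: callers never pass a mode flag like the obsolete `__gradient=True`; they pick the primal or gradient version by attribute access.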

- (Dec 22, 2019) v0.3.5 released.
  - Maximum tensor dimensionality is now 8 (used to be 4), i.e. you can now allocate up to 8-D tensors.
- (Dec 22, 2019) v0.3.4 released.
  - 2D and 3D polar decomposition (`R, S = ti.polar_decompose(A, ti.f32)`) and SVD (`U, sigma, V = ti.svd(A, ti.f32)`) support. Note that `sigma` is a `3x3` diagonal matrix.
  - Fixed documentation versioning.
  - Allow `expr_init` with `ti.core.DataType` as inputs, so that `ti.core.DataType` can be used as a `ti.func` parameter.
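For readers unfamiliar with polar decomposition, here is a pure-Python sketch of what it computes in 2D: a factorization `A = R @ S` with `R` a rotation and `S` symmetric. This is illustrative only; inside a Taichi kernel you would call `ti.polar_decompose` instead. The function name `polar_decompose_2d` is ours, not part of the API.

```python
import math

# Illustrative 2x2 polar decomposition A = R @ S, where R is a rotation
# and S is symmetric. Uses the closed-form rotation angle
# theta = atan2(a10 - a01, a00 + a11).
def polar_decompose_2d(a):
    (a00, a01), (a10, a11) = a
    theta = math.atan2(a10 - a01, a00 + a11)
    c, s = math.cos(theta), math.sin(theta)
    r = [[c, -s], [s, c]]
    # S = R^T @ A (symmetric by choice of theta)
    sym = [[c * a00 + s * a10, c * a01 + s * a11],
           [-s * a00 + c * a10, -s * a01 + c * a11]]
    return r, sym


A = [[2.0, 1.0], [0.5, 3.0]]
R, S = polar_decompose_2d(A)
```

After the call, `R` is orthogonal, `S` is symmetric, and multiplying them back reconstructs `A`.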
- (Dec 20, 2019) v0.3.3 released.
  - Loud failure message when calling nested kernels. Closed #310.
  - `DiffTaichi` examples moved to a standalone repo.
  - Fixed documentation versioning.
  - Correctly differentiating kernels with multiple offloaded statements.
- (Dec 18, 2019) v0.3.2 released.
  - `Vector.norm` now comes with a parameter `eps` (`0` by default) and returns `sqrt(\sum_i(x_i ^ 2) + eps)`. A non-zero `eps` safeguards the operator's gradient on zero vectors during differentiable programming.
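The role of `eps` is easy to see from the gradient formula: `d norm / d x_i = x_i / sqrt(sum_i x_i^2 + eps)`, so at the zero vector the denominator is `sqrt(eps)` rather than zero. A plain-Python sketch (not the Taichi kernel; `norm` and `norm_grad` are illustrative names):

```python
import math

# norm(x) = sqrt(sum_i x_i^2 + eps); its gradient divides by that norm,
# so eps = 0 blows up on the zero vector while eps > 0 stays finite.
def norm(x, eps=0.0):
    return math.sqrt(sum(v * v for v in x) + eps)


def norm_grad(x, eps=0.0):
    n = norm(x, eps)
    return [v / n for v in x]  # ZeroDivisionError at x = 0 when eps == 0


zero = [0.0, 0.0, 0.0]
grad = norm_grad(zero, eps=1e-12)  # well-defined: all components are 0.0
```

With `eps = 0` the same call on the zero vector divides by zero, which is exactly the failure mode the safeguard avoids during backpropagation.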

- (Dec 17, 2019) v0.3.1 released.
  - Removed dependency on `glibc 2.27`.