Automatic Differentiation for Tensor Algebras

11/03/2017
by Sebastian Urban et al.

Kjolstad et al. proposed a tensor algebra compiler. It takes expressions that define a tensor element-wise, such as $f_{ij}(a,b,c,d) = \exp\big[-\sum_{k=0}^{4} \big((a_{ik}+b_{jk})^2\, c_{ii} + d_{i+k}^3\big)\big]$, and generates the corresponding compute kernel code. For machine learning, especially deep learning, it is often necessary to compute the gradient of a loss function $l(a,b,c,d) = l(f(a,b,c,d))$ with respect to the parameters $a,b,c,d$. If tensor compilers are to be applied in this field, it is necessary to derive expressions for the derivatives of element-wise defined tensors, i.e. expressions of the form $(da)_{ik} = \partial l / \partial a_{ik}$. When the mapping between function indices and argument indices is not 1:1, special attention is required. For the function $f_{ij}(x) = x_i^2$, the derivative of the loss is $(dx)_i = \partial l / \partial x_i = \sum_j (df)_{ij}\, 2x_i$; the sum is necessary because the index $j$ does not appear among the indices of the argument $x$. Another example is $f_i(x) = x_{ii}^2$, where $x$ is a matrix; here we have $(dx)_{ij} = \delta_{ij}\, (df)_i\, 2x_{ii}$; the Kronecker delta is necessary because the derivative is zero for off-diagonal elements. Another indexing scheme is used by $f_{ij}(x) = \exp x_{i+j}$; here the correct derivative is $(dx)_k = \sum_i (df)_{i,k-i} \exp x_k$, where the range of the sum must be chosen appropriately. In this publication we present an algorithm that can handle any case in which the indices of an argument are an arbitrary linear combination of the indices of the function; thus all the above examples can be handled. Sums (and their ranges) and Kronecker deltas are automatically inserted into the derivatives as necessary. Additionally, the indices are transformed where required (as in the last example). The algorithm outputs a symbolic expression that can subsequently be fed into a tensor algebra compiler. Source code is provided.
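The summation and index-transformation rules quoted above can be checked mechanically on small concrete instances. The following is a minimal SymPy sketch, not the paper's implementation; the tensor sizes and the adjoint symbols (n, m, p, q, df, dg) are illustrative assumptions. It keeps the incoming derivatives $(df)_{ij}$ as free symbols and verifies the first and third examples by direct differentiation.

```python
import sympy as sp

# --- Example 1: f_ij(x) = x_i**2  ->  (dx)_i = sum_j (df)_ij * 2*x_i ---
n, m = 2, 3
x = [sp.Symbol(f"x{i}") for i in range(n)]
df = [[sp.Symbol(f"df_{i}_{j}") for j in range(m)] for i in range(n)]
# A loss that depends on x only through f, with (dl/df)_ij kept symbolic:
l = sum(df[i][j] * x[i]**2 for i in range(n) for j in range(m))
for i in range(n):
    # The sum over j appears because j is not an index of the argument x.
    expected = sum(df[i][j] * 2 * x[i] for j in range(m))
    assert sp.simplify(sp.diff(l, x[i]) - expected) == 0

# --- Example 3: f_ij(x) = exp(x_{i+j})
#     ->  (dx)_k = sum_i (df)_{i,k-i} * exp(x_k), with a restricted range ---
p, q = 2, 2                        # f is p x q, so x needs p+q-1 entries
y = [sp.Symbol(f"y{k}") for k in range(p + q - 1)]
dg = [[sp.Symbol(f"dg_{i}_{j}") for j in range(q)] for i in range(p)]
l2 = sum(dg[i][j] * sp.exp(y[i + j]) for i in range(p) for j in range(q))
for k in range(p + q - 1):
    # The summation range is restricted so that both i and k-i are valid.
    expected = sum(dg[i][k - i] * sp.exp(y[k])
                   for i in range(p) if 0 <= k - i < q)
    assert sp.simplify(sp.diff(l2, y[k]) - expected) == 0

print("summation and index-shift rules verified")
```

The range restriction in the second check (`0 <= k - i < q`) mirrors what the paper's algorithm must do automatically: when argument indices are a linear combination of function indices, the inserted sum only ranges over combinations that address valid elements of $f$.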
