torch.special.shifted_chebyshev_polynomial_t
function shifted_chebyshev_polynomial_t<S extends Shape>(x: Tensor<S, 'float32'>, n: number | Tensor, options?: SpecialPolynomialOptions<S>): Tensor<S, 'float32'>

Computes the shifted Chebyshev polynomial of the first kind, T*_n(x).
The shifted Chebyshev polynomial T*_n(x) = T_n(2x - 1) is obtained by transforming the domain from [-1, 1] to [0, 1]. This makes Chebyshev approximation directly applicable to [0, 1] problems without manual domain mapping. Useful for:
- Function approximation on [0, 1]: weight functions, basis functions, spectral methods
- Numerical integration on [0, 1]: Chebyshev-Gauss-Lobatto quadrature, spectral collocation
- Probability and statistics: CDF approximations, quantile functions, beta-distributed variables
- Machine learning: feature scaling between 0 and 1 (normalized inputs, proportions, probabilities)
- Optimization: parameter spaces restricted to [0, 1]
Transformation: The linear map x → 2x - 1 carries [0, 1] onto [-1, 1], preserving orthogonality with the transformed weight function 1/√(x(1-x)) on [0, 1].
Advantage over standard T_n: Direct use on natural [0, 1] domain avoids external coordinate mapping.
- Domain [0, 1]: Natural for probabilities, proportions, normalized variables
- Equiripple on [0, 1]: T*_n oscillates between -1 and +1, attaining n+1 extrema on [0, 1]
- Orthogonality weight: 1/√(x(1-x)) on [0, 1] (transformed from 1/√(1-u²) on [-1, 1])
- No manual mapping needed: Directly use on [0, 1]; no need for external transformation
- Boundary values: T*_n(0) = (-1)^n, T*_n(1) = 1 (asymmetric)
- Domain [0, 1]: Input values must lie in [0, 1]; extrapolation outside this interval is numerically unstable
- Different boundary behavior: Unlike T_n, which satisfies T_n(1) = 1 and T_n(-1) = (-1)^n on [-1, 1], the shifted polynomial satisfies T*_n(0) = (-1)^n and T*_n(1) = 1
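The transformation and boundary values above can be sketched in plain TypeScript. This is a minimal reference sketch, not the library's implementation: `shiftedChebyshevT` is a hypothetical scalar helper built from the standard three-term recurrence T*_{n+1}(x) = 2(2x - 1) T*_n(x) - T*_{n-1}(x), with T*_0(x) = 1 and T*_1(x) = 2x - 1.

```typescript
// Sketch only (hypothetical helper, not torch.special itself):
// evaluate T*_n(x) for scalar x in [0, 1] via the three-term recurrence.
function shiftedChebyshevT(n: number, x: number): number {
  const u = 2 * x - 1; // the shift: map [0, 1] -> [-1, 1]
  let prev = 1;        // T*_0(x) = 1
  if (n === 0) return prev;
  let curr = u;        // T*_1(x) = 2x - 1
  for (let k = 1; k < n; k++) {
    const next = 2 * u * curr - prev; // T*_{k+1} = 2u * T*_k - T*_{k-1}
    prev = curr;
    curr = next;
  }
  return curr;
}

// Boundary values: T*_n(0) = (-1)^n, T*_n(1) = 1
console.log(shiftedChebyshevT(5, 0)); // -1
console.log(shiftedChebyshevT(4, 0)); // 1
console.log(shiftedChebyshevT(5, 1)); // 1
```

The recurrence is identical to the standard Chebyshev recurrence; only the argument u = 2x - 1 changes, which is why no external coordinate mapping is needed at the call site.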
Parameters
x: Tensor<S, 'float32'> - Input tensor with values in [0, 1]
n: number | Tensor - Polynomial degree (non-negative integer). Can be a scalar or a Tensor
options: SpecialPolynomialOptions<S> (optional) - Optional output tensor
Returns
Tensor<S, 'float32'> - Tensor with T*_n(x) values
Examples
// Shifted version on [0, 1] domain
const x = torch.linspace(0, 1, 5);
const T_star_0 = torch.special.shifted_chebyshev_polynomial_t(x, 0); // [1, 1, 1, 1, 1]
const T_star_1 = torch.special.shifted_chebyshev_polynomial_t(x, 1); // 2*x - 1
const T_star_2 = torch.special.shifted_chebyshev_polynomial_t(x, 2); // 8*x^2 - 8*x + 1
// Direct use on [0, 1] without domain mapping
const probabilities = torch.linspace(0, 1, 100); // Directly use as probabilities
const basis1 = torch.special.shifted_chebyshev_polynomial_t(probabilities, 1);
// No need to transform: just compute directly on probability space
// Comparison with standard Chebyshev
const x_std = torch.tensor([0.25, 0.5, 0.75]); // [0, 1] values
const shifted_result = torch.special.shifted_chebyshev_polynomial_t(x_std, 3);
const x_mapped = x_std.mul(2).sub(1); // Manual mapping to [-1, 1]
const standard_result = torch.special.chebyshev_polynomial_t(x_mapped, 3);
// shifted_result equals standard_result (without manual mapping)
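The equivalence in the comparison above can be checked numerically without the library. This is a sketch under stated assumptions: `chebyshevT` is a hypothetical scalar helper for standard T_n, and `shiftedT3` is the explicit degree-3 shifted polynomial T*_3(x) = 32x³ - 48x² + 18x - 1, obtained by expanding T_3(2x - 1) = 4(2x - 1)³ - 3(2x - 1).

```typescript
// Sketch: standard T_n via the recurrence T_{k+1}(u) = 2u*T_k(u) - T_{k-1}(u)
function chebyshevT(n: number, u: number): number {
  let prev = 1; // T_0(u) = 1
  let curr = u; // T_1(u) = u
  if (n === 0) return prev;
  for (let k = 1; k < n; k++) {
    [prev, curr] = [curr, 2 * u * curr - prev];
  }
  return curr;
}

// Explicit expansion of T*_3(x) = T_3(2x - 1)
const shiftedT3 = (x: number) => 32 * x ** 3 - 48 * x ** 2 + 18 * x - 1;

// Check T*_3(x) = T_3(2x - 1) at the sample points from the example above
for (const x of [0.25, 0.5, 0.75]) {
  console.log(Math.abs(shiftedT3(x) - chebyshevT(3, 2 * x - 1)) < 1e-12); // true
}
```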
// Approximation on [0, 1] domain: CDF approximation
const cdf_x = torch.linspace(0, 1, 200); // CDF support
const cdf_approx_order1 = torch.special.shifted_chebyshev_polynomial_t(cdf_x, 1);
const cdf_approx_order3 = torch.special.shifted_chebyshev_polynomial_t(cdf_x, 3);
// Combining higher-order basis terms in a truncated series yields increasingly accurate CDF approximations
See Also
- PyTorch torch.special.shifted_chebyshev_polynomial_t()
- torch.special.chebyshev_polynomial_t - Standard first kind on [-1, 1]
- torch.special.shifted_chebyshev_polynomial_u - Shifted second kind