• LSE(x₁, …, xₙ) = log(exp(x₁) + ⋯ + exp(xₙ)). The LogSumExp function domain is ℝⁿ...
    7 KB (1,152 words) - 17:21, 23 June 2024
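The first result above defines LSE(x₁, …, xₙ) = log(exp(x₁) + ⋯ + exp(xₙ)). A minimal Python sketch of that definition, using the standard max-shift to avoid overflow (the function name is illustrative, not taken from any of the listed articles):

```python
import math

def logsumexp(xs):
    """log(exp(x_1) + ... + exp(x_n)), shifted by max(xs) for numerical stability."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# LSE is a smooth upper bound on max: max(xs) <= LSE(xs) <= max(xs) + log(n)
print(logsumexp([1.0, 2.0, 3.0]))  # about 3.4076, slightly above max = 3
```

Subtracting the maximum before exponentiating changes nothing mathematically (the shift is added back outside the log) but keeps `exp` from overflowing for large inputs.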
• Boltzmann distribution. Another smooth maximum is LogSumExp: LSE_α(x₁, …, xₙ) = (1/α) log ∑ᵢ₌₁ⁿ exp(α xᵢ)...
    6 KB (1,073 words) - 08:38, 6 July 2023
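The smooth-maximum snippet above parameterizes LSE with a sharpness α; as α grows, LSE_α approaches the true maximum. A hedged sketch of that behavior (the helper name is mine):

```python
import math

def lse_alpha(xs, alpha):
    """(1/alpha) * log(sum_i exp(alpha * x_i)) -- a smooth maximum for alpha > 0."""
    m = max(xs)  # shift by the max so exp() cannot overflow
    return m + math.log(sum(math.exp(alpha * (x - m)) for x in xs)) / alpha

xs = [1.0, 2.0, 3.0]
for a in (1.0, 10.0, 100.0):
    print(a, lse_alpha(xs, a))  # tends to max(xs) = 3 as alpha grows
```

With α = 1 this is the ordinary LogSumExp; at α = 100 the result is already indistinguishable from max(xs) at double precision.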
• Softplus (section LogSumExp)
    The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero: LSE₀⁺(x₁, …, xₙ) := ...
    5 KB (701 words) - 03:51, 17 July 2024
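The Softplus result states that softplus is the one-variable case of LogSumExp with an extra zero argument: softplus(x) = log(1 + eˣ) = LSE(0, x). A minimal, overflow-safe sketch of that identity:

```python
import math

def softplus(x):
    """log(1 + exp(x)) = LSE(0, x), written to stay finite for large |x|."""
    # For x >= 0: x + log(1 + e^{-x}); for x < 0: log(1 + e^{x}). Same value, no overflow.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

print(softplus(0.0))  # log(2), about 0.6931
```

`math.log1p` computes log(1 + t) accurately for small t, which matters when |x| is large and the exponential term is tiny.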
• ...ℝᴷ, where the LogSumExp function is defined as LSE(z₁, …, zₙ) = log(exp(z₁) + ⋯ + exp(zₙ))...
    30 KB (4,737 words) - 05:00, 9 July 2024
• Log-normal distribution
    then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable that is log-normally distributed takes only positive...
    71 KB (9,591 words) - 09:26, 12 July 2024
• Logarithm (redirect from Log (mathematics))
    (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring...
    97 KB (11,584 words) - 15:07, 7 July 2024
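The Logarithm snippet above describes LogSumExp as "log addition": the image, under log, of ordinary addition of probabilities. A small sketch of that semiring view, assuming probabilities are stored as their logarithms (the function name is mine):

```python
import math

def logadd(a, b):
    """Log-semiring 'addition': log(exp(a) + exp(b)), the two-term LogSumExp."""
    if a < b:
        a, b = b, a  # ensure a >= b so the exponent below is <= 0
    return a + math.log1p(math.exp(b - a))

# log(p) and log(q) 'add' to log(p + q), mirroring p + q in the probability semiring:
p, q = 0.25, 0.5
print(math.isclose(logadd(math.log(p), math.log(q)), math.log(p + q)))  # True
```

Ordinary addition of the logs, a + b, plays the role of multiplication (log(pq) = log p + log q), which is the isomorphism of semirings the snippet refers to.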
• Rectifier (neural networks)
    The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero: LSE₀⁺(x₁, …, xₙ) := ...
    17 KB (2,280 words) - 11:55, 9 July 2024
  • operation, logadd (for multiple terms, LogSumExp) can be viewed as a deformation of maximum or minimum. The log semiring has applications in mathematical...
    6 KB (1,025 words) - 22:15, 28 March 2023
• Exponential function (redirect from Exp(x))
    function is a mathematical function denoted by f(x) = exp(x) or eˣ (where the argument x is...
    44 KB (5,755 words) - 12:46, 8 July 2024
• Entropy (information theory)
    entropy (negentropy) function is convex, and its convex conjugate is LogSumExp. The inspiration for adopting the word entropy in information theory came...
    69 KB (9,894 words) - 10:43, 14 July 2024