The adaptive subgradient method (AdaGrad) [Duchi2011] follows the algorithmic framework of an iterative solver with the algorithm-specific transformation $$T$$, the set of intrinsic parameters $$S_t$$ defined for the learning rate $$\eta$$, and the algorithm-specific vector $$U$$ and power $$d$$ of the Lebesgue space defined as follows:

\begin{align}\begin{aligned}S_t = \{G_t\}\\G_t = (G_{t, i})_{i = 1, \ldots, p}\\G_0 \equiv 0\end{aligned}\end{align}

$$T(\theta_{t - 1}, g(\theta_{t - 1}), S_{t - 1})$$:

1. $$G_{t, i} = G_{t - 1, i} + g_i^2(\theta_{t - 1})$$, where $$g_i(\theta_{t - 1})$$ is the $$i$$-th coordinate of the gradient $$g(\theta_{t - 1})$$

2. $$\theta_t = \theta_{t - 1} - \frac {\eta}{\sqrt{G_t + \varepsilon}} g(\theta_{t - 1})$$, where

$$\frac {\eta}{\sqrt{G_t + \varepsilon}} g(\theta_{t - 1}) = \left\{\frac {\eta}{\sqrt{G_{t, 1} + \varepsilon}} g_1(\theta_{t - 1}), \ldots, \frac {\eta}{\sqrt{G_{t, p} + \varepsilon}} g_p(\theta_{t - 1})\right\}$$

Convergence check: $$U = g(\theta_{t - 1}), d = 2$$
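The transformation $$T$$ and the convergence check above can be sketched in plain NumPy. This is an illustration of the math only, not the oneDAL API; the function names, the gradient callable, and the default values of `eta` and `eps` are assumptions for the sketch:

```python
import numpy as np

def adagrad_step(theta, grad, G, eta=0.01, eps=1e-8):
    """One application of the transformation T.

    Accumulates squared gradient coordinates in G (step 1), then takes a
    coordinate-wise scaled step (step 2).
    """
    g = grad(theta)
    G = G + g ** 2                               # G_{t,i} = G_{t-1,i} + g_i^2
    theta = theta - eta / np.sqrt(G + eps) * g   # coordinate-wise scaled update
    return theta, G

def converged(grad, theta, accuracy):
    """Convergence check with U = g(theta) and d = 2 (Euclidean norm)."""
    return np.linalg.norm(grad(theta), ord=2) < accuracy
```

Because every coordinate divides by its own accumulated $$\sqrt{G_{t, i} + \varepsilon}$$, frequently large gradient coordinates get progressively smaller steps, while rarely updated coordinates keep larger ones.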

## Computation

The adaptive subgradient (AdaGrad) method is a special case of an iterative solver. For parameters, input, and output of iterative solvers, see Computation for Iterative Solver.

### Algorithm Input

In addition to the input of the iterative solver, the AdaGrad method accepts the following optional input:

| OptionalDataID | Input |
|----------------|-------|
| gradientSquareSum | A numeric table of size $$p \times 1$$ with the values of $$G_t$$. Each value is an accumulated sum of squares of coordinate values of a corresponding gradient. |

### Algorithm Parameters

| Parameter | Default Value | Description |
|-----------|---------------|-------------|
| algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double. |
| method | defaultDense | Default performance-oriented computation method. |
| batchIndices | NULL | A numeric table of size $$\text{nIterations} \times \text{batchSize}$$ for the defaultDense method that represents 32-bit integer indices of terms in the objective function. If no indices are provided, the algorithm generates random indices. |
| batchSize | $$128$$ | The number of batch indices used to compute the stochastic gradient. If batchSize equals the number of terms in the objective function, no random sampling is performed, and all terms are used to calculate the gradient. The algorithm ignores this parameter if the batchIndices parameter is provided. |
| learningRate | A numeric table of size $$1 \times 1$$ that contains the default step length equal to $$0.01$$. | A numeric table of size $$1 \times 1$$ that contains the value of the learning rate $$\eta$$. Note: this parameter can be an object of any class derived from NumericTable, except for PackedTriangularMatrix, PackedSymmetricMatrix, and CSRNumericTable. |
| degenerateCasesThreshold | $$1\mathrm{e}{-08}$$ | The value $$\varepsilon$$ needed to avoid degenerate cases when computing square roots. |
| engine | SharedPtr<engines::mt19937::Batch>() | Pointer to the random number generator engine that is used internally to generate 32-bit integer indices of terms in the objective function. |

### Algorithm Output

In addition to the output of the iterative solver, the AdaGrad method calculates the following optional result:

| OptionalDataID | Output |
|----------------|--------|
| gradientSquareSum | A numeric table of size $$p \times 1$$ with the values of $$G_t$$. Each value is an accumulated sum of squares of coordinate values of a corresponding gradient. |

## Examples

Note

There is no support for Java on GPU.
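As a language-neutral illustration of how the parameters above (learningRate, batchSize, degenerateCasesThreshold, and a random engine) interact, the following self-contained sketch runs mini-batch AdaGrad on a small least-squares objective. It mirrors the math in this section, not the oneDAL API; all names and the toy data are assumptions:

```python
import numpy as np

def adagrad(grad_batch, theta0, n_terms, n_iterations=1000,
            learning_rate=0.01, batch_size=128,
            degenerate_cases_threshold=1e-8, accuracy=1e-4, seed=777):
    """Mini-batch AdaGrad loop mirroring the update in this section."""
    rng = np.random.default_rng(seed)      # stands in for the `engine` parameter
    theta = theta0.astype(float).copy()
    G = np.zeros_like(theta)               # gradientSquareSum, G_0 = 0
    for _ in range(n_iterations):
        idx = rng.integers(0, n_terms, size=batch_size)  # random batch indices
        g = grad_batch(theta, idx)
        if np.linalg.norm(g) < accuracy:   # convergence check, d = 2
            break
        G += g ** 2
        theta -= learning_rate / np.sqrt(G + degenerate_cases_threshold) * g
    return theta, G

# Toy objective: mean squared error over 200 terms (rows of X, entries of y).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
grad_batch = lambda w, idx: X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)

w, G = adagrad(grad_batch, np.zeros(3), n_terms=200,
               n_iterations=5000, learning_rate=0.5)
```

Passing a previously accumulated `G` instead of zeros corresponds to supplying the optional gradientSquareSum input to warm-start the solver.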