LeakyReLU

```python
keras.layers.LeakyReLU(alpha=0.3)
```

Leaky version of a Rectified Linear Unit.

It allows a small gradient when the unit is not active:
f(x) = alpha * x for x < 0,
f(x) = x for x >= 0.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha: float >= 0. Negative slope coefficient.
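As a sketch, the piecewise formula above can be reproduced in NumPy. This mirrors only the math of the layer, not the Keras layer itself:

```python
import numpy as np

def leaky_relu(x, alpha=0.3):
    # f(x) = alpha * x for x < 0, f(x) = x for x >= 0
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, 0.0, 1.0])
print(leaky_relu(x))  # [-0.6  0.   1. ]
```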


PReLU

```python
keras.layers.PReLU(alpha_initializer='zeros', alpha_regularizer=None, alpha_constraint=None, shared_axes=None)
```

Parametric Rectified Linear Unit.

It follows:
f(x) = alpha * x for x < 0,
f(x) = x for x >= 0,
where alpha is a learned array with the same shape as x.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha_initializer: initializer function for the weights.
  • alpha_regularizer: regularizer for the weights.
  • alpha_constraint: constraint for the weights.
  • shared_axes: the axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter only has one set of parameters, set shared_axes=[1, 2].
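A minimal NumPy sketch of the formula, with a fixed example alpha standing in for the parameter the layer would learn during training:

```python
import numpy as np

def prelu(x, alpha):
    # alpha is a learned array broadcast against x in the real layer;
    # here we pass a fixed example value in its place.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, 3.0])
alpha = np.array([0.25, 0.25])  # hypothetical trained values
print(prelu(x, alpha))  # [-0.5  3. ]
```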


ELU

```python
keras.layers.ELU(alpha=1.0)
```

Exponential Linear Unit.

It follows:
f(x) = alpha * (exp(x) - 1) for x < 0,
f(x) = x for x >= 0.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha: scale for the negative factor.
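The same formula, sketched in NumPy for illustration (not the Keras layer itself):

```python
import numpy as np

def elu(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-1.0, 0.0, 2.0])
print(elu(x))  # [-0.63212056  0.          2.        ]
```

Unlike ReLU, the negative branch saturates smoothly toward -alpha instead of being clipped to zero.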


ThresholdedReLU

```python
keras.layers.ThresholdedReLU(theta=1.0)
```

Thresholded Rectified Linear Unit.

It follows:
f(x) = x for x > theta,
f(x) = 0 otherwise.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • theta: float >= 0. Threshold location of activation.
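A NumPy sketch of the thresholding rule above, for illustration only:

```python
import numpy as np

def thresholded_relu(x, theta=1.0):
    # f(x) = x for x > theta, f(x) = 0 otherwise
    return np.where(x > theta, x, 0.0)

x = np.array([0.5, 1.0, 2.0])
print(thresholded_relu(x))  # [0.  0.  2. ]
```

Note that values exactly at theta are zeroed, and values just above it pass through unscaled, so the output jumps from 0 to theta at the threshold.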


Softmax

```python
keras.layers.Softmax(axis=-1)
```

Softmax activation function.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • axis: Integer, axis along which the softmax normalization is applied.
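A minimal NumPy sketch of softmax normalization along an axis (the max is subtracted first for numerical stability; this mirrors the math only, not the Keras layer):

```python
import numpy as np

def softmax(x, axis=-1):
    # exp(x) normalized so each slice along `axis` sums to 1
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
print(softmax(x).sum(axis=-1))  # [1.]
```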

ReLU

```python
keras.layers.ReLU(max_value=None, negative_slope=0.0, threshold=0.0)
```

Rectified Linear Unit activation function.

With default values, it returns element-wise max(x, 0).

Otherwise, it follows:
f(x) = max_value for x >= max_value,
f(x) = x for threshold <= x < max_value,
f(x) = negative_slope * (x - threshold) otherwise.

Input shape

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • max_value: float >= 0. Maximum activation value.
  • negative_slope: float >= 0. Negative slope coefficient.
  • threshold: float. Threshold value for thresholded activation.
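The general three-branch form above can be sketched in NumPy as follows (a reference implementation of the formula, not the Keras layer itself):

```python
import numpy as np

def relu(x, max_value=None, negative_slope=0.0, threshold=0.0):
    # Below the threshold, apply the (shifted) negative slope;
    # above it, pass x through, then clip at max_value if given.
    out = np.where(x >= threshold, x, negative_slope * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

x = np.array([-1.0, 0.5, 2.0])
print(relu(x))  # [0.  0.5 2. ]  -- defaults give element-wise max(x, 0)
print(relu(x, max_value=1.0, negative_slope=0.1))  # [-0.1  0.5  1. ]
```

With all three arguments at their defaults, the formula collapses to the familiar max(x, 0).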