
LocallyConnected1D

```python
keras.layers.LocallyConnected1D(filters, kernel_size, strides=1, padding='valid', data_format=None, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
```

Locally-connected layer for 1D inputs.

The LocallyConnected1D layer works similarly to the Conv1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.

Example

```python
from keras.models import Sequential
from keras.layers import LocallyConnected1D

# apply an unshared weight convolution 1d of length 3 to a sequence with
# 10 timesteps, with 64 output filters
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new conv1d on top
model.add(LocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)
```
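Because the weights are unshared, the layer stores one kernel per output position, which shows up directly in the parameter count. A minimal sketch comparing against a Conv1D with the same configuration (the comparison is illustrative, not part of this layer's API):

```python
from keras.models import Sequential
from keras.layers import Conv1D, LocallyConnected1D

# Shared weights: one kernel reused at every temporal position
shared = Sequential([Conv1D(64, 3, input_shape=(10, 32))])
# Unshared weights: a separate kernel for each of the 8 output positions
unshared = Sequential([LocallyConnected1D(64, 3, input_shape=(10, 32))])

print(shared.count_params())    # 3*32*64 + 64 = 6,208
print(unshared.count_params())  # 8 * (3*32*64 + 64) = 49,664
```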

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: Currently only supports "valid" (case-insensitive). "same" may be supported in the future.
  • data_format: String, one of channels_first, channels_last.
  • activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x); a configuration combining this with the initializer and regularizer arguments is sketched after this list.
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
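As a configuration sketch, these keyword arguments combine as in other Keras layers; the specific initializer and regularizer choices below are arbitrary illustrations, not recommendations:

```python
from keras import regularizers
from keras.models import Sequential
from keras.layers import LocallyConnected1D

model = Sequential()
model.add(LocallyConnected1D(
    64, 3,
    strides=1,
    padding='valid',
    activation='relu',
    use_bias=True,
    kernel_initializer='he_normal',
    kernel_regularizer=regularizers.l2(1e-4),
    input_shape=(10, 32)))
```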

Input shape

3D tensor with shape: (batch_size, steps, input_dim)

Output shape

3D tensor with shape: (batch_size, new_steps, filters). steps value might have changed due to padding or strides.
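Since padding is restricted to "valid", the output length follows the usual valid-padding formula, new_steps = (steps - kernel_size) // strides + 1. A small sketch (the layer sizes below are arbitrary):

```python
from keras.models import Sequential
from keras.layers import LocallyConnected1D

model = Sequential()
model.add(LocallyConnected1D(16, 3, strides=2, input_shape=(10, 32)))
# new_steps = (10 - 3) // 2 + 1 = 4
print(model.output_shape)  # (None, 4, 16)
```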


LocallyConnected2D

```python
keras.layers.LocallyConnected2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
```

Locally-connected layer for 2D inputs.

The LocallyConnected2D layer works similarly to the Conv2D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.

Examples

```python
from keras.models import Sequential
from keras.layers import LocallyConnected2D

# apply a 3x3 unshared weights convolution with 64 output filters
# on a 32x32 image with `data_format="channels_last"`:
model = Sequential()
model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))
# now model.output_shape == (None, 30, 30, 64)
# notice that this layer will consume (30*30)*(3*3*3*64)
# + (30*30)*64 parameters
# add a 3x3 unshared weights convolution on top, with 32 output filters:
model.add(LocallyConnected2D(32, (3, 3)))
# now model.output_shape == (None, 28, 28, 32)
```
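The parameter arithmetic in the comments above can be verified with count_params; the Conv2D model is included only as an illustrative shared-weights baseline, not as part of this layer's API:

```python
from keras.models import Sequential
from keras.layers import Conv2D, LocallyConnected2D

shared = Sequential([Conv2D(64, (3, 3), input_shape=(32, 32, 3))])
unshared = Sequential([LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3))])

print(shared.count_params())    # 3*3*3*64 + 64 = 1,792
print(unshared.count_params())  # (30*30)*(3*3*3*64) + (30*30)*64 = 1,612,800
```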

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions.
  • padding: Currently only supports "valid" (case-insensitive). "same" may be supported in the future.
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
  • activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x); a sketch combining this with data_format is given after this list.
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
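A minimal sketch of the data_format and activation arguments in use (the channels_first layout here is only for illustration; the default remains channels_last):

```python
from keras.models import Sequential
from keras.layers import LocallyConnected2D

# with channels_first, input_shape is given as (channels, rows, cols)
model = Sequential()
model.add(LocallyConnected2D(
    32, (3, 3),
    data_format='channels_first',
    activation='relu',
    input_shape=(3, 32, 32)))
print(model.output_shape)  # (None, 32, 30, 30)
```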

Input shape

4D tensor with shape: (samples, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (samples, rows, cols, channels) if data_format='channels_last'.

Output shape

4D tensor with shape: (samples, filters, new_rows, new_cols) if data_format='channels_first' or 4D tensor with shape: (samples, new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding.
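As with the 1D layer, "valid" padding gives new_rows = (rows - kernel_rows) // stride_rows + 1, and likewise for columns. A quick check, with arbitrarily chosen sizes:

```python
from keras.models import Sequential
from keras.layers import LocallyConnected2D

model = Sequential()
model.add(LocallyConnected2D(16, (5, 3), strides=(2, 1),
                             input_shape=(32, 32, 3)))
# new_rows = (32 - 5) // 2 + 1 = 14, new_cols = (32 - 3) // 1 + 1 = 30
print(model.output_shape)  # (None, 14, 30, 16)
```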