Keras Model Import: Supported Features
Little-known fact: Deeplearning4j’s creator, Skymind, has two of the top five Keras contributors on our team, making it the largest contributor to Keras after Keras creator François Chollet, who’s at Google.
While not every concept in DL4J has an equivalent in Keras and vice versa, many of the key concepts can be matched. Importing Keras models into DL4J is done in our deeplearning4j-modelimport module. Below is a comprehensive list of currently supported features.
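As a quick orientation before the feature list, here is a minimal sketch of what an import looks like with the deeplearning4j-modelimport module. It assumes you have an HDF5 file produced by Keras's `model.save(...)`; the file path is hypothetical, and the snippet requires the deeplearning4j-modelimport dependency on the classpath.

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to an HDF5 file saved from Keras with model.save(...)
        String modelPath = "path/to/model.h5";

        // A Keras Sequential model maps to DL4J's MultiLayerNetwork
        MultiLayerNetwork network =
                KerasModelImport.importKerasSequentialModelAndWeights(modelPath);

        // A Keras functional-API model would instead map to a ComputationGraph:
        // ComputationGraph graph = KerasModelImport.importKerasModelAndWeights(modelPath);

        // Print the imported architecture to verify the layer mapping
        System.out.println(network.summary());
    }
}
```

Import succeeds only when every layer, loss, and other component in the saved model appears in the supported lists below.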
Layers
Mapping Keras layers to DL4J layers is done in the layers sub-module of model import. The structure of this project loosely reflects the structure of Keras.
Core Layers
- Dense
- Activation
- Dropout
- Flatten
- Reshape
- Merge
- Permute
- RepeatVector
- Lambda
- ActivityRegularization
- Masking
- SpatialDropout1D
- SpatialDropout2D
- SpatialDropout3D
Convolutional Layers
- Conv1D
- Conv2D
- Conv3D
- AtrousConvolution1D
- AtrousConvolution2D
- SeparableConv1D
- SeparableConv2D
- Conv2DTranspose
- Conv3DTranspose
- Cropping1D
- Cropping2D
- Cropping3D
- UpSampling1D
- UpSampling2D
- UpSampling3D
- ZeroPadding1D
- ZeroPadding2D
- ZeroPadding3D
Pooling Layers
- MaxPooling1D
- MaxPooling2D
- MaxPooling3D
- AveragePooling1D
- AveragePooling2D
- AveragePooling3D
- GlobalMaxPooling1D
- GlobalMaxPooling2D
- GlobalMaxPooling3D
- GlobalAveragePooling1D
- GlobalAveragePooling2D
- GlobalAveragePooling3D
Locally-connected Layers
Recurrent Layers
Embedding Layers
Merge Layers
- Add / add
- Multiply / multiply
- Subtract / subtract
- Average / average
- Maximum / maximum
- Concatenate / concatenate
- Dot / dot
Advanced Activation Layers
Normalization Layers
Noise Layers
Layer Wrappers
- TimeDistributed
- Bidirectional
Losses
- mean_squared_error
- mean_absolute_error
- mean_absolute_percentage_error
- mean_squared_logarithmic_error
- squared_hinge
- hinge
- categorical_hinge
- logcosh
- categorical_crossentropy
- sparse_categorical_crossentropy
- binary_crossentropy
- kullback_leibler_divergence
- poisson
- cosine_proximity
Activations
- softmax
- elu
- selu
- softplus
- softsign
- relu
- tanh
- sigmoid
- hard_sigmoid
- linear
Initializers
- Zeros
- Ones
- Constant
- RandomNormal
- RandomUniform
- TruncatedNormal
- VarianceScaling
- Orthogonal
- Identity
- lecun_uniform
- lecun_normal
- glorot_normal
- glorot_uniform
- he_normal
- he_uniform
Regularizers
- l1
- l2
- l1_l2
Constraints
- max_norm
- non_neg
- unit_norm
- min_max_norm
Optimizers
- SGD
- RMSprop
- Adagrad
- Adadelta
- Adam
- Adamax
- Nadam
- TFOptimizer