
0 votes
5 views
in AI and Deep Learning by (50.2k points)

I'm a Keras beginner and am trying to build the simplest possible autoencoder. It consists of three layers: an input layer, an encoded representation layer, and an output layer. My data (training and validation images) are stored in an ndarray where each image is 214x214x3 (pixels x pixels x RGB channels). I thought I could just use the input shape of the images in the Input layer, but somehow I keep encountering errors.

I tried flattening the data, and that works just fine. I can of course just do that, and reshape the output, but I'm curious why this doesn't work.

# Shape and size of single image
input_shape = x_tr.shape[1:]  # --> (214, 214, 3)
input_size = x_tr[0].size

# Size of encoded representation
encoding_dim = 32
compression_factor = float(input_size / encoding_dim)

# Build model
autoencoder = Sequential()
autoencoder.add(Dense(encoding_dim, input_shape=input_shape, activation='relu'))
autoencoder.add(Dense(input_shape, activation='softmax'))

input_img = Input(shape=(input_shape,))
encoder_layer = autoencoder.layers[0]
encoder = Model(input_img, encoder_layer(input_img))

autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')
autoencoder.fit(x_tr, x_tr, epochs=50, batch_size=32, shuffle=True, verbose=1,
                validation_data=(x_va, x_va),
                callbacks=[TensorBoard(log_dir='/tmp/autoencoder2')])

1 Answer

0 votes
by (108k points)

A sequential Keras model can accept several types of input data, including:

  1. Numeric/continuous values

  2. Categorical values

  3. Image data

Coming back to your question, to resolve the errors you are encountering in your autoencoder, go through the following steps:

  • Data Visualization & Preprocessing: Since the images have only 3 channels, normalize the pixel values to the range [0, 1] (a short sketch of this step follows the list).

  • Softmax Regression Model: Train a Softmax Regression (SR) model to predict the labels; it achieves about 97% training accuracy with minimal loss.

  • ANN Model: Build the ANN model, adding 2 hidden layers with 32 and 16 nodes.

  • Cross-Validation: With a small sample size like this one, it is especially important to perform cross-validation to get a better estimate of accuracy.
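As a minimal sketch of the normalization step above (the array names x_tr and x_va come from the question; the random placeholder data is purely illustrative):

import numpy as np

# Hypothetical placeholder arrays shaped like the question's 214x214x3 images
x_tr = np.random.randint(0, 256, size=(100, 214, 214, 3), dtype=np.uint8)
x_va = np.random.randint(0, 256, size=(20, 214, 214, 3), dtype=np.uint8)

# Scale pixel values from [0, 255] down to [0, 1]
x_tr = x_tr.astype('float32') / 255.0
x_va = x_va.astype('float32') / 255.0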

Your problem arises in the second Dense layer: Dense takes an integer as its first argument (the number of neurons), but you provided a tuple. Try:

output_dim = 214 * 214 * 3

autoencoder.add(Dense(output_dim, activation='softmax'))

You also need to flatten your inputs/outputs: a fully connected Dense layer expects a 1-dimensional input and produces a 1-dimensional output.
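Putting it all together, here is a minimal sketch of a flattened version of your model (assuming x_tr and x_va hold the normalized images from the preprocessing step above). It uses a sigmoid output instead of the softmax shown earlier, since per-pixel reconstruction of values in [0, 1] is usually done with a sigmoid (or linear) activation; the TensorBoard callback and the separate encoder model are omitted for brevity:

from keras.models import Sequential
from keras.layers import Dense

input_size = 214 * 214 * 3   # 137388 values per image
encoding_dim = 32

# Flatten each image into a 1-D vector so the Dense layers can consume it
x_tr_flat = x_tr.reshape((len(x_tr), input_size))
x_va_flat = x_va.reshape((len(x_va), input_size))

autoencoder = Sequential()
autoencoder.add(Dense(encoding_dim, input_shape=(input_size,), activation='relu'))
autoencoder.add(Dense(input_size, activation='sigmoid'))  # one neuron per pixel value

autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')
autoencoder.fit(x_tr_flat, x_tr_flat,
                epochs=50, batch_size=32, shuffle=True, verbose=1,
                validation_data=(x_va_flat, x_va_flat))

# Reshape the reconstructions back into 214x214x3 images
decoded_imgs = autoencoder.predict(x_va_flat).reshape((-1, 214, 214, 3))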

For more details, you can check out this link: https://towardsdatascience.com/applied-deep-learning-part-2-real-world-case-studies-1bb4b142a585

