@Ritik, you haven’t mentioned which architecture you’re talking about, so let’s assume it’s a partly convolutional, partly fully-connected network like AlexNet or GoogLeNet. The answer to your question depends on the type of network you are working with.
Let us first assume your network is fully convolutional, i.e. it contains only convolutional (and pooling) units and no fully connected layers. Such a network is invariant to the size of the input image: it can take an image of any size and return another image as output, whose size tracks the input size. Just make sure the output matches what your loss function expects.
If you are using fully connected units, variable input sizes are a problem. Here are some ways to deal with it:
- Resize (squash) the images to a fixed size and don’t worry about the distortion.
- Centre-crop the images to a specific size.
- Pad the image with a solid colour and then resize it.
- Use a combination of the above.
See here for examples: https://github.com/tensorflow/models/blob/f98c5ded31d7da0c2d127c28b2c16f0307a368f0/slim/preprocessing/inception_preprocessing.py#L206-L216
Hope this answer helps!