I have an Android app that was modeled after the TensorFlow Android demo for classifying images.
The original app uses a TensorFlow graph (.pb) file to classify a generic set of images (from Inception v3, I think).
I then trained my own graph for my own images following the instructions in the TensorFlow for Poets blog post,
and this worked very well in the Android app after changing the settings to:
private static final int INPUT_SIZE = 299;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128.0f;
private static final String INPUT_NAME = "Mul";
private static final String OUTPUT_NAME = "final_result";
private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt";
To port the app to iOS, I then used the iOS camera demo, https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios/camera
used the same graph file, and changed the corresponding settings there.
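For reference, in the iOS camera demo those settings live as constants in CameraExampleViewController.mm. A sketch of what they would roughly look like for a retrained Inception v3 graph, mirroring the Android values above (the variable names follow the demo's style, but treat the exact names and values as assumptions, not my actual code):

```cpp
#include <string>

// Model/label assets would also need to point at optimized_graph.pb and
// retrained_labels.txt in the app bundle (set via the NSString* asset names
// in the demo, omitted here).
const int wanted_input_width = 299;    // Inception v3 input size
const int wanted_input_height = 299;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;       // matches IMAGE_MEAN on Android
const float input_std = 128.0f;        // matches IMAGE_STD on Android
const std::string input_layer_name = "Mul";           // matches INPUT_NAME
const std::string output_layer_name = "final_result"; // matches OUTPUT_NAME
```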
I'm not sure this is the best way to resize, but it worked. However, it seemed to make image classification even worse, not better...
Any ideas, or issues with the image conversion/resize?
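For comparison, the demo's preprocessing is essentially a nearest-neighbor resize plus mean/std normalization over the camera buffer. Here is a self-contained sketch of that pattern (the function name, buffer layout, and channel assumptions are mine, not from the demo's exact source):

```cpp
#include <cstdint>
#include <vector>

// Nearest-neighbor resize of a 4-channel byte image (e.g. BGRA from the
// camera) into a normalized 3-channel float tensor, in the style of the
// TensorFlow iOS camera demo's preprocessing step.
std::vector<float> ResizeAndNormalize(const uint8_t* in, int in_width,
                                      int in_height, int out_width,
                                      int out_height, float mean,
                                      float std_dev) {
  const int in_channels = 4;   // camera frames carry an alpha byte
  const int out_channels = 3;  // the model expects RGB only
  std::vector<float> out(out_width * out_height * out_channels);
  for (int y = 0; y < out_height; ++y) {
    const int in_y = (y * in_height) / out_height;  // y scales with height
    for (int x = 0; x < out_width; ++x) {
      const int in_x = (x * in_width) / out_width;  // x scales with width
      const uint8_t* in_pixel = in + (in_y * in_width + in_x) * in_channels;
      float* out_pixel = out.data() + (y * out_width + x) * out_channels;
      for (int c = 0; c < out_channels; ++c) {
        // Same normalization the Android settings imply: (v - 128) / 128.
        out_pixel[c] = (in_pixel[c] - mean) / std_dev;
      }
    }
  }
  return out;
}
```

Two things worth double-checking against your own conversion code: that in_x scales with x and in_y with y (swapping them feeds a transposed/garbled image to the network, which would degrade accuracy exactly as described), and that the channel order of the camera buffer (BGRA vs RGBA) matches what the graph was trained on.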