I have an Android app that was modeled after the TensorFlow Android demo for classifying images:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
The original app uses a TensorFlow graph (.pb) file to classify a generic set of images (from Inception v3, I think).
I then trained my own graph for my own images following the instructions in the TensorFlow for Poets blog post:
https://petewarden.com/2016/02/28/tensorflow-for-poets/
This worked very well in the Android app after changing the following settings in ClassifierActivity:
private static final int INPUT_SIZE = 299;      // Inception v3 expects 299x299 input
private static final int IMAGE_MEAN = 128;      // mean subtracted from each pixel channel
private static final float IMAGE_STD = 128.0f;  // divisor that scales pixels to roughly [-1, 1]
private static final String INPUT_NAME = "Mul"; // input op of the retrained Inception v3 graph
private static final String OUTPUT_NAME = "final_result"; // output op added by the retraining script
private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt";
To port the app to iOS, I then used the iOS camera demo,
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios/camera
used the same graph file, and changed the corresponding settings in CameraExampleViewController.mm.
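Roughly, the changed constants are the iOS analog of the Android settings above; in the demo source they end up looking like this (variable names as in the demo, values mirrored from my Android settings; treat this as a sketch rather than my exact diff):

static NSString* model_file_name = @"optimized_graph";
static NSString* model_file_type = @"pb";
static NSString* labels_file_name = @"retrained_labels";
static NSString* labels_file_type = @"txt";
// These must match what the graph was trained with.
const int wanted_input_width = 299;
const int wanted_input_height = 299;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;
const float input_std = 128.0f;
const std::string input_layer_name = "Mul";
const std::string output_layer_name = "final_result";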
For the per-frame resize I kept the demo's approach (sketched below). I'm not sure whether this is the best way to resize, but it worked; it just seemed to make the image classification even worse, not better...
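The conversion from the captured frame into the input tensor follows the pattern of the demo's runCNNOnFrame: a nearest-neighbor resize plus the mean/std normalization. A sketch, where sourceStartAddr, image_width, image_height, and image_channels are assumed to come from the camera's BGRA pixel buffer:

tensorflow::Tensor image_tensor(
    tensorflow::DT_FLOAT,
    tensorflow::TensorShape(
        {1, wanted_input_height, wanted_input_width, wanted_input_channels}));
auto image_tensor_mapped = image_tensor.tensor<float, 4>();
tensorflow::uint8* in = sourceStartAddr;
float* out = image_tensor_mapped.data();
for (int y = 0; y < wanted_input_height; ++y) {
  float* out_row = out + (y * wanted_input_width * wanted_input_channels);
  for (int x = 0; x < wanted_input_width; ++x) {
    // Nearest-neighbor sample from the full-size frame.
    const int in_x = (x * image_width) / wanted_input_width;
    const int in_y = (y * image_height) / wanted_input_height;
    tensorflow::uint8* in_pixel =
        in + (in_y * image_width * image_channels) + (in_x * image_channels);
    float* out_pixel = out_row + (x * wanted_input_channels);
    for (int c = 0; c < wanted_input_channels; ++c) {
      // Scale each channel into the range the retrained graph expects.
      // Note: the camera delivers BGRA, so channel order may also matter here.
      out_pixel[c] = (in_pixel[c] - input_mean) / input_std;
    }
  }
}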
Any ideas, or issues with the image conversion/resize?