A weight imprinting engine that performs low-shot transfer-learning for image classification models.
For more information about how to use this API and how to create the type of model required (an embedding extractor), see Retrain an image classification model on-device.
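Weight imprinting computes each new class's classifier weights directly from the normalized embeddings of its training images, rather than by gradient descent. The idea can be sketched in plain numpy (this is illustrative only; the real engine performs this inside the TFLite model):

```python
import numpy as np

np.random.seed(0)  # deterministic toy data

def imprint_weights(embeddings):
    """Average L2-normalized embeddings and re-normalize: the imprinted
    weight vector for one class (illustrative, not the engine's code)."""
    e = np.asarray(embeddings, dtype=np.float32)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize each embedding
    w = e.mean(axis=0)
    return w / np.linalg.norm(w)                      # unit-length class weight

# Two toy classes in a 4-D embedding space, separated along different axes.
w_cat = imprint_weights(np.random.rand(5, 4) + [1, 0, 0, 0])
w_dog = imprint_weights(np.random.rand(5, 4) + [0, 0, 0, 1])

def classify(embedding, weights):
    # Score by cosine similarity against each imprinted weight vector.
    e = embedding / np.linalg.norm(embedding)
    return int(np.argmax([w @ e for w in weights]))
```

A new class can thus be added from a handful of images, because its weight vector is just the mean of their embeddings.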
Performs weight imprinting (transfer learning) with the given embedding extractor model.
Parameters: model_path (str) – Path to the embedding extractor, or to a model previously trained with ImprintingEngine.
Saves the newly trained model as a .tflite file.
Parameters: output_path (str) – The filename and path for the trained model (must end with .tflite).
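The overall flow can be sketched as follows. The import path and file names are illustrative, and the engine calls are commented out because they require the Edge TPU runtime:

```python
# from edgetpu.learn.imprinting.engine import ImprintingEngine  # needs Edge TPU runtime

model_path = "mobilenet_v1_embedding_extractor.tflite"  # hypothetical extractor model
output_path = "retrained_model.tflite"                  # must end with .tflite

# engine = ImprintingEngine(model_path)
# ... train with engine.Train() / engine.TrainAll() ...
# engine.SaveModel(output_path)

assert output_path.endswith(".tflite")  # the save path requires this extension
```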
Trains the model with a set of images for one category (class).
Parameters: input (list of numpy.array) – The images to use for training one category. Each numpy.array represents an image as a 1-D tensor; you can convert an image to this format by passing it to numpy.asarray() and flattening the result. The maximum number of images in the list is 200.
Returns: The label id (int) assigned to the class.
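Preparing the input list can be sketched as follows; the image decode step (e.g. via PIL) is shown only in comments, and a random array stands in for real pixel data:

```python
import numpy as np

# With real data you would decode an image first, e.g.:
#   from PIL import Image
#   rgb = np.asarray(Image.open("cat_0.jpg").resize((224, 224)))
# Here a random array simulates a decoded 224x224 RGB image.
rgb = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

flat = rgb.flatten()  # 1-D tensor, the format Train() expects
images = [flat]       # up to 200 images per call

# label_id = engine.Train(images)  # hypothetical engine instance
```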
Trains the model using multiple categories (classes) of images.
This essentially calls Train() for each category you provide, and returns the resulting category ids mapped to the category names.
Note: The maximum number of images for each category is 200.
Parameters: input_data (dict) – A mapping of each category name (string) to a list of numpy.array images to use for training. Each numpy.array represents an image as a 1-D tensor; you can convert an image to this format by passing it to numpy.asarray() and flattening the result.
Returns: A dict that maps each category id (int) to each category name (string). You should write these results to a text file and use it as your labels file when performing inference with the generated model.
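Building the input_data mapping and writing the returned id-to-name dict as a labels file might look like this. The TrainAll() call is commented out and its return value simulated, since it needs the Edge TPU runtime:

```python
import numpy as np

def fake_images(n, pixels=96 * 96 * 3):
    # Placeholder 1-D image tensors; real code would decode and flatten files.
    return [np.zeros(pixels, dtype=np.uint8) for _ in range(n)]

input_data = {"cat": fake_images(3), "dog": fake_images(3)}  # max 200 per category

# label_map = engine.TrainAll(input_data)  # hypothetical engine instance
label_map = {0: "cat", 1: "dog"}           # simulated return value

# Write the labels file to use at inference time.
with open("labels.txt", "w") as f:
    for label_id in sorted(label_map):
        f.write(f"{label_id} {label_map[label_id]}\n")
```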