Model name: GO Device Installation Monitoring model
Goal: Installing GO devices is a universal step for all Geotab customers. As part of the installation process, customers must submit a picture of the installed device. The models detect and verify that the GO device has been correctly installed to reduce potential problems from improper installation.
Base model: Pre-trained YOLO11 computer vision models; the nano and x-large variants differ in model size and complexity.
Model type: Object detection and image classification
Model version: 2
Developed by: Data observability
Primary intended uses:
Out-of-scope uses: This model is specifically designed for capturing installations of GO devices. It does not cover the installation of external hardware, OEM devices, third-party solutions, or the following accessory integrations:
Targeted users/User groups: The model supports the MyAdmin and MyInstall teams by reducing reliance on manual reviews, leading to quicker customer feedback and improving the efficiency and scalability of support operations.
Factors impacting model performance: Model performance is significantly impacted by noisy data. Customer submissions frequently include several images of GO devices, but only images of installed devices are relevant: images of uninstalled devices are typically classified as improper installations.
This section outlines the key aspects of the images used to develop and evaluate the model. We first describe the training and validation images, then detail the data pipeline and preprocessing steps used to prepare the images for modeling. Lastly, we discuss the privacy considerations and protections implemented to ensure responsible handling of sensitive data.
The machine learning pipeline designed to detect bad installations of GO devices has two main components, orchestrated with Airflow as a directed acyclic graph (a directed sequence of tasks with no loops or cycles):
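The two-stage flow can be sketched as a pair of tasks run in sequence. This is a minimal illustration only: the function names, detection output format, and confidence threshold are assumptions, and the real pipeline is orchestrated by Airflow rather than plain function calls.

```python
def detect_device(image):
    """Stage 1 (hypothetical): run the object detection model and return
    any GO-device bounding boxes found in the submitted image."""
    # placeholder for a YOLO11 inference call
    return [{"box": (40, 60, 200, 220), "confidence": 0.91}]

def classify_installation(image, detections, threshold=0.5):
    """Stage 2 (hypothetical): if a device was detected with sufficient
    confidence, classify the image as a proper or improper installation."""
    if not any(d["confidence"] >= threshold for d in detections):
        # no installed device visible: treated as an improper installation
        return "improper"
    # placeholder for the image classification model
    return "proper"

def run_pipeline(image):
    """Directed sequence of tasks: detection output feeds classification."""
    detections = detect_device(image)
    return classify_installation(image, detections)
```

In the Airflow DAG, each stage would be its own task, with the classification task downstream of the detection task.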

In this section, we highlight ethical challenges encountered during model development, including bias and fairness considerations, and present our solutions to these challenges. We also state the model's assumptions and constraints, including any limitations in the data or the model's scope that could affect its performance. Making the model's strengths and limitations clear to stakeholders is crucial to using the model responsibly and interpreting its results.

The object detection model was evaluated on unseen images, and the following metrics were used to assess model performance:
Intersection over Union (IoU): IoU is a measure that quantifies the overlap between a predicted bounding box and a ground truth bounding box. It plays a fundamental role in evaluating the accuracy of object localization.
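For two axis-aligned boxes, IoU is the area of their intersection divided by the area of their union. A minimal sketch, assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

IoU is 1.0 for a perfect match, 0.0 for disjoint boxes; a fixed threshold (commonly 0.5) decides whether a predicted box counts as a correct localization.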
The image classification model was evaluated on unseen images, and the performance metrics used to assess the model are accuracy, precision, recall, and F1 score:
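These four metrics all derive from the counts of true/false positives and negatives. A minimal sketch, assuming binary labels with `1` as the positive class (e.g. "improper installation"):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

Precision penalizes false alarms (flagging a good installation as bad), while recall penalizes misses (passing a bad installation); F1 balances the two.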