MovementBasedSegmenter
It's MovementBasedSegmenter's turn to locate and classify fruits.
Let's create an initialization function for the MBS watcher.
const initializeMBS = () => {};

Once again, retrieve the elements we'll be working with, along with the bounding rect of the element defining the watcher's contour.
const initializeMBS = () => {
  const mbsElement = document.getElementsByClassName('mbs')[0];
  const mbsBounds = mbsElement.getBoundingClientRect();
};

Just like NeuralNetworkClassifier (and all watchers, for that matter), MBS also reports its classifications through the onClassification callback.
However, MovementBasedSegmenter has a secondary callback, triggered prior to onClassification, named onLocation. This is because MBS first determines an object's location and only then classifies it. onLocation is generally used to create a loading animation for an object that has been located but not yet classified.
For the sake of simplicity, we will focus on onClassification in this guide.
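Still, to make the two-step flow concrete, here is a minimal sketch of the kind of loading treatment onLocation enables. It only logs and toggles a CSS class on the watcher's element, since the exact fields available on each located object are not covered here; the 'loading' class and whatever animation it drives are hypothetical.

```javascript
// Sketch only: mark the watcher's area as "busy" while located objects await classification.
const onLocation = (locatedObjects) => {
  console.log(`${locatedObjects.length} object(s) located, classification pending`);
  // The 'loading' class is illustrative; you would remove it again inside onClassification.
  document.getElementsByClassName('mbs')[0].classList.add('loading');
};
```

With that noted, here is initializeMBS with both callbacks and the mbsFruitsWatcher definition in place: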
const initializeMBS = () => {
  const mbsElement = document.getElementsByClassName('mbs')[0];
  const mbsBounds = mbsElement.getBoundingClientRect();

  const onClassification = (classifiedObjects) => classifiedObjects.forEach((classifiedObject) => {
    handleObjectClassified(classifiedObject, '#FFFFFF');
  });

  const onLocation = (locatedObjects) => {
    // This step fires before onClassification!
    console.log(locatedObjects);
  };

  const mbsFruitsWatcher = {
    name: 'MovementBasedSegmenter',
    shape: lampix.helpers.rectangle(
      mbsBounds.left,
      mbsBounds.top,
      mbsBounds.width,
      mbsBounds.height
    ),
    params: {
      neural_network_name: 'fruits'
    },
    onLocation,
    onClassification
  };
};

Notice that onClassification reuses the utility mentioned above to add a DOM element for each classified object (remember to import it):
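The utility's location is not shown on this page, so the import path below is only a placeholder; point it at wherever handleObjectClassified was defined earlier in the guide.

```javascript
// Hypothetical path: adjust to wherever handleObjectClassified actually lives.
import { handleObjectClassified } from './handleObjectClassified';
```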
All that's left is telling Lampix about this watcher too, by adding the following to the end of the initializeMBS function.
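The registration call itself is not shown on this page; assuming it is the same lampix.watchers.add() call used for the previous watcher, the line to add would be:

```javascript
// Register the watcher with Lampix, mirroring how the previous watcher was added.
lampix.watchers.add(mbsFruitsWatcher);
```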
Now, initializeMBS should look like this:
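Here is a sketch of the completed function, under the same assumption that lampix.watchers.add() is the registration call:

```javascript
const initializeMBS = () => {
  const mbsElement = document.getElementsByClassName('mbs')[0];
  const mbsBounds = mbsElement.getBoundingClientRect();

  // Fired once a located object has been classified.
  const onClassification = (classifiedObjects) => classifiedObjects.forEach((classifiedObject) => {
    handleObjectClassified(classifiedObject, '#FFFFFF');
  });

  // Fired as soon as an object is located, before classification.
  const onLocation = (locatedObjects) => {
    // This step fires before onClassification!
    console.log(locatedObjects);
  };

  const mbsFruitsWatcher = {
    name: 'MovementBasedSegmenter',
    shape: lampix.helpers.rectangle(
      mbsBounds.left,
      mbsBounds.top,
      mbsBounds.width,
      mbsBounds.height
    ),
    params: {
      neural_network_name: 'fruits'
    },
    onLocation,
    onClassification
  };

  // Assumed registration call; see the note above.
  lampix.watchers.add(mbsFruitsWatcher);
};
```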