Basic concepts of the Lampix JavaScript API

The Lampix JavaScript API enables any HTML5 page to run on Lampix and react to movement, objects and hands.

The following interactions are possible around any recognised object:

  1. React when an object is placed at a given position (bounding box) under Lampix (registerSimpleClassifier)
  2. React when an object is placed anywhere under Lampix (registerPositionClassifier)
  3. React when a user holds a finger at a given position, such as for a button press (registerSimpleClassifier)
  4. React when a paper document is placed under Lampix (registerPositionClassifier)
  5. React when movement happens within a given bounding box (registerMovement)
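
The registration calls above might be sketched as follows. The exact signatures and payload shapes are illustrative assumptions, not the documented API, and the `lampix` object here is a stand-in mock so the sketch runs outside the Lampix environment:

```javascript
// Stand-in mock of the Lampix API object; on the device, the real
// environment provides these functions. Signatures are assumptions
// made for illustration.
const events = [];
const lampix = {
  registerMovement(rects, cb) { this._onMove = cb; },
  registerSimpleClassifier(rects, classifier, cb) { this._onSimple = cb; },
  registerPositionClassifier(classifier, cb) { this._onPos = cb; },
};

// 1. Movement inside a given bounding box.
lampix.registerMovement(
  [{ posX: 100, posY: 100, width: 200, height: 150 }],
  (rectIndex, outlines) => events.push(`movement:${rectIndex}`)
);

// 2. A finger held inside a bounding box (button press).
//    Class '0' conventionally means "nothing detected".
lampix.registerSimpleClassifier(
  [{ posX: 400, posY: 100, width: 120, height: 80 }],
  'finger',
  (rectIndex, classTag) => {
    if (classTag !== '0') events.push(`press:${rectIndex}`);
  }
);

// 3. An object of a given class placed anywhere under Lampix.
lampix.registerPositionClassifier('paper', (objects) =>
  objects.forEach((o) => events.push(`paper:${o.posX},${o.posY}`))
);

// Drive the mock to show the callback flow:
lampix._onMove(0, []);
lampix._onSimple(0, 'finger');
lampix._onPos([{ posX: 50, posY: 60 }]);
console.log(events); // → ['movement:0', 'press:0', 'paper:50,60']
```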

Objects can be recognized using different classifiers. Out of the box, Lampix contains a training system enabling users to train their own Deep-Neural-Network-based classifiers to recognize their own objects. Also out of the box, there is an integration with the Google Vision API, allowing the recognition of a large set of general-purpose objects. Should this be insufficient, custom classifiers can be plugged into the Lampix core API; separate documentation describes how to build your own classifier.

On Lampix, HTML5 content (which we subsequently call the application) runs in a sandboxed environment based on the WebKit browser engine.

Once an application is loaded, there are two types of interactions between the Lampix system and the application, depending on the direction of the interaction:

  1. Application calls Lampix environment
  • Subscribe to movement events on a given set of areas, defined by their bounding boxes
  • Subscribe to classification once movement happens in a given area, defined by its bounding box. This includes all types of classifiers described above.
  • Subscribe to classification anywhere under Lampix, for a given class of objects. This excludes fingers and hand gestures and is more computationally intensive. It includes paper documents, for which the callback also contains any text recognized on the paper, along with the positions of the text components.
  • Exclude certain areas from movement detection and classification. This is useful if the Application knows it is going to project content onto these areas and does not want movement or detection to be triggered unnecessarily there.
  2. Lampix environment calls Application
  • Movement has happened on one or more areas subscribed to. Lampix will send the exact contours of the physical moving parts to the application.
  • Classifier on a given position has detected a known object. Lampix will return the class of the detected object, and will return class 0 once the object is no longer detected. The detection of fingers for button presses works using the same mechanism.
  • Classifier on the whole surface has detected a known object. Lampix will return the class of the detected object, as well as its bounding box and its contour. Bounding box and contour are useful because in many cases the Application might refrain from overlaying content over the detected object in order not to influence further detections. Other Applications might use the contours to project directly onto the recognized objects and might turn off classification temporarily on these areas.
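
A sketch of how an application might consume the whole-surface classification callback. The payload shape (`classTag`, `boundingBox`, `outline`) is an assumption based on the description above, not the documented wire format:

```javascript
// Illustrative handler for whole-surface classification callbacks.
// For each detected object we record its bounding box as a region to
// keep clear of projected content (so further detections are not
// disturbed) and its contour for applications that project onto the
// object itself. Field names are assumptions for illustration.
function onObjectsClassified(objects) {
  const overlays = [];
  for (const obj of objects) {
    if (obj.classTag === '0') continue; // object no longer detected
    overlays.push({
      classTag: obj.classTag,
      keepClearRegion: obj.boundingBox, // avoid overlaying content here
      projectionOutline: obj.outline,   // or project directly onto it
    });
  }
  return overlays;
}

const result = onObjectsClassified([
  { classTag: 'cup',
    boundingBox: { posX: 10, posY: 20, width: 80, height: 80 },
    outline: [] },
  { classTag: '0' }, // a previously detected object disappeared
]);
console.log(result.length); // → 1
```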

All these interactions can happen at any time during the lifetime of an Application. It is advisable for the Application to unsubscribe from any subscriptions it no longer needs at a given point in time.
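
A lifecycle sketch of subscribing on entering a screen and unsubscribing on leaving it. The mock below, and in particular the `clear()` call standing in for "unsubscribe from everything", are hypothetical; the real API's unsubscription mechanism may differ:

```javascript
// Hypothetical mock tracking active subscriptions, so the lifecycle
// pattern can be demonstrated outside the Lampix environment.
const subscriptions = [];
const lampix = {
  registerMovement(rects, cb) { subscriptions.push({ rects, cb }); },
  clear() { subscriptions.length = 0; }, // hypothetical "unsubscribe all"
};

function enterMenuScreen() {
  // Only subscribe to what this screen actually needs.
  lampix.registerMovement(
    [{ posX: 0, posY: 0, width: 300, height: 300 }],
    () => {}
  );
}

function leaveMenuScreen() {
  // Drop subscriptions the next screen does not need.
  lampix.clear();
}

enterMenuScreen();
const activeWhileOnScreen = subscriptions.length;
console.log(activeWhileOnScreen); // → 1
leaveMenuScreen();
console.log(subscriptions.length); // → 0
```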

To enable experimenting with these interactions and creating content for Lampix easily, the Lampix Simulator is provided. The Simulator provides the same environment for running Lampix web applications as the device itself, while replacing the actual computer vision events (object and finger recognition). The developer can simulate these events on a PC via simple mouse clicks and see how the application reacts. Most Lampix applications have been developed using the Simulator first, in order to separate the complexity of the content from the complexity of the computer vision.