Introduction - basic concepts of the Lampix API

The Lampix software stack (LSS) is built with Python 2.7 and runs on Linux. Currently it uses the standard distribution for the Raspberry Pi 3, the Debian-based Raspbian system. This combination gives access to a very wide range of libraries and user communities, which is one of the reasons we chose it.

We have been working on an Android-based Lampix in recent weeks, so this update should arrive soon.

Parts of this documentation describe features which are still in the making and have not yet been released to the public. These parts are usually marked with a (U); we do our best to keep these marks up to date.

Lampix consists of a complex system of cameras, a projector, lighting fixtures and, of course, the Raspberry Pi. The LSS takes care of the interaction between these components.

The aim of the LSS is to take away the unpleasant parts of developing apps for Lampix, leaving room for creativity.

From a birds-eye perspective, LSS takes care of the following:

  • Generating and updating the Lampix world object model (LWOM), which contains a representation of the physical things Lampix is currently seeing (U)
  • Managing which lampix app is currently active in which area of the desk
  • Handling basic access to the lampix video stream
  • Generating higher level events out of the video stream, such as:
    • Movement on certain areas
    • Shape and object detection (U)
    • AR-Marker detection
  • Coordinate transformations (object to projector, camera to projector, projector to camera, etc.)
  • Buttons and menus
  • Web based interactions through the lampix web interface and RESTful services
  • Recognising the unique ID Lampix paper (see Unique ID lampix paper) (U)
  • Providing all base classes required for creating new lampix apps
  • Controlling the light. In the end it’s just a smarter lamp.
  • Developer Web interface

The following is a list of Lampix apps which come preloaded out of the box. To bootstrap your development, pick the example app below which comes closest to your app idea, download its source code, and start modifying it. To get a good overview of the LSS, keep on reading below.

Each application below is listed together with its most important used services:
Control the lights
  • Light control
  • Buttons
Freeze a document on the desk
  • Paper document detection and document menus
  • Coordinate transformations
Detect a marker and project a point on it
  • AR Marker Detection
Upload a document to the user's Dropbox
  • Paper document detection and document menus
Copy and paste documents to desktop (U)
  • LWOM Rectangles
  • Paper document detection and document menus
  • RESTful lampix web service
Show a tea timer when a cup is detected (U)
  • Buttons
  • Custom shape detection
Find text in a document (U)
  • OCR service (cloud based)
  • Paper document detection and document menus
Share document from desk (U)
  • Paper document detection and document menus
  • Unique document ID
Share from smartphone to desk (U)
  • RESTful lampix web service
Notifications (U)
  • Custom widgets

Getting started as a developer

Once you receive your lampix, plug it in and boot it up. If the projector does not turn on automatically, use its remote control to turn it on.

Important: In order to properly shut down Lampix, please issue a sudo poweroff in the terminal, AND turn off the projector using its remote control. The second step is important because otherwise the Raspberry Pi does not shut down correctly.

Next, connect your Lampix to your Wi-Fi network using the standard Raspbian procedure. After this, find the IP address of your Lampix by running ifconfig in the terminal and reading the inet addr listed under wlan0.
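
If you prefer to do this from Python, a small helper can pull the address out of the ifconfig output. This is only a convenience sketch; the exact ifconfig output format varies between releases, and the regex below targets the older net-tools format used on Raspbian at the time:

```python
from __future__ import print_function
import re


def wlan0_inet_addr(ifconfig_output):
    """Extract the wlan0 'inet addr' from classic ifconfig output."""
    match = re.search(
        r"wlan0.*?inet addr:(\d+\.\d+\.\d+\.\d+)",
        ifconfig_output,
        re.DOTALL,
    )
    return match.group(1) if match else None


# Example with the classic net-tools output format:
sample = """wlan0     Link encap:Ethernet  HWaddr b8:27:eb:00:00:01
          inet addr:192.168.1.42  Bcast:192.168.1.255  Mask:255.255.255.0"""
print(wlan0_inet_addr(sample))  # 192.168.1.42
```

In practice you would feed the function the output of subprocess.check_output(["ifconfig"]) instead of a sample string.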

To develop for Lampix we recommend using a Python IDE, such as PyCharm. Other useful tools on Windows operating systems include PuTTY, to get a remote terminal on your Lampix, and WinSCP, to upload and download files.

If running remotely as described, it is useful to pull the Lampix code to your PC and work on it there, testing on the Lampix itself.

For easy development and testing with PyCharm, we recommend setting up an automated upload to your Lampix and a remote debugger connection. Because of GPIO access, Lampix currently needs to run under sudo; to enable remote starting of the application from within PyCharm, set the remote Python interpreter in PyCharm to the file /home/pi/pythonsudo.sh.

After setting up all the above you should be able to quickly get into development, modifying code on your PC, and running it on your Lampix just by pressing Run.

Whenever Lampix is running, a Developer Web Interface is available on port 8888. A good place to start is /web/const.html. This page shows a list of all the constants used in Lampix algorithms, along with three useful debugging features:

  • Log snapshot: Records and shows the current log file. Useful if you want to know what just happened, or whether an error occurred.
  • Snapshot: Shows an image of what Lampix is currently seeing.
  • Apps: Shows a list of all running and available apps.
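
For example, the constants page of a Lampix at a known IP can be addressed as follows. Only the port (8888) and the /web/const.html path come from this documentation; the helper function itself is just a convenience sketch:

```python
def dev_interface_url(lampix_ip, page="/web/const.html", port=8888):
    """Build a URL for a page of the Developer Web Interface."""
    return "http://%s:%d%s" % (lampix_ip, port, page)


# Open this in a browser, or fetch it with urllib:
print(dev_interface_url("192.168.1.42"))
# http://192.168.1.42:8888/web/const.html
```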

How you can extend lampix

1. Write apps which can use objects from the LWOM

A lampix application can do one or more of the following tasks:

  1. Provide a recognizer which reacts to area movement and generates recognized objects
  2. Display stuff
  3. Provide new web api calls and react to them
  4. Call web apis from other servers
  5. Provide business logic; this is the glue between recognized objects, display, web APIs, etc.

The minimum needed for a running app is to extend :any:`LampixApp` and place your class in the applications directory. Obviously we are working on more advanced deployment methods (one of the reasons we are working on the Android version of Lampix).
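
As an illustration, a minimal app might look like the sketch below. The base class defined here is only a stand-in so the example is self-contained; the real :any:`LampixApp` ships with the LSS, and the lifecycle method name used here is an assumption:

```python
# Stand-in for the real LampixApp base class, so this sketch is
# self-contained; the actual class ships with the LSS and its
# interface may differ.
class LampixApp(object):
    def on_start(self):  # hypothetical lifecycle hook
        pass


class HelloLampix(LampixApp):
    """A do-nothing app: placing a class like this in the
    applications directory is enough for it to show up in the
    Apps list of the Developer Web Interface."""

    def on_start(self):
        print("HelloLampix started")


app = HelloLampix()
app.on_start()
```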

Once you do this, the application you just created pops up in the apps section of the Developer Web Interface.

If your Lampix seems to be reacting slowly, try starting and stopping different apps to get a basic intuition of each app's runtime behaviour.

Deriving from LampixApp does not yet allow you to project things on the table or use the Lampix camera system. For this you need to extend :any:`SurfaceWatcher`.

2. Write code which extends the LWOM

To teach Lampix about new types of physical objects, or to project things on the desk, the base class to derive from is :any:`SurfaceWatcher`. Deriving from it, you inherit the ability to react to movement on the desktop. You can watch for movement on only a certain area of the desktop by setting a corresponding mask.
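
A sketch of what such a subclass could look like, again with a self-contained stand-in base class; the real mask and callback API of :any:`SurfaceWatcher` may differ:

```python
# Stand-in for the real SurfaceWatcher base class, for illustration
# only; the actual hook names and mask format in the LSS may differ.
class SurfaceWatcher(object):
    def __init__(self, mask=None):
        # Hypothetical: region of interest in video coordinates.
        self.mask = mask

    def on_movement(self, region):  # hypothetical callback
        pass


class CupWatcher(SurfaceWatcher):
    """Reacts only to movement inside its masked area."""

    def on_movement(self, region):
        print("movement detected in %s" % (region,))


# (x, y, width, height) is an assumed mask format.
watcher = CupWatcher(mask=(0, 0, 100, 100))
watcher.on_movement("top-left corner")
```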

Examples of classes derived from SurfaceWatcher:
  • :any:`Desk` takes care of recognizing and creating :any:`Paper`, the representation of any physical paper document.

The current standard class hierarchy derived from SurfaceWatcher is the following.

digraph foo {
    {
        "Desk" [shape="rect", style=filled, fontsize=8]
        "SurfaceWatcher" [shape="rect", style=filled, fontsize=8]
        "Paper" [shape="rect", style=filled, fontsize=8]
        "Button" [shape="rect", style=filled, fontsize=8]
        "NeuralButton" [shape="rect", style=filled, fontsize=8]
        "DelegateButton" [shape="rect", style=filled, fontsize=8]
    }
    "Desk" -> "SurfaceWatcher"
    "Paper" -> "SurfaceWatcher"
    "Button" -> "SurfaceWatcher"
    "NeuralButton" -> "Button"
    "DelegateButton" -> "NeuralButton"
}

You can study the class reference to get deeper insight into what each class does, and we will also keep updating this section of the docs to lower the entry barrier.

The lampix application lifecycle

This part of the documentation is currently under development.

How do lampix apps interact with their environment?

In terms of input, each Lampix application can “see” using the cameras of the system. Each application is registered for a certain area, or for the whole surface of the desktop, and can have exclusive or shared access to that area. Lampix abstracts away some of the details, so that the application only gets notified when movement actually happens in its area.

Also, Lampix has a built-in microphone, so each application can be triggered by voice (U).

In terms of output, each application can project graphics on the desktop, send events to other apps, and make web calls to external services.

There are a few key events which can trigger the start of any lampix application:

  • Visual triggers

    • Motion triggered
    • Area motion triggered
    • Shape triggered (U)
  • Other triggers

    • Triggered by another application (events) (U)
    • Triggered by the main state machine (U)
    • Triggered by a web method of lampix (U)
    • Triggered by voice (U)

While an application is active, it receives events related to:

  • Movement on the area it is registered for
  • The web api calls it has registered (U)
  • Button presses for the buttons it has registered
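
Conceptually, this is an event-registration pattern. The sketch below illustrates the idea with a generic event bus; none of these names are the actual LSS API:

```python
# Illustrative event-registration pattern; this is NOT the LSS API,
# just a generic sketch of how an app subscribes to events.
class EventBus(object):
    def __init__(self):
        self._handlers = {}

    def register(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload):
        for handler in self._handlers.get(event_type, []):
            handler(payload)


bus = EventBus()
seen = []
bus.register("movement", seen.append)       # movement in the app's area
bus.register("button_press", seen.append)   # a button the app registered
bus.emit("movement", {"area": "top-left"})
bus.emit("button_press", {"button": "tea_timer"})
# seen now holds both event payloads
```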

The lampix coordinate systems

All lampix applications gain access to 3 coordinate systems and the required coordinate transformation APIs:

  • Projection coordinates: these are the virtual screen coordinates which are used for positioning displayed graphical elements
  • Video coordinates: these are the coordinates of the video stream, the “sight” of Lampix; image processing happens in these coordinates. For example, when a rectangle detection takes place, its results are initially delivered in this coordinate system
  • Document (object) coordinates: After a paper document (or any other element) has been detected, there is also a coordinate system which originates on the corner of this element, such that it is easy to position graphics directly on this element (such as when finding text on a physical document)

To make things easier for developers, the :any:`CoordinateTransformer` class provides methods for transforming between these coordinate systems, and the :any:`PerspectiveCropper` class provides methods for warping and cropping images between coordinate systems.
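
The class reference documents the exact methods. As a plain illustration of the underlying math, a planar mapping between two of these coordinate systems (e.g. video to projection) is typically a 3x3 homography applied in homogeneous coordinates; the function below is a generic sketch, not the LSS implementation:

```python
from __future__ import division


def apply_homography(H, x, y):
    """Map the point (x, y) through a 3x3 homography H
    (given as nested lists), dividing by the homogeneous term."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w,
    )


# With the identity matrix the point is unchanged:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(identity, 120, 80))  # (120.0, 80.0)
```

A real camera-to-projector matrix comes from calibration; affine cases (pure translation, rotation, scale) are just homographies whose bottom row is (0, 0, 1).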