21/10/12

GWT Augmented Reality - HOWTO - step 1

Welcome to the second step of using NyARToolkit in a GWT project.

NOTE:
At the end of the previous step we used the super-source tag; you may have noticed that when you use super-source you end up with an Eclipse project full of errors: the project compiles with GWT and works without problems, but Eclipse flags errors in all the super-source'd classes (because their package names do not match their location).
To find the best solution to this annoying situation, last week I asked on the google-web-toolkit Google group. The full solution is more than we can cover in this post, but you can find the thread here; for our demo project, with no server-side component and no tests, the first response in the thread is enough, but a bigger application needs some more care.

Having NyARToolkit compiled in our GWT project lets us proceed here with setting up all the parts needed to track our first marker.

Following for a while the sample code provided by the toolkit in "SimpleLite.java" (package jp.nyatla.nyartoolkit.jogl.sample.sketch), it is straightforward to see that the steps required to detect a marker with NyARToolkit are:

1. create a markersystem-config and a markersystem:
                NyARMarkerSystemConfig config = new NyARMarkerSystemConfig(640,480);
                NyARGlMarkerSystem nyar=new NyARGlMarkerSystem(config);

2. load a marker description  into the markersystem
                int marker_id=nyar.addARMarker(ARCODE_FILE,16,25,80);

3. create a sensor
                NyARSensor i_sensor = new NyARSensor(new NyARIntSize(640,480));

4. populate the sensor, i.e. load an image into the sensor, and update the markersystem
                nyar.update(i_sensor);

5. check if the marker is found in the image
                 boolean found = nyar.isExistMarker(marker_id);

Steps 1, 3 and 5 are ready to be used in GWT as they are, whereas steps 2 and 4 need some work.
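
For reference, before any GWT adaptation, the five steps fit together roughly like this (a sketch based on the SimpleLite sample, with exception handling omitted; ARCODE_FILE and the way the image reaches the sensor are exactly the parts we adapt below):

        NyARMarkerSystemConfig config = new NyARMarkerSystemConfig(640, 480);     // step 1
        NyARGlMarkerSystem nyar = new NyARGlMarkerSystem(config);
        int marker_id = nyar.addARMarker(ARCODE_FILE, 16, 25, 80);                // step 2
        NyARSensor i_sensor = new NyARSensor(new NyARIntSize(640, 480));          // step 3
        // ... load an image into i_sensor (step 4, replaced for GWT below) ...
        nyar.update(i_sensor);                                                    // step 4
        if (nyar.isExistMarker(marker_id)) {                                      // step 5
            // the marker is visible in the current image
        }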

The method addARMarker is used to load a marker from a file, i.e. it opens a stream, reads from it, then creates a marker and stores it in the markersystem. In our web app we cannot open a file, but we can very easily leverage ClientBundle to provide the resource to the library, so the code reads:

                    ...
                    int i_patt_resolution = 16;
                    int i_patt_edge_percentage = 25;
                    double i_marker_size = 80;
                    NyARCode arCode=new NyARCode(i_patt_resolution,i_patt_resolution);
                    loadFromARToolKitFormString(Markers.INSTANCE.patt_hiro().getText(),arCode);
                    int marker_id = nyar.addARMarker(arCode, i_patt_edge_percentage, i_marker_size);
                    ...

where loadFromARToolKitFormString is an almost verbatim copy of the NyARCodeFileReader.loadFromARToolKitFormFile method:
    

public static void
   loadFromARToolKitFormString(String i_string,NyARCode o_code)
                throws NyARException {
        int width=o_code.getWidth();
        int height=o_code.getHeight();
        NyARRgbRaster tmp_raster=
                new NyARRgbRaster(width,height, NyARBufferType.INT1D_X8R8G8B8_32);
        try {
            String[] a = i_string.split(" ");
            int p=0;

            int[] buf=(int[])tmp_raster.getBuffer();
            for (int h = 0; h < 4; h++){
                p=readBlock(a,p,width,height,buf);
                o_code.getColorData(h).setRaster(tmp_raster);
                o_code.getBlackWhiteData(h).setRaster(tmp_raster);
            }
        } catch (Exception e) {
            throw new NyARException(e);
        }
        tmp_raster=null;
        return;
    }
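
The readBlock helper called in the loop is not shown above: in NyARCodeFileReader it is the part that packs one pattern orientation into the raster buffer. A possible string-based adaptation, assuming the marker text has been split into clean numeric tokens, might look like this (a sketch, not the library code):

// Sketch only: a string-based equivalent of the readBlock helper used above.
// In the ARToolKit .patt format every orientation is stored as three
// width*height blocks (one per colour plane); each plane is shifted into the
// packed pixel value, and the alpha byte is forced at the end.
private static int readBlock(String[] a, int p, int width, int height, int[] buf) {
    int pixels = width * height;
    for (int plane = 0; plane < 3; plane++) {
        for (int i = 0; i < pixels; i++) {
            buf[i] = (buf[i] << 8) | (0xff & Integer.parseInt(a[p++].trim()));
        }
    }
    for (int i = 0; i < pixels; i++) {
        buf[i] |= 0xff000000; // force the alpha byte, as the original reader does
    }
    return p; // next unread position in the token array
}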

and the Markers ClientBundle is simply:

public interface Markers extends ClientBundle {
    public static final Markers INSTANCE = GWT.create(Markers.class);

    @Source("patt.hiro")
    public TextResource patt_hiro();
}


Having the marker loaded, we can now focus on the last part: populating the sensor.

For this task NyARToolkit fortunately provides a general-purpose raster class, so we can write:

            NyARRgbRaster input =
                 new NyARRgbRaster(640, 480, NyARBufferType.BYTE1D_X8R8G8B8_32,false);
             input.wrapBuffer(bytes);
             i_sensor.update(input);

where bytes is the byte array:
                    byte bytes[] = new byte[640*480*4];

that has to be populated, for example, by copying data from a canvas:

                    ImageData capt = c.getContext2d().getImageData(0, 0, 640, 480);
                    try {
                        JsArrayInteger jsa = copyImageDataToJsIntegerArray(capt);
                        int len = jsa.length();
                        for (int j = 0; j < len / 4; j++) {
                            bytes[4*j]   = (byte) jsa.get(4*j+3); // alpha
                            bytes[4*j+1] = (byte) jsa.get(4*j);   // red
                            bytes[4*j+2] = (byte) jsa.get(4*j+1); // green
                            bytes[4*j+3] = (byte) jsa.get(4*j+2); // blue
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
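
copyImageDataToJsIntegerArray is just a small helper that hands the pixel array of the ImageData over to Java; its implementation is not shown here, but a minimal JSNI sketch (one of several possible ways to write it) could be:

    // Minimal JSNI sketch: expose the ImageData pixel array
    // (0-255 values, RGBA order) to Java as a JsArrayInteger.
    private static native JsArrayInteger copyImageDataToJsIntegerArray(ImageData data) /*-{
        return data.data;
    }-*/;

Alternatively, ImageData.getData() already returns a CanvasPixelArray whose get(i) method gives the same values, at the cost of one more call per component.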


The idea from here is to use a canvas as a buffer into which frames from the camera are copied and then processed as shown above. Unfortunately accessing the camera requires WebRTC and thus Elemental (ok, or JSNI, but that is off-topic here ;). Keeping the project on GWT 2.4 we can still verify that our approach works by drawing into the canvas a still image of a marker (there is one in NyARToolkit, 320x240ABGR.png) via something like:


        SafeUri uri = Images.INSTANCE.captured().getSafeUri();
        final Image img = new Image(uri);

        final Canvas c = Canvas.createIfSupported();
        c.setCoordinateSpaceWidth(320);
        c.setCoordinateSpaceHeight(240);

        c.setWidth("640px");
        c.setHeight("480px");

        RootLayoutPanel.get().add(img);

        img.addLoadHandler(new LoadHandler() {

            @Override
            public void onLoad(LoadEvent event) {
                ImageElement imgElement = ImageElement.as(img.getElement());
                c.getContext2d().drawImage(imgElement, 0, 0);
                ...
            }
        });
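
The Images bundle used here is presumably declared just like Markers, only with an ImageResource for the PNG instead of a TextResource. Inside onLoad, the elided part is just the glue between the pieces shown earlier; a minimal sketch, assuming nyar, marker_id, i_sensor, input and bytes from the snippets above are in scope and that canvas, raster and sensor are all configured for the same 640x480 size, could be:

                // sketch: after drawImage, reuse the conversion loop shown earlier
                try {
                    ImageData capt = c.getContext2d().getImageData(0, 0, 640, 480);
                    JsArrayInteger jsa = copyImageDataToJsIntegerArray(capt);
                    for (int j = 0; j < jsa.length() / 4; j++) {
                        bytes[4*j]   = (byte) jsa.get(4*j+3); // alpha
                        bytes[4*j+1] = (byte) jsa.get(4*j);   // red
                        bytes[4*j+2] = (byte) jsa.get(4*j+1); // green
                        bytes[4*j+3] = (byte) jsa.get(4*j+2); // blue
                    }
                    input.wrapBuffer(bytes);   // the NyARRgbRaster created earlier
                    i_sensor.update(input);
                    nyar.update(i_sensor);
                    GWT.log("marker found: " + nyar.isExistMarker(marker_id));
                } catch (Exception e) {
                    GWT.log("detection failed", e);
                }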



Next STEP: loading from the camera and going 3D.


Ciao,
   Alberto and Francesca.

The content of this post is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License.