Tutorials

21 Jun 2016 · In: tutorial

This is a simple projection mapping tutorial for beginners. You will use a Processing sketch to map your face (or a face from an image file) onto a mannequin face. The Processing sketch records your face through your webcam, or reads an image file from your disk; its output is then passed via Spout (Syphon) to the mapping software MAPIO, and you will use MAPIO's warping features to map your face onto a mannequin face using a projector.. or without one :) Most of the images in this tutorial are from Windows, but the workflow for Mac users is almost the same, so you can follow along without problems.

MAPIO is a simple yet powerful projection mapping application that I found very useful, especially for beginners, to get a feel for the basic principles of projection mapping.

Spout (on Windows) and Syphon (on OSX) are two useful utilities that allow applications to share frames, video or stills, in real time. Using these utilities, you can gather two or more video sources from different applications and mix, merge, and edit the result in another one.
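To make the frame-sharing idea concrete, here is a minimal sketch of what a Spout sender looks like on the Processing side. This is a hedged illustration based on the Spout for Processing library examples, not this tutorial's actual source; the sender name pmtest simply matches the one used later in the tutorial.

```processing
// Minimal Spout sender — a hedged sketch based on the
// Spout for Processing library examples, not the tutorial's own code.
import spout.*;

Spout spout;

void setup() {
  size(640, 480, P3D);          // Spout requires an OpenGL renderer
  spout = new Spout(this);
  spout.createSender("pmtest"); // the name receivers (e.g. MAPIO) will see
}

void draw() {
  background(0);
  ellipse(width/2, height/2, 200, 200); // draw anything here
  spout.sendTexture();          // share the current frame with receivers
}
```

On OSX the Syphon library plays the same role: create a `SyphonServer` in `setup()` and call its `sendScreen()` method at the end of `draw()`.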

 

Download the software

Install Video, Spout / Syphon Libraries for Processing

  1. In order to use our webcam with Processing, we need to install the correct library. To do so, go to: Sketch -> Import Library… -> Add Library… (you can do this even if you don’t have a webcam)

    and type Video in the ‘Find’ field. Click on the Video library from the Processing Foundation:
    Screen Shot 2016-05-25 at 17.30.51
    Click the ‘Install’ button and wait for the download. If the installation succeeds, you should see a green circle at the beginning of the row.
  2. Windows users: just repeat step 1, but this time installing the Spout library.
    Screen Shot 2016-05-25 at 17.31.36
  3. OSX users: just repeat step 1, but this time installing the Syphon library.
    Screen Shot 2016-05-25 at 17.46.17

Run the Processing sketch

  1. Download here (WIN / OSX) the Processing sketch for the exercise.
  2. Run the Processing sketch. Use 1, 2, 3 keys to switch between 3 modes:
    1. A selection of men’s faces
    2. A selection of women’s faces
    3. The live stream from the webcam (if there is one). Press the spacebar to freeze the video stream; this makes mapping your static face easier.
  3. From now on, the Processing sketch will stream its output via Spout (Syphon) to every Spout (Syphon) receiver. We’ll use MAPIO’s built-in Spout (Syphon) plugin to capture this video stream and map it onto our mannequin face.
  4. Open MAPIO.
  5. Windows users: Select Source -> Spout2 -> pmtest.
    OSX users: Select Source -> Syphon -> Processing syphon
    You should see the Processing sketch output in the Canvas area!
    2016-06-21 01_31_11-MAPIO 2 Lite (64 bits) [DEMO] - new
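The mode switching and freeze behaviour described in step 2 comes down to a simple `keyPressed()` handler. The following is an illustrative sketch of the idea; the variable names are my own assumptions, not the tutorial sketch's actual source:

```processing
// Illustrative mode/freeze logic — an assumption about how the
// tutorial sketch is structured, not its actual source.
int mode = 1;           // 1 = men's faces, 2 = women's faces, 3 = webcam
boolean frozen = false; // spacebar toggles this in webcam mode

void keyPressed() {
  if (key == '1') mode = 1;
  if (key == '2') mode = 2;
  if (key == '3') mode = 3;
  if (key == ' ') frozen = !frozen;
}

void draw() {
  if (mode == 3 && !frozen) {
    // grab a fresh webcam frame here (processing.video Capture)
  }
  // draw the selected face image or the (possibly frozen) webcam frame,
  // then send the frame out via Spout/Syphon as usual
}
```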

Let’s map your face!

  1. If you have a projector, connect it to your PC/Mac now: it will be driven by the second video output of your PC/Mac.
    Windows users: in Windows Settings, set the display mode to Extend these displays.
    2016-06-21 18_39_31-Settings
    OSX users: just open System Preferences -> Displays -> Arrangement, and make sure the Mirror Displays option is not checked.
    Screen Shot 2016-06-22 at 12.21.16
  2. Select INPUT in the Map Mode Tab (1), make sure SLICE Edit Mode is selected, and select the Transform tool from the Tools Tab (2). Now resize and move the selection around your favourite face (3).
    2016-06-21 01_35_34-MAPIO 2 Lite (64 bits) [DEMO] - new
  3. In this way we are telling MAPIO that we are only interested in that area of the input signal.
  4. If you have a projector, continue to the next step. If you don’t, jump to the end of the tutorial to set up MAPIO correctly, then return to step 6 of this tutorial!
  5. Switch to the OUTPUT Map Mode (1) and choose the Display option in the Destination menu (2). A separate window will appear (3): this is the content we will project onto our mannequin. Drag the Display window (3) to the second screen and double-click in it to switch to full-screen mode. At this point you should see the MAPIO interface on the first screen, and a full-screen face projected from your projector.
    2016-06-21 18_34_27-MAPIO 2 Lite (64 bits) [DEMO] - new
  6. Resize the rectangle in the CANVAS tab and translate it in the canvas area: try to match the mannequin face area as closely as possible (focus on the eyes and the mouth).
    pm20
  7. Now the fun part. As you noticed, there are parts of the projected image that have to be reshaped to match the mannequin. MAPIO lets you warp the projection in several ways. Here you can find a comprehensive video covering all the MAPIO warping features. In this tutorial, we’ll focus on the Elast Mode tool. To start, select the Warp tool (1), make sure at least Medium subdivisions are selected in your slice settings (2), and choose Elast Mode – Elast Rect in the toolbar (3). You should see a red rectangle on your face. Now we need to define the Elast Rect area: every vertex we move will affect all the vertices inside this area. Let’s start with the eyes: as you can see from the projection, we resized our projected face to fill the whole mannequin face, but now the glasses are too wide. We need to shrink the area between the projected eyes while maintaining (as far as possible) the position of the ears. To move the left side of the glasses to the right without creating strong distortion around the nearby vertices, we can influence the whole left side of the projected image: click and drag with the left mouse button from the upper left (4) to the bottom right (5) of the left side of the image.
    2016-06-21 22_19_32-MAPIO 2 Lite (64 bits) [DEMO] - new
  8. Inside this area, we push the glasses toward the center, starting from the left. Switch to the Line tool (1), click and drag from point (2) to point (3), then click on the line you just created and drag it to the right until the left lens is centered on the left mannequin eye. Please note that in this tutorial we focus on the eyes, the nose and the mouth, so we don’t care if the rest of the projected face gets distorted: we will mask it at the end :)
    pm22
  9. Repeat the process for the right eye, the nose, and the mouth. Remember: each time you move a different area, you have to define a different Elast Rect area.
    1. For the nose, use two vertical lines to adjust the left and right sides, while maintaining the same Elast Rect area.
    2. For the mouth, use two vertical lines to adjust the left and right sides, and two horizontal lines to adjust the top and bottom of the beard, while maintaining the same Elast Rect area.
      pm23
  10. And here is our result so far:
    IMG_9110
    We can mask off the areas outside the center; this will also cut off the strong projection distortion on the forehead, the ears and the cheeks. Let’s mask them!
  11. Click on Mask Mode (1), choose the Vector tool (2) and then click on the (+) toolbar icon (3). Left-click a few times to create a polyline around the eyes, nose and mouth (4-13), double-click to close the path (14), then click Invert in the Mask properties (15).
    2016-06-21 22_55_35-MAPIO 2 Lite (64 bits) [DEMO] - new
    We obtain this:
    IMG_9112
  12. This is only a tutorial: in a real project we would probably mask the projected image with an alpha image with smoothed borders, to better merge our projected face with the mannequin’s, and we would use plenty of other tricks to get a better result. Keeping things simple, we can still tweak the projection a bit. Since the mannequin has its own colour (pink), it is best to desaturate the projected image. Click on the Color tool and set the Saturation value to 0.
    2016-06-21 23_09_05-MAPIO 2 Lite (64 bits) [DEMO] - new
  13. If you are simulating the projector, you have probably already achieved a good result by now. If you are using a real projector, you will probably want to brighten the rest of the mannequin face a little. To do that, we can add a white background surface to the projection. Click on the square icon in the Add tab: a “Slice 2” rectangle will appear in the Project tab. Drag it under the “Slice 1” rectangle.
    2016-06-21 23_10_16-MAPIO 2 Lite (64 bits) [DEMO] - new
  14. Choose Source->Image and select a white image from your disk.
    2016-06-21 23_12_15-MAPIO 2 Lite (64 bits) [DEMO] - new
  15. Click on the Color tool (as in step 12) and adjust the Brightness until you get an (almost) uniform result between the projected face and the projected background. On the right you can see the simulated-projection result (using the Processing sketch).
    pm24

…Enjoy your result!:)

Let’s map your face (without a projector)!

Ok, so.. it turns out that your projector is broken, you lent it to your best friend a year ago, or.. you simply don’t have one, but you still want to start practicing with projection mapping tools. We can simulate the projector output by using the Spout (Syphon) framework a second time, this time as a destination.

  1. Windows users: select Destination -> Spout2 from the menu
    OSX users: select Destination -> Syphon from the menu
    2016-06-21 23_56_05-MAPIO 2 Lite (64 bits) [DEMO] - projector_00.mio [ ~_Documents_Projects_Projecti
  2. Select Destination -> Output settings, click on the 640×480 resolution link, and click Save.
    2016-06-22 00_17_50-MAPIO 2 Lite (64 bits) [DEMO] - projector_00.mio [ ~_Documents_Projects_Projecti
  3. Download here the Processing receiver sketch (WIN / OSX), then open and run it.
  4. Windows users: right-click on the Processing output window: a menu with all the Spout senders will appear. pmtest is our Processing sketch’s sender, which we are using to stream the Processing output to MAPIO. Here we need to grab the MAPIO output instead, so choose Mapio as the Spout source. Click Save.
    2016-06-22 00_33_48-SpoutPanel
  5. Use the 1, 2, 3, 4 keys to choose the background onto which you want to project your face. These background images let you simulate a real situation: since you don’t have a real projector, we simulate a projection by adding the MAPIO output on top of our fake background statues. You should see something like this:
    2016-06-22 01_09_28-ProjectionMapping_SpoutReceiver_01
    Our projected face is clearly too big for the background statue’s face, but we will adjust it soon. Now jump back to step 6 of the previous section. Remember that every time you see a projected image on the mannequin face, you should imagine that face projected onto the background statue in the Processing sketch. I know, it is not the same, but.. it is still something! Enjoy!
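For the curious, the receiver sketch from step 3 essentially does the reverse of the sender: it pulls the MAPIO output frames via Spout (Syphon) and composites them over the background statue. Here is a hedged sketch of the receiving side, based on the Spout for Processing library examples rather than the tutorial's actual source; the statue-drawing part is only hinted at in a comment.

```processing
// Minimal Spout receiver — a hedged sketch based on the
// Spout for Processing library examples, not the tutorial's own code.
import spout.*;

Spout spout;

void setup() {
  size(640, 480, P3D);
  spout = new Spout(this); // right-click the window to pick a sender
}

void draw() {
  background(0);
  // the tutorial sketch draws the background statue image here
  spout.receiveTexture();  // draw the received frame (the MAPIO output) on top
}
```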

 

25 May 2016 · In: tutorial

This is a simple projection mapping tutorial for beginners. You will use a Processing sketch to map your face (or a face from an image file) onto a mannequin face. The Processing sketch will record your face through your webcam, or read an image file from your disk. The Processing sketch’s output is then passed via Syphon (or Spout) to the VJ software Resolume Arena, and you will use Resolume’s warping features to map your face onto the mannequin.

Download the software

Separate files

Install Video Library for Processing (OSX & Windows users)

In order to use our webcam with Processing, we need to install the correct library. To do so, go to:

Sketch > Import Library... > Add Library...

Screen Shot 2016-05-25 at 17.30.18
and type Video in the ‘Find’ field. Click on the Video library from the Processing Foundation:
Screen Shot 2016-05-25 at 17.30.51
Click the ‘Install’ button and wait for the download. If the installation succeeds, you should see a green circle at the beginning of the row.

Install SPOUT (Windows users)

We need to install SPOUT service both for Resolume and for Processing.

  • To install SPOUT for Resolume, just copy the .dll files from
    C:\ProgramFiles(x86)\SpoutX\FFGL

    to

    C:\Program Files (x86)\Resolume Arena X\plugins\vfx

    Download Spout
  • For Processing, we need to install the Spout library. Just repeat the step you followed to install the Video library, but this time install the ‘Spout’ library.
    Screen Shot 2016-05-25 at 17.31.36

Install SYPHON (OSX users)

We need to install Syphon both for Resolume and for Processing.

  • To install Syphon for Resolume, […]
  • For Processing, we need to install the Syphon library. Just repeat the step you followed to install the Video library, but this time search for the ‘Syphon’ library.
    Screen Shot 2016-05-25 at 17.46.17

Run the Processing sketch

  1. Download here (WIN, MAC) the Processing sketch for the exercise. You need a connected webcam and a projector to follow the entire exercise.
  2. Run the sketch. Use 1, 2, 3 keys to switch between 3 modes:
    1. A selection of men’s faces
    2. A selection of women’s faces
    3. The live stream from the webcam. Press the spacebar to freeze the stream; this makes mapping your static face easier.
  3. From now on, the Processing sketch will stream its output via Spout (Syphon) to every Spout (Syphon) receiver. We’ll use Resolume Arena’s Spout (Syphon) receiver plugin to capture this stream and map it onto our mannequins.
  4. Open Resolume.
    1. Under the Sources tab you should see a Spout (Syphon) branch: it is the output of our Processing sketch. Drag and drop the pmtest label (it might have a different name in your case) from the Sources tab into one of the empty console boxes (2).
    2. Click on the pmtest console box you just created: you should see the Processing sketch output in the Output Monitor area (3)!
      dragDrop

Let’s map your face!

  1. Open the Advanced Output menu from Output -> Advanced (1).
  2. Make sure the Screen label (2) and the Output Transformation tab (3) are selected. In the Device menu (4), choose your projector’s display. After that, you should see the Output Transformation tab content projected by your projector.
    2016-06-08 01_06_34-Resolume Arena - Example (1280 x 720)
  3. Since we want to project only one face, there is no need to project the whole source Processing composition onto our mannequin. Let’s select only a cropped area. Select the Input Selection tab (1) and make sure the Slice 1 label is selected (2). Now drag the four corners of the highlighted area around the face you want to project (3). Now we have:
    2016-06-08 01_14_04-Resolume Arena
  4. Back in the Output Transformation tab (1), right-click on the image and select Match Input Shape (2). Now that we have a better-proportioned figure, select the Transform tool (3) and drag the figure into place to roughly match the mannequin face.
    2016-06-08 01_22_22-Resolume Arena
  5. Now the fun part. Choose the Edit Point tool (1) and start subdividing the figure surface by adding movable vertices: click on ‘+’ Subdivisions X/Y until you are satisfied. These added vertices allow us to move only specific portions of the face texture, refining the mapping and warping the image so the face in the figure aligns with the mannequin’s. For example, if you want to move the entire right eye, select the area highlighted with number (3) (by dragging with the left mouse button). Once the vertices are selected, you can move them all together.
    2016-06-08 01_54_19-Untitled-1 @ 50,1% (File_000, RGB_8) _
  6. Repeat the previous step until you get the desired result. In the following images you can see the final result (still very rough, no time to map :) and the resulting warped image.
    2016-06-08 01_54_52-Untitled-1 @ 50,1% (2016-06-08 01_39_27-Resolume Arena, RGB_8) _

14 Jun 2014 · In: tutorial

Lately I have received many requests about how to use iniTree and iniSphere in VDMX. Below you will find a simple guide on how to do that. I’ll show how to use the iniTree patch, but the same steps also apply to iniSphere and the other patches! Hope it helps!

NB: use only the 32-bit version of the inimart plugins if you want to use them with VDMX or CoGe. The 64-bit version is not supported yet!!!

Each Quartz Composer patch can have as many published VDMX inputs as needed: instead of simply using right click -> Publish Input on the patch, we need to use input splitters. Here is how:

  1. Insert iniTree patch in your composition.
  2. Right click on the iniTree patch -> Insert Input Splitter -> choose the input you want to publish to VDMX/CoGe. We’ll go with Height, Opening, Branches Num and DrawLines for this example.
    Screen Shot 2014-06-14 at 15.19.08
  3. Right click on the Height input splitter -> Publish Input -> Input. Choose an appropriate name for this input (hmm.. Height, for example? :) ). Do the same for the other input splitters.
    Screen Shot 2014-06-14 at 16.44.42
  4. Now you need to tell VDMX which kind of value this input will accept. Click on the Height input splitter, and open the patch settings by pressing ⌘2 (cmd + 2) on the keyboard.
  5. The Type setting sets the input type. The type names you’ll find here are quite self-explanatory; just one note on the difference between Index and Number:
    • Index is the same as the whole numbers in math: from 0 to N, without decimals. You can specify a sequence of labels associated with index values (a sort of enum, if you know a bit about programming);
    • Number is the same as the real numbers in math (float in programming): from -N to N, with decimals.
      Screen Shot 2014-06-14 at 15.42.56
  6. Taking into account the iniTree free version’s limitations on tree rendering, choose these settings for the input splitters:
    • Height – Type: Index / Limited / Maximum Value: 4 / Minimum Value: 0
    • Opening – Type: Number / Limited / Maximum Value: 10 / Minimum Value: 0
    • Branches Num – Type: Index / Limited / Maximum Value: 4 / Minimum Value: 0 (This input can be also a Number with the same range, if you want to use intermediate Branches angles).
    • DrawLines – Type: Boolean
  7. Set up the other iniTree inputs so that VDMX/CoGe will render something even if you only use the 4 input splitter inputs.
    • BranchesRatio: 1
    • GrowDelay: 10
    • LineWidth: 3
      Screen Shot 2014-06-14 at 16.15.57
  8. That’s it! Save the composition, open VDMX/CoGe, and drag the composition into a free slot. The input splitter value types will be read, and will provide you with an input panel for your Quartz Composer animations!
    Screen Shot 2014-06-14 at 16.39.18
    CoGeTest

You can download the example composition for this tutorial (using the iniTree plugin) here:

27 Dec 2013 · In: tutorial

Here you can download a simple Quartz Composer composition showing how you can link iniSphere inputs (the same approach is valid for all the other plugins) to the audio output, to create audio-reactive compositions.

audioReactive_00

Very simple, but novices might find it interesting. :)

To run it, you need to install:

Download from here: