summer placement

June 19, 2010

for my summer placement i have decided to work on a personal installation rather than working for a professional company. i have contacted my friend, a digital music composer who hosts various electronic music events in Dublin under the name Second Square To None. it is a well known venue and has attracted almost every electronic music composer living in and around Dublin.

we will be working on an interactive installation together, and i have dedicated a separate blog to this project. the blog can be found here::

please visit that blog for documentation, experiments and final outcomes.
the summer placement workshop has already started, but the main action will take place in Dublin from the 3rd to the 12th of August. if the project is successful we will look for opportunities to exhibit and show it at various festivals and venues around Ireland.

scanner photography

April 27, 2010

i used to have a 25 euro epson scanner and it wasn't a top notch beast, but i made it scan my surroundings and objects lifted onto its glass at the highest resolution it could manage. all of the scan experiments went to doom along with my hard drive's death, but i happened to find some uploads on the internet. here is one of the experiments – a model of a head and a plastic snowflake. i dragged the head with the snowflake across the scanner surface at the beginning of the scan, which is visible in the top part of the head; then i stopped and finished the scan with the rest of the model in a still position.

what i like about scanning things without the lid on is the surprise outcome of the colours and shades. you can perform real-time actions by switching lighting effects on and off, and they will be recorded in the image as on long exposure film. the resolution of a scanner also allows you to produce very high-definition raw images instantly, whereas similar image processing on film requires time and money to develop. i find scanner photography very experimental and economical – it saves a lot of money on film, and the images can be easily edited afterwards as they arrive straight in digital format. the only thing is that it doesn't feel like after-editing at all, considering how much of the production and set up is effortlessly performed during the scan itself. the most amazing thing about scanner photography is that you get a very high-resolution, super real image with obscure alterations performed in time. it is time consuming, never ending fun. i will get a scanner soon!!

Post-production techniques ||part2

April 27, 2010

after successfully completing the first part of my post-production work in max/MSP i moved on to the next part, which involves using a PlayStation 1 GunCon game controller to trigger a shatter effect in my video presentation. my main task was to make it communicate with the computer, and to do so i needed a "middle man" to convert the controller's analog circuit signal into a midi or OSC signal understood by max/MSP. my options were Phidgets, Arduino or similar platform chip boards. i chose to work with the arduino package as i found it cheaper to buy and compatible with more sensors and elements (including Phidgets); i also had the impression that there is more of an "underground" feeling to it, as there are many arduino board clone experiments and, being open source, it feels more innovative, with plenty of hacking possibilities. the arduino programming language is based on C/C++ and its interface looks very similar to Processing. but i didn't need to write code to make my arduino board work: all i had to do was install special arduino drivers for max/MSP and use the relevant objects in max to detect the incoming signal from the board.
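the drivers and ready-made max objects did all the work for me, but for the record, a hand-written equivalent would only take a few lines of arduino code. this is a minimal sketch of the idea, not the code from my actual set up (the pin number and the one-byte serial protocol are assumptions):

```cpp
// minimal sketch (assumed wiring): the trigger button sits between
// digital pin 2 and ground, and every press/release is reported to the
// computer as a single byte over serial, which max can read.
const int buttonPin = 2;
int lastState = HIGH;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);  // internal pull-up: reads HIGH until pressed
  Serial.begin(9600);                // max picks this up via its serial object
}

void loop() {
  int state = digitalRead(buttonPin);
  if (state != lastState) {
    Serial.write(state == LOW ? 1 : 0);  // 1 = trigger pressed, 0 = released
    lastState = state;
    delay(20);                           // crude debounce
  }
}
```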
the most difficult part for me was to figure out how to get the trigger signal to the arduino board. i opened the gun controller and found two pins which closed an electric circuit when the trigger was pressed down.

the gamegun has a built-in chip board and all the wiring is soldered to it. i was trying to get the trigger signal at the end of the main cable, which i cut open to locate the main wire, the ground and the button feedback wire. with help from Jason, who knew more than i did, we tried to stick these wires into the arduino board. we didn't succeed, and my guess was that there is a miscommunication between the controller's built-in namco board and the arduino. i had an idea of how to make it work, but i had to test it beforehand. i knew how to make a simple button on and off action on an arduino breadboard, also called a protoboard.

it is a separate construction base used for building temporary prototypes of electronic circuits. it is solderless and easily reusable in comparison to more permanent soldered boards, such as the one-off board built into the gun controller. i wanted to test whether a button lifted from the breadboard would still function with 4 wires attached to its legs and those wires stuck back into the breadboard. here is an experiment i did to test it:

the reason for this test was to see if the button could be triggered and the signal transmitted without delays due to the length of the wires. it worked perfectly, so the next step was to solder 4 wires to the gun's built-in button and insert those 4 wires into the breadboard. in doing so i would avoid interacting with the built-in board, which might require more voltage to operate than the arduino board, and i already had a working circuit set up for a button trigger on the breadboard which communicated with my max/MSP patch without glitches or problems. all i had to do was transmit the button press from the gun to the breadboard. here are some images of the soldered wires and the arduino set up:

here is the patch which was receiving the incoming signal from the arduino.

this part of the patch deals with receiving the trigger signal from the gun via arduino, through a route object separating analog and digital messages in max/MSP. the white round button triggers the shattering effect, which is applied to one of the layers of the live video feed. it is possible to alter the velocity of the disintegrating particles as well as manipulate the character of the dissolve based on speed, particle size and shape, and direction of movement. my aim was to mix two different video sources: one is live chromakeyed footage from the camera feed, and the layer beneath it is just a film clip. the main idea is to give the impression that when someone "shoots" a person, the person shatters on top of the background. the inspiration came from "The Lawnmower Man", a film which used a similar effect of body disintegration. the best way to describe it is a video clip showing how the patch and gun work together:

i also implemented colouring options which i can alter as i go in my patch. i can change the RGB levels as well as the brightness and contrast.

patch1 – tracking//

April 7, 2010

my first post-production patch has been successfully made and tested. up till now i was working hard to solve my alpha channel issues. i have finally managed to complete a patch which does live chromakeying, hosts 3-d elements in the same environment and triggers them with values taken from motion tracked in the live feed. i have tested other 3-d objects such as cubes and balls, and both are working fine. here is the final rewritten patch1::

all this time i have been working on the mac platform. patching in mac osx can be slightly different than in windows xp: some objects have different names, and a live web camera feed is handled differently as well. i needed to test my finalized patch with an external camera feed – so far i had always used the built-in mac webcam, but for my final presentation i will take the video feed from an external camera. at my disposal was a camera compatible only with pc, not mac, so i had to look into patch adjustments for win xp. in order to capture a live video feed on xp, jit.qt.grab requires a 3rd party VDIG – a video digitizer driver that translates from the hardware to the video functions used by QuickTime. i downloaded one for free and installed it on my pc laptop. i ran a test and it detected the attached camera::

the next thing was to open my patch and test the camera feed. even though my computer crashes after the patch has been running for a few minutes, i had enough time to verify that it detects the camera and displays all the functions: chromakeying, 3-d elements and tracking. i did a small test with objects i found on my table to see if full-screen mode and chromakeying were working:

then i decided to set up a small scene and simulate a green screen live feed. i didn't have any green sheet so i used a cardboard sheet instead. here is my test scene:

then i tested it as a camera feed to my computer:

the camera footage was feeding without problems. the next thing was to test it with my max/msp patch and import the 3-d elements too, in the hope that it would not crash straight away. here is the test::

i clicked on the background area i wanted to key out (you can see my mouse arrow on it in the max patch on the right) and, as you can see in the output window in the top left corner, there is a little plastic toy without a background. the next thing was to open the 3-d elements and test how they work. at this stage my weak "decent bitch" computer could freeze at any moment:

it works!!!!! never mind a couple of freezes and a good few restarts of the patch seasoned with swear words, i managed to harvest some small film clips captured from the full screen mode.

also, because this patch does motion tracking and the plastic toy is dead static, i did a few more tests with myself in the frame, also changing some values which affect the appearance of the videos.

i can proudly announce that my concept for live post-production techniques is almost finished. i still have some time for tweaking and testing more 3-d elements, and i have also planned to implement some colouring aspects. i have been wondering if i could build another patch and use a physical game gun to interact with the projection and the live scene. i have obtained a gun, opened it up and figured out the structure of the patch. i also tested max with my midi controller and the signal transmission was easy. the gun is quite a challenge though, so i might work on it first and post the results after it has been successfully tested. back to work now….

short form video on the scene

April 2, 2010

here are some photos i managed to capture in between moments while we were filming footage for the short form video::







chromakeying and alpha channel::max/MSP##4

March 31, 2010

finally my patch has succeeded!! with help from the jitter website forum i was shown how to fix my patch and achieve transparency in 3-d space. the reason i couldn't get a transparent plane after keying was that i hadn't enabled alpha based transparency on the 3-d gl object. i first had to ensure that i was actually sending a matrix with a proper alpha channel to my videoplane, and then send @blend_enable 1 and @depth_enable 0 to my videoplane. this is how it looks in my patch:

but apparently i am facing one more difficult problem, which results from the fact that i need to turn off depth_enable in order to display transparency properly. therefore i can't rely on automatic depth sorting and have to handle it myself. a kind jitter forum member, robtheritch, explained to me how to achieve it. he wrote: "one way to do this is with gl objects @layer attribute…you use the z value of the circling gl object to change its layer attribute. this assumes the videoplane is at a z value of 0, and that the camera is in its default position and default lookat. you can encapsulate the circling object code in a poly~ and have several of these going simultaneously." i didn't use the poly~ encapsulation; instead i incorporated the 3-d cubes i had rendered out in maya, and it worked perfectly within the patch. here is the part of the patch which shows the layer attributes, and how the spinning 3-d cube is positioned in layer 1 and layer 3 while the actual videoplane is at @layer 2.

in order to access information on the attributes i can use with jitter 3-d objects (OB3D), i can consult the jit.gl.videoplane object reference window in the max/MSP documentation. it is useful for getting a general idea of the possibilities and the rules set for each jitter object i might utilise in certain situations. in the patch i am using 3 attributes with my jit.gl.videoplane object. they are:
1) @depth_enable 0 – depth buffering flag (default = 1). when the flag is set, depth buffering is enabled.
2) @blend_enable 1 – blending flag (default = 0). when the flag is set, blending is enabled for all rendered objects.
3) @layer – object layer number (default = 0). when in automatic mode, the layer number determines the rendering order (low to high). objects in the same layer have no guarantee of which will be rendered first.
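since jit.gl objects render through OpenGL, my reading is that these attributes roughly correspond to standard OpenGL state, and that the manual depth sorting reduces to picking a layer from the z position. here is a sketch of both ideas in C++ (an illustration of my understanding, not jitter's source code):

```cpp
#include <GL/gl.h>

// rough OpenGL equivalents of the videoplane attributes (jitter manages
// this state per object; shown here only as an illustration):
void applyVideoplaneState() {
    glDisable(GL_DEPTH_TEST);  // @depth_enable 0
    glEnable(GL_BLEND);        // @blend_enable 1
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // alpha-based blending
}

// the manual depth sort robtheritch described: the videoplane sits at
// z = 0 on layer 2, and an orbiting object swaps between layer 1 (behind,
// drawn first) and layer 3 (in front, drawn last) based on its z position.
int layerForZ(float z) {
    const int planeLayer = 2;  // the videoplane's fixed @layer
    return (z < 0.0f) ? planeLayer - 1 : planeLayer + 1;
}
```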

finally, here is the whole body of the fully functioning patch::

and some screenshots of the video output::




—-

i am very delighted with the results, and knowing that my patch works fine i can move on to the next step – experimenting with different shapes and 3-d objects, implementing some colour adjustment attributes, and expanding the interactivity idea. i have already built a fully working tracking patch, but it needs more tweaking. i managed to track movement with the jit.3m object and adjust the sensitivity of the effect applied to the 3-d objects. i must do some tests with a real camera instead of the built-in webcam to analyze tracking of movement from far away rather than from close up: tracking is calculated from the pixels which have changed position in the video input, so the closer the subject is to the camera, the more pixels change and the better the tracking results. even when a moving object is far away and doesn't produce much pixel movement in the camera feed, i can increase the effectiveness by multiplying the tracked values and using the new values to manipulate objects around the scene (there is a small sketch of this idea at the end of this post). here are some screenshots from the tracking patch:


of course the patch needs general tweaking and setting the best chromakeying values, as well as many tests with a real green screen and new 3-d elements, but i feel that the main aspect has been solved and i can now work on the details.
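as for the multiplication of tracked values mentioned above, conceptually it is nothing more than this (a C++ sketch of the idea; the gain and clamp values are assumptions, not numbers from my patch):

```cpp
// amplifying weak tracking values from a distant subject: jit.3m style
// motion readings (e.g. mean pixel change per frame) stay small when the
// subject is far from the camera, so a gain factor maps them into a range
// that visibly moves the 3-d elements.
float amplifyMotion(float tracked, float gain /* e.g. 20.0f, assumed */) {
    float v = tracked * gain;
    // clamp so a sudden close-up movement doesn't throw objects off-screen
    if (v > 1.0f)  v = 1.0f;
    if (v < -1.0f) v = -1.0f;
    return v;
}
```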

max/MSP glitch

March 26, 2010

here i was experimenting with rotating 3-d objects in a max/MSP patch. by invading a fully functioning patch and rearranging some patch cords, i managed to corrupt the rotation and achieve jittered, madly processed images.)) i resized the output video and recaptured the same output numerous times, and every time it restructured itself completely differently. what i observed was the appearance of strange strips of text which weren't relevant to my patch at all; i think some renders contained bits of text from random webpages i might have visited before, but i couldn't make a clear connection. i absolutely love breaking processes and enjoy witnessing the beauty of functioning malfunction///dig them stream bites the wicked canvases>>

in the image above there are some random strips of text, obviously not english. before i started playing with this patch i visited some webpage on recycling. i don't remember whether it was in english or another language; what surprised me is that max/MSP for some strange reason had access to the info displayed on that webpage. i am pretty sure i closed that webpage, but somehow it appeared in my patch. it looks like some sort of glitch. lovely.

these strange eggs are made out of the xyz axes in continuous rotation. the option of deleting the previous frame was disabled, which allowed me to record the full flex of the movement and add each frame to the already building up canvas. it is a capture of continuous flow and it is amazing.)))

/////

chromakeying in max/MSP##3

March 24, 2010

i have managed to get halfway without any hassle; the final thing i need to solve is chromakeying in 3-d space. and that's where my main issues started to appear.
i created a plane in maya and exported it as an .obj file. then i imported it into the max/MSP scene along with my 3-d cube elements. i even successfully applied a live texture onto my plane. the next thing i worked on was applying chromakeying to the texture feed. at that stage i thought max would project only the keyed image onto the plane – i had a nagging thought about the rest of the plane, but i had to run a first test. what came out was a finely keyed texture on the 3-d plane, but the "transparent" or unwanted areas were displayed in black. here is the patch showing this experiment::

if i look at the chromakey output window, the keyed areas are black there too, so when applied as a texture they remain black. i didn't have this problem when keyed live footage was superimposed on top of other footage: in the areas where it was black, the underlying footage showed through. i looked into the explanation of the chromakeying process again to see why black areas would appear on top of a plane, and started researching alpha channel and transparency issues in jitter.
this is the information i found in the jitter documentation: "ARGB (alpha, red, green, blue) – 4-plane char data used in jitter to deal with colours and the alpha channel. the fourth plane is often useful for what is known as the alpha channel—a channel that stores information about how transparent a pixel should be when overlaid on another image. In Jitter, this is the most common way to describe a color: as a combination of exact intensities of red, green, and blue… For each pixel of an image—be it a video, a picture, or any other 2D matrix—we need at least three values, one for each of the three basic colors. Therefore, for onscreen color images, we most commonly use a 2D matrix with at least three planes…" all this information refers to the 2-d environment, not 3-d. my main issue is transparency in 3-d space, and i am afraid that hasn't been discussed much in the max/MSP documentation. even in a 3-d environment jitter treats the chromakeyed matrix as 2-d, and it doesn't understand – or i haven't found the way to make – a 3-d element become transparent in the keyed areas.
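the role of that fourth plane is easiest to see written out as code. here is a sketch of standard "over" compositing of one ARGB pixel onto another (my illustration of the general principle, not jitter's internal code):

```cpp
#include <cstdint>

// one ARGB pixel as 4-plane char data: four 8-bit planes, alpha first.
struct ArgbPixel {
    uint8_t a, r, g, b;
};

// standard "over" compositing: alpha says how opaque the foreground pixel
// is when overlaid on the background (0 = fully transparent, 255 = solid).
ArgbPixel over(ArgbPixel fg, ArgbPixel bg) {
    auto mix = [&](uint8_t f, uint8_t b) {
        return (uint8_t)((f * fg.a + b * (255 - fg.a)) / 255);
    };
    return { 255, mix(fg.r, bg.r), mix(fg.g, bg.g), mix(fg.b, bg.b) };
}
```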
in my first attempt i used a 3-d plane overlaid with a live texture. the texture was keyed out and, because nothing was displayed underneath it, the keyed area came out black. my aim is to achieve transparency in the black area, so i looked into different ways of importing video footage into the 3-d environment. instead of a plane rendered out from maya, there is the option of a videoplane: an object which maps incoming video to a plane in 3-d space and can exploit hardware accelerated rotation, scaling, interpolation and blending. here is a patch in which i used this jit.gl.videoplane object and chromakeyed it. unfortunately it produced a very similar undesired outcome – the alpha as a black area and no transparency.

here is a close-up with the non-keyed green-screen footage:

whereas this one is keyed:

as a result i still don't have any transparency, so the 3-d cubes rotating behind the person cannot be seen. the scene is too big and the black areas shouldn't be there at all, so i need to try other options.
i was looking further into 3-d compositing in jitter, and instead of importing a plane made in maya or using a videoplane, i looked at 3-d objects – more precisely, grid shapes made in jitter using the jit.gl.gridshape object. here is a list of the 3-d models available.

the one i am particularly interested in is the plane, and the possibility of overlaying it with a texture. i quickly ran through all of the shapes and checked their parameters and transparency options. what is good about gridshapes is that they are transparent objects with an overlaying grid. i was quite happy to see anything that had some sort of transparency in 3-d space. here are some screenshots of gridshapes.

i managed to incorporate a gridshape object into the same scene as my 3-d cubes. i even managed to overlay the gridshape with a texture, which i then tried to chromakey, but i didn't get my desired outcome. here is a patch of the gridshape overlaid with a video texture::

the good thing is that i can see the cubes rotating fully in 3-d space and the grid allows see-through; the bad thing is that my texture is almost invisible. it is just about possible to trace some strange light movement over the grid, but nobody would ever realize this is my texture, and it doesn't help to achieve the desired outcome.

after so many trials i started to look into forums and support files on the internet. there is not much literature published about this software, but there is a great deal of information available on the official max/MSP website. there are the amazing jitter recipes, with many tutorial files, explanations and patches available to download and test, as well as a forum. i managed to trace some posts on the subject i was working on. here is an extract from the max/MSP jitter forum, a post written by a person facing a problem similar to mine::
[extract taken from: http://www.cycling74.com/forums/topic.php?id=825 ]

joshua,

very sorry to bother u again about something that must be quite simple…but i have done a test exporting a chromakeyed movie from jit.qt.record using the “animation” codec, and the alpha channel is black. is there a flag that needs to be set to enable alpha channel transparency? patch setup:jit.qt.chromakey –> jit.qt.record “write TEST.mov 15. animation max 600″

many thanks
david.

unfortunately nobody answered his question and i was left to my own findings to solve my problem. one thing i learned is that it is not really possible to perform chromakeying on 3-d objects: even though it is possible to chromakey a texture overlaying a plane in 3-d space, it will not give the transparency i am after. chromakeying works on two video sources – simple 2-d planes, matrices consisting of pixels, which max can deal with very easily. max did the chromakeying in all cases without hassle; the problem i was facing was my false interpretation of what chromakeying does. to achieve transparency in 3-d space i will need to look into alpha channel adjustments – the alpha channel being a 4th channel which stores data about pixel transparency. in the chromakeying process max/MSP never gets rid of pixels or makes them transparent: each chromakeyed pixel is always replaced with another pixel and never has the faculty of being transparent. i was thinking about it wrongly and have learnt my lesson. i now understand more of the processes in max/MSP, and in order to achieve my final goal in the presentation i will need to look deeper and try other techniques.

post-production techniques, work with max/MSP#2

March 23, 2010

i will explain here the process of building my patch for my final crit out of several small test patches. as i mentioned before, i want to make max/MSP do post-production techniques live. i have done the research and i am happy to report that this program can do chromakeying, colour adjustment and mixing of different video layers, as well as work with 3-d objects. my project goal is to perform post-production techniques in real time and implement some interactivity between the physical movement of a person and the movement of 3-d elements in the patch. here is how i want to do it::
1) there will be a person standing in front of a green screen
2) i will have a video camera set up to capture the person against the green screen (my goal is not to film it, but to supply a real-time video feed into my computer via the camera)
3) the max/MSP patch will detect the live feed and analyze the data. it will track the movement of the person's hands and legs and translate it into real-time changing values.
4) these values will be applied to the 3-d elements i want to place around the person.
5) in the output projection we will see the person surrounded by 3-d elements. these elements will move interactively depending on the person's movements. the main goal is to give the person, who is the main character of the projection, a say in how he or she wants the environment to respond.
6) i will do some real-time colouring and brightness/contrast adjustment.
—–
for my project, i firstly have to find out how to do chromakeying and whether it is possible to chromakey live footage. here is a patch i was working on, testing chromakeying first on a video file and then on a live feed.


in this screenshot you can see a max/MSP patch which processes two video sources. one is just a movie file (green screen footage called green.mov) and the other is a live feed from the computer's built-in camera. the patch chromakeys the first video by "taking out" the green colour. chromakeying is the process of superimposing one image on top of another by selective replacement of colour, and it is done in jitter by the jit.chromakey object. by specifying a colour and a few other parameters, jit.chromakey detects cells containing that colour in the first (left-hand) matrix and replaces them with the equivalent cells in the second (right-hand) matrix when it constructs the output matrix. the result is that the selected colour cells of the green.mov video file are superimposed onto the live feed, as you can see in the bottom window of my patch. the colour to key is picked with the suckah tool from the object palette.

when i pick the suckah tool it appears like this in patch editing mode.

the suckah tool allows me to get the rgb colour beneath it, or to feed in any screen coordinates and get the rgb values of that pixel. what is interesting is that i can take pixel readings with this tool from any desired area within the patch (even from pixels which are not in the actual video footage) and use the reading as the colour value to be keyed out of the video. the chromakeying itself is done by the jit.chromakey object.

the jit.chromakey object measures the chromatic distance of each of the left input's cells (pixels) from a reference color (a.k.a. "green screening"). the total chromatic distance is calculated by summing the absolute value of each color channel's distance from the reference color's corresponding channel. if the distance is less than or equal to a tolerance ( tol ) value, the right input cell is multiplied by a maximum keying ( maxkey ) value.
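written out as code, the per-pixel rule is roughly the following (a C++ sketch of the formula as i read it in the reference, not jitter's actual source; it also skips the soft fade region the real object supports):

```cpp
#include <cstdint>
#include <cstdlib>  // abs

struct Rgb { uint8_t r, g, b; };

// simplified per-pixel chromakey: the total chromatic distance is the sum
// of absolute per-channel distances from the reference colour (picked with
// the suckah tool). within tolerance, the left (foreground) cell is
// replaced by the right (background) cell scaled by maxkey.
Rgb chromakey(Rgb left, Rgb right, Rgb ref, int tol, float maxkey) {
    int dist = abs(left.r - ref.r) + abs(left.g - ref.g) + abs(left.b - ref.b);
    if (dist <= tol) {
        return { (uint8_t)(right.r * maxkey),
                 (uint8_t)(right.g * maxkey),
                 (uint8_t)(right.b * maxkey) };
    }
    return left;  // outside tolerance: keep the foreground pixel
}
```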
keying out the live feed was relatively easy: i just replaced the video source with the live feed from the camera and applied the same keying objects to it. here is a patch showing this process:

in the live feed footage i picked a random pixel to see the keying effect. obviously i didn't have the consistent background which will be in my final piece, but all i wanted to see was whether keying is possible on live footage and whether it slows down the process. the keying worked without problems, so my first challenge – chromakeying a live feed – has been sorted.

the next step was to import my 3-d objects into the max/MSP patch. max supports 3-d files with the .obj extension. i found out that maya allows a scene to be exported as .obj, but certain plug-ins must be activated beforehand. i did that and successfully exported my 3-d cubes. all max does is take the polygons and place them in 3-d space; i can apply my own texturing and control their movement in 3-d space, which is exactly what i need for my project.

the cubes are made by myself, but the mushrooms are .obj files supplied by default with max/MSP. here i was testing placing two 3-d scenes together, and it works very well. at the moment both scenes rotate with the mouse at the same time, but i can reset them to their original positions separately; i will need to look into how to manipulate different 3-d objects separately. the rotation is done by the jit.gl.handle object.
this object responds to mouse clicks and drags in the destination by generating rotate and position messages out of its left outlet. the messages are then sent to the 3-d objects, which can then be rotated and moved in space using the mouse. the next step was to solve my live green screen feed. the first idea was to create a plain 3-d plane and import it as a second 3-d object in the patch. then, knowing that i can apply textures to my 3-d objects, i would try to apply a live texture instead of a static image. i needed to research textures and how to replace an image texture with a live video texture. here is the patch which successfully does this process:

i managed to apply the live camera feed on top of my 3-d plane, and i have my 3-d cubes around this plane in the same environment. applying a texture – whether a static image, a video or a live feed – is not so hard at all. i am using a prepend texture object to assign whatever data comes into its left inlet as a texture. in the patch above there is a cord going out of the prepend texture object and into the jit.gl.model ml object, which is the rendering engine for the 3-d objects in my patch; in this case the texture is applied directly onto the plane. sometimes a patch can get messy with linking cords, and it is useful to know that you can send messages across the whole patch without links. to send the prepend texture message across to the render object, i create a texture message box and link it to the render engine as shown here: . the message takes data from objects containing the keyword "mytexture". i can name it anything i want; it is something similar to flash coding, where you assign variables and use the preset values at any stage of development – or, in max/MSP's case, at any place in the patch.
the next step is chromakeying the live footage. i have made many attempts and they produce different results; i will explain the chromakeying problems in the next post.

kevin peterson

March 18, 2010

i saw this beautiful image of a girl with some writing behind her. even though she is well blended into the rough surface of the concrete wall, it feels as if she is just a passer-by who has been captured on film as a real person from a different time.. featuring one of the graffiti girls by kevin peterson [houston, tx, united states]>>


