Archive for the ‘Postproduction Techniques//’ Category

Post-production techniques ||part2

April 27, 2010

after successfully completing the first part of my post-production in max/MSP i moved on to the next part, which involved using a Playstation 1 GunCon game controller to trigger a shatter effect in my video presentation. my main task was to make it communicate with the computer, and to do so i needed a "middle man" which would convert the controller's analog circuit signal into a midi or OSC signal understood by max/MSP. my options were Phidgets, Arduino or a similar platform board. i chose to work with the Arduino package as i found it cheaper to buy and compatible with more sensors and components (including Phidgets); i also had the impression that there is more of an "underground" feeling to it, as there are many arduino board clone experiments, and it feels more innovative as an open-source platform with hacking possibilities. the arduino programming language is based on C/C++ and its interface looks very similar to Processing. but i didn't need to write code to make my arduino board work: all i had to do was install special arduino drivers for max/MSP and use the relevant objects in max to detect the incoming signal from the board.
the most difficult part for me was figuring out how to get the trigger signal to the arduino board. i opened the gun controller and found two pins which completed an electric circuit when the button was pressed down.

the game gun has a built-in chip board and all the wiring is soldered to it. i tried to get the trigger signal at the end of the main cable, which i cut open to locate the main wire, ground and button feedback wire. with help from Jason, who knew more than i did, we tried to take these wires and stick them into the arduino board. we didn't succeed, and my guess was that there is a miscommunication between the controller's built-in namco board and the arduino. i had an idea for making it work, but i had to test it beforehand. i knew how to make a simple button on/off action on an arduino breadboard, also called a protoboard.

it is a separate construction base used for building temporary prototypes of electronic circuits. it is solderless and easily reusable, in comparison to more permanent soldered boards such as the built-in one-off gun controller board. i wanted to test if a button lifted from the breadboard would still function with 4 wires attached to its legs and those wires stuck into the breadboard. here is an experiment i did to test it:

the reason for this test was to see if the button could be triggered and the signal transmitted without delays due to the length of the wires. it worked perfectly, so the next step was to solder 4 wires to the gun's built-in button and insert those 4 wires into the breadboard. in doing so i would avoid interacting with the built-in board, which might require more voltage to operate than the arduino board, and i already had a working circuit set up for a button trigger on the breadboard which communicated with my max/MSP patch without glitches or problems. all i had to do was transmit the button press from the gun to the breadboard. here are some images of the soldered wires and the arduino set-up:
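on the software side, a slow or bouncy contact can also be cleaned up. as a hedged aside (the sample values and counts here are invented), simple debouncing only accepts a state change after the raw reading has held the new value for a few consecutive samples:

```python
def debounce(samples, stable_count=3):
    """report a state change only after the raw reading has held
    the new value for `stable_count` consecutive samples."""
    state = samples[0]
    run = 0
    out = []
    for s in samples:
        if s != state:
            run += 1
            if run >= stable_count:
                state = s
                run = 0
        else:
            run = 0
        out.append(state)
    return out

# a bouncy press: 0 -> noisy -> solid 1 -> release
raw = [0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print(debounce(raw))
```

in my case the breadboard circuit turned out clean enough that no extra filtering was needed.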

here is the patch which receives the incoming signal from the arduino.

this part of the patch deals with receiving the trigger signal from the gun via the arduino, through the route analog/digital objects in max/MSP. the white round button triggers the shattering effect, which is applied to one of the layers of the live video feed. it is possible to alter the velocity of the disintegrating particles as well as manipulate the character of the dissolve based on speed, particle size and shape, and direction of movement. my aim was to mix two different video sources: one is live chromakeyed footage from the camera feed, and the layer beneath it is just a film clip. the main idea is to give the impression that when someone "shoots" a person, the person shatters on top of the background. the inspiration came from "The Lawnmower Man", a film which used a similar body disintegration effect. the best way to describe it is a video clip which shows how this patch and the gun work together:
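as a hedged illustration of the shatter idea (my own names and numbers, not the actual jitter patch), each frame every particle advances along its own velocity, and scaling that velocity alters how fast the figure disintegrates:

```python
import random

def step(particles, speed=1.0):
    """advance each particle by its velocity; `speed` scales the dissolve rate.
    a particle is ((x, y), (vx, vy)); a new list is returned each frame."""
    return [((x + vx * speed, y + vy * speed), (vx, vy))
            for (x, y), (vx, vy) in particles]

random.seed(1)
# particles start at the pixel they were "shattered" from, with random directions
particles = [((float(x), 0.0), (random.uniform(-1, 1), random.uniform(0, 2)))
             for x in range(5)]
after = step(step(particles, speed=2.0), speed=2.0)
print(after[0])
```

changing `speed`, the spread of the random directions, or the particle size would correspond to the dissolve-character controls mentioned above.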

i also implemented colouring options which i can alter as i go in my patch. i can change the RGB levels as well as the brightness and contrast.
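for reference, the maths behind such controls is simple per-pixel arithmetic. this is a hedged sketch of the usual scale-then-offset form, not necessarily what the jitter objects do internally:

```python
def adjust(pixel, contrast=1.0, brightness=0):
    """scale each channel around mid-grey for contrast, then add brightness,
    clipping to the 0-255 char range jitter matrices use."""
    return tuple(
        max(0, min(255, round((c - 128) * contrast + 128 + brightness)))
        for c in pixel
    )

print(adjust((200, 100, 50), contrast=1.2, brightness=10))  # -> (224, 104, 44)
```

per-channel RGB level changes would just be an independent gain on each of the three values.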


patch1 – tracking//

April 7, 2010

my first post-production patch has been successfully made and tested. up till now i was working hard to solve my alpha channel issues. i have finally managed to complete a patch which does live chromakeying, hosts 3-d elements in the same environment, and triggers them with values taken from motion tracked via the live feed. i have tested other 3-d objects such as cubes and balls, and both are working fine. here is the final rewritten patch1::

all this time i have been working on the mac platform. patching in mac osx can be slightly different than in windows xp: some objects have different names, and the live web camera feed is handled differently as well. i needed to test my finalized patch with an external camera feed; so far i had always used the built-in mac webcam, but for my final presentation i will take the video feed from an external camera. at my disposal was a camera compatible only with pc, not mac, so i had to look into patch adjustments for win xp. in order to capture a live video feed on xp, jit.qt.grab requires a 3rd-party VDIG, a video digitizer driver that translates from my hardware to the video functions used by QuickTime. i downloaded it for free and installed it on my pc laptop. i ran a test and it detected the attached camera::

the next thing was to open my patch and test the camera feed. even though my computer crashes after the patch has been running for a few minutes, i had enough time to verify that it detects the camera and displays all the functions: chromakeying, 3-d elements and tracking. i did a small test with objects i found on my table to see if full-screen and chromakeying are working:

then i decided to set up a small scene and simulate a green-screen live feed. i didn't have any green sheet so i used a cardboard sheet instead. here is my test scene:

then i tested it as a camera feed to my computer:

the camera footage was feeding without problems. the next thing was to test it with my max/msp patch and import the 3-d elements too, in the hope that it would not crash straight away. here is the test::

i clicked on the background area i want to key out (you can see my mouse arrow on it in the max patch on the right), and as you can see in the output window – in the left top corner – there is a little plastic toy without a background. the next thing was to open the 3-d elements and test how they work. at this stage my weak "decent bitch" of a computer could freeze at any moment:

it works!!!!! never mind a couple of freezes and a good few restarts of the patch, seasoned with swear words: i managed to harvest small film clips captured from the full-screen mode.

also, because this patch does motion tracking and the plastic toy is dead static, i did a few more tests with myself, also changing some values which affected the appearance of the videos.

i can proudly announce that my set concept for live post-production techniques is almost finished. i still have some time for tweaking and testing more 3-d elements, and i have also planned to implement some colouring aspects. i also wondered if i could build another patch and use a physical game gun to interact with the projection and the live scene. i have obtained a gun, opened it up, and figured out the structure of the patch. i also tested max with my midi controller and the signal transmission was easy. the gun is quite a challenge, though; i might work on it first and then post the results after it has been successfully tested. back to work now….

chromakeying and alpha channel::max/MSP##4

March 31, 2010

finally my patch has succeeded!! with help from the jitter website forum i was shown how to fix my patch and achieve transparency in 3-d space. the reason i couldn't get a transparent plane after keying was that i hadn't enabled alpha-based transparency on the 3-d gl object. i first had to ensure that i was actually sending a matrix with a proper alpha channel to my videoplane, and then send @blend_enable 1 and @depth_enable 0 to my videoplane. this is how it looks in my patch:

but apparently i am facing one more difficult problem, which results from the fact that i need to turn off depth_enable in order to properly display transparency. therefore i can't rely on automatic depth sorting and have to handle it myself. a kind jitter forum member, robtheritch, explained to me how to achieve it. he wrote: "one way to do this is with gl objects @layer attribute…you use the z value of the circling gl object to change its layer attribute. this assumes the videoplane is at a z value of 0, and that the camera is in its default position and default lookat. you can encapsulate the circling object code in a poly~ and have several of these going simultaneously." i didn't use the poly~ encapsulation, but instead incorporated the 3-d cubes i had rendered out in maya, and it worked within the patch perfectly. here is the part of the patch which shows the layer attributes and how the spinning 3-d cube is positioned in layer 1 and layer 3, while the actual videoplane is at @layer 2.

in order to access information on the possible attributes i can use with jitter 3-d objects (OB3D), i can consult the object reference window in the max/MSP documentation. it is useful for getting a general idea of the possibilities and the rules for each jitter object i could utilise in certain situations. in the patch i am using 3 attributes with my objects. they are:
1) @depth_enable 0 – depth buffering flag (default = 1). when the flag is set, depth buffering is enabled.
2) @blend_enable 1 – blending flag (default = 0). when the flag is set, blending is enabled for all rendered objects.
3) @layer – object layer number (default = 0). when in automatic mode, the layer number determines the rendering order (low to high). objects in the same layer have no guarantee of which will be rendered first.
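the @layer behaviour can be mimicked in a few lines. here is a hedged sketch (the names are my own) of sorting draw order low-to-high, the way i use layers 1 and 3 for the cubes around the videoplane at layer 2:

```python
def draw_order(objects):
    """return object names in render order: lower layer numbers are drawn
    first, so higher layers end up visually on top."""
    return [name for name, layer in sorted(objects, key=lambda o: o[1])]

# one cube behind the videoplane, one in front, mirroring the patch
scene = [("cube_front", 3), ("videoplane", 2), ("cube_behind", 1)]
print(draw_order(scene))  # -> ['cube_behind', 'videoplane', 'cube_front']
```

with depth testing off, this draw order is the only thing deciding what occludes what, which is why the forum advice was to drive the layer number from the object's z value.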

finally, here is the whole body of the fully functioning patch::

and some screenshots of video output::


i am very delighted with the results, and knowing that my patch is working fine i can move on to the next step – experimenting with different shapes and 3-d objects, implementing some colour adjustment attributes, and expanding the interactivity idea. i have already built a fully working tracking patch, but it needs more tweaking. i managed to track movement with the jit.3m object and adjust the sensitivity of the effect applied to the 3-d objects. i must do some tests with a real camera instead of the built-in webcam to analyze tracking of movement from far away rather than from close up, because the tracking counts pixels which have changed position on the video input screen, and the closer the subject is to the camera feed, the better the tracking results will be. even though a far-away moving object doesn't produce much pixel movement in the camera feed, i can increase the effectiveness by multiplying the tracked values and using the new values to manipulate objects around the scene. here are some screenshots from the tracking patch:
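the multiplication trick is just applying gain to the tracked values. here is a hedged sketch (the gain and limit numbers are invented) of scaling small far-away motion deltas into usable control values, with a clamp so a sudden close-up move can't throw the objects off the scene:

```python
def amplify(deltas, gain=8.0, limit=100.0):
    """multiply raw tracked pixel-change values by a gain, clamping so an
    unexpectedly large movement stays within a usable control range."""
    return [max(-limit, min(limit, d * gain)) for d in deltas]

# tiny deltas from a subject far from the camera, plus one close-up spike
print(amplify([0.5, -1.0, 2.0, 40.0]))  # -> [4.0, -8.0, 16.0, 100.0]
```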

of course the patch needs general tweaking and setting of the best chromakeying values, as well as many tests with a real green screen and new 3-d elements, but i feel that the main problem has been solved and i can work on the details now.

chromakeying in max/MSP##3

March 24, 2010

i have managed to get halfway without any hassle; the final thing i need to solve is chromakeying in 3-d space. and that's where my main issues started to appear.
i created a plane in maya and exported it as an .obj file. then i imported it into the max/MSP scene along with my 3-d cube elements. i even successfully applied a live texture onto my plane. the next thing i worked on was applying chromakeying to the texture feed. at that stage i thought max would project only the keyed image onto the plane; there was a nagging thought about the rest of the plane, but i had to run a first test. what came out was a nicely keyed texture on the 3-d plane, but the "transparent" or unwanted areas were displayed in black. here is the patch showing this experiment::

if i look at the chromakey output window, the keyed areas are black there too, so when applied as a texture they remain black. i didn't have any problem when keyed live footage was superimposed on top of other footage: in the areas where it is black, the underlying footage showed through. i looked into the explanation of the chromakeying process again to see why black areas would appear on top of a plane, and started investigating alpha channel and transparency issues in jitter.
this is the information i found in the jitter documentation: "ARGB (alpha, red, green, blue)- a 4 plane char data used in jitter to deal with colours and alpha channel. fourth plane is often useful for what is known as the alpha channel—a channel that stores information about how transparent a pixel should be when overlaid on another image. In Jitter, this is the most common way to describe a color: as a combination of exact intensities of red, green, and blue… For each pixel of an image—be it a video, a picture, or any other 2D matrix—we need at least three values, one for each of the three basic colors. Therefore, for onscreen color images, we most commonly use a 2D matrix with at least three planes…" all this information refers to the 2-d environment, not 3-d. my main issue is solving transparency in 3-d space, but i am afraid that hasn't been discussed much in the max/MSP documentation. even in a 3-d environment jitter treats matrix chromakeying as 2-d, and it doesn't understand – or i haven't found the way – how to make a 3-d element become transparent in the keyed areas.
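the role of that 4th plane can be shown with a few lines of per-pixel arithmetic. this is a hedged sketch of standard alpha-over compositing (the usual formula, not code taken from jitter):

```python
def over(fg, bg):
    """composite an ARGB foreground pixel over an opaque RGB background:
    alpha = 0 lets the background through, alpha = 255 hides it."""
    a = fg[0] / 255.0
    return tuple(round(a * f + (1 - a) * b) for f, b in zip(fg[1:], bg))

keyed_out = (0, 0, 0, 0)          # fully transparent pixel (alpha plane = 0)
solid = (255, 200, 50, 10)        # fully opaque foreground pixel
print(over(keyed_out, (9, 9, 9)))  # -> (9, 9, 9): background shows through
print(over(solid, (9, 9, 9)))      # -> (200, 50, 10): foreground wins
```

this is exactly what i want the keyed areas of my videoplane to do – carry alpha = 0 so the 3-d scene behind shows through, rather than being painted black.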
in my first attempt i used a 3-d plane overlaid with a live texture. the texture was keyed out, and because nothing was displayed underneath it, it came out black. my aim is to achieve transparency in the black area, so i looked into different ways of importing video footage into the 3-d environment, and there is another option: a videoplane object instead of a plane rendered out from maya. i used the object which maps incoming video to a plane in 3-d space; this may be used to exploit hardware-accelerated rotation, scaling, interpolation and blending. here is a patch in which i used this object and chromakeyed it. unfortunately it came out with a very similar undesired outcome – alpha as a black area and no transparency.

here is a close-up with the non-keyed green-screen footage:

whereas this one is keyed:

as a result i still don't have any transparency, which doesn't allow me to see the 3-d cubes rotating behind the person. the scene is too big and the black areas shouldn't be there at all, so i need to try other options.
i looked more into 3-d compositing in jitter, and instead of importing a plane made in maya or using a videoplane, i looked at 3-d objects – more precisely, grid shapes made in jitter with the gridshape object. here is a list of the 3-d models available.

the one i am particularly interested in is the plane, and the possibility of overlaying it with a texture. i quickly ran through all of the shapes and checked their parameters and transparency options. what is good about grid-shapes is that they are transparent objects with an overlaying grid. i was quite happy to see anything which had some sort of transparency in 3-d space. here are some screenshots of the gridshapes.

i managed to incorporate a gridshape object in the same plane as my 3-d cubes. i even managed to overlay a gridshape with a texture, which i then tried to chromakey, but i didn't get my desired outcome. here is a patch of a gridshape overlaid with a video texture::

the good thing is that i can see the cubes rotating whole in 3-d space and the grid allows see-through; the bad thing is that my texture is almost invisible. it is just barely possible to make out some strange light movement over the grid, but nobody would even realize it is my texture, and it doesn't help to achieve the desired outcome.

after so many trials i started to look into forums and support files on the internet. there is not much literature published about this software, but there is a great deal of information available on the official max/MSP website. there are the amazing jitter recipes books, which have many tutorial files with explanations and patches available to download and ready to test, as well as a forum. i managed to trace some posts on the subject i was working on. here is an extract i found on the max/MSP jitter forum, one post written by a person facing a similar problem to mine::
[extract taken from: ]


very sorry to bother u again about something that must be quite simple…but i have done a test exporting a chromakeyed movie from jit.qt.record using the “animation” codec, and the alpha channel is black. is there a flag that needs to be set to enable alpha channel transparency? patch setup:jit.qt.chromakey –> jit.qt.record “write 15. animation max 600”

many thanks

unfortunately nobody answered his question, and i was left to my own findings to solve my problem. one thing i learned is that it is not really possible to perform chromakeying on 3-d objects: even though it is possible to chromakey a texture which overlays a plane in 3-d space, i will not achieve the transparency i am after. chromakeying works for two video sources – simple 2-d planes, matrices consisting of pixels, which max can deal with very easily. max has done the chromakeying in all cases without hassle; the problem i am facing is my false interpretation of what chromakeying does. to achieve transparency in 3-d space i will need to look into alpha channel adjustments – the alpha channel is a 4th channel which stores data about pixel transparency. in the chromakeying process max/MSP never gets rid of pixels or makes them transparent: each chromakeyed pixel is always replaced with another pixel and will never become transparent. i was thinking wrong and have learnt my lesson. i have understood more of the processes in max/MSP, and in order to achieve my final goal in the presentation i will need to look in more depth and try other techniques.

post-production techniques, work with max/MSP#2

March 23, 2010

i will explain here the process of building my patch for my final crit out of different small test patches. as i mentioned before, i want to make max/MSP do post-production techniques live. i have done research and i am happy to know that this program can do chromakeying, colour adjustment, and mixing of different video layers, as well as work with 3-d objects. my project goal is to perform post-production techniques in real time and implement some interactivity between the physical movement of a person and the movement of 3-d elements in the patch. here is how i want to do it::
1) there will be a person standing in front of a green screen
2) i will have a video camera set up which captures the person against the green screen (my goal is not to film it; i will supply a real-time video feed into my computer via the camera)
3) the max/MSP patch will detect the live feed and analyze the data. it will track the movement of the person's hands and legs and translate it into values which change in real time.
4) these values will be applied to the 3-d elements i want to place around the person.
5) in the output projection we will see a person surrounded by 3-d elements. these elements will move interactively depending on the person's movements. the main goal is to give the person, who is the main character of the projection, a say in how he or she wants the environment to respond.
6) i will do some real-time colouring and brightness/contrast adjustment.
for my project, firstly, i have to find out how to do chromakeying and whether it is possible to chromakey live footage. here is a patch i was working on, testing chromakeying on a video and then on a live feed.

in this screenshot you can see a max/MSP patch which processes two video sources. one video is just a movie file (green-screen footage) and the other is a live built-in camera feed from the computer. the patch chromakeys the first video by "taking out" the green colour. chromakeying is the process of superimposing one image on top of another by selective replacement of colour, and it is done in jitter by the jit.chromakey object. by specifying a colour and a few other parameters, jit.chromakey detects cells containing that colour in the first (left-hand) matrix and replaces them with the equivalent cells in the second (right-hand) matrix when it constructs the output matrix. the result is that selected colour cells of the video file are superimposed onto the live feed, as you can see in my patch's bottom window. picking the key colour is done with the suckah tool from the object palette.

when i pick the suckah tool it appears like this in patch editing mode.

the suckah tool lets me get the rgb colour beneath it, or feed in any screen coordinates to get the rgb values of that pixel. what is interesting is that i can take pixel readings with this tool from any desired area within the patch (even from pixels which are not in the actual video footage) and use the reading as the colour value to be keyed out of the video. the chromakeying itself is done with the jit.chromakey object.

the jit.chromakey object measures the chromatic distance of each of the left input's cells (pixels) from a reference color (a.k.a. "green screening"). the total chromatic distance is calculated by summing the absolute value of each color channel's distance from the reference color's corresponding channel. if the distance is less than or equal to a tolerated distance (tol) value, the right input cell is multiplied by a maximum keying (maxkey) value.
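that per-cell rule can be written out directly. here is a hedged, simplified sketch (my own function name, and it outputs either the left cell or the scaled right cell, ignoring jit.chromakey's soft-edge blending):

```python
def chromakey_cell(left, right, ref, tol=60, maxkey=1.0):
    """sum of absolute per-channel distances from the reference colour;
    within tolerance, the right (background) cell is used, scaled by maxkey."""
    distance = sum(abs(l - r) for l, r in zip(left, ref))
    if distance <= tol:
        return tuple(round(c * maxkey) for c in right)
    return left

ref_green = (40, 220, 60)
# a near-green pixel is replaced by the background cell…
print(chromakey_cell((45, 215, 65), (10, 10, 10), ref_green))  # -> (10, 10, 10)
# …while a non-green pixel passes through unchanged
print(chromakey_cell((200, 50, 50), (10, 10, 10), ref_green))  # -> (200, 50, 50)
```

this also shows why, as i discovered later, chromakeying never produces transparency: the keyed cell is always replaced with another opaque cell.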
keying out the live feed was relatively easy: i just replaced the video source with the live feed from the camera and applied the same keying objects to it. here is a patch showing this process:

in the live feed footage i picked a random pixel to see the keying effect. obviously i didn't have the consistent background which will be in my final piece, but all i wanted to see was whether keying is possible on live footage and whether it slows down the process. the keying worked without problems. my first challenge, chromakeying a live feed, has been sorted.

the next step was to try importing my 3-d objects into the max/MSP patch. max supports 3-d files with the .obj extension. i found out that maya allows you to export a scene as .obj, but certain plug-ins must be activated beforehand. i did that and successfully exported my 3-d cubes. all max does is take the polygons and place them in 3-d space. i can apply my own texturing and control their movement in 3-d space, which is exactly what i need for my project.

the cubes are made by myself, but the mushrooms are .obj files supplied by default with max/MSP. here i was testing placing two 3-d scenes together, and it works very well. at the moment both scenes rotate with the mouse at the same time, but i can reset them to their original positions separately. i will need to look into how to manipulate different 3-d objects separately. the rotation is done by the handle object.
this object responds to mouse clicks and drags in the destination by generating rotate and position messages out of its left outlet. the messages are then sent to the 3-d objects, which can be rotated and moved in space using the mouse. the next step was solving my live green-screen feed. the first idea was to create a plain 3-d plane and import it as my second 3-d object in the patch. then, knowing that i can apply textures to my 3-d objects, i would try to apply a live texture instead of a static image. i needed to research textures and how to replace an image texture with a live video texture. here is the patch which successfully does this process:

i managed to apply the live camera feed on top of my 3-d plane, and i have my 3-d cubes around this plane in the same environment. applying a texture, whether it is a static image, video or live feed, is not hard at all. i am using a prepend texture object to assign whatever data comes into its left inlet as a texture. in the patch above there is a cord going out of the prepend texture object and into the render object, which is the rendering engine for the 3-d objects in my patch; in this case the texture is applied directly onto the plane. sometimes a patch can get messy with connecting cords, and it is useful to know that you can send messages across the whole patch without links. to send the prepend texture message across to the render object, i create a texture message box and link it to the render engine, as shown here: . the message takes data from objects containing the keyword "mytexture". i can name it anything i want; it is somewhat similar to flash coding, where you assign variables and use preset values at any stage of development or anywhere in the patch.
the next step is chromakeying the live footage. i have made many attempts and they produce different results; i will explain the chromakeying problems in the next post.

post-production techniques in max/MSP #1

March 17, 2010

for my post-production project i have chosen to work in a different platform than after effects. after effects, final cut pro and adobe premiere are considered the standard editing suites in the film and video industries. post-production is a series of processes of editing footage after it has been filmed, bringing additional complements into the final footage which will be shown to the audience. there are many things we can do with footage, and editing suites are specially built for that; to mention a few: colour correction, chromakeying, bringing in additional elements on separate layers and blending them with the original footage, etc. the reason i have chosen to work with different software – namely max/MSP – is that it allows me to perform post-production techniques on the fly.
i have been vjing for a couple of years and i am very familiar with video editing in real time. i am more fond of applying spontaneous effects to footage and steering it into totally unknown ground than of sitting nailed to a chair, following a scripted editing process designed to deliver a predetermined outcome. i am more an adventurous person than a systematic task performer, and that's why i want to challenge the traditional approach to editing footage. one might ask how vjing differs from real-time post-production. vjing is not really post-production in its essence; on the contrary, vjing is a field of working with video in real time, observing the running video and adding to it on the fly. a vj has a library of clips and a number of effects to apply to the footage; a vj can mix several layers of video and toggle speed and opacity. i think vjing is a constant flow of the very moment: it never repeats itself and never gives an account of what has been made – the main aspect is experiencing the flow, experiencing "now". if i look at post-production, i am aware of a certain set of processes i can apply to my video footage, and the result can be viewed with reference and analysis, whereas a vj set is just entertainment and flux. i find max/MSP the right application for achieving and presenting my real-time post-editing. max/MSP provides such operations as chromakeying, colouration, adjusting brightness and contrast, mixing different layers, working with 3-d environments and many more, and all of that can be run and altered in real time. what after effects doesn't have is access to real-time rendering: rendering in after effects can't be manipulated in real time; values can only be set and executed by the program, and there is no access to its tools in order to steer the process while it runs. i researched the possibility of using a midi controller with after effects to adjust the opacity of effects or other editing processes in real time. as i said, it is not possible.
max/MSP is not used for preparing footage for the broadcast or film industries; it is an environment allowing me to create a personal "editing suite" – simple and basic, but more open in its interaction with data and editing processes.

to make my idea clearer, here is a video (made in after effects) which reflects my concept.

my concept can only be described through a real-time performance, which i will show at the crit. the main reason i chose not to work in after effects is that it creates a rendered product, which in my point of view is a dead piece of information. of course, over the course of film history there have been amazing film pieces created which can be watched over generations and never lose their magnificence, but that doesn't justify the rest of the forms of information, which are more likely to pollute the mind. that sort of produce has been spreading in the ocean of information which people constantly revisit again and again. the idea of the momentary and real-time (which can be recorded, of course, but i will not do it) is that there is not so much pressure and responsibility to deal with the consequences. if the material is bad or so-so, nobody will ever refer to it at all because there is no record; on the other side, if the real-time footage they witnessed was good and impressive, it will burn into a person's memory as a special moment. it is the same as with life: everything happens in real time and there is no preparation for what will come next. we remember the good and the bad things; everything in between is just a grey mass which will transcend into nothingness. i am not saying that records are bad – records which gather useful information about people on this planet and ways of life are good. there are many good things which we should conserve in terms of information and pass on to the next generations. but besides the good stuff there is also lots and lots of useless information which has taken up public and private space: publishing, tv and online advertising, youtube, many shit ads which find you even in a space as intimate as a closet. there are chunks of unwanted and deliberately enforced information. why do people spend time, money and energy on producing all of this crap while there are so many other fields needing attention?
back to my project: people have invented technologies which allow them to play with things in real time and experience creations which can be shown differently every time. that's why i prefer real-time over something with set boundaries (again, unless it is a good masterpiece i like to watch over and over again – like a sunrise).
the video above shows chromakeyed footage shot against a green screen, along with some 3-d elements. if you observe more attentively, the rotation of the cubes around the body doesn't look natural. the reason lies in the process by which i created this video: i used 3 layers in total, each of them chromakeyed so that just the elements were seen, and i layered the body in between the cube layers, which i had to render out in maya separately. this is a slightly complicated way of integrating 3-d elements around a 2-d plane in after effects, but that is not of importance. my aim was to get across the main idea and represent what exactly i want max/MSP to do.
this is a very rough breakdown of the processes shown in this video which will be translated into max/MSP.
1–i will have a real-time video feed of someone in front of a green screen. max/MSP will do real-time chromakeying and i will end up with just a person standing in a void.
2–i will have other elements made in 3-d which i will incorporate into the scene.
3–i will track the movement of the person and use those values to manipulate the 3-d elements.
at the beginning i wanted to have footage shot against a green screen and different additional 2-d and 3-d elements which could be manipulated across the scene in real time using a midi pad. but then Jason suggested this amazing interactive approach of turning the actual person standing in between those elements into the trigger for how they move around. in normal post-production an editor would try different positions and actions for the elements within the scene until totally satisfied, and then render out the final piece. there would be no interactivity between viewer and video piece: the viewer is made to perceive already-made footage, unable to interact with or direct anything in the scene. my installation leans more towards interactivity, integrating actor and editor in one person. i want to make this piece fun and to show my growth in skills using max/MSP.

post-production – work with green screen

March 17, 2010

for our post-production brief we had an introduction to greenscreen compositing. we had a well-lit greenscreen set up with 4 redheads. good lighting is a key element in capturing good footage for painless post editing. two redheads were pointed at the green background in order to get a clear and smooth shot of the green layout for keying later. the main key light, also called the “hard light”, is adjusted so that it shines on the person from an approx 45 degree angle. it is important not to direct the spotlight straight from the front, which would make the person or object of interest look very flat. there should be some shadows on the face but no shadows on the green background. to make a shadow appear smoother and more transparent, we clip some diffusion material onto the head of the key light; to get rid of the shadow completely we need to double the diffusion layer. to make chromakeying more successful we adjust one lamp behind the person. it enhances the outline shape and makes the edge brighter and, respectively, easier to key out. in this case it is important to watch out for lens flares. here is some simple chromakeying and post-editing done in after effects. just messing about::

this is the raw footage i was playing with

i did slick chromakeying with the help of a plug-in built for after effects. it is very easy to use and very effective: a great time saver, though shit at teaching you properly how to key out problem areas such as shadows. then i brought in a background image and tracked a gorilla’s head on top of my body. the tracking is not good and needs some manual adjustment in places, but for this sort of experiment i didn’t bother. i used my friend’s music mix for the sound.

i had an idea of how to make my video more twisted and fucked up. i didn’t like how it looked and it needed some sort of “strangeness” to it, nothing much, just a bit off being blunt and silly. soooo, what i did here is time-stretch the previously rendered video. it stretched not only the video flow but also the audio, and that was just what i was looking for. i played with some effects and reduced the image totally into animated noise. my aim was to create a distorted layer of the same video which i intended to blend with the original.
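the stretch itself was done in after effects, but the core idea – resampling so that the clip gets longer and the audio pitch drops along with it – can be sketched in a few lines of python/numpy. the function name and the factor below are just my illustration, not the plugin’s actual algorithm:

```python
import numpy as np

def time_stretch(samples, factor):
    # naive stretch by linear-interpolation resampling: factor > 1
    # makes the clip longer AND lowers the pitch, the same coupled
    # video/audio slowdown described above
    n_out = int(len(samples) * factor)
    positions = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(positions, np.arange(len(samples)), samples)

tone = np.sin(np.linspace(0.0, 2 * np.pi * 5, 1000))  # 5 cycles of a test tone
slow = time_stretch(tone, 2.0)                        # twice as long
```

proper pitch-preserving time stretch (what a DAW’s “stretch” mode does) is a much harder problem; this crude resample is exactly the tape-slowdown flavour the video uses.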

here is the final render with time-stretched audio and an effect layer which went, again, through some heavy effects till total non-recognition. this is just a mess, but it gave me the main idea of how to chromakey in after effects with just one click. it took me not more than an hour to come up with the final render, which made me think of how chromakeying today on a simple personal computer can be done so easily and quickly in comparison to the old techniques film people had to hassle through on more vintage and completely analog machines. bluescreen compositing (the name “greenscreen” is slightly less common in the film industry) was invented around the 1930s, originally a very expensive film process involving pricey lithographic colour separation. today it can be done with just one click of a mouse button (surely it depends on how good the footage is). video chromakeying in the television industry is referred to as CSO – colour separation overlay – a name given by the BBC in the 1960s. Petro Vlahos, the inventor of the bluescreen process, was awarded a Lifetime Achievement Award by the Academy of Motion Picture Arts and Sciences as an acknowledgment of how popular his technology had become.

T Y P E

February 12, 2010

i was researching different mediums involving type and typography::

biscuit type [group of designers 1/2/3/4 came up with this idea: to bake letterforms]

the designer thomas kohl uses plastic cups/light/wall as a platform for typography

i found this amazing play-away thingy built in Processing. it lets you navigate the letterform in a 3-d realm, and it takes the form of whichever letter you press on the keyboard.
letter “k”

play here>>>alphabot