Norway produces approximately 2 million barrels of oil per day, roughly 2% of the world's production. This project seeks financing and formal approval for building and promoting a sculpture built from the same number of used oil barrels.
Some years ago I worked on a series of images based on glitchy frames from the now-defunct RealVideo format. Back when high bandwidth was still rare, RealVideo was great for compressing streaming and on-demand video, but it would sometimes spit out frames with strange artifacts, especially when live-streaming. In the Pixelpeople series I took screenshots of interesting frames and worked on them in Photoshop, enhancing and smoothing the glitches, using the codec sort of like a collaborative partner.
With rising bandwidth and better algorithms and formats, such artifacts and glitches are pretty much gone. I've missed that surprising unpredictability as an artistic inspiration, so I've started researching software- and hardware-based ways to generate digital artifacts.
One of the tools I found was Photomosh. Photomosh lets you upload an image, or use your webcam as input, and has a ton of cool features to play with.
I downloaded my first copy of Processing (Processing.org) a few years ago, but I never got past the initial demos and small tutorials. I've been interested in generative computer art for many years, ever since I first saw the work of Marius Watz in the mid-nineties and had a stint reading Dadaist poetry and cut-ups, but I've never had the time to play with this stuff myself. Or the brains to handle the math, hehe. But then I came across Computer Arts #149 (the June 2008 issue), which has a few really interesting tutorials that basically give you enough info to understand the key concepts you need to create some very interesting apps, like the one below (slightly modified, of course; I added random colors among other things).
(Java applets no longer function in Google Chrome; nothing you can do about that.)
Oh, and I had quite a hard time figuring out how to embed my app in my WordPress blog. I kept getting heavy errors when I tried to post the HTML the Processing software generates straight into WordPress, but I eventually got it to work. Since I couldn't find any tutorials on how to do this, I decided to write my own. So here it is:
How to embed a Processing Java applet in WordPress:
First, you have to turn off the Visual editor for your user; if you don't, WordPress is 100% guaranteed to screw up your code. And remember: if you turn the Visual editor back on after finishing your post, DON'T open the post for editing again. When I did, WordPress replaced my Java embed code with a Flash embed code! Luckily I had saved this article as a Google Docs document, and could simply copy-paste it back in here.
(* Update: this might also be related to Adblock Plus, but that needs to be verified.)
Second, paste in the embed code, remembering to replace the paths with wherever you've put your own .jar file etc. You get all the info you need by choosing File > Export in Processing and opening the resulting index.html file in an editor of your choice. Note: the applet tag is slightly deprecated, so I guess I'll have to figure out how to do this with a "proper" object + embed.
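The snippet Processing exports looks something along these lines; the sketch name, archive URL, and dimensions below are placeholders you'll need to swap for the values from your own index.html:

```html
<!-- "MySketch" and the archive URL are placeholders; copy the real
     values from the index.html that Processing's File > Export creates. -->
<applet code="MySketch.class"
        archive="http://example.com/wp-content/uploads/MySketch.jar"
        width="400" height="400">
  Your browser does not support Java applets.
</applet>
```

Pasting this in the HTML (non-Visual) editor should keep WordPress from rewriting the tags.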
I’ve been thinking a lot about making a sound-based installation in Adobe Flash, using sensors and switches, and I’ve gotten around to making a few small experiments/prototypes as research, which I’m planning to share on this site later.
But I also found this old experiment I wrote in Flash 5 (!) and wanted to share it. It is a visual sequencer (*) / sound toy that lets you drag icons onto a “soundstage”, each icon representing a sample. When you press play, a line starts moving vertically, and as the line hits one of the icons, the corresponding sound is played. You can also click, drag and hold an icon, and move it on top of the moving line to trigger the sound. There are two types of sounds: the yellow icons trigger different “wet finger on glass” sounds, and the grey-white icons trigger sonar ping sounds.
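The trigger logic behind it is simple. Here is a rough sketch of it in plain Java (the class and method names are my own for illustration, not from the original .fla): each frame the playhead advances, and any icon the line crossed since the previous frame fires its sample once.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the soundstage playhead: icons sit at fixed positions,
// and each frame we trigger every icon the line has just swept past.
public class Soundstage {
    static class Icon {
        final double y;        // position of the icon on the stage
        final String sample;   // e.g. "glass" or "sonar"
        Icon(double y, String sample) { this.y = y; this.sample = sample; }
    }

    private final List<Icon> icons = new ArrayList<>();
    private double playhead = 0;

    void addIcon(double y, String sample) { icons.add(new Icon(y, sample)); }

    // Advance the playhead and return the samples to play this frame.
    List<String> step(double speed) {
        double prev = playhead;
        playhead += speed;
        List<String> triggered = new ArrayList<>();
        for (Icon icon : icons) {
            // Fire once when the line sweeps across the icon's position.
            if (icon.y > prev && icon.y <= playhead) {
                triggered.add(icon.sample);
            }
        }
        return triggered;
    }

    public static void main(String[] args) {
        Soundstage stage = new Soundstage();
        stage.addIcon(5, "glass");
        stage.addIcon(15, "sonar");
        System.out.println(stage.step(10)); // [glass]
        System.out.println(stage.step(10)); // [sonar]
    }
}
```

The same sweep test also covers the drag-an-icon-onto-the-line trick: the icon's position just has to land inside the interval the playhead covers that frame.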
The code is pretty old, and there are WAY better ways of making something like this with AS3, but still, here is the source code (.fla) for it (it also includes the samples, which you are free to use in any way you like).
I still think it is a pretty nice little project, but I am toying with the idea of replacing the click and drag with a webcam mounted in the ceiling, and letting people moving around on the floor trigger the sounds. I have quite a lot to learn before I can make something like that, but I’ll get there!
* OK, so I guess it is a stretch to call this a sequencer, but I wanted to create a fun, easy and interesting way to generate a sound collage.
Pixelpeople is a series of digital portraits I made many years ago, back when I still had time to frolic. Some of them are still frames grabbed from heavily compressed RealVideo, a format known for creating rather strange encoding artifacts. Others are still frames from television; I believe one of them is even based on a still from Baywatch :). I was inspired by Dave McKean at the time, which is where the layers of textures on some of the portraits came from.