Assignment: Computer Vision Proposal – Submit Here by Nov. 13

Submit the proposal below as a comment. Use plain text (no PDFs or other files), and attach the sketch. If hand drawn, either scan or photograph the sketch to upload. See the syllabus for the full description of the assignment.

27 thoughts on “Assignment: Computer Vision Proposal – Submit Here by Nov. 13”

  1. The idea for my project is a game that requires real-life hand-eye coordination: users interact with the projection surface as well as an external ball. Users will view ‘enemies’ moving along the projected surface and will be tasked with aiming and throwing a small ball at them. If an enemy is successfully hit, it will be replaced by an image marker, such as a splat, to signify it has been defeated. The surface will be made of a latex-like material suspended in a wooden frame. When the ball pushes the material back, it will pass a threshold picked up by the Kinect and mark a point, which is then checked against enemy positions. This is supposed to be a fun spin on the concept of projection-based games, where the player doesn’t become the controller but instead uses the real world to interact with the projected one.

    My project is similar to the LU – interactive playground project, developed as a way to make children’s gym classes more entertaining. That project also uses projection and collision detection to make a game, but it functions differently from mine, as it reads collisions in a different manner.

    My project bends this concept to be used purely for entertainment and potentially competition, rather than health.
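    The depth-threshold hit detection described above could be sketched roughly as follows. This is a minimal sketch in Python for clarity; the actual project would be a Processing sketch reading Kinect depth data, and all names and values here are hypothetical:

```python
# Sketch of the hit-detection logic: the Kinect reads the latex surface's
# depth each frame; pixels pushed past a threshold toward the sensor mark
# the ball's impact point, which is then checked against an enemy circle.

def find_hit_point(depth_frame, baseline_mm, threshold_mm):
    """Return the centroid of pixels pushed past the depth threshold, or None."""
    pushed = [(x, y)
              for y, row in enumerate(depth_frame)
              for x, d in enumerate(row)
              if baseline_mm - d > threshold_mm]  # surface pushed toward Kinect
    if not pushed:
        return None
    cx = sum(p[0] for p in pushed) / len(pushed)
    cy = sum(p[1] for p in pushed) / len(pushed)
    return (cx, cy)

def hit_enemy(hit, enemy_x, enemy_y, enemy_radius):
    """Did the ball's impact point land inside an enemy circle?"""
    if hit is None:
        return False
    dx, dy = hit[0] - enemy_x, hit[1] - enemy_y
    return dx * dx + dy * dy <= enemy_radius * enemy_radius
```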

  2. For my computer vision assignment, I want to use the Kinect to track movement, which will create generative art around the user. The projection would work best on a white surface (e.g. a wall). The user, preferably a dancer who can make the generative art flow in an aesthetically pleasing way, will use their body to control the direction of the art. The entire body will be tracked, and depending on how each person moves, each user will create a different scenic background around them.
    I was inspired by ChristianMioLoclair, who made a similar project that allowed generative art to be controlled by movement. My concept is similar, but if time permits, I would want to create different generated effects specific to music genres.
    I was also inspired by actual soundboard users. The YouTube user Kaskobi creates Launchpad covers and is able to remix songs with the press of a button, all while having a New Media aesthetic. When each button is pressed, it creates a colorful choreography in tune with the music. I was interested in the idea of projecting colors around the human body, all while moving in tune to music.

    ChristianMioLoclair’s project:

    Examples of Kaskobi’s music:

  3. Through the use of multiple projectors and human interaction, Circus takes the user on a journey of exploration. Beginning in a secluded room, the user is brought to actively engage with their surroundings. The activity is simple: the user is to walk from one side of the room to the other, journeying towards a new stage. However, the journey is not as simple as opening a door.

    Circus thrusts the user into a labyrinth of awe, abstraction, and fear. The ideas surrounding the project are primal to human nature. The project seeks to expose these emotions in an exciting, immersive, and thoughtful performance of personal engagement.

    The narrative is specific to each person involved with Circus. Unlike The Artist, this project seeks to break narrative conventions; instead, it seeks to provoke emotive responses in the user through sound, movement, and visuals.

    The ideas surrounding the project come from numerous media sources. The idea of evoking emotion through abstraction comes from the feature and short films of David Lynch. In Twin Peaks, we see numerous methods the veteran filmmaker uses to create a sense of unease, disorientation, and horror. The horror imagery is inspired by the paintings of Francis Bacon, who utilizes abstraction and distortion to create truly captivating paintings. The surrealist imagery is inspired by the paintings of René Magritte, a leading artist of the surrealist movement.

    My project is a blend of Akash’s Kinect. Dance. and virtual reality experiences such as The RooM : VR 360° horror. Each project tethers human interaction to the experience: the former uses the user’s movement to create an animation, while the latter relies on what a user can see in order to create an environment that elicits horror.

    Attached is the general setup for the installation. It is simply two screens, facing one another with the moving audience member in the center.


    David Lynch 1.

    Francis Bacon

    René Magritte

    Kinect. Dance.

    The RooM : VR 360° horror

  4. For this project, my vision is to create an interactive holiday-themed “photo booth” which, combined with physical props, can be used to create virtual holiday cards that can be shared to social media or sent via email to family and friends. Using a Kinect and OpenCV, the software will count down until a photo is taken, and then present the user(s) with an opportunity to customize the photo with stickers, text, and drawings. The user(s) can use their hands to “grab” a tool with the cursor from the sidebar, select an option, and then decorate their photo however they would like before exporting it to their social media platform of choice (or email).

    My inspiration and obsession with the idea of photo booths stems from Andy Warhol’s similar fascination: from 1963 to 1966 he produced hundreds of photo booth portraits depicting various performance narratives, which he used to produce silkscreen paintings. The Andy Warhol Museum in Pittsburgh has a photo booth installation meant to provide visitors with a similar experience, which I also wish to reproduce while adding a touch of customization by bringing the concept into the digital age.

    The booth itself will consist of a single tower which holds both a projector and the Kinect in a single unit, attached to a bin which holds all of the physical objects that users may wish to hold for their photo. Optionally, a backdrop could be added to the photo either digitally through the software or physically using a simple seasonal tablecloth draped over a constructed frame.
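    The hands-as-cursor tool selection described above is often implemented with a "dwell" gesture: the cursor must hover over a button for a set number of frames before it is grabbed. A minimal sketch of that logic (in Python for clarity; the project itself would be a Processing/Kinect sketch, and all names here are hypothetical):

```python
# Dwell-based selection: feed in the button under the cursor each frame;
# a button is "grabbed" only after the cursor stays on it long enough.

class DwellSelector:
    def __init__(self, dwell_frames=30):
        self.dwell_frames = dwell_frames  # frames the cursor must stay put
        self.current = None               # button currently hovered
        self.count = 0

    def update(self, hovered_button):
        """Call once per frame; returns the button name once it is selected."""
        if hovered_button is None or hovered_button != self.current:
            self.current = hovered_button  # moved to a new button: restart timer
            self.count = 0
            return None
        self.count += 1
        if self.count >= self.dwell_frames:
            self.count = 0
            return self.current
        return None
```

    Dwell selection avoids accidental clicks, which matters when the "mouse button" is a bare hand in mid-air.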

  5. Jesse Stothers – HIVE_Continued
    Nov 2017
    Computer Vision Proposal


    The working name of this piece is HIVE_Continued. It will be a continuation of the projection mapping project I submitted earlier in the semester, incorporating the sculpture I created for HIVE, the content I want to explore thematically, and elements of code and visuals from the original project. This project will depart from the original by exploring the social element of interaction with the piece itself, as its visuals explore the social dynamics of honey bees and humans. This mirroring of the interaction depicted in the piece and the physical interaction with the piece that people will explore is one I began to develop in HIVE, but I think it will be beneficial to explore it further. The piece will appear similar to the original: the foam core structure will be 3 feet wide and 2 feet tall, and the key difference will lie in the code techniques displayed in HIVE_Continued. Combining new code with the original keystone Processing code that created the dynamic four-quadrant visualization of bees and humans, I hope to continue to highlight similarities in motivations, priorities, and patterns in bees and people. It is important for this piece to be a departure from the original in technique as well as in the focus of its goal. Using the Kinect average point tracker, the viewer will be able to navigate several visualizations, activating a quadrant’s presentation through their movement. This direct interaction builds upon the experience I began to develop previously, but strengthens the theme of human interaction, as HIVE will become a hive in a more literal sense: a hub of interaction, a meeting place, where audiences will be able to gather and walk around the sculpture.

    The videos will most likely include some of the original video edits I created for HIVE, but I hope to include even more footage that didn’t make it into the first edit. The original inspirations can still be sourced in the new piece. For example, Theaster Gates’ How To Build A House Museum has the HouseBerg sculpture in its Progress Palace room, which I really admired and originally found inspiration in. But due to the new technology I will incorporate, another artist whose piece is helping shape how I want the new piece to exist is Mathew Biederman’s Perspection (2015). The importance of physical structures for a computer vision project is one I am concerned with, and it was demonstrated to a high level in Biederman’s piece. The scale of the piece also played a role in the success of his work; the ability to demand the attention of the space is one that I wish to explore. I want my piece to be a social experience that can create an environment around the structure just as Perspection did, while maintaining aesthetically pleasing visuals.

    Attached are several shots of what the activated quadrants would look like, as well as the original sketch, combined as one image.
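    The quadrant activation described above — mapping the Kinect average-point position to one of four display quadrants — reduces to a simple classification. A minimal sketch (Python for clarity; the project would use Processing, and the coordinate layout here is an assumption):

```python
# Map the tracked average point to the quadrant whose visualization
# should be activated: 0 = top-left, 1 = top-right,
# 2 = bottom-left, 3 = bottom-right.

def active_quadrant(x, y, width, height):
    col = 0 if x < width / 2 else 1
    row = 0 if y < height / 2 else 1
    return row * 2 + col
```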

  6. For this project, I have decided to create an obstacle-dodging game based on an indie game known as Neon Drive. The player must use gestures to move a vehicle from side to side in order to dodge the oncoming obstacles. As time progresses, the obstacles will approach the user at a faster pace, creating a more difficult challenge to beat the set high score. The game itself has no end objective, simply being a form of interactive endless-runner experience.

    (Neon Drive Gameplay)

    The game will be produced in Processing (of course), manipulating video loops and sprites for much of the graphics to produce a more eye-catching experience.

    This link is an example of how the Kinect has been utilized for a similar game in the past; however, that game differs from my actual intentions.

    Attached is a basic render of how the game could possibly look.
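    The difficulty ramp and dodge check described in this proposal could be sketched as follows (Python for clarity; lane layout, speeds, and ramp rate are hypothetical tuning values, not part of the proposal):

```python
# Obstacles approach faster as play time increases, capped at a maximum;
# the player is hit when an obstacle reaches them in their lane.

def obstacle_speed(elapsed_seconds, base_speed=4.0, ramp=0.1, max_speed=20.0):
    """Approach speed grows linearly with elapsed time, up to max_speed."""
    return min(base_speed + ramp * elapsed_seconds, max_speed)

def collided(player_lane, obstacle_lane, obstacle_z, hit_z=1.0):
    """True if an obstacle has reached the player in the same lane."""
    return player_lane == obstacle_lane and obstacle_z <= hit_z
```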

  7. For my computer vision project, I hope to make interactive 3D pictures using head tracking. These pictures will be my own, split into foreground and background elements in Photoshop, then reassembled in Processing to their original state, but with layers. This way, when the user moves to the left or right, the layers will mirror that. The foreground elements will move faster, as they are closer, and the background elements will move slower to give a sense of depth. These layers will also enlarge as you get closer to the image, creating a 3D environment. I’ve done something similar to this before with Adobe Flash:
    As I didn’t feel like this would be quite enough, I thought it might be interesting to bring these images to life with sound. There will be a base soundtrack, and depending on where the user’s hands are, sound effects. These effects will relate to the content of the pictures, such as the rustling of trees, traffic, or birds.
    I’m hoping to be able to have 5 pictures, with the user being able to select them along the bottom, as they’ll all be placed in a row.
    This idea was based on Johnny Lee’s Head Tracking using the Wii Remote video, in which at one point he demonstrates this technology with a picture. I was also intrigued by the use of the parallax effect in the documentary Killing Kennedy (!/loveloss-lockup). This is an example where the user is able to explore a photograph using their mouse. I hope to accomplish something similar, but instead of using a mouse to explore, it would be the user’s body.
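    The per-layer parallax described above (foreground shifts more than background, and layers grow as the viewer approaches) could be sketched like this. It is a simplified Python sketch of the math only; the real sketch would be in Processing, and the depth scale and strength constants are hypothetical:

```python
# Each layer has a depth from 0.0 (far background) to 1.0 (foreground).
# Layers shift opposite to head movement, nearer layers shifting more,
# so the scene appears to rotate around the viewer.

def layer_offset(head_x, center_x, depth, strength=0.5):
    """Horizontal pixel shift for one layer given the tracked head x."""
    return -(head_x - center_x) * depth * strength

def layer_scale(head_z, base_z, depth, strength=0.3):
    """Layers enlarge as the viewer approaches; nearer layers grow faster."""
    approach = max(0.0, (base_z - head_z) / base_z)  # 0 at the base distance
    return 1.0 + approach * depth * strength
```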

  8. Christian McKay
    November 10th, 2017
    Computer Vision Proposal

    For my Computer Vision project, I will be utilizing the Kinect to create an interactive sound-sampling and visual experience. The interaction will track the viewer’s body movements via a small cursor, and by using their hands, they will be able to trigger and loop various sound sample files that will randomize and create a different basic graphics animation in the centre of the frame based on the type of sound file read by the program. Similar to a sound sampler such as an MPC or a Maschine, this program will emulate the experience of using that type of technology, inviting users to create their own basic loops and patterns that can be saved and exported if desired. While the experience is running idle, it will feature a simple screen utilizing some of the point cloud technology to create an inviting and interesting piece for viewers to interact with. The work that inspired me to create an interactive digital environment for users to explore was Myron Krueger’s Videoplace. I found this work very interesting in how it utilizes computer vision to capture and illustrate movement as part of the piece. It inspired me to create an experience that people can view as an artistic piece on its own and, combined with user interaction, as a lasting experience that will hopefully work effectively for the purposes of this project.

  9. For my computer vision project, I’m going to make a murder mystery story. The victim is the cook, murdered with a kitchen knife by the cook’s assistant, who wants her job. The two other suspects are the widowed countess who owns the house, who is planning on firing the cook, and the gardener, who is in love with the cook. There will be a brief video setting up the story before players move between the three different scenes to find clues and decide who the murderer is. When a player’s hand hovers over a clue, it will expand so they can see it. On the final screen, players will put their hand over who they think killed the cook. After they decide, there will be an explanation of the murder.

    This project is inspired by the Xbox Kinect game D4: Dark Dreams Don’t Die, which is “a sort of noir murder-mystery where you can interact with the cel-shaded surroundings either with a controller or Microsoft’s do-all sensor. The latter of which apparently has you ‘grabbing’ one of the characters by her shoulders and pulling her off a kitchen table”. My game is going to have similar interactions between the player and the game as Dark Dreams Don’t Die, and a similar vibe, but a little less mature. The murder itself is inspired by the board game Clue, where all the players are in the same house and you have to figure out who the murderer is. In Clue you have to have a victim, a murderer, and a murder weapon, so I’ve adopted this style for my game as well.

  10. This project has the user move their hands around a certain confined area in order to produce sound. On the screen, there would be a square split into eight separate smaller vertical rectangles from left to right, each containing a note of the basic E major scale. These rectangles would have different colours to differentiate them from each other, each would display the letter of the note it represents above it, and each would produce the sound corresponding to that note. There would also be a sharp, natural, and flat symbol located on either side of the rectangles. The user would interact with this piece by moving their hands, tracked by a Kinect. Once the threshold has been surpassed, the Kinect will register their hands and they will be able to move their hands across the rectangles to produce sound. If they move their hand higher, the note becomes higher pitched; if they move their hand lower, the note becomes lower pitched; and if they keep their hand in the centre, it plays the natural note. Using the note ‘E’ as an example, a higher hand would make the sound of an E sharp, a hand in the centre would make the sound of an E natural, and a lower hand would make the sound of an E flat.

    The main idea of this project is essentially to make an instrument-like piece that people could interact with, even without any musical knowledge. One doesn’t need to learn hand positions, tuning, musical scales, music theory, etc. to play with the piece. It would be simple enough for anyone to use, yet still allow people more experienced in music to play songs on it. There are two main inspirations for this project: the Theremin and the Mi.Mu gloves. The Theremin is an electronic instrument with two metal antennas that send electrical signals between each other; the player moves one hand to control the frequency, with the other hand used for volume. The Mi.Mu gloves, on the other hand, are a wearable instrument that connects to software, allowing the user to change and control parameters such as pitch, faders, volume, and even a singer’s voice. Both the Theremin and the Mi.Mu gloves inspired this piece through their use of hand motions to produce and change sound while working in a 2D or 3D space. My project will differ from these two as it will be done using code and a Kinect, and will use only musical notes rather than electronically altered sounds. As it will be done through code, there will also be no physical object the user interacts with, just the Kinect picking up their hand movements.

    Referenced Work:


    Mi.Mu Gloves:
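    The note-selection rule described in this proposal (horizontal band picks the scale degree, vertical hand height picks sharp/natural/flat) maps cleanly to a small function. A minimal sketch, in Python for clarity; the project would use Processing with Kinect hand tracking, and the screen thirds used here are an assumption:

```python
# Eight vertical bands carry the E major scale; the hand's height picks
# the accidental: top third = sharp, middle = natural, bottom = flat.

E_MAJOR = ["E", "F#", "G#", "A", "B", "C#", "D#", "E"]

def note_for_hand(x, y, width, height):
    """Return e.g. 'B natural' or 'E flat' for a tracked hand position."""
    band = min(int(x / width * len(E_MAJOR)), len(E_MAJOR) - 1)
    if y < height / 3:          # top third of the screen
        accidental = "sharp"
    elif y < 2 * height / 3:    # middle third
        accidental = "natural"
    else:                       # bottom third
        accidental = "flat"
    return f"{E_MAJOR[band]} {accidental}"
```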

  11. For my second project, I wish to create an interactive video installation. It will be projected onto a single screen and will utilize a Kinect. The user will be instructed to stand a few feet in front of the Kinect and move their hand to control the video. The Kinect will sense the 3-dimensional location of a user’s hand and control the video accordingly. I will film a series of videos at an ultra-high frame rate, which will allow them to be slowed down tremendously; I will be aiming for 240 FPS. The user’s movement will allow them to alter the speed of the videos, speeding up what they choose to skip and slowing down what they choose to examine. A large runtime (hours) of footage will be used; there will be so much content to play with that one person or group could not possibly experience it all. Some video will be of very interesting events (which would be amazing in slow motion), while others will be of not-so-interesting events. Some video may be beautiful, emotional events, and some may be slightly romantic. Some might be boring and political, and some may be inspiring. An amalgamation of scenes such as these will allow for different user experiences, as different people will want to see or skip different things.

    The attached diagram shows the Slow-Motion and Fast-Forward buttons, with the (normal speed) Play button in the middle. The user’s hands will be used to manipulate the video, with these controls being sensed by the Kinect.

    I am currently experimenting with editing techniques to see if applying other edits (perspective, viewing angle, color gradients, etc) would be as fun to play with as speed.
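    The three-zone speed control described above could be sketched as follows (Python for clarity; the actual rates are hypothetical — 240 fps footage played back at 30 fps gives 1/8 speed, which motivates the slow-motion value used here):

```python
# Map the tracked hand's horizontal position to a playback-rate multiplier:
# left third = Slow-Motion, middle third = Play, right third = Fast-Forward.

def playback_rate(hand_x, width, slow=0.125, fast=4.0):
    if hand_x < width / 3:
        return slow
    if hand_x < 2 * width / 3:
        return 1.0
    return fast
```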

  12. As Christmas is coming close, I have chosen a Christmas theme for my computer vision project. Using face recognition, users will be shown wearing a Santa hat and a gingerbread glove. There will be five Christmas ornament balls at the top of the screen; each ball will represent a note (C D E F G). Users will touch an ornament, which will play its note. The notes to Jingle Bells will be displayed at the bottom of the screen as a guide for the player to play the song. There will be snow falling as well for added aesthetics.

    The idea was to allow users to enjoy something simple and fun for the festive season. I wanted users to get into the festive atmosphere by playing a holiday song. Music has always been something that brings people together, which influenced my decision to include it.
    This was inspired by Myron Krueger’s Videoplace (1985). As one of the first artists to work in this space, he reached into the air touching letters, which enabled him to type without touching the keyboard. My idea has a similar concept; the difference is that instead of a keyboard, the user will be touching ornaments, like keys on a piano, to play a song.

  13. Connor Vine
    November 13th, 2017
    RTA948 – Interactive Spaces
    Computer Vision/Kinect Proposal

    Kinect Synthesizer

    My idea for this project is to create a virtual instrument that uses the Kinect as a control surface, using the position of the user’s hands to determine the note that is played, and using the range capabilities of the Kinect to determine the volume of the note. There will also be the ability to shift the range of the instrument up and down an octave, and possibly a tone/effects selector to further customize the sounds the synthesizer can create. Aesthetically the synthesizer will be rather basic, as it’s meant to focus on functionality instead of aesthetics, but the current plan is to mimic the aesthetic of the DaftDisco student project, using basic shapes to represent collision spaces that trigger notes when the user’s hands are within the boxes and within a certain depth parameter.

    Another inspiration for this project is Very Nervous System by David Rokeby, which also uses the position and depth of the user’s extremities to create music. The main change I will be making from Rokeby’s project is creating more of an instrument than an experience, where the user will be able to easily control the pitch, length, effects, and velocity of the notes played.
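    The trigger boxes, octave shift, and range-based volume described above could be sketched roughly like this (Python for clarity; the project would be a Processing sketch, and the box layout, depth window, and volume range are hypothetical):

```python
# A note fires when the hand is inside a box AND within the depth window;
# the hand's distance from the Kinect maps to volume.

def box_hit(hand, box, depth_min, depth_max):
    """hand = (x, y, z) in pixels/mm; box = (x, y, w, h) in screen space."""
    x, y, z = hand
    bx, by, bw, bh = box
    return bx <= x < bx + bw and by <= y < by + bh and depth_min <= z <= depth_max

def note_frequency(semitone_from_a4, octave_shift=0):
    """Equal-temperament pitch, shiftable by whole octaves (A4 = 440 Hz)."""
    return 440.0 * 2 ** (semitone_from_a4 / 12 + octave_shift)

def volume_from_depth(z, near=500, far=3000):
    """Closer hands play louder, scaled 0.0 to 1.0 over the sensor's range."""
    z = min(max(z, near), far)
    return (far - z) / (far - near)
```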

  14. Computer Vision Proposal

    For my Computer Vision project I’ll be mixing Universal Studios’ interactive wand attractions with Design I/O’s environment/learning-through-doing attitude. The user will be given a “wand”, an item that can be tracked easily via Kinect. When the user traces a pattern with this wand, it will trigger a particle effect and a spell will be cast.

    Projected in front of the user will be an environment that changes depending on the spells cast at it, e.g. a house burns up, a lake turns to ice, a tree grows from sapling to full size.

    The plan is to make as smooth and “magical” an experience as possible, by drawing on these two works as points of inspiration.

  15. For my computer vision assignment, I plan on making a music-based visual experience. This was initially just a passing idea I thought would be really interesting for a virtual reality experience, but I thought it could also work, in a less immersive way, as a computer vision project. I plan on creating an array of 2D objects, most likely shapes (circles, squares, lines, etc.), and adding movement to them based on the tempo/rhythm of a song. Initially, the objects will appear to be moving in 2D space, but then, using CV, the user would be able to perform defined movements to zoom in and out, and rotate left and right. When the user zooms in and out, in theory it would create a depth of field, with some objects becoming larger than others because they would be closer to the person in the virtual space. The same idea applies to rotation: when the user rotates, the shapes would distort depending on the person’s point of view.

    Inspiration for my project comes from visuals from music YouTube channels such as Monstercat or Chill Nation, which use minimalistic visuals that move in accordance with the music. This is what initially gave me the idea: being able to see these visuals in a 3D space could enhance the experience of a song, making it not only auditory but visual as well.

  16. For the computer vision project, I am going to make an interactive game inspired by the new release of the horror movie IT. This game will use the average point tracking example, along with a Kinect, to track the user playing the game. The game will be projected onto a wall for each player to participate. The game will track the user’s arm with a balloon. The objective of the game is to discover three areas of the background image in order to get to the next level. The balloon will follow the subject via the Kinect tracker, but the subject does not know where the discoveries are. The subject will wave their arm around the screen in order to find the three special areas. Once the balloon collides with one of the three special parts of the screen, a surprise will pop up. Once three surprises are found, the user moves on to the next level. There are two levels and six surprises in total to find, three per level. Since this movie has become extremely popular since its release, this game is a great addition for fans of the movie. It gives them a chance to interact with the subject matter and feel as if they are a part of the movie, trying to discover where Pennywise (the clown) is. The Last of Us is a post-apocalyptic horror video game in which players control two characters, Joel and Ellie, through the post-apocalyptic United States. Users say that the intensity of this game could make you sweat because of how realistic the scenarios are. This concept is something I want to incorporate into my project: I want the viewers to feel as if they are walking through an abandoned house, frightened by things that pop out at them. This will work because the subject playing the game will not know where the surprises are located on the screen. Arcade Fire has created a lot of music videos using computer vision; a well-known example is their Just A Reflektor interactive music video. This interaction tracks an individual and changes the filter on the subject at different points in the song. I drew aesthetic inspiration from this piece, as it has an eerie, horror-esque look to it.

    The Last of Us:
    Just a Reflektor:
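    The hidden-surprise mechanic described above — testing the tracked arm position against invisible targets and advancing the level when all three are found — could be sketched like this (Python for clarity; coordinates, radius, and function names are hypothetical):

```python
# Each level has three invisible circular targets; the balloon cursor
# (the tracked arm position) reveals one when it touches it.

def check_surprises(arm_pos, surprises, found, radius=60):
    """Mark any hidden target the balloon touches; return the updated set."""
    ax, ay = arm_pos
    for i, (sx, sy) in enumerate(surprises):
        if (ax - sx) ** 2 + (ay - sy) ** 2 <= radius ** 2:
            found.add(i)
    return found

def level_complete(found, total=3):
    return len(found) >= total
```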

  17. For our CV project, we want to make a light installation. We are going to make a “night-light” lamp that will project a starry night sky on surrounding surfaces. It is going to be a hexagonal prism made out of acrylic with a galaxy cut/etched into it. The piece will change colour (purples, pinks, & blues) and its spinning direction based on the users’ movement along an x and y-axis. The users’ movement will be tracked using a Kinect. The light piece will be set up inside a tent made of white fabric for an immersive experience. The idea came from something very different than what is proposed. There’s this artist named Andrew O’Malley who makes art using LEDs, and he made a project called In Transit: NYC, it is an animated acrylic light sculpture that shows the NYC subway map. We really liked the combination of acrylic and LEDs and wanted to incorporate that into our project.

  18. For our CV project we want to make a light installation. We are going to make a “night-light” lamp that will project a starry night sky on surrounding surfaces. It is going to be a hexagonal prism made out of acrylic with a galaxy cut/etched into it. The piece will change colour (purples, pinks, & blues) and its spinning direction based on the users’ movement along an x- and y-axis. The users’ movement will be tracked using a Kinect. The light piece will be set up inside a tent made of white fabric for an immersive experience. The idea came from something very different than what is proposed. There’s this artist named Andrew O’Malley who makes art using LEDs, and he made a project called In Transit: NYC, an animated acrylic light sculpture that shows the NYC subway map. We really liked the combination of acrylic and LEDs and wanted to incorporate that in our project.

  19. I really want to create an immersive experience for this interactive project. I’m hoping to project onto a surface other than the wall (like cloth) for a warmer, less dull experience. The original projection will be a space-like scene. The idea is that the user will be able to draw and manipulate this scene through the movement of their hands and their physical interaction with the cloth. Pressing into the cloth will result in more intense reactions (A), while hovering will create lighter reactions (B). Sensible, created in 2007 by Biopus, is an interactive installation in which participants are involved in a kind of game between vegetables, herbivores, and carnivores. It utilizes a touch screen which, through human touch, allows the movement of these creatures. The way the user interacts with the creatures creates real-time sound, reacting to the “population density, the amount of energy the organisms spend in their actions, the levels of pleasure and displeasure of each organism…”. I’d like my work to be inspired by and emulate this same use of visual detail and audio. Being able to physically interact with something and immediately be met with a reaction is often satisfying, and would be successful in a user-oriented installation.

  20. For this project, I plan on having an interactive installation aimed at all ages. Using a microphone as the interactive object the audience can play around with, I will create a code-based reaction to the sounds the microphone receives. The screen will show 10 little black rectangles along the y-axis on both sides of the screen (it will look like a scale). The notes will range from low to high (low being the bottom of the y-axis, and high being the top). As a person sings a low note into the microphone, small bubbles in a variation of light colors will shoot out of the bottom rectangles on both sides. As the notes get higher, the bubbles will come out of rectangles that are higher on the y-axis, and the bubbles will become brighter and larger. This will be projected onto a flat white surface (wall or whiteboard). I was inspired by the project shown in class by Jaap Blonk and Golan Levin, Messa Di Voce. I enjoyed how fun it was to watch and was inspired to create an installation that is fun to interact with and also interesting to watch. The interaction between sound and screen is something I’d like to work more with, and Messa Di Voce showed how a simple interaction with variations of different sounds can make people laugh. My project’s basic sound-and-screen interaction will be similar to Blonk and Levin’s; however, the screen’s reaction to the sound will be shown differently, since the sounds will be broken down into low and high notes rather than just noise.

    Referenced work:
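    The pitch-to-rectangle mapping described above could be sketched as follows (Python for clarity; the frequency range and bubble sizes are hypothetical tuning values, and the real sketch would analyze the microphone input in Processing):

```python
# Map a detected pitch to one of the 10 rectangles per side, and scale
# bubble size (and, by the same factor, brightness) with pitch.

def rectangle_index(freq_hz, low_hz=80.0, high_hz=1000.0, rows=10):
    """0 = bottom rectangle (lowest notes), rows-1 = top (highest)."""
    t = (freq_hz - low_hz) / (high_hz - low_hz)
    t = min(max(t, 0.0), 1.0)
    return min(int(t * rows), rows - 1)

def bubble_size(freq_hz, low_hz=80.0, high_hz=1000.0, min_r=5, max_r=40):
    """Higher notes produce larger bubbles."""
    t = min(max((freq_hz - low_hz) / (high_hz - low_hz), 0.0), 1.0)
    return min_r + t * (max_r - min_r)
```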

  21. I plan to make a horror ARG/interactive experience using the Kinect. The player will begin in a wooded area and have to traverse it, find out what’s going on and why they’re there (or at least have it hinted at), and get to safety. The actual interaction will be done with a flashlight: the player will move it around the screen in order to see certain parts of the video. The rest of the video, outside where the flashlight points, will remain dark. I will implement audio cues to try to lead the player to look in a certain direction. I also want to implement a function where the character may switch perspectives by moving left or right, or perhaps even speed up by moving quickly.

    My main focus of the game is to frighten and unnerve the player while creating an immersive experience. For inspiration I look to “Night Terrors” by Novum Analytics, where the player actually turns their house into the ARG. While I would be unable to create this kind of ARG, I want to replicate the filming style and the scares.

    Another inspiration of mine for the technical aspect is Arcade Fire’s Reflektor and similar interactive videos. I take inspiration from the way the interaction is made with the user and from its aesthetics. I want to recreate the feeling of looking through a small section to see the whole video.
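    The flashlight reveal described in this proposal is essentially a per-pixel brightness mask centred on the tracked light position. A minimal sketch (Python for clarity; in practice this would be a Processing shader or mask image, and the radius/falloff values are hypothetical):

```python
# Video pixels are darkened by distance from the flashlight position,
# leaving a fully lit circle with a soft edge around it.

def pixel_brightness(px, py, light_x, light_y, radius=120, falloff=60):
    """1.0 inside the beam, fading to 0.0 past radius + falloff."""
    d = ((px - light_x) ** 2 + (py - light_y) ** 2) ** 0.5
    if d <= radius:
        return 1.0
    if d >= radius + falloff:
        return 0.0
    return 1.0 - (d - radius) / falloff
```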

  22. For my computer vision project, I want to create an interactive installation that uses the movements of participants to generate music. Inspired by David Rokeby’s Very Nervous System, a Kinect sensor will track each participant’s body and hands in order to control the sounds.

    The location and movement of each hand will alter properties such as the pitch or tempo of a predetermined set of sound layers. By using the 3D tracking capabilities of the sensor, the depth of a participant’s movements could alter other qualities of the sound, such as its texture or reverb, adding a further layer of interactivity and complexity to the code’s output. In addition, real-time effects or sound samples could be mapped to trigger when the user executes a certain motion or enters a specific area in space (which could be accomplished using tools such as SuperCollider, or Minim for Processing).
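    One way to prototype the hand-to-sound mapping is a simple function from normalized Kinect skeleton coordinates to sound parameters. The Python sketch below is illustrative only; the specific ranges (BPM, MIDI note numbers, reverb mix) are assumptions, and the real version would feed them to SuperCollider or Minim:

```python
def hand_to_sound(x_norm, y_norm, z_norm):
    """Map normalized hand coordinates (0..1, e.g. from Kinect skeleton
    data) to sound parameters: x -> tempo, y -> pitch, depth z -> reverb."""
    tempo_bpm = 60 + 120 * x_norm       # 60..180 BPM, left to right
    midi_pitch = 40 + int(40 * y_norm)  # MIDI note 40..80, higher hand = higher note
    reverb_mix = z_norm                 # stepping closer/farther sets wet/dry mix
    return tempo_bpm, midi_pitch, reverb_mix
```

    Keeping each axis bound to a single parameter makes the cause-and-effect legible to the participant, which is much of what made Very Nervous System feel responsive.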

    The visual component of the artwork is inspired by the style of Ryoji Ikeda’s installation Transfinite. I want to project a pattern of flickering, flashing light onto a large surface that responds to the changes in the music. Similar to Ikeda’s artworks, I want the sounds and visuals of my installation to convey a raw, technological undertone in order to create a stripped-down, utilitarian aesthetic. Ikeda describes his music as an amalgamation of discarded and unwanted electronic sounds, and I want to borrow this aspect of his work to create an immersive soundscape that one might not normally consider music.

    I want the participant’s experience of my installation to be a convergence of Very Nervous System and Transfinite. I hope to recall the interaction and physicality of David Rokeby’s installation, while replacing its intimacy with the sense of detached abstraction and computational bias found in Ikeda’s artworks. In doing so, I hope my project can demonstrate my interpretation of a hybridity between the two artists’ concepts and ideas.


    David Rokeby’s Very Nervous System


    Real-time audio synthesis in Processing:

  23. The project I am proposing is a 2D platformer game about a ball going from point A to point B. This ball often finds himself in sticky situations where there is no clear path to his destination. This is where the player comes in: the ball needs their help to plug holes, make bridges, take out barriers, and get him to point B. At the start I want easy puzzles that use hands to plug holes, but as the player progresses they might have to incorporate other body parts, like heads, elbows, and feet, to help the ball reach point B. At the end of the game, one person should simply not be enough to help the ball, forcing the player to call a friend. To code this I plan to use a threshold to track body parts, making the early hands-only levels fairly easy; as the game gets harder I want an effect similar to Twister. I would use blob detection to allow multiple players at once. This idea is relatively ambitious, so I plan to have 3 levels with fairly simple puzzle solutions, mainly consisting of hand bridges. To expand the game I hope to have 5 levels with other interactions, like breaking down barriers by hitting them, and maybe having to toss the ball. The levels will be made of mainly geometric shapes and solid colours so that I can use pixel colour for collision detection.

    My inspiration for this was a video on YouTube; I do not know who created the installation because the description is in a different language. It inspired me with the image of a few people working together to hold the particles. Another inspiration is an indie game called Thomas Was Alone, which follows a little square on his journey, making friends and working together with them. The game does an incredible job of giving each shape its own personality. This is something I’d love to have in my game if I can, but functionality is the priority.
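    The blob detection mentioned above can be prototyped as connected-component labeling on a thresholded camera image. Below is a minimal Python sketch under the assumption that the threshold step has already produced a binary mask (the real version would operate on Processing's pixel array):

```python
def find_blobs(mask):
    """Label connected regions (blobs) in a binary image, given as a
    2D list of 0/1 values; returns a list of blobs, each a list of
    (x, y) pixel coordinates. Uses 4-connectivity and an explicit
    flood-fill stack to avoid deep recursion."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, blob = [(x, y)], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    blob.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                blobs.append(blob)
    return blobs
```

    Each blob would then be treated as one player's body part, so several people can plug holes or form bridges at once.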

  24. I will be using computer vision via the Xbox Kinect to create an interactive experience akin to a character-creation sequence as seen in many video games. The experience will follow a short narrative in which the player is being processed in some ministry or factory in a futuristic setting. Throughout the process, the player will have many options to augment their appearance, often in very outlandish ways (no spoilers!). The augmentations will be visible in real time as the player builds their ‘avatar’. The final result is a character that resembles the user, since their image is being used, but is cartoonishly different due to the augmentations. The projection surface will be a flat white surface, and the interaction will involve the user moving as instructed to change aspects of their appearance when prompted.

    The main inspiration for this project, as stated previously, is video game character creation, as can be seen here:
    Whereas character creation is usually a footnote in most games, I aim to base the entire narrative of the experience around it and make it a more exciting process. Aesthetically and narratively, my main inspiration is Terry Gilliam’s black-humour dystopian sci-fi film Brazil (1985). The film deals with outlandish bureaucratic processes set in a retrofuturistic sci-fi universe, and I aim to make my experience similar, though it will focus more specifically on post-human or transhumanist body augmentation.

    A rough concept of a possible scene in the experience can be seen in the attached image.

  25. Using the Xbox Kinect, users will interact with the screen to participate in a Spelling Bee! Each round will consist of an audio portion that says a word and then uses that word in a sentence; the user will then have a time limit to spell the word out. There will be a line of letters at the top of the screen, and the player will “touch” each letter to spell out the word spoken to them. After each letter is chosen, it will appear lower on the screen to show which letters the player has already picked, spelling out the word and giving the game a visual component. There will be an option on the screen to have the word repeated. As the player continues to spell words correctly, the vocabulary will get more complex. After 6 rounds, if the user has spelled all of the words correctly, they win the Spelling Bee.

    The idea is to let the user have some fun interacting with the letters while still keeping an educational aspect!

    The inspiration behind this project was Myron Krueger’s Videoplace (1985), in which Krueger created an interactive piece that allowed him to reach into the sky and write out words by pointing at letters. I wanted to use this idea to create a fun and competitive game!
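    The core “touch a letter” interaction could be prototyped as a hit test that maps the tracked hand's x position to one of the equal-width letter boxes across the top of the screen. A Python sketch of just that test (the screen width and equal-width layout are assumptions):

```python
def touched_letter(hand_x, screen_width,
                   letters="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Given the tracked hand's x position, return the letter whose box
    (in a row of equal-width boxes spanning the screen) the hand is over."""
    box_w = screen_width / len(letters)
    # Clamp so a hand at the right edge still maps to the last letter.
    idx = min(len(letters) - 1, int(hand_x / box_w))
    return letters[idx]
```

    Each touch would append the returned letter to the word shown lower on the screen, which is the visual feedback described above.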
