Above is a link to a YouTube video of a work by Bruce Nauman called Manipulating the T bar. In the first section, Nauman moves around a large metal T bar, shot in black and white from different angles. The video does strange things to the perception of space as he moves the T around. This work definitely reminded me of line drawings and the patterns we are used to seeing that tell us whether something is receding or coming towards us, i.e. the Müller-Lyer illusion:
As the T bar is pulled into different orientations, I find my perception of space tilting, because I am focusing on the T and using it to get a sense of the space in the room. My eyes do this naturally, probably because I am used to using these 'lines' to orient myself in space, just as in the Müller-Lyer illusion. This way of understanding space is probably also why we can understand and perceive space in line drawings, even though in reality the black lines that wrap objects and separate figure from ground rarely exist.
It does make me curious about whether we are born with some sort of facility to pick out these patterns in order to orient ourselves and objects within space, or whether it is something we learn through experience. It also makes me think again about the history of image making and the systems we have developed to generate the illusion of depth and space. Things like linear perspective have come from analysing what we see and then creating a formula for regenerating that experience. The resulting regeneration creates an illusion of space in a line drawing, but the fact that it is a drawing, and not actually deep, is clear, because the formula doesn't address other visual cues that provide us with information when viewing a natural scene, i.e. stereopsis and relative motion.

From an early age children see 2D images and learn to interpret and produce them; this is something we have learnt to do since the first cave painting was slapped on a wall. Over years of developing techniques and ideas, I wonder how this facility to create an image has affected the way our eyes function. Vision is pulling order from the chaos: our eyes have specific receptors that pick up certain information, and our brains have developed mechanisms for interpreting that information and linking it to the concepts we have of things, i.e. an object that is round, sphere-like and red/green with a stalk is an apple. Our eyes see more than we perceive; our brains pick out relevant/useful information from a mass of visual material, and that is what we 'see' at a basic level. The information is filtered again pre-consciously, placing emphasis on certain details. Our attention centres fire at things that are 'most relevant', which seems to be mostly linked to survival/needs/wants. This makes sense if you think about evolution: we would need to be able to pick out a tiger running at us faster than the swaying tree branches above it.
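As a side note, the "formula" at the heart of linear perspective is surprisingly small: a 3D point is projected onto the flat picture plane by dividing its horizontal and vertical position by its distance from the viewer. A minimal sketch (the function name and `focal` parameter are my own, just for illustration):

```python
def project(x, y, z, focal=1.0):
    """Project a 3D point (x, y, z) onto a 2D picture plane.

    z is the distance from the viewer. The further away a point is,
    the closer its projection sits to the vanishing point at (0, 0),
    which is what makes parallel lines appear to converge.
    """
    return (focal * x / z, focal * y / z)

# Two posts of identical real-world size, one twice as far away:
near = project(1.0, 1.0, 2.0)  # lands further from the centre of the image
far = project(1.0, 1.0, 4.0)   # lands smaller, closer to the vanishing point
```

That single division is enough to generate a convincing illusion of depth on paper, but, as above, it says nothing about stereopsis or relative motion, which is partly why a drawing never quite reads as real space.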
This could also help to explain why there can be multiple diverse 2D representations of a tree, yet people from many different cultures all still know it's a tree. On that note, though, I recently read about some African tribes who live in a world based on curves and circles, unlike people in the Western world, which is full of straight lines and right angles; these tribes reportedly cannot see depth in a drawing that uses linear perspective. As we have developed image making, our visual system has picked out common patterns that allow us to rapidly interpret 2D images into 3D concepts. These patterns are based on what we see around us and on the history of images and image making that came before us. It is fascinating to me that these systems could be so malleable. The brain is constantly adapting to its environment, and I wonder how differently we 'see' now compared to a thousand years ago, and how differently again people a thousand years in the future will see.
The last point I would like to note is that we are already translating a 2D image into a 3D one, as the image that hits the retina is 2D and is converted to 3D by the brain using visual cues. We have been able to pick out some of them, i.e. perspective, haze (aerial perspective), occlusion, shading, stereopsis and relative motion, but there are most likely more that we haven't found yet. Once we do, how will that affect image making?
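Of the cues listed above, stereopsis is one we have pinned down precisely: the same point lands at slightly different horizontal positions on each retina, and that offset (disparity) encodes depth. A minimal sketch of the standard pinhole-stereo relation Z = f·B / d, with illustrative numbers roughly in the range of human eyes (the function name and values are my own assumptions, not measurements):

```python
def depth_from_disparity(focal, baseline, disparity):
    """Recover depth from the disparity between two views.

    Standard pinhole-stereo relation: Z = focal * baseline / disparity.
    A large disparity means the point is close; a small disparity means
    it is far away, which is why stereopsis fades with distance.
    """
    return focal * baseline / disparity

# Roughly eye-like numbers (metres): ~17 mm focal length, ~65 mm between eyes.
close_point = depth_from_disparity(0.017, 0.065, 0.001)    # about 1.1 m away
distant_point = depth_from_disparity(0.017, 0.065, 0.0001) # about 11 m away
```

A flat drawing presents zero disparity everywhere, which is exactly the information the perspective formula cannot fake.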