Thursday, June 27, 2013

THEORY OF ANSEL ADAMS

The Zone System is a technique that was formulated by Ansel Adams and Fred Archer back in the 1930s. It is an approach to a standardized way of working that guarantees a correct exposure in every situation, even in the trickiest lighting conditions such as backlighting, an extreme difference between the light and shadow areas of a scene, and many similar conditions that are most likely to throw off your camera's metering and give you a completely incorrect exposure.

★~☆﹒☆.~*∴*~★*∴*﹒∴~*★*∴*★~☆﹒☆.~*∴*~★°
More seriously, in the 1940s Ansel Adams and his colleague Fred Archer were concerned by the difficulty of faithfully reproducing the reality they looked at and wanted to capture with their huge cameras. They were looking for a way to scientifically previsualize a scene. As an example, let's look at Ansel Adams's Snake River picture. He might have said: OK, I want the light behind the mountain to be really white and the trees in the front to be really black, but I also want to preserve texture. How do I manage to record such different tonalities when I know that my meter will average everything into some kind of middle gray?

Yes, we know that a light meter only records middle gray. If, under the same lighting, we shoot three pieces of paper (one pure black, one gray, one pure white) and use a light meter to define the settings for each individual sheet, we will end up with three middle gray shots. Light meters record middle gray and middle gray only! This is key to understand. Unfortunately, or fortunately, the world is far from being middle gray only. The take-home message is that we cannot trust our light meter and that we have to adapt its readings to what our eyes really see and to what our brain really wants to capture. Here comes the Zone System! Initially developed for film, it is easily transferable to digital sensors.
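To make "adapt the readings" concrete, here is a minimal Python sketch of the arithmetic involved. It is not from the original article; the function name and the example numbers are my own assumptions. It simply treats the meter's suggestion as Zone V (middle gray) and shifts the exposure by one stop per zone to place the subject where you want it.

# Minimal sketch: placing a metered subject on a chosen zone.
# Assumption: the meter's suggested exposure always renders the metered
# subject as Zone V (middle gray); each zone equals one stop.

def place_in_zone(metered_shutter_s: float, target_zone: int) -> float:
    """Return the shutter speed (seconds) that moves the metered subject
    from Zone V to target_zone, keeping aperture and ISO fixed."""
    stops = target_zone - 5            # positive = brighter, negative = darker
    return metered_shutter_s * (2 ** stops)

# Example: the meter reads 1/250 s off a white wall. To render the wall
# as bright white with texture (Zone VII), open up two stops:
print(place_in_zone(1 / 250, 7))       # -> 0.016 s, i.e. about 1/60 s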



★~☆﹒☆.~*∴*~★*∴*﹒∴~*★*∴*★~☆﹒☆.~*∴*~★°
Basically, apart from the issue with the "gray" metering system, the other limit photographers have to cope with is the limited dynamic range of the film or sensor compared to the human eye. The dynamic range represents the scale of tonality from the darkest part of an image (regarded as pure black) to the brightest (regarded as pure white). When looking at a scene, the eye can dynamically adapt, grab parts of the scene and let the brain recompose the image. The human eye's dynamic range is assumed to exceed 24 f-stops! You might say that it is not a fair game to compare a dynamic system, the eye, to a single static-shot apparatus, the camera; we are not talking about HDR here. OK, to be fair, let's compare static eyes to static cameras, and even then the dynamic range of the eye is anywhere between 10 and 14 f-stops! What is the average dynamic range of a camera? Well, if we want to keep the details, let's say around 5 stops! That is a huge difference, and it explains why we really have to concentrate on the metering in order to best reproduce what we see with eyes shaped by millions of years of evolution.
Ansel Adams and his colleagues' brilliant idea was to divide a scene into 11 zones, from pure black to pure white (from Zone 0 to Zone X). What is also key to understand is that with digital cameras we only have about 5 zones to deal with, so 5 stops.
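As a rough illustration of that arithmetic, the short Python sketch below lists each zone's offset from middle gray (one stop per zone, Zone V in the middle) and flags which zones would keep detail inside a 5-stop usable range, the figure the article uses for a digital sensor. The symmetric-around-Zone-V assumption is mine and is a simplification.

# Sketch: each zone is one stop from its neighbours, Zone V = metered middle gray.
# Assumption: a sensor with USABLE_STOPS of range keeps detail only in zones
# within +/- USABLE_STOPS/2 of Zone V.

USABLE_STOPS = 5   # the article's figure for "keeping the details"
ROMAN = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

for zone in range(11):                         # Zone 0 (black) .. Zone X (white)
    offset = zone - 5                          # stops relative to middle gray
    keeps_detail = abs(offset) <= USABLE_STOPS / 2
    print(f"Zone {ROMAN[zone]:>4}: {offset:+d} stops  detail={'yes' if keeps_detail else 'no'}")

With these assumptions, only Zones III through VII survive with detail, which matches the "about 5 zones" point above.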

Photography has always been a "technical" art. Since its inception, photography has relied on the leading technology of its day to capture and present images of the world around us. Much effort has been applied to enhancing the precision with which we record images as a way of sharing a visual experience as accurately as possible. Of course, artists have always extended this visual experience into the realm of the imagination, but to this day photography has excelled in presenting a vision of the world that resonates with truth. As a result of these efforts, photography has become the domain of science and technology. The progression of photography into the world of computers is a natural extension of the desire for precision and control over the presentation of the image. Digital photography has evolved to give us a range of control over the captured and presented image that far exceeds anything Ansel Adams ever dreamed of.
However, this promise of precision and control is often hidden behind a trend in UI design that invites a more casual, touchy-feely, intuitive involvement in the image-making process. Everything is automated: auto-focus, auto-exposure, auto color adjustments, one-click this and one-click that. Almost all feedback is confined to what the image looks like on screen, and we are led to believe that "color management" will take care of everything. The professional image maker needs something more... well... professional. It is to this end that this article, the first in a series, is devoted. We are going to dust off the traditional Ansel Adams Zone System and re-tool it for the digital age. In the process, we will pull back the veil that obscures the true precision afforded by digital capture. The serious photographer should gain some new insight into the way his tools work and will be able to reclaim more precise control over the photo-making process.
Let's start with an overview of the Zone System as it applies to digital photography. It's perhaps easiest to gain some insight by building a "Zone Scale". Traditionally, this was done by making a series of exposures, processing film and making prints, and painstakingly assembling patches of varying densities into a strip, also known as a step wedge.
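In the digital version, that strip can be sketched in a few lines of code. The Python snippet below is only an illustration of the idea, not the article's procedure: it assumes NumPy and Pillow are available, spaces the eleven patches evenly in 8-bit values (a simplification; a photometric wedge would space them by full stops), and writes the result to a placeholder file name.

# Sketch: build an 11-patch digital zone scale (step wedge) as an image.
# Simplification: patches are spaced evenly in 8-bit values, Zone 0 = 0
# and Zone X = 255.
import numpy as np
from PIL import Image          # assumes Pillow is installed

PATCH = 80                     # patch size in pixels (arbitrary)
values = [round(z * 255 / 10) for z in range(11)]          # Zone 0 .. Zone X

strip = np.hstack([np.full((PATCH, PATCH), v, dtype=np.uint8) for v in values])
Image.fromarray(strip, mode="L").save("zone_scale.png")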


                 ==============================================

Here, for my second exercise, we need to take a scene or landscape which includes the three zones in it.
After taking the best photo, we then have to send it to the photo lab to print it out at 5x7.
Moreover, we also need to process the photo in Photoshop into four different versions: RGB, B&W, grayscale, and CMYK.
The purpose of doing this is to see the slight differences between these editing modes, so we can choose the best one for our future photo shoots.
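For readers without Photoshop, a rough equivalent of those four versions can be produced with Pillow. This is only a sketch under my own assumptions: the exercise does not say exactly which Photoshop modes are meant, so I treat "B&W" as a 1-bit image and "grey" as 8-bit grayscale, and "photo.jpg" is a placeholder file name.

# Sketch: save a photo in four modes roughly matching the exercise
# (RGB, B&W, grayscale, CMYK). "photo.jpg" is a placeholder file name.
from PIL import Image          # assumes Pillow is installed

src = Image.open("photo.jpg")

src.convert("RGB").save("photo_rgb.jpg")
src.convert("1").save("photo_bw.png")        # 1-bit black & white (assumption)
src.convert("L").save("photo_gray.jpg")      # 8-bit grayscale
src.convert("CMYK").save("photo_cmyk.tif")   # CMYK; TIFF handles this mode reliably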
