One
morning in the late summer, I sat on the deck on our rooftop. Despite
the city surroundings, the air was clean and clear due to a heavy
rain the previous night. The sun was beating down on the silver-painted
surface of the roof, but hadn't yet been able to evaporate small
pools of water that had accumulated there. Out of the corner of
my eye, I noticed that a large dragonfly had been attracted to the
pools, probably expecting to find some mosquitoes for a breakfast
treat. For fifteen minutes or so, I watched the dragonfly perform
a delicate dance over the shallow pools. I was amazed at how the
dragonfly was able to skim the surface of the pool, rise up, and
dive down just to touch the surface. Its movements were so precise,
they didn't seem affected by the almost mirror-perfect reflection
on the still water.
The
shape of the dragonfly reminded me of the helicopters that I see
flying along the East River every day from the rooftop. Of various
shapes, colors and sizes, these helicopters patrol the area around
the bridges, report daily traffic patterns, and provide transportation
around Manhattan. I am always impressed by the skill of the pilots,
watching as two helicopters hover in sync, stationed over the towers
of the Queensborough bridge and then turn in tandem to continue
to travel along the clear passageway created by the river. Like
the wings of a dragonfly, the helicopter's propellers move so fast
that they appear as a blur to the eye. Also like a dragonfly, a
helicopter is designed for precision aerobatics. However, as I watched
this dragonfly, I saw a precision of movement unlike any helicopter
I have ever seen. The dragonfly could hover centimeters above the
surface of the water and then suddenly swing up and into position
high above the pool. To me, the movements were a signal of an extremely
effective visual system, and it made me wonder how the visual systems
of insects differ from those of humans.
Unlike
our eyes, insect eyes are immobile. This makes depth perception
much more difficult for an insect than a human. Insect eyes are
also much closer together than human eyes, another obstacle to effective
depth perception. How, then, was a dragonfly able to skim the surface
of a reflective pool of water without crashing into it?
Although
the compound eye of the insect lacks the depth perception abilities
of mammal eyes, insect eyes are adept at perceiving motion. Perhaps
it is this detailed motion vision that helps the insect navigate.
In experiments with locusts in the late 1950s, G. K. Wallace determined that the fast, seemingly chaotic motion exhibited by
insects and other invertebrates (for example, crabs skittering along
a sandy beach) is part of the animals' depth perception. Wallace
observed locusts performing a series of head movements before moving
toward an object. He concluded that this action, which he called
the 'peering' of the locust, was used by the insect to judge
the distance of the object. [1]
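Peering is, in effect, a use of motion parallax: if the head translates sideways by a known amount while the insect registers how far the object's image sweeps across the eye, the object's distance follows from simple geometry. Here is a minimal sketch of that relationship; the function name and the sample figures are mine, for illustration only, not anything from Wallace's paper.

    def distance_from_peering(head_shift_m, image_shift_rad):
        # head_shift_m:    how far the head translated sideways (meters)
        # image_shift_rad: how far the object's image swept across the eye
        #                  during that translation (radians)
        # For small angles, distance is roughly translation / angular shift.
        return head_shift_m / image_shift_rad

    # A 1 cm peer that sweeps the target 0.05 radians across the eye
    # puts the target roughly 20 cm away.
    print(distance_from_peering(0.01, 0.05))  # -> 0.2

The nearer the object, the larger the image shift for the same peer, which is exactly the cue an animal with immobile, closely spaced eyes can still exploit.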
Adjacent
to the rooftop garden where I was sitting that morning is the off-ramp
to New York's Queensborough bridge. While Manhattan's skyline itself
is an extraordinary sight from the rooftop, the roof also allows
for an almost 360 degree panorama of the city. I find myself wishing
that I had the wide angle vision of an insect eye to fully appreciate
the view. From my vantage point that morning, moving my head in
several directions, I noticed that the most dominant part of the
visual scene was the highway. Like every other day, the flow of
traffic on the bridge was a constant, steady stream. Over the years,
I have found that not once at any hour of the day or night has the
bridge been clear of moving vehicles, although it is possible to
discern a pattern of traffic flow density at different times of
the day.
As I watched each car exit the bridge, I thought about my own experiences
driving. The process of driving itself is an improvisation. Drivers
make choices based on the presence or absence of other drivers or
just on a whim. They respond to events on both a macroscopic scale
(for example, choosing a route) and a microscopic scale (such as
stepping on the brake in a split second in response to a moving
obstacle). The decisions drivers make are influenced by many factors,
from the visibility of other cars from inside a particular model
or make of car, to the maneuverability of a particular car, to the
position of a car on the road and even to the state of mind of the
driver. Despite the many factors that influence driving behavior,
the rules of the road serve to help create an organized pattern
of traffic flow, like the organization created in computer simulations
of flocking and swarming.
Computer
flocking and swarming algorithms were developed as a way to accurately
simulate the behavior of large groups of animals. Rather than
modeling the entire sensory system and specific behaviors of each
animal, flocking algorithms are a branch of artificial intelligence
in which representations of creatures like birds or buffalo are
given a simple set of behaviors. These simple rules function like
the rules of the road in organizing the represented creatures. If
the rules are designed just right, the result can be a striking
visual representation of a flock, herd, or swarm of animals. For
example, with a flocking algorithm, a bird may be programmed to
stay within a certain range of another bird without colliding with
this bird or any other obstacles. It's fascinating to look at the
resulting visual simulation when a group of these creatures interact
with each other, and to watch the flocks and swarms respond to novel
situations such as obstacles and intersecting flocks. [2]
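To make the 'simple set of behaviors' concrete, here is a minimal sketch in the spirit of Craig Reynolds' boids (see note [2]). The three weighted rules (stay near neighbors, avoid collisions, match headings) and all of the constants are illustrative choices of mine, not code from any particular flocking system.

    import random

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(boids, cohesion=0.01, separation=0.05, alignment=0.05, radius=15):
        for b in boids:
            neighbors = [o for o in boids
                         if o is not b and abs(o.x - b.x) + abs(o.y - b.y) < radius]
            if not neighbors:
                continue
            n = len(neighbors)
            # Rule 1: cohesion -- steer toward the neighbors' average position.
            cx = sum(o.x for o in neighbors) / n
            cy = sum(o.y for o in neighbors) / n
            b.vx += (cx - b.x) * cohesion
            b.vy += (cy - b.y) * cohesion
            # Rule 2: separation -- steer away from any neighbor that is too close.
            for o in neighbors:
                if abs(o.x - b.x) + abs(o.y - b.y) < 3:
                    b.vx -= (o.x - b.x) * separation
                    b.vy -= (o.y - b.y) * separation
            # Rule 3: alignment -- drift toward the neighbors' average velocity.
            b.vx += (sum(o.vx for o in neighbors) / n - b.vx) * alignment
            b.vy += (sum(o.vy for o in neighbors) / n - b.vy) * alignment
        for b in boids:
            b.x += b.vx
            b.y += b.vy

    flock = [Boid() for _ in range(30)]
    for _ in range(100):
        step(flock)

Nothing in these few lines says 'form a flock,' yet running the rules over many steps produces the coordinated group motion described above.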
Breve
is a 3D environment for creating simulations of artificial
life and decentralized systems (http://www.spiderland.org/breve/).
In the demo section of the Breve application, there are a number
of visual simulations based on flocking algorithms that can be customized
through programming. One of these demos is called 'vision flocking.'
In the code of 'vision flocking,' an aspect of a bird's behavior
is actually based on a very simple simulation of the bird's ability
to see. Simply put, an individual bird is only able to detect other
birds that come within its line of sight, defined as a 20 degree
cone of vision, unlike in simulations without vision, in which each
bird detects other birds around a full 360 degrees. Surprisingly,
the resulting simulation of this flock appeared to me less random
than a flock simulation without the vision restriction. The birds
in the limited-vision flock appear to have a sense of purpose and
direction, whereas birds in flock simulations with unrestricted
360 degree vision appear to fly much more randomly. It is as if
the birds in the vision-restricted simulation have fewer degrees
of freedom, limited to moving along a track or highway in the sky
rather than cutting a more chaotic path.
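The only change the vision-restricted demo really makes is in which neighbors a bird is allowed to notice. A hedged sketch of such a field-of-view test follows; the 20 degree figure echoes the demo as described above, but the vector math and parameter names are my own illustration, and the function assumes the Boid objects from the previous sketch, where it could replace the distance-only neighbor test.

    import math

    def in_field_of_view(bird, other, half_angle_deg=20.0):
        # True if 'other' lies inside the cone of vision centered on the
        # bird's current heading; anything outside the cone is invisible,
        # unlike the all-around awareness of the distance-only test above.
        heading = math.atan2(bird.vy, bird.vx)
        bearing = math.atan2(other.y - bird.y, other.x - bird.x)
        # Wrap the angular difference into the range [-pi, pi).
        diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= math.radians(half_angle_deg)

Restricting perception in this way is what gives the simulated birds their apparent sense of direction: a bird can only react to what lies ahead of it.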
Human
beings have experienced the high-speed motion of a train or highway
for only about 100 years. In the timeline of evolution, this is
far too short for any significant adaptation of the visual system
to the radically new kind of stimulus created by driving or riding
in a fast moving vehicle. Despite this, driving is an everyday experience
for many of us. Unfortunately but not surprisingly, traffic accidents
are one of the leading causes of death in the United States and
many other countries. In 2002, there were over 40,000 deaths on
US roads and almost 3 million injuries. [3] It's literally a matter
of life and death that we understand how the human perceptual system
works in the novel situations in which technology places us and
that we do everything possible to develop tools to more accurately
analyze and understand how the body responds to these situations.
It's not enough to give humans increased abilities through technology;
researchers must make sure that the human perceptual system is able
to handle these new capabilities and, if it is not, create ways
to augment human perception appropriately.
The
development of brain imaging technologies -- like positron emission
tomography (PET) activation imaging -- has allowed researchers to locate
specific areas of activation in the brain during perception. The
human ability to see motion has been isolated by using PET imaging
on patients exhibiting a rare disorder called motion blindness.
Gisela Leibold was stricken by a stroke that damaged a very specific
pathway in her brain. After the stroke, she was unable to see motion.
In a crowd of people, she became panicked and disoriented, seeing
people disappear and appear suddenly in a different location. Riding
an escalator or crossing a street was terrifying to her, and pouring
a cup of coffee was almost impossible as she would see the stream
of coffee entering the cup as a solid, motionless shape. [4] Gisela's
condition, a severe impairment in the ability to recognize the motion
of objects, is called akinetopsia and has been found to occur following
bilateral lesions in a specific area of the brain called V5. [5]
About
half of the brain of an insect is devoted to visual processing.
Although the vision of insects is of a significantly lower resolution
than human vision, there are aspects of motion vision that humans
share with insects. For example, experiments with bees have provided
strong evidence that insects experience the motion aftereffect,
also known as the 'waterfall illusion.' The waterfall illusion occurs
when, after watching sustained motion, the viewer looks away: the
visual system then perceives movement in the opposite direction.
Another visual effect humans
share with insects is the negative afterimage. After staring at
a high contrast picture and looking away, the visual system perceives
a negative of the picture superimposed on the visual field. [6]
The
afterimage effect in humans has been exploited in the development
of the moving image. To the human mind, a series of still images
appear to be moving due to what is called the phenomenon of persistence
of vision. In this phenomenon, an afterimage of a still image stays
on the retina long enough to produce the appearance of smooth motion.
In
the early 1880s, Etienne-Jules Marey conducted research in the portrayal
of motion using photography. He worked in a giant open-air laboratory
he constructed outside Paris, called the Station Physiologique.
His method of moving unexposed film, "chronophotography,"
allowed him to study the mechanics of motion, and many of his images
attempted to isolate the 'purity' of motion by dressing models in
all black, with white stripes, dots, and electric lights placed
lengthwise along the limbs and at axis points, very much like contemporary
models equipped with motion tracking sensors used in animation and
game design. Within the medium of photography, Marey, Muybridge,
and others experimented with the portrayal of the body in motion
and helped to bring about moving picture technology. To Marey and
many others at the time, motion could be broken down into a series
of short, discrete time intervals. [7]
In
1912, Max Wertheimer, the founder of Gestalt Psychology, was one
of the first to describe apparent motion or the 'phi phenomenon.'
The phi phenomenon is the perceptual fact that stationary objects
can appear as though in motion under certain circumstances. Wertheimer
writes that his study of the phi phenomenon was inspired by a perceptual
experience he had while riding a train. As he watched various lights
blinking, he realized that if two lights blink on and off at a certain
rate, he perceived them to be one light moving back and forth. Wertheimer's
perceptual observations were happening at the same time as the early
stages of the development of moving pictures, and he must have been
aware of toys like the zoetrope and flipbooks that exploited persistence
of vision. So, what was particularly new about Wertheimer's observation
and subsequent developments in Gestalt theory? What was new, besides
breaking down the phi phenomenon into very specific rates of time,
was the idea that the mind's perception of a temporal experience
is continuous and cannot be broken down into a series of snapshots.
In other words, our minds do not work like flip books or zoetropes,
cutting up our perceptual experiences into a string of still images.
Instead, our minds process the information as it unfolds over time.
The phi phenomenon works in tandem with the phenomenon of persistence
of vision to make the experience of the moving image in film so
compelling. [8]
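The timing is what makes the phi phenomenon work: with roughly sixty milliseconds between the two flashes observers report a single light in motion, while much shorter gaps look like two lights shining at once and much longer gaps look like two lights blinking in turn. The small sketch below only classifies an interval against those approximate published thresholds, which in reality shift with the observer, the brightness, and the spacing of the lights.

    def apparent_motion(interval_ms):
        # Rough percept for two lights alternating with a gap of
        # 'interval_ms' milliseconds between them. The thresholds are the
        # approximate figures reported by Wertheimer; real values vary
        # with observer, brightness, and spacing.
        if interval_ms < 30:
            return "simultaneity: both lights appear to be on at once"
        if interval_ms < 200:
            return "apparent motion: one light seems to move back and forth"
        return "succession: two separate lights blinking in turn"

    for isi in (10, 60, 400):
        print(isi, "ms ->", apparent_motion(isi))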
Around
the same time that Wertheimer was defining the phi phenomenon, Frank
and Lillian Gilbreth were beginning an extensive research project
studying workplace efficiency in the industrial age. The married
couple produced over 2000 glass plate photographic images between
1910 and 1924. These works included long exposures of workers
on assembly lines, operating typewriters, laying brick -- workers
engaged in virtually every working activity of the time. Besides being a great
document of working life during the early 20th century, these photographs
helped to define photography's role as a scientific research tool.
The technique they created -- timed exposures of workers with
lights attached to their hands and feet -- serves to distill motion
into a simplified form that can be more easily evaluated for efficiency.
This essence of motion that the Gilbreths broke down into a combination
of basic building-block movements looks very much like motion paths
described by contemporary computer animators. Unlike the discrete
blocks of frozen time in the filmic model of motion, movement is
represented by a smooth, continuous line in the Gilbreth visual
model of motion over time. [9]
Wertheimer's
ideas emerged from the results of work done by experimental physiologists.
Between 1865 and 1868, Franciscus Cornelis Donders performed a series
of experiments attempting to break down the exact amount of time
taken up by decision making. His work influenced many experimental
laboratories to start measuring reaction times. The assumption implicit
in the study of reaction times is that perception and thought are
processes that occur over time. This assumption informed Wertheimer's
work which posed that, if perception unfolds over time, then events
that happen over time can have a gestalt, or be grouped into perceptual
units, just like aspects of still images or scenes are grouped into
identifiable units by the mind. The larger philosophical issue implicit
in Donders' work is that if thinking is a process that takes time,
then it could be a material process, not metaphysical or spiritual.
[10]
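Donders' subtractive logic can be stated in one line: if a simple reaction (respond to any flash) takes, say, roughly 200 milliseconds and a choice reaction (respond only to the red flash, not the green) takes roughly 280 milliseconds, then the extra 80 milliseconds is taken to be the time consumed by discrimination and choice. The numbers here are illustrative rather than Donders' own measurements, but they capture the form of the argument: mental operations become measurable as differences between reaction times.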
When
Wertheimer was observing the lights from the window of a train,
he at first might have thought that another train was passing next
to his train. Or, perhaps his mind didn't immediately make a judgement
about what he was seeing. For some minute amount of time, Wertheimer
may have observed the lights without a determination. Then his mind
may have started to negotiate the various stimuli. He may have begun
to differentiate between the reflection of lights from the inside
of the train and the lights outside. He may have done this through
slight movements of his eyes and head. He may have had to differentiate
various visual artifacts, like scratches on the window of the train
or specks of dust floating on the surface of his eye. Smoke from
the train may have distorted his vision of the lights. Negotiating
all these artifacts and differentiating them from the actual scene
made the experience of viewing the visual scene a constantly unfolding
process. A speck observed in the visual scene had to be analyzed
and placed in a group. Was it a passing train, a light inside the
train, or a station far off in the distance? For a very small but
definite amount of time, aspects of the scene would remain unidentified,
until they were placed into a category through examination.
So
there are two ways to look at the human experience of movement through
vision. On the one hand, movement is a continuous, what you might
call an analog, process. Like the Gilbreth images or the motion paths
created by computer animators, movement can be traced as a
three-dimensional line in space that represents its trajectory.
On the other hand, according to gestalt theory, the human
mind understands motion as a series of discrete chunks, and our
visual system can be fooled into seeing a series of still images as a
moving object if they change at the right rate.
The
theory that the visual scene unfolds over time has become an established
area of machine vision research, called Active Vision. Active Vision
is a task-oriented approach, and the idea is that a machine's (or
human's) perception of a visual scene is enhanced through interaction
with the environment. Active Vision is a concept that is opposed
to the idea of Pure Vision. The Pure Vision concept is that a scene
is analyzed by the mind using a hierarchy of information and representations
that flow from bottom to top. That is, low level representations
(like color and basic forms) lead to higher level representations
(like a building). The Active Vision concept opposes this idea by
saying that visual perception is not hierarchical and that information
flows both ways, informed heavily by memory. [11]
Active
Vision implies a constant process of evaluating and re-evaluating
a visual scene based on past experiences and best guesses. Active
Vision also implies a purpose-oriented viewing, that is, perception
to satisfy direct needs. This approach has been useful in the design
of effective machine vision systems, but also in the development
of videoconferencing applications that use the Active Vision model
to enhance the flat screen image. [12]
How
was the Active Vision model developed and why now? One reason might
be that, through new technology, perception science researchers
have recently been able to make observations in the real world,
outside of the laboratory setting. According to Johannes M. Zanker
and Jochen Zeil of the Visual Science Group, Research School of
Biological Sciences at Australian National University, several factors
have made it possible for scientific researchers to move out of
the controlled environment of the laboratory and begin to study
perceptual systems in the field, allowing for a look at perception
under real-world conditions. These factors are: technological innovation
in the area of temporal and spatial high resolution portable recording
devices; new theoretical approaches that involve the analysis of
complex systems; new knowledge of neurophysiology and the ability
to monitor and record nerve cell activity; and advances in robotic
technology that allow the systems to be put to test interacting
with the real world. Zanker and Zeil believe that studying perception
in the real world has had and will continue to have a huge impact
on scientists' ability to study and understand human motion vision.
They see a theoretical understanding of complex systems as essential
to the process. [13]
Just
as scientific and technological development has made it possible
for researchers to make field observations and understand more of
how complex systems work in the real world, technology has made
our real world more complex. Take the case of driving, where a human
is interacting with a complex machine. This human-and-machine combination
has to negotiate a complex environment filled with other human-and-machine
combinations faced with the same challenge. But that's not all:
the negotiation has to happen at speeds faster than any we
humans have experienced in our evolutionary history.
The
move toward field research into the nature of perception is actually
a return to an earlier way of working. Historically, most of the
study of vision and perception was done through analysis of observations
in the real world. Observers like Goethe and Descartes were hybrid
philosopher / scientist / artists who drew from their own experience
and imagination. However, since the 17th century, when experimental
sciences began to dominate the scientific method, experimenters
from all areas have moved into the laboratory. This constrained system
allowed for a detailed empirical study of a specific process and
accelerated the development of science. What Zanker and Zeil are
now arguing is that developments in technology, combined with
a theory of complex systems, make it possible for experimenters
to return to the field and combine the best of both worlds: interaction
in the real world with the detailed analysis possible in the laboratory.
Scientists
moving back into the real world -- armed with knowledge gained in
the laboratory -- open the door for a renewed interaction between
the sciences and the arts. Although some artists choose to isolate
themselves in the studio for specific ends, most contemporary artists
choose to work and live in the real world. For artists, this interaction
is essential to the artistic process, and in fact, many contemporary
artists have gone so far as to refuse to separate art from
life. For example, the artist Sophie Calle's projects have blurred
the boundaries of art and life. She works as almost a private investigator,
following a chosen stranger in the 1983 book project Suite Venitienne,
or exposing the private life of an acquaintance through his misplaced
address book as in The Address Book, also from 1983. She exposes
her own life, too, documenting her job as a stripper in a Paris
nightclub and her Las Vegas wedding to filmmaker and collaborator
Greg Shephard in their film Double Blind. Calle often works like
a private investigator or scientist in the field. In 1986, she asked
a series of people born without sight to describe their personal
image of beauty in a project called The Blind. In its artistic context,
The Blind underlines notions of beauty in relation to the visual
since most of her subjects spoke of visual images and she presents
the work as a series of photographs. However, it is also possible
to look at this project as a kind of scientific investigation. Although
the artist did not specifically follow a quantitative scientific
method, The Blind reveals aspects of human experience and perception
from a qualitative point of view. [14]
Almost
100 years ago, in 1909, the Futurist Manifesto was written by the
Italian poet Filippo Marinetti, creating one of the longest-lived art movements
of the 20th century. At the time of its writing, Marinetti and his
artist colleagues were all under 30 years of age, and true to its
name, the manifesto is a testament to the future, embracing the
developments of technology and science that were rapidly becoming
a part of daily life at the time, especially the automobile. Reading
the manifesto today, one has to be struck by its ominous connections
to the Fascist movement and by its glorification of war, but also
by its unabashed embrace of technology. The manifesto embraces the
technology of the changing environment as a new aesthetic -- "We
declare that the splendor of the world has been enriched by a new
beauty: the beauty of speed" -- and violently rejects the traditional
definition of art: "We want to demolish museums and libraries."
Referencing the widespread idea that the development of the steam
engine brought on an annihilation of time and space, the manifesto
states "What is the use of looking behind at the moment when
we must open the mysterious shutters of the impossible? Time and
Space died yesterday." [15]
Although
many aspects of the Futurist Manifesto still appear radical, the
aesthetic appreciation of technological development and even the
rejection of the museum are ideas that are embedded in our contemporary
media art practice. Today, the role of the museum is constantly
called into question with endless discussion on how museums need
to evolve and adapt to new media technologies, and new media artists
themselves strive to adapt to developing technology, in many cases
even participating in the development of these technologies through
individual research or in collaboration with scientists and engineers.
This research brings technology developed for the scientific investigation
of perception, which 50 years ago would have initially been confined
to a laboratory setting, into the real world much sooner. In collaboration
with artists, scientists can bring new tools and knowledge of these
complex systems to the field, while artists contribute a working
process of observation and analysis that never abandoned the real
world for the laboratory. This is creating individual scientist
/ artist hybrids similar to Goethe and Descartes, but with new experimental
and experiential knowledge.
Stuart
Anstis of the Department of Psychology at the University of California
San Diego is one researcher whose laboratory includes real-world
observation and whose work is making a difference in the area of
perception and technology. If you have ever had to drive on a highway
in dense fog, you know the terror of encountering a fast-moving
car at close range with no warning; in 2002, there were 1,200
fog-related vehicle accidents in Wisconsin alone. The danger of
a driving situation in fog is caused by more than just decreased
visibility; there is also an optical illusion of motion that occurs.
Dr. Anstis and his team have determined that in addition to low
visibility, fog also creates the effect of low contrast and that
objects appear to move more slowly in low-contrast situations. Drivers
not only fail to see fast-moving cars in a dense fog, but also
misjudge their own speed and the speed of other cars. Dr. Anstis'
work details the nature of the low contrast movement illusion and
suggests that even some simple graphic indicators on or around the
road might save lives. [16]
Anthony
Hornof and his colleagues in the Department of Computer and Information
Science at the University of Oregon are working on a series of projects
using eye tracking. One aspect of their project that is interesting
to me is that they are working with The Cognitive Modeling and Eye
Tracking Lab at the University of Oregon, a lab that analyzes eye
movement data for scientific research. Traditionally, eye tracking
data is analyzed at length after the experiment because of the complexity
of the information, but the Hornof team suggests that a sonification
of eye movements in real time could provide some information to
researchers before the data is analyzed in detail. [17]
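Hornof's specific sonification design is not spelled out here, so the following is only a guess at what a real-time mapping might sound like: gaze speed drives pitch, so that fixations read as a low, steady tone and saccades as brief upward sweeps. The mapping, parameter names, and numbers are all hypothetical.

    import math

    def gaze_to_pitch(prev_xy, cur_xy, dt, base_hz=220.0, scale=0.5):
        # Map instantaneous gaze speed (pixels per second) to a pitch in Hz.
        # Slow, steady fixations stay near the base pitch; fast saccades
        # jump to much higher pitches, giving an audible running summary
        # of eye activity while the experiment is still underway.
        dx = cur_xy[0] - prev_xy[0]
        dy = cur_xy[1] - prev_xy[1]
        speed = math.hypot(dx, dy) / dt
        return base_hz + scale * speed

    # A 5-pixel drift over 20 ms versus a 120-pixel saccade over 20 ms.
    print(gaze_to_pitch((0, 0), (5, 0), 0.02))    # -> 345.0 Hz
    print(gaze_to_pitch((0, 0), (120, 0), 0.02))  # -> 3220.0 Hz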
Although
eye movements and attention have been determined to be separate,
there is an important link: attention is focused on a particular
stimulus a split second before the eye is directed to look at it.
The Volvo corporation has accepted the link between eye movement
and attention and is using it as the basis of a new driving system
that keeps track of the driver's eye movements and delivers a warning
if the driver is not paying enough attention to the road.
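Volvo's actual algorithm is proprietary and not described here, but a hedged sketch of the general idea might look like the following: a warning fires when too little of the recent gaze time has been on the road. The window length and threshold are invented for illustration.

    from collections import deque

    class AttentionMonitor:
        # Warn when too little of the recent gaze time has been on the road.
        # The window length and threshold are invented for illustration.
        def __init__(self, samples_per_window=60, min_on_road_fraction=0.7):
            self.window = deque(maxlen=samples_per_window)
            self.min_fraction = min_on_road_fraction

        def update(self, gaze_on_road):
            # gaze_on_road: True if the eye tracker reports the eyes on the road.
            self.window.append(1 if gaze_on_road else 0)
            if len(self.window) < self.window.maxlen:
                return False  # not enough samples collected yet
            return sum(self.window) / len(self.window) < self.min_fraction

    monitor = AttentionMonitor()
    for sample in [True] * 40 + [False] * 25:
        if monitor.update(sample):
            print("warning: eyes off the road too long")
            break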
Of
special interest to Volvo is helping professional drivers monitor
drowsiness. This work is the contemporary equivalent of the Gilbreths'
early twentieth-century work on workload management for machine operators.
In the case of Volvo, there is an attempt to measure mental energy
through the monitoring of eye movements rather than the physical
energy monitoring of the Gilbreths' photographs. [18]
Interactive
moving image technology presents a unique opportunity to not only
portray objects and subjects in motion, but to portray the experience
of the observer in motion. Computerized vision systems have to be
able to distinguish form and color in various lighting situations,
and perhaps most importantly, they have to be able to perceive various
kinds of movement. Robotic vision systems also have to be able to
differentiate between internal movement (i.e., the movement of the
robot or its camera) and external movement. Since most
computer vision systems use digital video as the input source, it
is possible to detect a change in pixels from one frame to the next
and compare the changes to camera movement data and other information.
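The pixel-change idea in its simplest form is frame differencing. Here is a minimal sketch in pure Python over nested lists, so it is slow but dependency-free; the threshold and the toy frames are arbitrary illustrations.

    def changed_pixels(prev_frame, cur_frame, threshold=25):
        # Count pixels whose grayscale value changed by more than 'threshold'
        # between two frames. A large count suggests motion somewhere in the
        # scene; if the camera itself is moving, nearly every pixel changes,
        # which is one crude way to separate internal from external motion.
        count = 0
        for prev_row, cur_row in zip(prev_frame, cur_frame):
            for p, c in zip(prev_row, cur_row):
                if abs(c - p) > threshold:
                    count += 1
        return count

    frame_a = [[10, 10, 10], [10, 10, 10]]
    frame_b = [[10, 10, 10], [10, 200, 10]]   # one bright spot has appeared
    print(changed_pixels(frame_a, frame_b))   # -> 1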
In
an attempt to create realistic computer gaming experiences and intelligent
robot movement, researchers have begun to study how biological creatures
are able to perceive and resolve the motion of their bodies. There
has recently been an increasing interest in the visual systems of
insects by researchers designing computer vision for use in robot
navigation. For example, researchers at The Center for Visual Science
at Australian National University (ANU) in Canberra and the Department
of Computer Science at Curtin University in Perth are collaborating
on a project exploring robot navigation inspired by principles of
insect vision. [19]
Imagine
trying to develop a way for your car to travel without you or any
person as the driver. Although it would be convenient to have your
car do errands for you while you stayed at home, it would certainly
be a challenge to design your car to respond to all the unexpected
events that happen while driving. You might be able to easily program
the basic sequence for starting the car, accelerating, stopping,
and turning, but it would be impossible to predict every situation
your unmanned drone car might encounter. So, you might consider
giving your drone car a basic perceptual system. As you start to
consider all the ways in which the drone car might need to respond,
the perceptual system starts to require more and more capabilities.
Then you might consider that the perceptual system you design perhaps
shouldn't be an exact model of human perception. For example, you
are aware of the motion-based illusion that occurs in fog and was
discovered by Dr. Anstis, so perhaps the perceptual control system
for the drone car should behave in a different way.
If
you imagine increasing the complexity of the problem of the drone
car to a helicopter, you have the problem that Professor Mandayam
Srinivasan of the Visual Sciences Group at the Australian National
University (ANU) is trying to solve. Small, pilotless aircraft
called 'drones' are in high demand by
the defense industry. By operating planes without pilots on board,
a military operation obviously risks the lives of fewer soldiers.
However, not having a pilot on board means that there is no one in
the aircraft able to perceive and respond to the complex situation. One of the
Visual Sciences Group's possible solutions to that problem is the
Bee Chopper. [20]
Srinivasan
has discovered that bees are able to navigate down the center of
a narrow tunnel by balancing the apparent speed of image motion
on either side of the tunnel. It turns out that the wider angle of view afforded
by insect eyes and the immobility of the eyes themselves are
actually features and not bugs (pardon the pun). A perceptual
system of an organism that estimates location and navigation through
estimating movement around it benefits from having more information
about its surroundings. Hence the benefit of a wider angle of view.
How does an organism tell the difference between movement in the
world and the movement of its own eyes? Well, in the insect's
simplified visual system, at least, the need to distinguish between
an eye movement and a body movement is not part of the picture,
since the insect's eyes cannot move. By observing bees and their fantastic
ability to navigate long distances through the air only to gently
land on a specific flower petal, Professor Srinivasan and his research
group are finding a model that solves the complex problem of unmanned
flight and are shedding light on ways that artificial vision systems
might assist humans in adapting to rapidly changing technology.
The dragonfly, the bee, and the peering locust are also helping
to illuminate this world of vision in motion.
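Srinivasan's centering result can be caricatured in a few lines of control code: steer so that the image motion seen on the left and on the right stays equal. The gain and the flow 'measurements' below are placeholders of mine; real optic-flow estimation is far more involved than this sketch suggests.

    def steering_command(left_flow, right_flow, gain=0.5):
        # left_flow, right_flow: apparent image speeds seen on each side.
        # A wall that is too close produces faster flow, so a positive
        # command means 'turn right' (away from the left wall) and a
        # negative command means 'turn left'.
        return gain * (left_flow - right_flow)

    # Drifting toward the left wall: the left flow is faster, so steer right.
    print(steering_command(left_flow=8.0, right_flow=3.0))   # -> 2.5
    # Centered in the tunnel: equal flow, no correction needed.
    print(steering_command(left_flow=5.0, right_flow=5.0))   # -> 0.0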
References:
[1]
Wallace, G. K. "Visual Scanning in the Desert Locust Schistocerca
Gregaria" in The Journal of Experimental Biology 36 (1959),
p. 512-515
[2] One of the
first people to simulate birds flocking in virtual 3D was Craig
Reynolds. He has created a number of Java-based simulations he calls
'boids' available on his web site: http://www.red3d.com/cwr/boids/
[3] "USDOT Releases 2002 Highway Fatality Statistics," U.S. Department of Transportation, Office of Public Affairs, Washington, D.C., July 17, 2003 (accessed October 7, 2003), http://www.nhtsa.dot.gov
[4] Shipp, S., de Jong, B. M., Zihl, J., Frackowiak, R. S. J., and Zeki, S., in Brain 117(5) (October 1994), p. 1023-1038
[5] From "Seeing,
Hearing, and Smelling the World," A Report from the Howard
Hughes Medical Institute "How We See Things That Move: The
Strange Symptoms of Blindness to Motion"
[6] New Frontiers
in Science, 2000. "Seeing the brain through a fly's eye"
is an interactive science exhibit based on the work of the Insect
Vision Group. The exhibit was first presented as part of the Royal
Society's New Frontiers in Science exhibition, held
during June 2000, and then at the University of Cambridge's Museum
of Zoology. The exhibit was designed by Simon Laughlin, Brian Burton,
Rob Harris, Gonzalo Garcia de Polavieja at the University of Cambridge
and Ben Tatler at the University of Sussex. http://www.zoo.cam.ac.uk/ZOOSTAFF/laughlin/nfis2000/index.html
[7] Kernan,
Michael, "Catching a Glimpse of America's Industrial Past"
in Smithsonian Magazine (May 1998)
[8] Wertheimer,
Max, "Über Gestalttheorie" [an address before the
Kant Society, Berlin, December 7, 1924], Erlangen, 1925. Translated
by Willis D. Ellis in A Source Book of Gestalt Psychology (Harcourt, Brace
and Co.: New York, 1938)
[9] Thomas,
Ann, ed., Beauty of Another Order: Photography in Science (Yale University
Press: New Haven 1998)
[10] Wozniak,
Robert H., "Mind and Body: Rene Descartes to William James" in Serendip,
http://serendip.brynmawr.edu/exhibitions/Mind/Consciousness.html
[11] Slaney,
Malcolm (Interval Research, Inc.), "A Critique of Pure Audition,"
from the Proceedings of the Computational Auditory Scene Analysis
Workshop, 1995
[12] Marr, D.,
Vision (Freeman Publishers, 1982)
Blake, A. and Yuille, A., eds., Active Vision (MIT Press: Cambridge,
MA, 1992)
[13] Zanker,
Johannes M. and Zeil, Jochen (Visual Science Group, Research School
of Biological Sciences at Australian National University), "Processing
Motion in the Real World" in Motion Vision - Computational,
Neural, and Ecological Constraints (Springer-Verlag: New York, 2001)
[14] Bois, Yves-Alain,
"Character study," Artforum (April 2000)
[15] Marinetti,
F.T., "The Founding and Manifesto of Futurism," Le Figaro,
Paris, February 20, 1909
[16] Anstis,
Stuart, "Moving in a Fog: Stimulus Contrast Affects the Perceived
Speed and Direction of Motion," Unpublished Paper (2002); Department
of Psychology UCSD, 9500 Gilman Drive, La Jolla, CA 92093
[17] Hornof,
Anthony, Cavender, Anna, Hoselton, Rob, and Sato, Linda, "Art
and Music With the Eyes," unpublished manuscript, Department
of Computer and Information Science, University of Oregon
[18] Volvo Trucks
Researches Driver Distraction, Volvo Truck Corporation Press Release,
May 7, 2003
[19] Srinivasan,
M. V. et al., "Robot Navigation Inspired by Principles of Insect
Vision" in Robotics and Autonomous Systems 26 (1999), p. 203-216
[20] Quantum
ABC Television interview with Srinivasan, M. V.: "Bee Chopper"
Thursday, February 15, 2001; transcript available at http://www.abc.net.au/quantum/s244449.htm