Friday, April 22, 2022

Reading Notes: April 22nd, 2022

“All accounts of awareness other than intentionalism require objects of awareness to be actually existent or, as I shall put it, real. Central to intentionalism is the denial that the expression “is aware of” must express a relation between two entities. On this view, to speak of an object of awareness is not necessarily to speak of an entity that is an object of awareness; for some objects do not exist.” (Smith, The Problem of Perception, 234) [Underlining is mine]
“For according to a fairly widespread alternative usage—to be found, for example, in both the Germanic and the Scholastic traditions—the term “object” specifically connotes being an object for a subject. The relevant German and Latin terms themselves suggest this: a Gegenstand is that which stands opposite, over against a cognizing subject; and an objectum is that which is thrown towards—towards a cognizing subject.” (Smith, The Problem of Perception, 236) 
“One immediate objection that many will have to the invocation of non-existents is that it cannot possibly do justice to the real sensory states that are involved in perception. Suppose you hallucinate a vivid green patch on a wall. You attend to it carefully, perhaps describing its particular color and brightness. Although intentionalism does not follow Evans and McDowell in claiming, absurdly, that in this situation you are aware of nothing, is it really any better to be told that you are not aware of anything that actually exists? Surely there is, in the situation just described, a concrete exemplification of green (or ‘green’). Surely something really exists, something that you are attending to. Although perhaps initially tempting, this reaction in effect simply ignores the analysis of perceptual consciousness developed in Part I of this work. “Something,” indeed, really does exist in the situation in question: you exist, and your visual experience with its sensory character exists. In particular, there is, actually in your sensory experience, something corresponding to the greenness that you see on the wall: namely, an instance of a chromatic quale. Neither this quale, nor the sensory experience of which it is a characteristic, is, however, the object of awareness—as we saw in Part I. Your object is a patch on a wall. It is only that that doesn’t exist.” (Smith, The Problem of Perception, 238) 
“Perception is the analysis of sensory input in the context of our prior perceptual experience of the world. The goal of such analysis in visual perception is to infer the identities, forms, and spatial arrangement of objects in the three-dimensional (3-D) scene based on our two-dimensional (2-D) retinal images. Computational approaches to perception seek to elucidate the theoretical principles and to model the mechanisms underlying these analyses and inferential processes.” (Lee, Entry on “Computational Approaches to Perception” in The Encyclopedia of Perception, Vol. I, 278) [Underlining is mine]
Reductive functionalists claim that mental states are identical to certain functional states: The conditions that define the different types of mental states of a system, whether biological or not, refer only to relations between inputs to the system, outputs from the system, and other mental states of the system. The relations among inputs, outputs, and mental states are typically taken to be causal relations. However, the reductive functionalist does not claim that these causal relations cause mental states. Instead, this functionalist claims that mental states are certain functional states. In particular, states of consciousness are mental states and are thus, according to the reductive functionalist, identical to certain functional states….Nonreductive functionalists claim that mental states arise from functional organization but are not functional states. Consciousness, in particular, is determined by functional organization, but it is not identical to, or reducible to, functional organization. Nonreductive functionalism is, in one sense, a weaker claim than is reductive functionalism because it claims only that functional organization determines mental states, but drops the stronger claim that mental states are identical to functional states. But in another sense nonreductive functionalism is a stronger, and puzzling, claim: Mental states, and conscious experiences in particular, are something other than functional states, and therefore have properties beyond those of functional states. This proposed dualism of properties raises the unsolved puzzle of precisely what these new properties are and how they are related to functional properties….[M]ost arguments in favor of computer consciousness are based on functionalist assumptions. Thus, the possibility of spectrum inversion is still widely debated” (Hoffman, Entry on “Computer Consciousness” in The Encyclopedia of Perception, Vol. I, 284-285) 
“Although the input to the human visual system is just a collection of values associated with outputs of individual photoreceptors, we perceive a number of visual groups, usually associated with objects or well-defined parts of objects….The process of image formation, whether in the eye or in a camera, results in the loss of depth information. All points in the external three-dimensional (3-D) world that lie on a ray passing through the optical center are projected to the same point in the two-dimensional image. During reconstruction, we seek to recover the 3-D information that is lost during projection….Given a single image, many possible 3-D worlds could project to the image.” (Malik, Entry on “Computer Vision” in The Encyclopedia of Perception, Vol. I, 296) 
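Malik’s point about projection can be made concrete with a small sketch. This is my own illustration, not from the entry, assuming a simple pinhole camera with its optical center at the origin: every 3-D point along a ray through the optical center maps to the same 2-D image point, so depth is divided out and lost.

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Pinhole projection: map a 3-D point (x, y, z) to 2-D image
    coordinates (f*x/z, f*y/z). The depth z is divided out and lost."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

# Two different 3-D points on the same ray through the optical center...
near_point = np.array([1.0, 2.0, 4.0])
far_point = near_point * 2.5   # same direction, 2.5x farther away

# ...project to the identical 2-D image point: depth is unrecoverable.
print(project(near_point))   # [0.25 0.5]
print(project(far_point))    # [0.25 0.5]
```

Inverting this map is exactly the reconstruction problem the entry describes: infinitely many 3-D worlds are consistent with one image.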
“Depth perception in general can be understood as a reconstructive process that interprets the retinal image in our eye such that a three-dimensional (3-D) object arises in our mind. Pictures and films can also provide vivid impressions of depth. This pictorial depth differs in nature. It is a constructive process of its own and presents an additional level of difficulty. Normal vision allows us to glean information about an object’s shape and color as well as about such things as its spatial relations, its mass, and its potential danger. Normal vision typically reconstructs the real-world object which gives rise to the retinal image with admirable precision. This is possible because our visual system is able to resolve the many ambiguities present in the retinal image. Pictorial depth is both more confined and broader than normal depth….In normal viewing, a large number of 3-D objects would qualify as permissible reconstructions that could be made on the basis of one given retinal image. To date, perceptual psychologists have not been able to agree about just how the mind solves this so-called underspecification problem and singles out the one reconstruction that ends up in our awareness.” (Hecht, Entry on “Depth Perception in Pictures/Film” in The Encyclopedia of Perception, Vol. I, 358-359) [Underlining is mine]
“In particular, to say that the perceiver is aware of an internal representation sets up a logical regress, implying an inner perceiver (homunculus) who must create an inner representation of the representation….The regress can be avoided by claiming that awareness of an external object is constituted by having an internal representation of it; the perceiver is not aware of the representation itself, but of what it represents (its content). Thus, one perceives the tomato (the object of awareness) in virtue of possessing an internal representation of it (the vehicle of awareness). This move satisfies some philosophers that perception is direct in the traditional sense, yet on this view, the perceiver experiences the content of a representation rather than the living tomato. The representation must somehow be derived from the visual input by a process that establishes its content….If perceptual awareness consists of having representations, how does the perceptual system determine the environmental entities to which they correspond? Without some independent, extrasensory access to the world, there appears to be no way to establish which internal states indicate which environmental properties, or which representations stand for tomatoes and which for elephants.” (Warren, Entry on “Direct Perception” in The Encyclopedia of Perception, Vol. I, 367) [Underlining is mine]
“The indirect solution is inference to the best explanation: The perceptual system infers a representation of the world that best accounts for the order in sensory input. For example, a particular sequence of gray blobs with an extended protuberance may be best explained by the presence of an elephant, rather than a tomato. However, as Hermann von Helmholtz understood by the mid-19th century, this inference process presumes that the perceptual system already possesses knowledge about (1) the structure of the world, including the sorts of entities that exist and predicates to describe them, and (2) how the world structures sensory input, such as a theory of image formation and transduction. The trouble is that such prior knowledge must somehow be acquired, again in an extrasensory manner….A common response is that prior knowledge has evolved via natural selection or learning, but…this seems to require an organism that already has a working perceptual system—including the requisite prior knowledge—as a precondition. The indirect position thus appears to be circular. There is a further problem with treating perception as a process of inference. Inference is a logical relation that holds between conscious mental states (beliefs, thoughts, statements) corresponding to premises and conclusions. But as we have just seen, if we are to avoid the representationalist fallacy, perception cannot be based on conscious awareness of internal states. If the perceptual process is unconscious, then whatever else it may be, it cannot be inferential; the same goes for related terms such as hypothesis, clue, evidence, and assumption. The notion of perception as unconscious inference…is thus inconsistent. Computational theories seek to avoid this objection by treating perception as a process of computation over representations, but this leaves [the] problem unresolved.” (Warren, Entry on “Direct Perception” in The Encyclopedia of Perception, Vol. I, 368) [Underlining is mine]
“Another argument against direct perception holds that perception is underdetermined by the available information. The stimulation at the receptors is said to be inherently impoverished or ambiguous, insufficient to uniquely specify environmental objects and events. A tomato is a three-dimensional spherical object, but its retinal image is just a two-dimensional circular form; working backward, this image could correspond to a flat disk or various ellipsoidal objects stretched along the line of sight.” (Warren, Entry on “Direct Perception” in The Encyclopedia of Perception, Vol. I, 369) [Underlining is mine]
“How are events in the external world transformed into perceptual experiences via electrical coding in the brain? This simple question forms one of the most basic and long-standing problems of perception. Magnetoencephalography, or MEG, is one of several noninvasive brain imaging techniques that allow scientists to explore the link between neural activity and perception. Like the related technique of electroencephalography (EEG), MEG essentially measures electrical currents generated by neural activity. MEG measures these electrical currents indirectly, through their magnetic fields. (It is a basic principle of physics that moving electrical currents produce magnetic fields.) MEG has excellent temporal resolution, on the order of milliseconds, allowing noninvasive real-time recording of neural activity. Therefore, this technique is well suited to examine the time course of perceptual processing in the brain. However, in contrast to its high temporal resolution, the spatial localization of MEG is relatively poor. That is, MEG can indicate when neural responses occur with great precision, but not exactly where the activity takes place. Nevertheless, its millisecond temporal resolution makes MEG a valuable tool for both basic research and clinical applications. As previously described, MEG measures magnetic fields associated with neural activity…. Recorded MEG data represents changes in magnetic field strength as a function of time.” (Harris, Entry on “Magnetoencephalography” in The Encyclopedia of Perception, Vol. I, 543-544) 
“Neurons communicate throughout the brain using spikes of activity, known as action potentials, on the order of one millisecond in duration. These spikes are caused by a transient change in ion concentrations across the cell membrane. This leads to a change in membrane voltage that can be picked up by electrodes inserted near the neuron. Although there are methods of measuring neural response collectively (such as [fMRI], [EEG], optical imaging) single-unit electrophysiology offers an accurate means of measuring individual neural response both in time and space. For example, imagine trying to understand human speech by listening to a crowd of voices. EEG measures electrical impulses from the scalp. It is temporally precise but averages over space—like a microphone above a crowd. The analogy for fMRI, which measures changes in blood flow to a particular area following neural response, would be moving the microphone closer to a smaller group, but giving the average sound level every second. Single unit electrophysiology, in contrast to these other techniques, places a good microphone next to a particular speaker. It may not tell everything about the crowd (or even the other half of the conversation) but it gives detailed information about the individual person/neuron. A full understanding of neural representation involves relating this response to the outside world. Early studies by Stephen Kuffler and others demonstrated that neurons in visual areas have receptive fields: regions of visual space where patterns of light can influence the firing of the neuron. The receptive fields of retinal ganglion neurons (whose axons make up the optic nerve) typically demonstrate a center-surround organization, where a central region might excite (or suppress) a neuron, while a surrounding region would do the opposite—these neurons respond best to differences in light levels.” (Albert, Entry on “Neural Representation/Coding” in The Encyclopedia of Perception, Vol. I, 627-628) 
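The center-surround organization Kuffler found is classically modeled as a difference of Gaussians. The following is a minimal sketch of that model (my illustration, not from the entry): a narrow excitatory center minus a broad inhibitory surround, balanced so the unit ignores uniform illumination but responds to a difference in light levels, just as the entry says these neurons do.

```python
import numpy as np

def difference_of_gaussians(size=9, sigma_center=1.0, sigma_surround=3.0):
    """Toy center-surround receptive field: a narrow excitatory Gaussian
    minus a broad inhibitory one, each normalized to sum to 1, so the
    whole field sums to zero and uniform light yields no net response."""
    coords = np.arange(size) - size // 2
    xx, yy = np.meshgrid(coords, coords)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    center /= center.sum()
    surround /= surround.sum()
    return center - surround

rf = difference_of_gaussians()

uniform = np.ones((9, 9))   # flat illumination across the whole field
edge = np.ones((9, 9))
edge[:, :4] = 0.0           # a light/dark boundary through the field

# The unit ignores uniform light but responds to the contrast at the edge.
print(abs(np.sum(rf * uniform)))   # ~0
print(abs(np.sum(rf * edge)))      # clearly nonzero
```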
“Later work by David Hubel and Torsten Wiesel measured the receptive fields of V1 neurons, which have significantly different responses to stimuli. These neurons fire more strongly to particular orientations of lines, along with other stimulus features. One class of neurons (simple cells) was shown to have oriented regions of alternating lighting preference. In contrast to retinal ganglion neurons, which have a circularly symmetric, center-surround structure, these V1 cells find the difference between two or more nearby elongated regions. Later work mapped these receptive fields and showed how the particular pattern of light and dark preferences could be fit by a particular mathematical function, a two-dimensional (2-D) Gabor function—the details of which are not discussed here. A simple cell’s selectivity for particular stimulus features, such as line orientation and position, are evident from a map of its subregions, or equivalently, the mathematical parameters of the Gabor function representation. However, it is well understood that such models are idealizations that account for only a portion of the neural response in these cells, and later visual stages are more difficult to characterize.” (Albert, Entry on “Neural Representation/Coding” in The Encyclopedia of Perception, Vol. I, 628) 
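The Gabor model mentioned above can be sketched directly. This toy example (mine, not the entry's) builds a 2-D Gabor, a sinusoidal carrier under a Gaussian envelope, and checks that such a field responds more strongly to a grating at its preferred orientation than to the same grating rotated 90 degrees:

```python
import numpy as np

def gabor(size=21, sigma=3.0, wavelength=6.0, theta=0.0, phase=0.0):
    """2-D Gabor: a sinusoidal grating at orientation theta, windowed by
    a circular Gaussian envelope. Roughly the classical model of a V1
    simple cell's receptive field; theta, wavelength, and phase play the
    role of the cell's stimulus preferences."""
    coords = np.arange(size) - size // 2
    xx, yy = np.meshgrid(coords, coords)
    # Rotate coordinates so the carrier varies along orientation theta.
    x_rot = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

rf = gabor(theta=0.0)                 # a unit preferring one orientation
preferred = gabor(theta=0.0)          # grating at the preferred orientation
orthogonal = gabor(theta=np.pi / 2)   # same grating rotated 90 degrees

# Orientation selectivity: the matched grating drives the unit far harder.
print(np.sum(rf * preferred) > np.sum(rf * orthogonal))  # True
```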
“Using spots, bars, and gratings as stimuli is helpful, as they can be fully described by a small set of numbers—such as position, orientation, and size—but often these stimuli do not provide enough variation in examples to fully probe how stimuli can affect the behavior of the neuron. Unstructured noise stimuli (like the “snow” on an old TV set that wasn’t tuned properly) can also be presented to a neuron and related to the resulting neural response. For example, if all the random stimuli that produced a neural spike are collected and then averaged together, the result is called the spike-triggered average (STA). This produces a receptive field as the simplest method of reverse correlation. However, such models require a great deal of data and have only a limited ability to characterize a neuron’s response; because of this, it is clear that the response to a set of simple stimuli does not provide a straightforward prediction of how the neuron will respond to more complex stimuli. As previously noted, a number of methods are used in an attempt to characterize “what” causes a neuron to respond. Such methods can give us succinct mathematical descriptions that offer some predictive value for individual neurons. However, as the mathematical models become more complex, it becomes more difficult to understand the behavior of the neuron in a coherent way. Even if we could fully describe and predict the response behavior of a neuron, the question remains: “Why” does the neuron respond in that particular way? For our visual example, why do V1 simple cell receptive fields have the particular pattern of light and dark preferences (2-D Gabor functions)? The ecological, efficient coding approach states that the goal of sensory processing is to efficiently represent the information that is behaviorally relevant to the animal. In the case of V1, the incoming visual information is coming from the natural world. 
There are particular properties of natural images that would suggest some codes are better than others. For example, light intensity often only changes at contours, so encoding primarily those changes by responding to lines/edges would be more efficient. What occurs in the left eye correlates with what occurs in the right eye, so receptive fields in each eye should be related for a particular neuron. In general, natural images are highly redundant, and removing these forms of redundancy in natural images would allow animals to use the information efficiently. Work by a variety of researchers, discussed next, has demonstrated that many goals of the early visual system directly relate the behavior of these neurons to the mathematical properties of natural images.…One can place coding strategies along a spectrum from local, grandmother cell codes to distributed codes. A local code uses one neuron or relatively few neurons to represent a single, relevant piece of information. The traditional example is a grandmother cell code where the firing of one particular neuron represents information, such as whether or not your grandmother is present. Such neural responses would be easy to learn from and react to (approach or avoidance, for example), but clearly this code has disadvantages. For example, there are not enough neurons in the brain to represent every potential combination of visual features.  On the other extreme, a distributed code uses many neurons to represent a single, relevant piece of information. For example, compression strategies can often result in highly distributed codes because part of the goal is to fully utilize the response range of every neuron. Taken to the extreme, a fully distributed code would be unreasonable in the brain as learning from and decoding such representations can be cumbersome. 
It would be difficult to respond to a neuron’s firing if you need to sample input from every other neuron to interpret what that response means….The efficient coding approach argues that the ultimate goal of any neural representation is to be useful ecologically. The type of representation should increase the animal’s evolutionary fitness.” (Albert, Entry on “Neural Representation/Coding” in The Encyclopedia of Perception, Vol. I, 629-630) 
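The spike-triggered average described above is easy to simulate. In this toy sketch (my illustration; the neuron, its 1-D receptive field, and the threshold nonlinearity are all assumptions), white-noise stimuli are shown to a linear-threshold unit, and averaging only the stimuli that elicited a spike recovers the unit's hidden receptive field up to scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical neuron with a hidden 1-D receptive field: it spikes
# whenever a stimulus projects strongly enough onto that field.
true_rf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5])
true_rf /= np.linalg.norm(true_rf)

stimuli = rng.standard_normal((20000, true_rf.size))  # white-noise "snow"
spikes = stimuli @ true_rf > 1.0                      # threshold nonlinearity

# Spike-triggered average: the mean of the stimuli that produced a spike.
sta = stimuli[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)

# The normalized STA aligns closely with the hidden receptive field.
print(np.dot(sta, true_rf))  # close to 1.0
```

As the entry notes, this only characterizes the linear part of the response; a real neuron's behavior is far less fully captured.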
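The efficient-coding observation that light intensity often changes only at contours can also be illustrated. In this sketch (mine, not the entry's), a piecewise-constant signal stands in for a natural image row: it is highly redundant, and difference coding, a 1-D analogue of responding only to lines and edges, turns it into a sparse code:

```python
import numpy as np

# A piecewise-constant "natural" signal: intensity changes only at contours.
signal = np.concatenate([np.full(50, 0.2), np.full(70, 0.8), np.full(30, 0.5)])

# Difference coding (responding only to changes, as edge detectors do)
# removes the redundancy: almost every response is zero.
responses = np.diff(signal)

print(np.count_nonzero(responses))  # 2 -- only the two contour locations
```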
“A person’s entire life experience—everyone, everything, every experience he or she has ever known—exists to that person only as a function of his or her brain’s activity. As such, it does not necessarily reproduce the physical reality of the world with high fidelity. Nonveridical perception is the sensory or cognitive discrepancy between the subjective perception and the physical world. Of course, many experiences in daily life reflect the physical stimuli that fall into one’s eyes, ears, skin, nose, and tongue. Otherwise, action or navigation in the physical world would be impossible. But the same neural machinery that interprets veridical sensory inputs is also responsible for one’s dreams, imaginings, and failings of memory. Thus, the real and the illusory or misperceived have the same physical basis in a person’s brain. Misperceptions (that is, perceptions that do not match the physical or veridical world) can arise from both normal and pathological processes. Everyday perception in the normal brain includes numerous sensory, multisensory, and cognitive misperceptions and illusions….Sensory misperceptions are phenomena in which the subjective perception of a stimulus does not match the physical reality. Sensory misperceptions occur because neural circuits in the brain amplify, suppress, converge, and diverge sensory information in a fashion that ultimately leaves the observer with a subjective perception that is different from the reality….In visual illusion, the observer may perceive a visual object or scene that is different from the veridical one. Alternatively, the observer may perceive an object that is not physically present, or fail to perceive an object that is extant in the world….In an auditory illusion, the listener may perceive sounds that are not present or that are different from those physically present.” (Martinez-Conde and Macknik, Entry on “Nonveridical Perception” in The Encyclopedia of Perception, Vol. I, 637-638) [Underlining is mine]
“Thus, in a way, we all live in the illusory “matrix” created by our brains. [N]eurologist and Nobel laureate Sir John Eccles wrote that the natural world contains no color, sound, textures, patterns, beauty, or scent. Thus, color, brightness, smell, and sound are not absolute terms, but subjective, relative experiences that are actively created by complicated brain circuits. This is true not only of sensory perceptions, but of any other experience. Whether we feel the sensation of “redness,” the appearance of “squareness,” or emotions such as love or hate, these are constructs that result from electrochemical impulses in our brain.” (Martinez-Conde and Macknik, Entry on “Nonveridical Perception” in The Encyclopedia of Perception, Vol. I, 642) [Underlining is mine]
“A central assumption of sensory neurobiology is that the neural substrate of perception is the electrical activity of the sensory neurons activated by a given stimulus, that is, that understanding how sensory neurons respond to sensory stimuli will lead to an understanding of how organisms respond to sensory stimuli. But no less important than which stimuli are effective is where stimuli must be located to elicit neural responses. The receptive field of a sensory neuron is the region in the sensory periphery, for example a portion of the retina or of the body surface, within which stimuli can influence the electrical activity of that cell. The concept of the receptive field is central to sensory neurobiology in providing a description of the location at which sensory stimuli must be presented to a neuron to elicit responses….The receptive field of a sensory neuron anywhere in the nervous system is defined by its synaptic inputs; each cell’s receptive field results from the combination of fields of all of the neurons providing input to it. Because inputs are not simply summed, referring to the receptive field properties of a neuron commonly means what stimuli the cell responds to….The characterization of the receptive field properties of neurons informs us how single cells analyze the sensory world. The question remains: How are their collective responses put together to form sensory experience?” (Levitt, Entry on “Receptive Fields” in The Encyclopedia of Perception, Vol. I, 860-862) [Underlining is mine]
“Schopenhauer attempts to reinforce this theoretical position by an argument based on empirical observation, thus falling into the most gross and self-evident of contradictions. ‘…that which is immediately [and empirically] known is limited by the skin, or rather by the external end of the nerves which lead out from the cerebral system. Without lies a world of which we have no other knowledge than through pictures in our head.’ Here the subjectivity of the impressions of sense is attempted to be proved by a process of reasoning based on the physical construction of our bodies, but unfortunately for Schopenhauer’s contention our bodies…it is evident that such an argument could have weight only when we assume we possess a knowledge of the construction of our bodies not derived through empirical means….Thus Schopenhauer, following the most radical of the French materialists, reduces the intellect to a simple function of the brain, while at the same time asserting that the whole [empirical] world hangs on a single thread, consciousness.” (Colvin, Schopenhauer's Doctrine of the Thing-in-itself and His Attempt to Relate it to the World of Phenomena, 18-19) [Underlining is mine]
