Surprise, bafflement, fascination, excitement, hilarity, delight: all these and more are a part of the optimistic understanding of error. This model is harder to recognize around us, since it is forever being crowded out by the noisier notion that error is dangerous, demoralizing, and shameful.
But it exists nonetheless, and it exerts a subtle yet important pull both on our ideas about error and on our ideas about ourselves. These two models of error, optimistic and pessimistic, are in perpetual tension with each other.
We could try to study them in isolation—the discomforts and dangers of being wrong over here, its delights and dividends over there—and we could try to adjudicate between them. But it is when we take these two models together, not when we take them apart, that we begin to understand the forces that shape how we think and feel about being wrong. For a representative of the pessimistic model, we might return to Thomas Aquinas, the medieval friar who tipped his hand in the last chapter by associating error with original sin.
For Aquinas, error was not merely abhorrent but also abnormal, a perversion of the prescribed order of things. Given that all of us get things wrong again and again, how abnormal, he might have asked, can error possibly be? This debate over whether error is normal or abnormal is central to the history of how we think about wrongness.
Take Aquinas and William James: they fundamentally disagreed, but their disagreement was only secondarily about error. These competing ideas of error crop up in efforts to define the term, as we saw when we tried to do so ourselves. As error goes from being a hallmark of the lawless mind to our native condition, people cease to be fundamentally perfectible and become fundamentally imperfect.
Meanwhile, truth goes from being a prize that can be achieved through spiritual or intellectual discipline to a fugitive that forever eludes the human mind. The history of error is not an account of the shift from one of these frameworks to the other. Instead, it is an ongoing, millennia-long argument between the two. Over that time, this argument has come to be defined by several other questions, in addition to, and closely related to, the question of whether screwing up is basically aberrant or basically normal.
One of these questions is whether error is with us to stay or if it can somehow be eradicated. James Sully, a British psychologist whose Illusions constitutes perhaps the most thoroughgoing early investigation of human error, thought that most forms of it would eventually be overcome. A story, it might be observed, traditionally has a beginning, a middle, and an end, and the psychologist Joseph Jastrow, who edited a volume called The Story of Human Error, clearly thought we were approaching the final chapter in the history of wrongness.
We will welcome the new, test it thoroughly, and accept it joyously, in truly scientific fashion. But the idea that we can eradicate error—through evolutionary advancement, technological innovation, establishing an ideal society, or spreading the word of God—has a timeless hold on the human imagination.
Implicit in this idea is the belief that we should want to eradicate error, and in particular cases we plainly do. But eradicating the entirety of error is another matter.
Practicality aside, such an objective presents three problems. The first is that, to believe we can eradicate error, we must also believe that we can consistently distinguish between it and the truth—a faith squarely at odds with remembering that we ourselves could be wrong. Thus the catch-22 of wrongology: in order to get rid of error, we would already need to be infallible.
The second problem with this goal is that virtually all efforts at eradication—even genuinely well-intentioned ones—succumb to the law of unintended consequences. The final problem with seeking to eradicate error is that many such efforts are not well intentioned—or if they are, they tend in the direction for which good intentions are infamous. Consider the long Christian campaigns against heresy: erroneous beliefs were cast as sin or as the work of the devil, and in either case it was the duty of Christians to destroy them. And, as such campaigns also show, there is a slippery slope between advocating the elimination of putatively erroneous beliefs, and advocating the elimination of the institutions, cultures, and—most alarmingly—people who hold them.
The idea that error can be eradicated, then, contains within it a frighteningly reactionary impulse. And yet, at heart, it is an idea about progress: a belief that there is an apex of human achievement, and that the way to reach it is through the steady reduction and eventual elimination of mistakes. But we have another, competing idea of progress as well—one that rests not on the elimination of error but, surprisingly, on its perpetuation. The clearest embodiment of this second idea is the scientific method, whose gist is that observations lead to hypotheses (which must be testable), which are then subjected to experiments (whose results must be reproducible).
If all goes well, the outcome is a theory, a logically consistent, empirically tested explanation for a natural phenomenon. As an ideal of intellectual inquiry and a strategy for the advancement of knowledge, the scientific method is essentially a monument to the utility of error.
Most of us gravitate toward trying to verify our beliefs, to the extent that we bother investigating their validity at all. But scientists gravitate toward falsification; as a community if not as individuals, they seek to disprove their beliefs. Any given theory might never actually be proven wrong. But the important part is that it can be—no matter how much evidence appears to confirm it, no matter how many experts endorse it, no matter how much popular support it enjoys. In fact, not only can any given theory be proven wrong; as we saw in the last chapter, sooner or later, it probably will be.
And when it is, the occasion will mark the success of science, not its failure. This was the pivotal insight of the Scientific Revolution: that the advancement of knowledge depends on current theories collapsing in the face of new insights and discoveries. In this model of progress, errors do not lead us away from the truth.
Instead, they edge us incrementally toward it. During and after the Scientific Revolution, the leading minds of Western Europe took this principle and generalized it. As they saw it, not only scientific theories but also political, social, and even aesthetic ideas were subject to this same pattern of collapse, replacement, and advancement.
In essence, these thinkers identified the problem of error-blindness on a generational and communal scale. We can no more spot the collective errors of our culture than we can spot our own private ones, but we can be sure that they are lurking somewhere.
The thinkers responsible for this insight came by it honestly. They lived at a time when fifteen centuries of foundational truths had lately been disproved or displaced by a staggering influx of new information: about previously unknown plants and animals, about geology and geography, about the structure of the universe, about the breadth and diversity of human culture.
I suppose that if tomorrow a UFO landed in Pittsburgh, I might experience a comparable combination of stunning error and thrilling possibility.
Certainly I would have to rebuild my understanding of the cosmos from the ground up. Faced with that task, many of these thinkers concluded that the best and safest tool for this sweeping intellectual reconstruction was doubt: deep, systematic, abiding, all-encompassing doubt.
Thus Michel de Montaigne, the great Renaissance philosopher and essayist, inscribed above the door of his study que sais-je?: "What do I know?" Such thinkers believed in truth, and they wanted to discover it. But they were chastened by the still-palpable possibility of drastic error, and they understood that, from a sufficiently distant vantage point, even their most cherished convictions might come to look like mistakes. The idea that error can masquerade as truth is at least as old as Plato. It appears in the Bible as well—for instance, as the question of how to tell false prophets from true.
One enduring image of this danger is the false fire: the ghostly will-o'-the-wisp that lures night travelers off their path. Less romantically, false fires also referred to the ones lit by bandits to fool travelers into thinking they were approaching an inn or town. In either case, the metaphor says it all: error, disguised as the light of truth, leads directly into trouble.
But Enlightenment thinkers mined a previously unnoticed aspect of this image: a false fire was not merely an absence of the true light. Instead, it shed a light of its own. True, that light might be flickering or phantasmagoric, but it was still a source of illumination. In this model, error is not the opposite of truth so much as asymptotic to it—a kind of human approximation of truth, a truth-for-now. This is another important dispute in the history of how we think about being wrong: whether error represents an obstacle in the path toward truth, or the path itself.
The former idea is the conventional one. The latter, as we have seen, emerged during the Scientific Revolution and continued to evolve throughout the Enlightenment. One landmark in that evolution was the bell curve. Also known as the error curve or the normal distribution, the bell curve is a way of aggregating individually meaningless, idiosyncratic, or inaccurate data points in order to generate a meaningful and accurate big picture.
Laplace, for instance, used the bell curve to determine the precise orbits of the planets. Individual astronomical observations were riddled with small errors of instrument and eye, but by using the normal distribution to graph these imperfect data points, Laplace was able to generate a far more precise picture of the heavens. Unlike earlier thinkers, who had sought to improve their accuracy by getting rid of error, Laplace realized that you should try to get more error: aggregate enough flawed data, and you get a glimpse of the truth.
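Laplace's insight is easy to demonstrate with a quick simulation (a modern, hypothetical illustration, not his actual astronomical computation): generate many individually unreliable measurements of a single true value and watch their average close in on it.

```python
import random
import statistics

random.seed(42)  # make the run reproducible

TRUE_VALUE = 100.0  # the quantity we are trying to measure

def noisy_measurement() -> float:
    # Each individual observation is off by a normally distributed error,
    # just as any one astronomer's sighting was off by instrument and eye.
    return TRUE_VALUE + random.gauss(0, 5.0)

for n in (1, 10, 100, 10_000):
    estimate = statistics.mean(noisy_measurement() for _ in range(n))
    print(f"{n:>6} observations -> estimate {estimate:7.2f}")
```

No single measurement is trustworthy, but the aggregate homes in on the truth: the more flawed data points we pile up, the sharper the picture becomes.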
The truths Freud cared about are the ones we stash away in our unconscious. By definition, those truths are inaccessible to the reasoning mind—but, he argued in The Psychopathology of Everyday Life, we can catch occasional glimpses of them, and one way we do so is through error. Today, we know these truth-revealing errors as Freudian slips—as the old saw goes, saying one thing and meaning your mother. According to Freud, these seemingly trivial mistakes are neither trivial nor even, in any standard sense, mistakes.
Instead, they arise from—and therefore illuminate—a submerged but significant psychic truth. In addition to these slips, Freud also thought there were a few other avenues by which the secret truths of the unconscious could seep out. One of these, dreams, is relevant to us all. Another, relevant only to the unfortunate few, is insanity.
At first, dreams and madness might not seem terribly germane to this book. But to better understand our mundane misperceptions, it pays to look closely at our extreme ones. So that is where I want to turn now—to dreams, drug trips, hallucinations, and madness; and, by way of those examples, to a closer look at the notion that, through error, we perceive the truth.
However far-fetched this connection between wrongness and whacked-outness might seem, you yourself invoke it routinely. I say this with some confidence, because our everyday ways of thinking and talking about error borrow heavily from the argot of altered states. For starters, we commonly if crudely compare being wrong to being high.
Of all these analogies, the association between erring and dreaming is the most persistent and explicit. In dreams, after all, we believe without reservation in things that are not so; once awake, you recognize them for what they are—baseless chimeras. Although we treat errors and altered states as analogous in certain ways, there is one important respect in which we treat them very differently: mistakes, we feel, are to be shunned. But altered states—some of which really can sicken or kill us—frequently enthrall us.
We keep journals of our dreams and recount them to our friends and family (to say nothing of our therapists). We feel that our lives are illuminated and enriched by them, and we regard those who seldom remember theirs as, in some small but important way, impoverished. We are highly motivated to seek out the reality-altering power of drugs, despite the danger of overdose, addiction, or arrest.
Yet I will say this: once, while running a very high fever in a tropical rainforest, I carried on a long conversation with the poet Samuel Taylor Coleridge, who was sitting on the end of my bed, knitting. Altered states are so compelling that we often do what we can, wisely or otherwise, to produce, reproduce, and prolong them.
The attraction of an altered state is not, as one might initially imagine, just its pure weirdness—how far it diverges from everyday life. Instead, it is the combination of this weirdness and its proximity to everyday life. What is altered, in an altered state, are the elements of the world, the relations among them, and the rules that govern them.
But the way we experience these states remains essentially unchanged. The tools we use to gauge and understand the sober world—our reason, our emotions, and above all our senses—are largely unimpaired and sometimes even enhanced in the trippy one.
As a result, these false worlds have all the intimacy, intensity, and physicality—in short, all the indicators of reality—of the true one. What does it mean about the supposedly real if it can be counterfeited so convincingly? And, conversely, what does it mean about the supposedly unreal if it is so easy to conjure and so intensely convincing? One of the most consistent answers—and the crucial one, for my purposes—is that the false and the true are reversed: that the unreal is, so to speak, the real real.
Freud was hardly alone in believing that dreams disclose hidden truths. So did the writer Artemidorus Daldianus, who, almost two thousand years earlier, penned the Oneirocritica—a Greek Interpretation of Dreams. Virtually every culture in every era has believed that dreams express otherwise inaccessible truths about the dreamer: about her forgotten or unknown past, her secret beliefs and desires, her destiny.
In the same vein, virtually every culture in every era (with the halfway exception of the industrialized West) has regarded visions and hallucinations as revealing the otherwise inaccessible truths of the universe.
From Siberian shamans to Aztec priests to the Merry Pranksters to spiritually inclined potheads the world over (ancient Christians, early Jews, Scythians, Sikhs, Sufis, and Rastafarians, to name just a few), we have regarded our drugs as entheogens—substances that can lay bare the truth of the cosmos and show us the face of God. If dreams and drug states create acute but temporary alterations in our understanding of reality, the chronic version is insanity.
You might think and hope that insanity would take us even further away from everyday error, but instead it brings us full circle. Ultimately, only a few factors seem to distinguish the false reality of madness from the false reality of wrongness. Chief among them is consensus.
We can be wrong about all manner of things, even persistently and purely wrong about them, while still retaining our claim to sanity—just so long as enough other people are wrong about them, too. Madness is radical wrongness. Like all equations, this one is reversible. If madness is radical wrongness, being wrong is minor madness. Minor madness can also be an apt description of how being wrong actually feels. We will meet more than one person in this book who characterizes his or her experience of error as scarily similar to insanity.
We already saw that hallucinations and dreams are widely regarded as revealing greater truths. So too with madness. Societies throughout the ages have nurtured the belief that the insane among us illuminate things as they truly are, despite their own ostensibly deranged relationship to reality. This narrative of wrongness as rightness might have achieved its apotheosis in King Lear, a play that features a real madman (Lear, after he loses it), a sane man disguised as a madman (Edgar), a blind man (Gloucester), and a fool (the Fool).
In the play, insanity is intellectual and moral clarity: it is only after Lear loses his daughters and his senses that he understands what he has done and can feel both loss and love. This idea—that from error springs insight—is a hallmark of the optimistic model of wrongness. It holds even for mundane mistakes, which is why proponents of this model (myself included) see erring as vital to any process of invention and creation.
The example of altered states simply throws this faith into relief: make the error extreme enough, depart not a little way but all the way from agreed-upon reality, and suddenly the humdrum of human fallibility gives way to an ecstasy of understanding.
In place of humiliation and falsehood, we find fulfillment and illumination. Unfortunately, as proponents of the pessimistic model of wrongness will be quick to point out, the reassuring notion that error yields insight does not always comport with experience.
Sometimes, being wrong feels like the death of insight—the moment when a great idea or a grounding belief collapses out from under us. And sometimes, too, our mistakes take too great a toll to be redeemed by easy assurances of lessons learned. Our errors expose the real nature of the universe—or they obscure it. They lead us toward the truth, or they lead us astray.
They are the opposite of reality, or its almost indistinguishable approximation—certainly as close as we mere mortals can ever hope to get. They are abnormalities we should work to eliminate, or inevitabilities we should strive to accept. Together, these two conflicting models form the backbone of our understanding of error.
Before we turn to those experiences of being wrong, I want to introduce two figures who vividly embody these different models of wrongness. They are creatures of mythology, and they do not so much err as animate—and illuminate—the ways we think about error.
The English word "error" descends from the Latin verb errare, meaning to wander or, more rakishly, to roam. To err, then, is to stray from the path; implicitly, what we are seeking—and what we have strayed from—is the truth. Two emblematic wanderers recur in Western legend: one of these is the knight errant and the other is the juif errant—the wandering Jew. The latter figure, a staple of anti-Semitic propaganda, derives from a medieval Christian legend in which a Jew, encountering Jesus on the road to the crucifixion, taunts him for moving so slowly under the weight of the cross.
In response, Jesus condemns the man to roam the earth until the end of time. In this vision, to err is to experience estrangement from God and alienation among men. The knight errant is also a staple of medieval legend, but otherwise he could scarcely be more different. Where the wandering Jew is defined by his sin, the knight errant is distinguished by his virtue; he is explicitly and unfailingly on the side of good.
His most famous representatives include Galahad, Gawain, and Lancelot, those most burnished of knights in shining armor. A bit further afield, they also include Don Quixote, who, as both knight errant and utter lunatic, deserves his own special place in the pantheon of wrongology. Although far from home, the knight is hardly in exile, and still less in disgrace.
Unlike the juif errant, who is commanded to wander and does so aimlessly and in misery, the knight errant is on a quest: he wanders on purpose and with purpose, as well as with pleasure.
It will be clear, I hope, that I am not invoking these archetypes to endorse their obvious prejudices. As embodied by the wandering Jew, erring is both loathsome and agonizing—a deviation from the true and the good, a public spectacle, and a private misery. This image of wrongness is disturbing, especially given the all-too-frequent fate of the non-mythological Jews: abhorred, exiled, very nearly eradicated.
Yet it far more closely resembles our everyday understanding of wrongness than do the virtue and heroism of the knight errant. If this bleak idea of error speaks to us, it is because we recognize in the wandering Jew something of our own soul when we have erred. Sometimes, being wrong really does feel like being exiled: from our community, from our God, even—and perhaps most painfully—from our own best-known self.
So we should acknowledge the figure of the wandering Jew as a good description of how it can feel to be wrong. In light of that, why cleave any more closely than necessary to the most disagreeable vision of wrongness around? We have, after all, a better alternative. In fact, the idea of erring embodied by the wandering knight is not just preferable to the one embodied by the wandering Jew. It is also, and somewhat remarkably, preferable to not erring at all. Being right might be gratifying, but in the end it is static, a mere statement.
Being wrong is hard and humbling, and sometimes even dangerous, but in the end it is a journey, and a story. Who really wants to stay home and be right when you can don your armor, spring up on your steed and go forth to explore the world?
True, you might get lost along the way, get stranded in a swamp, have a scare at the edge of a cliff; thieves might steal your gold, brigands might imprison you in a cave, sorcerers might turn you into a toad—but what of that?
To fuck up is to find adventure: it is in that spirit that this book is written.

Our Senses

"A lady once asked me if I believed in ghosts and apparitions. I answered with truth and simplicity: No, madam, I have seen far too many myself." (Samuel Taylor Coleridge)

In 1818, the British Admiralty dispatched an expedition under Commander John Ross to search for the Northwest Passage, the long-imagined sea route across the top of North America connecting the Atlantic to the Pacific. The existence of such a route was an open question, but its potential economic significance was beyond dispute.
Because virtually all commercial goods were transported by water at the time, faster transit between Europe and Asia would fuel a surge in global trade. By the time the expedition set sail, explorers and fortune seekers had been looking for the route for centuries. The Arctic, however, was a place Ross had never been. Although he had joined the navy at the age of nine, his northernmost service before this voyage had been in Sweden; the rest had been in the English Channel, the West Indies, and the Mediterranean.
It might seem odd to select a man with no regional experience to captain such a pivotal expedition, but as it happened, John Barrow, the second secretary of the British Admiralty who sponsored the voyage, had little choice. Given wide latitude by Barrow to conduct the expedition as he saw fit, Ross determined to explore the sounds at the rim of Baffin Bay to see if any of them gave out onto the hoped-for Northwest Passage. In July, after three months at sea, he and his crew reached the bay—something of a triumph in itself, since Barrow, for one, had openly doubted its existence.
After concluding that Smith Sound and Jones Sound were impassable, they turned their attention to Lancaster, which Ross had considered the most promising of the three. As they entered it, fog hampered the view; shortly thereafter, the fog lifted completely and, Ross wrote in his account of the voyage: I distinctly saw the land, round the bottom of the bay, forming a chain of mountains connected with those which extended along the north and south sides.
This land appeared to be at the distance of eight leagues [about 27 miles]; and Mr. Lewis, the master, and James Haig, leading man, being sent for, they took its bearings, which were inserted in the log…. The verdict on Lancaster Sound seemed plain: instead of opening westward onto a waterway out of Baffin Bay and onward to the Pacific, it ended in land—a vast expanse of ice and high peaks, which Ross named the Croker Mountains.
Disappointed, but having fulfilled the terms of his naval mandate, the commander returned to England. But something odd had happened: several of his officers, among them his second-in-command, William Edward Parry, had seen no mountains blocking the sound. When he got home, Ross nonetheless made his sighting known to John Barrow.
A cloud of mistrust and derision began to gather around Ross, even though, by most measures, he had achieved the extraordinary. Chief among his accomplishments was navigating a British ship through the treacherous waters of the eastern Arctic and returning it safely home. But in the face of the fervor over the Northwest Passage, none of that carried much weight.
Less than a year after the expedition returned, Barrow sent Parry back to Lancaster Sound for a second look. This time, Parry did see the Croker range—and then he sailed right through it. The mountains were a mirage. John Ross had fallen victim to one of the stranger and more fascinating optical phenomena on earth.
Anyone who has been in a car on a hot day is familiar with the mirage in which a pool of water seems to cover the highway in the distance but disappears as you approach. This is called an inferior mirage, or sometimes a desert mirage, since the same phenomenon causes nonexistent oases to appear to travelers in hot, sandy lands.
The mirage that fooled Ross was of the opposite kind, known as a superior or arctic mirage. Unlike inferior mirages, superior mirages show us things that do exist. The mountains that Ross saw were real. They were two hundred miles west of him, on a distant island in the Canadian Arctic. But by bending light rays from beyond the horizon up toward us, superior mirages lift objects into our field of vision that are usually obscured by the curvature of the earth.
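Why are objects two hundred miles off normally invisible at all? Back-of-the-envelope geometry supplies the answer. Ignoring refraction, the distance to the horizon for an eye at height h above a sphere of radius R is approximately sqrt(2Rh). Here is a rough sketch of that arithmetic (the observer and peak heights are assumed figures for illustration):

```python
import math

EARTH_RADIUS_MILES = 3959.0  # mean radius of the earth
FEET_PER_MILE = 5280.0

def horizon_distance_miles(eye_height_feet: float) -> float:
    """Geometric distance to the horizon for an eye at the given height,
    ignoring refraction: d = sqrt(2 * R * h), valid when h << R."""
    h = eye_height_feet / FEET_PER_MILE
    return math.sqrt(2.0 * EARTH_RADIUS_MILES * h)

# From a ship's deck (eye ~50 feet up), the sea horizon is under 9 miles away:
print(round(horizon_distance_miles(50), 1))  # 8.7

# Even a 10,000-foot peak sinks below a deck observer's horizon at about
# 131 miles (deck horizon + peak horizon), far short of Ross's 200 miles:
print(round(horizon_distance_miles(50) + horizon_distance_miles(10_000), 1))
```

Only when a temperature inversion bends light around the curve of the earth, as in a superior mirage, can a sighting span two hundred miles.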
Such mirages begin with a temperature inversion. Normally, air temperatures are warmest near the surface of the earth and start dropping as you go up. Think about how much colder it is on top of a mountain than in the valley below. But in a temperature inversion, this arrangement reverses. A cold layer of air close to the earth—say, directly above the polar land or sea—meets a higher, warmer layer of air created by atypical atmospheric conditions.
This inverted situation dramatically increases the degree to which light can bend. In the Arctic or Antarctic, where surface air temperatures are extremely cold, light sometimes bends so much that the photons that eventually strike any available human retinas can be reflected from objects up to several hundred miles away. The result is, in essence, another kind of false fire—a trick of the light that leads unwary travelers astray.

[Figure: The illusory Croker Mountains, as drawn by John Ross in his travel journal.]
Ross was by no means the first or last seafarer to be fooled by an Arctic mirage. Historians speculate, for instance, that the Vikings ventured to North America (where they landed sometime around AD 1000) after spotting a superior mirage of the mountains of Baffin Island from the coast of Greenland.
As these examples suggest, superior mirages are particularly likely to consist of mountains and other large land masses. While sailing one July day between Greenland and Iceland, the Arctic explorer Robert Bartlett suddenly spotted the coast of the latter country, looming so large that he could easily make out many familiar landmarks.
Like John Ross, Bartlett estimated the apparent distance of the coast at twenty-five or thirty miles away. But he knew its actual distance was more than ten times that, since his ship was then positioned hundreds of miles from the Icelandic coast. That he could see land at all is astonishing—akin to seeing the Washington Monument from Ohio.
Thanks to the accumulated advances of navigational technology, Bartlett knew where his ship really was, and so was able to override his own judgment. His resources may have been better than Ross's, but his senses were equally, spectacularly deceived. Of the very long list of reasons we can get things wrong, the most elementary of them all is that our senses fail us. Although these failures sometimes have grave consequences (just ask Captain Ross), we usually think of sensory mistakes as relatively trivial.
And yet, in many respects, failures of perception capture the essential nature of error. We habitually describe being wrong in the language of distorted perception, albeit mostly without realizing it. When we discover that we have been wrong, we say that we were under an illusion, and when we no longer believe in something, we say that we are disillusioned. More generally, analogies to vision are ubiquitous in the way we think about knowledge and error.
People who possess the truth are perceptive, insightful, observant, illuminated, enlightened, and visionary; by contrast, the ignorant are in the dark. When we comprehend something, we say "I see."
And we say, too, that the scales have fallen from our eyes; that once we were blind, but now we see. This link between seeing and knowing is not just metaphorical. For the most part, we accept as true anything that we see with our own eyes, or register with any of our other senses. We take it on faith that blue is blue, that hot is hot, that we are seeing a palm tree sway in the breeze because there is a breeze blowing and a palm tree growing.
Heat, palm trees, blueness, breeziness: we take these to be attributes of the world that our senses simply and passively absorb. Yet our senses are quite capable of deceiving us, and of doing so under entirely normal circumstances, not just under exceptional ones like those John Ross experienced.
To see how, try a thought experiment. Imagine that you step outside on a clear night, not in Chicago or Houston but in someplace truly dark: the Himalayas, say, or Patagonia, or the north rim of the Grand Canyon.
If you look up in such a place, you will observe that the sky above you is vast and vaulted, its darkness pulled taut from horizon to horizon and perforated by innumerable stars. Stand there long enough and it will dawn on you that your own position in this spectacle is curiously central. The apex of the heavens is directly above you. And the land you are standing on—land that unlike the firmament is quite flat, and unlike the stars is quite stationary—stretches out in all directions from a midpoint that is you.
Spectacular as it is, this picture is also, of course, an illusion: almost everything we see and feel out there on our imaginary Patagonian porch is misleading. The sky is neither vaulted nor revolving around us, the land is neither flat nor stationary, and, sad to say, we ourselves are not the center of the cosmos. Not only are these things wrong, they are canonically wrong. They are to the intellect what the Titanic is to the ego: a permanent puncture wound, a reminder of the sheer scope at which we can err.
What is strange, and not a little disconcerting, is that we can commit such fundamental mistakes by doing nothing more than stepping outside and looking up. No byzantine theorizing was necessary to arrive at the notion that the stars move and we do not. We simply saw the former, and felt the latter. The fallibility of perception was a thorn in the side of early philosophers, because most of them took the senses to be the main source of our knowledge about the world. One early and clever solution to this problem was to deny that there was a problem.
That was the fix favored by Protagoras, the leader of a group of philosophers known as the Sophists, who held forth in ancient Greece around the fifth century BC. Protagoras held that things simply are, for each of us, as they appear to us to be: "man is the measure of all things." You might imagine that this conviction would lead to a kind of absolute realism: the world is precisely as we perceive it. But that only works if we all perceive the world exactly the same way.
To borrow an example from Plato (whose extensive rebuttal of the Sophists is the chief reason we know what they believed): if a breeze is blowing and I think it is balmy and you think it is chilly, then what temperature is it really? And if my senses happen to contradict yours—well, then our realities must differ. In matters of perception, Protagoras argued, everyone was always right. Protagoras deserves recognition for being the first philosopher in Western history to explicitly address the problem of error, if only by denying its existence.
For most of us, though, his position on perception is intrinsically unsatisfying, much as relativism more generally can seem frustratingly flaccid in the face of certain hard truths about the world. Plato, for one, thought it was nonsense. He noted that even a breeze must have its own internal essence, quite apart from whoever it blows on, and essentially advised Protagoras to get a thermometer.
But Plato also rejected the whole notion that our senses are the original source of knowledge. Since, as I mentioned earlier, he thought our primordial souls were at one with the universe, he believed that we come to know the basic truths about the world through a form of memory. Most of us, by contrast, take our senses to be a primary source of knowledge about the world. This seems like a reasonable position, and one we are likely to share, but it raises two related and thorny questions. First, how exactly do our senses go about acquiring information about the world? And second, how can we determine when that information is accurate and when it is not?
Early philosophers regarded the first question as, essentially, a spatial-relations problem. The world is outside us; our senses are within us. How, then, do the two come together so that we can know something? Somehow, our senses must bridge the gap I described in Chapter One: the rift between our own minds and everything else.
One way to understand how they do this is to think of sensing as two different although not normally separable operations. The first is sensation, in which our nervous system responds to a piece of information from our environment. The second is perception, in which we process that information and make it meaningful. Perception, in other words, is the interpretation of sensation. Interpretation implies wiggle room—space to deviate from a literal reading, whether of a book or of the world.
As that suggests, this model of perception (unlike the one in which our senses just passively reflect our surroundings) has no trouble accommodating the problem of error. This model also answers the second question I asked about perception: How can we determine when it is accurate and when it is not? Unfortunately, the answer is that we cannot. Since we generally have no access to the objects of our sensory impressions other than through our senses, we have no independent means of verifying their accuracy.
In perception, as in so many things in life, departing from literalism often serves us uncommonly well—serves, even, a deeper truth. Consider a mundane visual phenomenon: when objects recede into the distance, they appear to get smaller. If we had sensation without interpretation, we would assume that those objects were actually shrinking, or perhaps that we were growing—either way, a bewildering, Alice-in-Wonderland-esque conclusion.
Instead, we are able to preserve what is known as size constancy by automatically recalibrating scale in accordance with distance.
For a different example of the utility of interpretation, consider your blind spot—the literal one, I mean. The blind spot is that part of the eye where the optic nerve passes through the retina, preventing any visual processing from taking place. In principle, then, each of us should perceive a hole in our visual field. But we do not, because our brain automatically corrects the problem through a process known as coherencing. These, then, are instances—just two of many—in which the interpretative processes of perception sharpen rather than distort our picture of the world.
No matter what these processes do, though, one thing remains the same: we have no idea that they are doing it. The mechanisms that form our perceptions operate almost entirely below the level of conscious awareness; ironically, we cannot sense how we sense. And here another bit of meta-wrongness arises: we cannot feel that we are wrong—or, more precisely, we cannot feel that we could be wrong. Our obliviousness to the act of interpretation leaves us insensitive—literally—to the possibility of error.
In the well-known checkershadow illusion, the trick is that the square labeled A and the square labeled B are identical shades of gray.

Is it really true that working longer hours makes you more successful? Do you really need to hide your emotions in order to gain respect as a manager? Does higher pay really always lead to higher performance?
The world of management is blighted by fads, fiction and falsehoods. In Myths of Management, Cary Cooper and Stefan Stern take you on an entertaining journey through the most famous myths surrounding the much-written about topic of management. They debunk false assumptions, inject truth into over-simplifications and tackle damaging habits head-on. This engaging read offers you authentic insights into the reality of work, drawn from extensive research and real-world business examples, to give you the essential knowledge you need to become a better manager.
Myths of Management is the guide you need to become an enlightened manager.

This original work of theological anthropology looks at original sin in the light of the Resurrection. It is based on the conviction that the doctrine of original sin is a vital perspective on what it is to be human when seen with Resurrection eyes. From this point of view, one is able to read all the major doctrines of Christianity from the order of discovery, and forgiveness becomes the way of transformation.
Absolutely nothing. The result is often depression, divorce, addiction, violence—even suicide. Her conclusion: Living the single lifestyle, free and independent, may just be the best prescription for what ails America.

When we make mistakes, cling to outdated attitudes, or mistreat other people, we must calm the cognitive dissonance that jars our feelings of self-worth.
And so, unconsciously, we create fictions that absolve us of responsibility, restoring our belief that we are smart, moral, and right—a belief that often keeps us on a course that is dumb, immoral, and wrong. Backed by decades of research, Mistakes Were Made But Not by Me offers a fascinating explanation of self-justification—how it works, the damage it can cause, and how we can overcome it.
Mistakes were made—but not in this book!

The Queen's Diamond Jubilee and the Olympics look set to be as successful as the royal weddings when it comes to creating a surge of patriotism across our once self-assured land.
But despite the latest wave of nostalgic British pride, Britain is in the midst of an identity crisis, with British values and identity the butt of scorn and sneers. Motivated by the sense that the notion of Britishness has been hijacked, and intrigued by the ever-vexed question of British identity and what it really means, Peter Whittle has set out to examine what's actually wrong with being British.
With his trademark wit and insight, Whittle explores how, despite being chipped away at from all sides for the past five decades, pride in being British has shown an amazing ability to survive.

This is the right book for right now. Yes, learning requires focus. But, unlearning and relearning requires much more—it requires choosing courage over comfort. In Think Again, Adam Grant weaves together research and storytelling to help us build the intellectual and emotional muscle we need to stay curious enough about the world to actually change it.
In our daily lives, too many of us favor the comfort of conviction over the discomfort of doubt. We listen to opinions that make us feel good, instead of ideas that make us think hard.
We see disagreement as a threat to our egos, rather than an opportunity to learn. We surround ourselves with people who agree with our conclusions, when we should be gravitating toward those who challenge our thought process. The result is that our beliefs get brittle long before our bones. We think too much like preachers defending our sacred beliefs, prosecutors proving the other side wrong, and politicians campaigning for approval—and too little like scientists searching for truth.
Intelligence is no cure, and it can even be a curse: being good at thinking can make us worse at rethinking. The brighter we are, the blinder to our own limitations we can become. Organizational psychologist Adam Grant is an expert on opening other people's minds—and our own.
As Wharton's top-rated professor and the bestselling author of Originals and Give and Take, he makes it one of his guiding principles to argue like he's right but listen like he's wrong. With bold ideas and rigorous evidence, he investigates how we can embrace the joy of being wrong, bring nuance to charged conversations, and build schools, workplaces, and communities of lifelong learners.
You'll learn how an international debate champion wins arguments, a Black musician persuades white supremacists to abandon hate, a vaccine whisperer convinces concerned parents to immunize their children, and Adam has coaxed Yankees fans to root for the Red Sox. Think Again reveals that we don't have to believe everything we think or internalize everything we feel.
It's an invitation to let go of views that are no longer serving us well and prize mental flexibility over foolish consistency. If knowledge is power, knowing what we don't know is wisdom.
The columnist for Slate's popular "Do the Math" celebrates the logical, illuminating nature of math in today's world, sharing in accessible language mathematical approaches that demystify complex and everyday problems.

Argues that dogmatism is a serious problem and offers case studies documenting the characteristics of this personality trait.
Thoroughly researched and extensively referenced, this highly credible work uses evidence from biblical, anthropological, historical, and ancient literature sources dating as far back as 3, years ago to support the facts that: People of color have a positive history.