In this final episode of a three-part series on John McDowell's Mind and World, I take a look at McDowell's transcendental argument. I feel it comes up a bit short in making McDowell's case, and it doesn't seem to carry the same gravitas as other transcendental arguments like Kant's. Basically, the conclusions sound a lot like the premises. But McDowell makes another interesting claim, namely that there is nothing unnatural about our role as conceptualizers and judgement-makers in carving out the epistemic content of our worldviews. Our normative nature is, well, natural. Natural to us anyway. And if we acknowledge this, then we see that there is no divorce or separation of us from the world when we apply concepts to our perception of it. Our way of seeing is no less natural than a cat's way of seeing. It's just unique to us. And to fill out how the rational and the social can be fully natural, we need to look at Aristotle's ethical theory. Which we will. In the episode.
In this second episode of a three-part series on the work of John McDowell, I look at McDowell’s epistemic distinction between the active and the passive. When we perceive the world, are we soaking up empirical data like a dull sponge, or actively sorting fuzzy, impressionistic content into familiar categories? For McDowell, perceptions are conceptual through and through. Despite this, we can make sense of the contributions our conceptual apparatus makes in coloring our perceptions, even if we can’t sharply cleave the boundaries. We even get reminders of the unconceptualized world behind our experiences when we make perceptual mistakes or suffer some sort of perceptual confusion. Squinting, for example, is what we do when we struggle to make our concepts fit the world that opens itself up to us. All that, plus McDowell’s answer to the skeptic and his transcendental argument for realism. P.S.: for some reason, when I say ‘veridical’, it sounds like ‘vertical’, and it looks like I’m not going to solve that any time soon, so please allow your ears to make the appropriate adjustments.
In this first episode of a three-part series on John McDowell, I talk a bit about the splash that McDowell's Mind and World made on the philosophy scene when it was published in 1994. Then I get into the work of McDowell's philosophy itself. Mind and World is quite the apt name, as McDowell focuses on the meta-epistemological question of how the mind can know about the world. I look at McDowell's take on the history of philosophy, particularly his debt to the work of Immanuel Kant, in developing his theory of how our minds connect to their environment. I attempt to show how McDowell establishes his unique view of how minds connect to the world through the lens of traditional correspondence and coherence theories, and how he feels these two approaches fall short in providing accounts of how our brains produce accurate information about our environment. I focus on two dichotomous concepts that McDowell borrows from Kant, the active and the passive, and how our epistemic direction towards the world can be understood through this dichotomy. McDowell says that there is no notion of pure experience that is delivered to us. The world, even in our passive intake of it, comes wrapped up in concepts, and to look for something preconceptual in experience is a fool's errand. All this et plus.
In this final installment of a four-episode series, I take a look at criticisms of Thomas Kuhn's idea of incommensurable scientific paradigms. Kuhn makes use of a vague notion of seeing that allows him to say some surprising things about how people see the world. For example, Kuhn theorizes that 18th century scientists Joseph Priestley and Antoine Lavoisier would have had different visual experiences had they seen the same jar of oxygen, on account of their belonging to different scientific paradigms. Further, using Wittgenstein's work on rule-following, we can see that there is no easy way to define the borders of a community, and Kuhn's work risks a relativism where every scientist belongs to an isolated paradigm of one.
Finally, I take a look at Hilary Putnam's argument for scientific realism called the 'No Miracles Argument'. Though it is a simple argument, it does seem to make the most compelling case for the everyday notion most people have that science, at its best, offers the most accurate representation of the world.
In this third installment of a four-part series on Thomas Kuhn and the allegedly incommensurable revolutions of science, I look at the idea of epistemic incommensurability. Last episode, I looked at semantic incommensurability, an easier idea to get your head around. Semantic incommensurability is the idea that a shift in the intensional meaning of a concept such as 'planet' or 'bile' can leave that concept untranslatable in terms of its former variations within older scientific paradigms, so that the idea of progress in moving from one variation of a concept to the next becomes unintelligible. In this episode, I want to look at the idea of epistemic incommensurability, where a shift in the intensional meaning of a concept and its connected theory can lead two people to have two different experiences when viewing the same object or phenomenon. You see phlogiston; I see oxygen. We will see how our sweet eyes deceive us. Or so says Kuhn.
In this second episode of a four-part series on the work of Thomas Kuhn, I look at his idea of semantic incommensurability. Semantic incommensurability, as applied to science, centers for Kuhn on the fact that the meanings of particular scientific terms change over time. These changes become radically different as scientific paradigms shift. 'Bile' and 'planet' meant something different for Aristotle and Ptolemy than they do for us. In this episode, I go through some examples of scientific conceptual change to try and get at the meaning of it all.
In this final part of a two-part series on our ability to morally evaluate historical figures, I continue my look at the work of Bernard Williams. After taking into account Williams' theory of the relativism of distance, I look at British philosopher Miranda Fricker's criticism of Williams. Fricker believes that historical figures are capable of being morally blameworthy according to our lights, and even in cases where blame is inappropriate, she sets out conditions under which we would be justified in feeling moral disappointment. We can indeed be Kant at the court of King Arthur.
In this first episode of a two-part series, I look at an issue that has been hot of late (are there any non-hot issues in the internet age?): the issue of how we should judge our historical heritage, particularly the prominent figures of history. Winston Churchill, Christopher Columbus and others have had statues removed from public places along with a reassessment of their historical legacy. It's a healthy dialogue to be having, even if it isn't always carried out in a healthy manner. The dialogue lacks any nuanced underlying ethical theory that can guide conflicting groups to consensus, which is my way of saying that there has been a lot of shouting. So, in this episode, I look at candidates for theoretical guidance on the ethical judgment of historical figures. Ethical theories tend to assess an agent's actions according to universal standards or contextual, local ones, which may be fine for justifying giving the stink eye to your neighbor but doesn't really give us any insight into what a moral choice would have looked like for Genghis Khan. But the ever-broad eye of Bernard Williams provides us with some tools to tackle the problems associated with the ethical assessment of historical figures, and, in this episode, we see what Williams' 'relativism of distance' theory can offer us.
Apologies for the Buzzfeedesque title - In this final episode of a two-part series on the work of Donald Davidson, I look at Davidson’s work on a theory of meaning, his principle of charity, and the arguments he believed put the final nail in the coffin of empiricism. Davidson claims that we should develop a theory of meaning by imagining interpreting the utterances of others. In order to carry out a program of interpretation, we must kindly assume that the subjects of interpretation are at least largely correct by our own standards - they must share the majority of our background beliefs. We must interpret with this principle of charity. But when we acknowledge the necessity of a principle of charity to arrive at shared meaning, we see that the interpretation of other speakers cannot involve direct 'word to world' or 'sentence to stuff' relations. Here, in interpretation, empiricism is false, and a holism of massive shared background beliefs is necessary. Meaning and interpretation between speakers can only be mediated through a whole heap of background beliefs that are assumed to be shared between interpreter and interpretee. This mass of shared beliefs undermines any workable notion of people having differing conceptual schemes and, funnily enough, provides us with a somewhat janky answer to the Cartesian skeptic.
In this first episode of a two-part series on Donald Davidson, I examine the work of this often puzzling yet seminal American philosopher. Davidson offers a seemingly bafflingly simple theory of meaning - that 'snow is white' is true if and only if snow is white. In other words, the above sentence about snow is true if and only if snow is actually white, and that fact about the whiteness of snow is the only thing we need to know if we are to understand the meaning of the sentence 'snow is white'. Isn't that a little too simple a theory, you might ask? Fair question, and in this episode, I'll attempt to explain why this theory may be all we need to understand the entirety of what a concept of meaning provides for a language. It's adequate. Maybe ...
In this first episode of a two-part series, I look at the work of David Hume and the ideas behind that famous quote of his: “Reason is, and ought only to be the slave of the passions.” This quote has always troubled me. As politico-moral beings, many don't want to classify a horrific act as merely bad. There is also an urge to classify that horrific act as irrational. Does reason really tell us nothing about morality? Is reason just a way of determining efficient means to an end? Was Hitler evil and rational, or just evil? What work can a concept of rationality do to condemn an evil act? In this episode, I look at the work of Peter Railton, a Hume scholar, who argues that people often interpret Hume's quote incorrectly. According to Railton, Hume believed that rationality does have a robust role to play in determining which acts are moral or immoral. Hume's point was rather that rationality in isolation could not tell us much about morality, but working in conjunction with our sentiments, rationality could help determine for us which acts are moral or immoral.
In this second installment of a two-part series on that loftiest of philosophical questions - ‘what is the meaning of life?’ - I will make a flailing attempt to answer the question, but, hopefully, it is an attempt that may have some traction. Through looking at nihilism and the work of British analytic philosopher James Tartaglia, I will show that even if we live in a nihilistic universe, this recognition of a nihilistic realism isn't necessarily a bad thing. It's not a good thing either. It's a no-thing. Just the lack of an answer to what is the meaning of life. Within this universe (if nihilism is indeed the case), we must create our own meaning - we must be the authors of our own lives. If this sounds difficult, it actually isn't. We humans do it all the time in finding meaning in what we do. We have whole civilizations of people finding meaning through life and its activities, whether embedded in a social context or self-authored. And any account of these people’s lives would amount to empirical third-person data that would hold up in any social science. So the fact of meaningful lives is empirically grounded - a fact both obvious and often forgotten in philosophical discussion. And it's not living a lie to create your own meaning in a nihilistic universe. It's just living well. In a thoroughly materialist and nihilistic framework, the universe provides the stage for a meaningful life but not the answers.
In this first installment of a two-part series, I look at that deepest of all questions of the philosophical variety: 'what is the meaning of life?' 'What is the meaning of life?' is the very question that witty conversational partners will volley back when they hear you are studying philosophy ... 'Hey, so what's the meaning of life?'. Despite this conversational trope, actual academic philosophy blatantly defies the stereotype by almost never asking broad questions about the meaning of life. We have to meander back to the ancient Greeks to find schools of philosophy devoted to the rigorous discussion of the topic (OK, the existentialists certainly discussed it too). In this episode, I want to clear the brush and discuss what the meaning of life isn't. The meaning of life isn't a mock-evolutionary call to selfishness of the genetic or material variety, or the making of babies that have your nose and hairline. Nor is the meaning of life something that a science like physics could tell us about. And religion had its moment a while ago, but it no longer seems to sustain a viable choice in the meaning-of-life game. John Stuart Mill provided a very promising answer to what a meaningful life could consist in, and we'll examine his ideas. Then, in the next episode, in a fit of modesty, I'll reveal what the meaning of life is. Perhaps.
All that and more.
In this third and final installment on W.V.O. Quine's Two Dogmas of Empiricism, I look at Gary Gutting's examination of the paper in his 2009 book What Philosophers Know. Gutting argues that although analytic philosophers pride themselves on the rigor of their argumentation, and Two Dogmas is seen as one of the most important papers of 20th century analytic philosophy, Quine offers few actual arguments in favor of rejecting the analytic-synthetic distinction. Rather, he relies on a sympathetic audience, perhaps exhausted with logical positivism, by appealing to pragmatic and even somewhat minimalist aesthetic sensibilities to abandon the analytic-synthetic distinction in favor of a behaviorist and radically empirical approach to questions of meaning. Perhaps the analytic-synthetic distinction is not robust enough to do the heavy lifting that the logical positivists require of it, but it is still a relevant and very clear distinction. Or so argued Gutting.
In this series, I want to look at W.V.O. Quine's 1950 essay Two Dogmas of Empiricism, which many feel put the final nail in the coffin of the logical positivist project. It's often regarded as the most important or impactful paper of 20th century analytic philosophy. Gary Gutting, formerly of the University of Notre Dame, felt otherwise. We will explore Quine's argument as well as Gutting's case that it wasn't a very well-argued piece of philosophical work at all. But in this first installment of a three-part series, we will look at the logical positivist movement that Quine supposedly stopped dead in its tracks with his Two Dogmas paper. In particular, I will examine the extent to which the logical positivist project hung on the analytic-synthetic distinction. Plus, the usual trivia.