In the previous post, I proposed that development and learning co-exist alongside winning and that contriving debate to place them at odds actually misconstrues their concerted relationship. I add, here, that development and learning are characteristic of people, and winning and losing is inherent to the Game of Football and to sport in general. In other words, development & learning and winning & losing are not at odds; they arise in concert as people compete with one another by participating as opponents when they play a game.
I also suggest in that post that all sorts of people have fun playing the Game of Football for all sorts of reasons and that competition and fun, like development and winning, are not and should not be mutually exclusive.
Another facet to this topic, given that winning & losing are inherent to sport, is the claim that any and all attempts to win are justifiable. This discussion becomes especially heated in the context of youth sport because such a purist approach can be detrimental to players as they learn how to play and how to be members of a team. In that light, what I discuss below is development & learning in youth sport – specifically, in youth football (soccer).
To those who say that the Game is purely about winning & losing: saying so is a red herring. We must account for the fact that youth football has been distinguished from the adult game, and this distinction is for good reason.
Although at first it might seem contradictory, I already grant that the objective of the Game of Football is to win. I have clearly claimed that every team plays to win. Nobody plays to lose – in sport, or cards, or board games, or any game. Youth modifications don’t change that. Yes, as in any game, the objective in the Game of Football is to win.
But the objective lies apart from learning how to play and training to play to win. The modifications to Youth Football have come about on account of younger people’s traits and abilities. By analogy, it’s like when cars are modified for those learning to drive: two steering wheels, wider mirrors, quieter out-of-the-way roads, or VR simulators. There’s a gradual learning process by which new drivers grow accustomed to the road.
Bringing that analogy back to football: U9s play 7-a-side on a smaller field with a smaller ball and various rule alterations – the very existence of such modifications is evidence that the Youth Game differs from the Adult Game on account of youth differing from adults.
If someone is coaching a Youth team in accordance with the modifications, they tacitly acknowledge the difference. Therefore, to see nothing wrong with a purist viewpoint – that winning is utterly and always justifiable, even in the context of youth football – strikes me as insincere: perhaps a denial that young people differ from adults, or priorities skewed to place the self-security of winning above all else, or ignorance of (or indifference to) child growth & development, or some combination of these.
To simply say the Game is about winning… yes, it’s correct as far as the pure Game is understood, as a concept, but it reduces your margin for error. On that basis, we’d better be flawless now, and play with mastery, or else we amount to nothing more than losers and failures. I suspect none of our teams is flawless, as much as a purist belief might require them to be.
One youth team I coached (as assistant coach) years ago was successful enough that, during our U11 year, we were able to play three professional F.A. Academy sides. The results were 0-15, 0-5, 0-9. We had no illusions, and our players were shattered by the reality that same-age teams could have such quality and be so dominant over us, just as we were dominant back home – which is how we were accepted to play these Academies in the first place. In any event, there it was: a level of mastery relative to us that we were obliged to respect.
So, given a belief in the purist objective of winning… unless you take on similar opponents who can challenge your team, purist winning reflects poorly upon you, making you look ignorant, if not cowardly. If the Game is simply to be played to its purest, then nothing short of mastery will do. And if that playing field is to be a level one, then the best example of mastery we have, in reality, is the pro game. To purists, I say this: if you test your youth team at that level, as I’ve done, you may well discover that…
(a) your challenge may not even be accepted but, if it is, then
(b) you may have a rude awakening.
In fact, that may be exactly what a purist needs. On the other hand, if it comes at your players’ expense, it’s not worth the cost. As I say, our team was shattered, and we had a great deal of respect for youth training and development, being professional educators and researchers as we (still) are.
Things are always much easier when all’s well and we’re winning. Real humility is found when we aren’t winning. To those who take a purist approach to sport, enjoy the ride at the top while it lasts because, someday, you may discover that you’ve not learned how to cope, yourselves.
Corruption aside – throwing the game has no place in this discussion – I submit that nobody deliberately plays to lose.
Specifically, I’m talking about football, less commonly known as soccer, and perhaps this discussion even applies to many different sports. But, as a player and coach, football is the beautiful game that I know best, so here goes.
Playing football, we would anticipate that the team making the fewest mistakes ought to win – the fewest mistakes both in and out of possession, from the kick-off until full-time. If so, then consistent quality performances are key because these should result in more opportunities to earn a win and prevent a loss. What’s more, as the reward for winning grows more lucrative, and the stakes are raised, players must all the more learn to produce that “consistent quality performance” on demand, under whatever pressure: effective decisions, executed at the proper moments, skillfully, every time, or at least as frequently as possible. Developing this “quality performance” consistency also demands that opponents earn their victories rather than being handed the result, unimpeded, because now they’re challenged to execute just as consistently, if not just as flawlessly. As I say, no one competes to lose.
So, what of development and winning in light of all this? Too often, for me, these two ideas are falsely conflated into sides of what is truly a non-existent – or, at least, a very ill-conceived – debate. As ends-in-themselves, development and winning are typically deemed incompatible. Further, winning is then often vilified since winners produce losers while development is commended for being inclusive. At that point, I find the debate often sidetracks into competition versus fun, another false dichotomy, but in any case, the parameters are so muddled as to render all a meaningless waste of breath. For the sake of dispensing with the issue, I simply ask: why would we not reasonably expect to see fun in conjunction with competition? These are not oil and water, nor do they need to be, nor should they be deemed to be.
Football, the Game, can be played for fun, exhilaration, fitness, camaraderie, focus, perseverance, discipline, teamwork, all manner of virtues and benefits, yet all these on account of the very nature of the Game as a contest of opposition. And where one person finds things fun and enjoyable, another does not necessarily agree, yet who’s to say who is correct, if the Game has enabled all? All sorts of people find all sorts of fun in all sorts of things – who’s to say that finding competition to be fun is wrong, if only because it makes you squeamish? Just the same, if someone’s threshold for intense competitive drive is lower than another’s, can each still not enjoy playing with like-minded peers? In fact, just for instance, this is exactly why various youth and adult leagues categorize levels of play into (for ease of this discussion) gold, silver, and bronze tiers. Everyone must learn to play, and development (to whatever degree) will occur as they go. That implicates teammates, the quality of coaching, and other factors relating to a team or league’s motives for playing in the first place (i.e. gold vs silver vs bronze). Motive, however, does not change the nature of the Game, itself, or the nature of effective learning, development, coaching, and teaching.
As I see it, the issue is not Development for its Own Sake versus Winning for its Own Sake or even Development for its Own Sake versus Development in order to Win. The issue is Development and Learning as a concept, altogether, period, because how else could you learn to play? And the more you play, the more you develop. Whether that development is good or poor is down to context, and a separate issue.
And when the arguments start, what’s really being debated, it seems to me, is how any one person simply wants to be “right” and demand that everyone else agree with what constitutes “successful” participation in the Game. Ironically, it’s a territorial argument over ideology. But to win an egotistical war suggests to me that we might better spend our efforts re-evaluating our culture and how we wish to treat other people.
Fair enough, people want to be “right.” We all have egos. But can we at least offer some basis from which to claim what the word “successful” can mean? So here goes.
Since losing a match always remains a possibility, no matter how consistent our quality performance might be, we ought to measure “success” as the degree to which a player or team has developed that consistent quality of performance (process) over time, at their corresponding level and motive for play, regardless of winning (product).
**I’ll specify, as I did above, that where wins are lucrative – such as in professional play – the stakes grow higher, and different debates will ensue about what “success” means. Yet that’s a commercial issue, bearing on development and learning only through people’s patience and tolerance for financial pleasure or pain. In other words, the two issues are not inherently related but coincidental: a crowd of supporters or sponsors is willing to pay to back the team for a season.**
For the Game, itself, we must let winning take care of itself because players control what they are able to control, under conditions that also include the pitch, the ball, the referee, the weather, health, fitness, and so forth. So what can we measure? Measurements ought to fall under player and team control, e.g. shots at goal, completed passes, tackles won, saves made, etc. Far from counteracting the importance of winning, such consistent measurements of quality performance provide feedback, i.e. if our pass completion is 90% successful around the penalty box, then maybe we don’t score because our shooting is infrequent or inaccurate. One might even argue that the statistical measurements we gather are less important than the ones we’ve overlooked.
In any case, successful players and successful teams identify strong and weak areas by regularly measuring consistent quality across a range of performance details, and they develop each area for consistency – which we anticipate will translate into more wins – because consistent quality performances usually translate into what can be measured as an “ongoing success.” Success now defines a degree of purposeful, committed, consistent hard work, which makes for more focused, more effective training. Developmentally, the more successful you are, the more often you can theoretically win – but if your opponents also train and measure, and respond better than you do, then guess what? That’s called competition.
Development and winning not only can but already do co-exist. And they always have. It’s people who separate them, falsely, perhaps because they want to win more than they want to earn wins – or, worse, perhaps because they merely want to win a territorial argument about development vs winning that never existed before someone’s ego dreamt it up.
Beyond on-field training and competing, development and learning should cover a range of areas that affect yet lie beyond the Game, e.g. health, fitness, nutrition, goal setting, mental preparation, personal responsibility. Coaches ought to take players beyond the Game, teaching them how to train, how to contribute to a team, how to compete at higher levels of skill and intensity, how to manage the dynamics and emotions of competition, and how to conduct themselves with personal integrity in all respects. Of course, the Game is included within the scope of these matters because that’s why we’re a team in the first place. And the range of these inclusions will comprise a more holistic football program. We implement and evaluate that program as we go, or we ought to.
Effective programs inevitably reveal the crux of commitment, either thanks to people’s dedication or on account of their inconsistency. Effective programs encourage trust and a shared pursuit of common goals. Where trust and commitment are maintained consistently and respectfully, a team and its members learn to measure quality and respond consistently, i.e. successfully. Such programs require time, discipline, and patience to learn, but the degree to which participants buy into the philosophy is met with concomitant developmental consistency and, again, one can expect winning to result more often than not, relative to the quality of the opposition. Likewise, individual people can take credit for this-or-that achievement only relative to their teammates, who are also active participants in the program.
Active participation should find team members applying complementary strengths by filling key roles on the path to team success. Individual contributions accumulate, and if these have been consistently defined by common goals and measured for consistent quality, “success” is more likely because people can envision it more clearly and pursue it more meaningfully.
Opponents, especially of equal or slightly higher abilities, likewise play a key role in a team’s pursuit of success since measuring consistent quality performances against them is, in one sense, what the Game – and what sport – is all about. Active involvement in a program unites a team, preparing everyone for more advanced challenges. Occasionally, a teammate might advance to more elite programs, and when a team member grows beyond the scope of the program, that is a team success that all of us can share.
“The needs of the economy and our society are changing and therefore you need to have a learning system that fits the purpose, and that purpose is constantly shifting.”
So said Anthony Mackay, CEO of the Centre for Strategic Education (CSE) in Australia, during an interview with Tracy Sherlock from The Vancouver Sun. Mr Mackay was at SFU’s Wosk Centre for Dialogue in Vancouver on January 29, 2015, facilitating a forum about the changing face of education. Although links to the forum’s webcast archive and Sherlock’s interview are now inactive, I did save a copy of the interview text at the time, posted here beneath this essay. Tracy Sherlock has since told me that she doesn’t know why the interview’s links have been disconnected (e-mail communication, January 27, 2017). Nonetheless, there remains ample on-line and pdf-print promotion and coverage of the event.
The forum and the interview were first brought to my attention via e-mail, shared by an enthusiastic colleague who hoped to spur discussion, which is altogether not an uncommon thing for teachers. Originally, I wrote distinct yet connected responses to a series of quotations from Mr Mackay’s interview. Here, some thirty-two months later, I’ve edited things into a more fluid essay although, substantively, my thoughts remain unchanged. Regrettably, so does the bigger picture.
For starters, Mr Mackay’s remark tips his hand – and that of the CSE – when he precedes society with economy. Related news reports make the idea somewhat more plausible: that of a new curriculum “…addressing a chronic skills shortage in one of the few areas of the Canadian economy that is doing well” (Silcoff). Meanwhile, in Sherlock’s interview [posted below this essay], Mr Mackay concludes by invoking “the business community,” “the economy of the future,” and employers’ confidence. Make no mistake, Mr Mackay is as ideological as anyone out there, including me and you and everybody, and I credit him for being obvious. On the other hand, he plays into the hands of the grand voice of public educators, perhaps willfully yet in a way that strikes me as disingenuous, couched in language so positive that you’re a sinner to challenge him. Very well, I accept the challenge.
Whatever “purpose” of education Mr Mackay has in mind, here, it’s necessarily more specific unto itself than to any single student’s interests or passions. In other words, as I take his portrayal, some student somewhere is a square peg about to be shown a round hole. Yet this so-called purpose is also “constantly shifting,” so perhaps these are triangular or star-shaped holes, or whatever, as time passes by.
Enter “discovery learning” – by the way, are we in classrooms, or out-and-about on some experiential trip? – and the teacher says only what the problem is, leaving the students to, well, discover the rest. I can see where it has a place; how it enables learning seems obvious enough since we learn by doing – teach someone to fish, and all. But when it comes to deciding which fish to throw back, or how many fish are enough when you don’t have a fridge to store them in before they rot and attract hungry bears… when it comes to deciding what’s more versus less important, those minutiae of mastery, it’s not always as easy as an aphorism or a live-stream video conference. Where it’s more hands-off from the teacher, in order to accommodate the student, discovery learning seems to me better suited to learners well past any novice stage. And if the teacher said, “Sorry, that’s not discovery learning,” would the students remain motivated? Some would; others most certainly would not: their problem, or the teacher’s? When both the teacher and the students say, “We really do need to follow my lead just now,” which party needs to compromise for the other, and to what extent? Teaching and learning ought to be a negotiation, yes, but never an adversarial one! In the case of “discovery learning,” I wonder whether “teacher” is even the right title anymore.
In any case, Mr Mackay appears guilty of placing the cart before the horse when it comes to educating students according to some systemic purpose. I’ve got more to say about this particular detail, what he calls “personalization.” For now, it’s worth setting some foundation: Ken Osborne wrote a book called Education, which I would recommend as a good basis for challenging Mr Mackay’s remarks from this interview.
That Osborne’s book was published in 1999 I think serves my point, which is to say that discernment, critical thinking, effective communication, and other such lauded 21st century skills were in style long before the impending obscurity of the new millennium. They have always offered that hedge against uncertainty. People always have and always will need to think and listen and speak and read, and teachers can rely on this. Let’s not ever lose sight of literacy of any sort, in any venue. Which reminds me…
“Isn’t that tough when we don’t know what the jobs of the future will be?”
I must be frank and admit… this notion of unimaginable jobs of the future never resonated with me. I don’t remember when I first heard it, or even who coined it, but way-back-whenever, it instantly struck me as either amazing clairvoyance or patent nonsense. I’ve heard it uttered umpteen times by local school administrators, and visiting Ministry staff, and various politicians promoting the latest new curriculum. The idea is widely familiar to most people in education these days: jobs of the future, a future we can’t even imagine! Wow!
Well, if the unimaginable future puzzles even the government, then good lord! What hope, the rest of us? And if the future is so unimaginable, how can we even be certain of heading in any direction at all? When you’re lost in the wilderness, the advice is to stay put and wait for rescue. On the other hand, staying put doesn’t seem appropriate to this discussion; education does need to adapt and evolve, so we should periodically review and revise curricula. But what of this word unimaginable?
For months prior to its launch, proponents of BC’s new curriculum clarified – although, really, they admonished – that learning is, among other things, no longer about fingers quaintly turning the pages of outmoded textbooks. To paraphrase the cliché, that ship didn’t just sail, it sank. No need to worry, though. All aboard were saved thanks to new PDFs– er, I mean PFDs, personal flotation devices– er, um, that is to say personal flotation e-devices, the latest MOBI-equipped e-readers, to be precise. As for coming to know things (you know, the whole reason behind “reading” and all…), well, we have Google and the Internet for everything you ever did, or didn’t, need to know, not to mention a 24/7 news cycle, all available at the click of a trackpad. It’s the 21st century, and learning has reserved passage aboard a newer, better, uber-modern cruise ship where students recline in ergonomic deck chairs, their fingertips sliding across Smart screens like shuffleboard pucks. Welcome aboard! And did I mention? Technology is no mere Unsinkable Ship, it’s Sustainable too, saving forests of trees from the printing press (at a gigawatt-cost of electricity, mind, but let’s not pack too much baggage on this voyage).
Sorry, yes, that’s all a little facetious, and I confess to swiping as broadly and inaccurately as calling the future “unimaginable.” More to the point: for heaven’s sake, if we aren’t able to imagine the future, how on earth do we prepare anybody for it? Looking back, we should probably excuse Harland & Wolff, too – evidently, they knew nothing of icebergs. Except that they did know, just as Captain Smith was supposed to know how to avoid them.
But time and tide wait for no one which, as I gather, is how anything unimaginable arose in the first place. Very well, if we’re compelled toward the unknowable future, a cruise aboard the good ship Technology at least sounds pleasant. And if e-PFDs can save me weeks of exhausting, time-consuming, annoying life-skills practice – you know, like swimming lessons – so much the better. Who’s honestly got time for all that practical life-skills crap, anyway, particularly when technology can look after it for you – you know, like GPS.
If the 21st century tide is rising so rapidly that it’s literally unimaginable (I mean apart from being certain that we’re done with books), then I guess we’re wise to embrace this urgent… what is it, an alert? a prognostication? guesswork? Well, whatever it is, thank you, Whoever You Are, for such vivid foresight– hey, that’s another thing: who exactly receives the credit for guiding this voyage? Who’s our Captain aboard this cruise ship? Tech Departments might pilot the helm, or tend the engine room, but who’s the navigator charting our course to future ports of call? What’s our destination? Even the most desperate voyage has a destination; I wouldn’t even think a ship gets built unless it’s needed. Loosen your collars, everybody, it’s about to get teleological in here.
Q: What destination, good ship Technology?
A: The unknowable future…
Land ho…? The Not-Quite-Yet-Discovered Country… hmm, would that be 21st century purgatory? Forgive my Hamlet reference – it’s from a mere book.
To comprehend the future, let’s consider the past. History can be instructive. Remember that apocryphal bit of historical nonsense, that Christopher Columbus “discovered America,” as if the entire North American continent lay undiscovered upon the planet, unbeknownst to Earthlings? (Or maybe you’re a 21st century zealot who only reads blogs and Twitter, I don’t know.) Faulty history aside, we can say that Columbus had an ambitious thesis, a western shipping route to Asia, without which he’d never have persuaded his political sponsors to back the attempt. You know what else we can say about Columbus, on top of his thesis? He also had navigation and seafaring skill, an established reputation that enabled him to approach his sponsors in the first place. As a man with a plan to chart the uncharted, Columbus nevertheless possessed some means of measuring his progress and finding his way. In that respect, it might be more accurate to say he earned his small fleet of well-equipped ships. What history then unfolded tells its own tale; the point here is simply that Columbus may not have had accurate charts, but he also didn’t set sail, clueless, to discover the unimaginable in a void of invisible nowhere.
But what void confronts us? Do we really have no clue what to expect? To hear the likes of Mackay tell it, with technological innovation this rapid, this influential, we’re going to need all hands on deck, all eyes trained forward, toward… what exactly? Why is the future so unimaginable? Here’s a theory of my very own: it’s not.
Discovering in the void might better describe Galileo, say, or Kepler, who against the mainstream recharted a mischarted solar system along with the physics that describe it. Where they disagreed over detail such as ocean tides (as I gather, Kepler was right), they each had pretty stable Copernican paradigms, mediated as much by their own empirical data as by education. Staring into the great void, perhaps these astronomers didn’t always recognise exactly what they saw, but they still had enough of the right stuff to interpret it. Again, the point here is not about reaching outcomes so much as holding a steady course. Galileo pilots himself against the political current and is historically vindicated on account of his curious mix of technological proficiency, field expertise, and persistent vision. For all that he was unable to predict or fully understand, Galileo still seemed to know where he was going.
I suppose if anyone might be accused of launching speculative missions into the great void of invisible nowhere, it would be NASA, but even there is clarity. Just to name a few: Pioneer, Apollo, Voyager, Hubble – missions with destinations, destinies, and legacies. Meanwhile, up in the middle of Nowhere, people now live in the International Space Station. NASA doesn’t launch people into space willy-nilly. It all happens, as it should, and as it must, in a context with articulated objectives. Such accomplishments do not arise because the future is unimaginable; on the contrary, they arise precisely because people are able to imagine the future.
Which brings me back to Mr Mackay and the government’s forum on education. It’s not accurate for me to pit one side against another when we all want students to succeed. If I’ve belaboured the point here, it’s because our task concerns young people, in loco parentis. Selling those efforts as some blind adventure seems, to me, the height of irresponsibility wrapped in an audacious marketing campaign disguised as an inevitable future, a ship setting sail so climb aboard, and hurry! Yes, I see where urgency is borne of rapid innovation, technological advancement made obsolete mere weeks or months later. For some, I know that’s thrilling. For me, it’s more like the America’s Cup race in a typhoon: thanks, but no thanks, I’ll tarry ashore a while longer, in no rush to head for open sea, not even aboard a vaunted ocean liner.
We simply mustn’t be so eager to journey into the unknown without objectives and a plan, not even accompanied as we are by machines that contain microprocessors, which is all “technology” seems to imply nowadays. There’s the respect that makes calamity of downloading the latest tablet apps, or what-have-you, just because the technology exists to make it available. How many times have teachers said the issue is not technology per se so much as knowing how best to use it? Teleology, remember? By the way, since we’re on the subject, what is the meaning of life? One theme seems consistent: the ambition of human endeavour. Sharpen weapon, kill beast. Discover fire, cook beast! Discover agriculture, domesticate beast. Realise surplus, and there follows world-spanning conquest that eventually reaches the stars.
Look, if learning is no longer about fingers quaintly turning the pages of outmoded textbooks, then fine. I still have my doubts – I’ve long said vinyl sounds better – but let that go. Can we please just drop the bandwagoning and sloganeering, and get more specific? By now, I’ve grown so weary of “the unimaginable future” as to give it the dreaded eye-roll. And if I’m a teenaged student, as much as I might be thrilled by inventing jobs of the future, I probably need to get to know me, too, what I’m all about.
In truth, educators do have one specific aim – personalized learning – which increasingly has come into curricular focus. Personalization raises some contentious issues, not least of which is sufficient funding since the need for individualized attention requires more time and resources per student. Nevertheless, it’s a strategy that I’ve found positive, and I agree it’s worth pursuing. That brings me back to Ken Osborne. One of the best lessons I gathered from his book was the practicality of meeting individuals wherever they reside as compared to determining broader needs and asking individuals to meet expectations.
Briefly, the debate presents itself as follows…
Side ‘A’ would determine communal needs and educate students to fill the roles
In my humble opinion, this is an eventual move toward social engineering and a return to unpleasant historical precedent. Know your history, everybody.
Side ‘B’ would assess an individual’s needs and educate a student to fulfil personal potential
In my humble opinion, this is a course that educators claim to follow every day, especially these days, and one that they would do well to continue pursuing in earnest.
In my experience, students find collective learning models less relevant and less authentic than the inherent incentives found in personalized approaches that engender esteem and respect. Essentially, when we educate individuals, we leave them room to sort themselves out and accord them due respect for their ways and means along the way. In return, each person is able to grasp the value of personal responsibility. Just as importantly, the opportunity for self-actualisation is now not only unfettered but facilitated by school curricula, which I suspect is what was intended by all the “unimaginable” bluster. The communal roles from Osborne’s Side ‘A’ can still be filled, now by sheer numbers from the talent pool rather than by pre-conceived aims to sculpt square pegs for round holes.
Where I opened this essay with Anthony Mackay’s purposeful call to link business and education, I’ve been commenting as a professional educator because that is my field, so that is my purview. In fairness to government, I’ve found that more recent curricular promotion perhaps hints at reversing course from the murk of the “unimaginable” future by emphasizing, instead, more proactive talk of skills and empowerment. Even so, a different posture remains (touched upon in Katie Hyslop’s reporting of the forum and its participants, and a fairly common discursive thread in education in its own right) that implicitly conflates the aims of education and business, and even the arts. Curricular draft work distinguishes the “world of work” from details that otherwise describe British Columbia’s “educated citizen” (p. 2). Both Ontario and Alberta’s curricular plans have developed comparably to BC’s, noting employers’ rising expectations that “a human capital plan” will address our ever-changing “world of work” (p. 5) – it’s as if school’s industrial role were a given. Credit where it’s due, I suppose: they proceed from a vision towards a destination. And being neither an economist nor an industrialist, I don’t aim to question the broader need for business, entrepreneurship, or a healthy economy. Everybody needs to eat.
What I am is a professional educator, and that means I have been carefully and intentionally trained and accredited alongside my colleagues to envision, on behalf of all, what is best for students. So when I read a claim like Mr Mackay’s, that “what business wants in terms of the graduate is exactly what educators want in terms of the whole person,” I am wary that his educational vision and leadership are yielding our judgment to interests, such as commerce and industry, that lie beyond the immediately appropriate interests of students. Anthony Mackay demonstrates what is, for me, the greatest failing in education: leaders whose faulty vision makes impossible the very aims they set out to reach. (By the by, I’ve also watched such leadership condemn brilliant teaching that reaches those aims.) More than a blanket statement, Mr Mackay’s is an unfounded statement, and I could hardly find a better example of begging the question. If Mr Mackay is captain of the ship, then maybe responsible educators should be reading Herman Wouk – one last book, sorry, couldn’t resist.
Education is about empowering individuals to make their own decisions, and any way you slice it, individuals making decisions is how society diversifies itself. That includes diversifying the economy, as compared to the other way around (the economy differentiating individuals). Some people are inevitably more influential than others. All the more reason, then, for everybody, from captains of industry on down, to learn to accept responsibility for respecting an individual’s space, even while everybody learns to decide what course to ply for themselves. Personalized learning is the way to go as far as resources can be distributed, so leave that to the trained professional educators who are entrusted with the task, who are experts at reading the charts, spotting the hazards, and navigating the course, even through a void. Expertise is a headlight, or whatever those are called aboard ships, so where objectives require particular expertise, let us be led by qualified experts.
And stop with the nonsense. No unimaginable future “world of work” should be the aim of students to discover while their teachers tag along like tour guides. Anyway, I thought the whole Columbus “discovery” thing had helped us to amend that sort of thinking, or maybe I was wrong. Or maybe the wrong people decided to ignore history and spend their time, instead, staring at something they convinced themselves was impossible to see.
“The learning partnership has got to go beyond the partnership of young person and family, teacher and school, to the community and supportive agencies.” TONY MACKAY, CEO, CENTRE FOR STRATEGIC EDUCATION IN AUSTRALIA
Tony Mackay, CEO at the Centre for Strategic Education in Australia, was in Vancouver recently, facilitating a forum about changing the education system to make it more flexible and personalized. He spoke about the rapidly changing world and what it means for education.
Q Why does the education system need to change?
A The needs of the economy and our society are changing and therefore you need to have a learning system that fits the purpose, and that purpose is constantly shifting. So it’s not just a matter of saying we can reach a particular level and we’ll be OK, because you’ve got such a dynamic global context that you have a compelling case that says we will never be able to ensure our ongoing level of economic and social prosperity unless we have a learning system that can deliver young people who are ready — ready for further education, ready for the workforce, ready for a global context. That’s the compelling case for change.
Q Isn’t that tough when we don’t know what the jobs of the future will be?
A In the past we knew what the skill set was and we could prepare young people for specialization in particular jobs. Now we’re talking about skill sets that include creativity, problem solving, collaboration, and the global competence to be flexible and to have cultural understanding. It’s not either or, it’s both and — you need fantastic learning and brilliant learning in the domains, which we know are fundamental, but you also need additional skills that increasingly focus on emotional and social, personal and inter-personal, and perseverance and enterprising spirit. And we’re not saying we just want that for some kids, we want to ensure that all young people graduate with that skill set. And we know they’re going to have to effectively “learn” a living — they’re going to have to keep on learning in order to have the kind of life that they want and that we’re going to need to have an economy that thrives. I believe that’s a pretty compelling case for change.
Q How do you teach flexibility?
A When I think about the conditions for quality learning, it’s pretty clear that you need to be in an environment where not only are you feeling emotionally positive, you are being challenged — there’s that sense that you are challenged to push yourself beyond a level of comfort, but not so much that it generates anxiety and it translates into a lack of success and a feeling of failure that creates blockages to learning. You need to be working with others at the same time — the social nature of learning is essential. When you’re working with others on a common problem that is real and you have to work as a team and be collaborative. You have to know how to show your levels of performance as an individual and as a group. You can’t do any of that sort of stuff as you are learning together without developing flexibility and being adaptive. If you don’t adapt to the kind of environment that is uncertain and volatile, then you’re not going to thrive.
Q What does the science of learning tell us?
A We now know more about the science of learning than ever before and the question is are we translating that into our teaching and learning programs? It’s not just deeper learning in the disciplines, but we want more powerful learning in those 21st-century skills we talked about. That means we have to know more than ever before about the emotions of learning and how to engage young people and how young people can encourage themselves to self-regulate their learning.
The truth is that education is increasingly about personalization. How do you make sure that an individual is being encouraged in their own learning path? How do we make sure we’re tapping into their strengths and their qualities? In the end, that passion and that success in whatever endeavour is what will make them more productive and frankly, happier.
Q But how do you change an entire education system?
A Once you learn what practice is done and is successful, how do you spread that practice in a school system so it’s not just pockets of excellence, but you’ve actually got an innovation strategy that helps you to spread new and emerging practice that’s powerful? You’re doing this all in the context of a rapidly changing environment, which is why you need those skills like flexibility and creativity. The learning partnership has got to go beyond the partnership of young person and family, teacher and school, to the community and supportive agencies. If we don’t get the business community into this call to action for lifelong learning even further, we are not going to be able to get there. In the end, we are all interdependent. The economy of the future — and we’re talking about tomorrow — is going to require young people with the knowledge, skills and dispositions that employers are confident about and can build on.
Some leaders talk a great game, but no matter the words coming out of their mouths, people respond to the culture they’re part of, and within it, they respond in both overt and subtle ways.
By the way, leaders aren’t limited to those in the head office… leaders are people who take initiative, work to their strengths, and lift others to do the same thing… so pay attention to making your strengths and inspiration constructive instead of deflating or injurious.
If you aren’t getting the results you expected, then reflect and consider what reality people are experiencing and which messages they may be receiving around your place, maybe unintended ones, that might conflict or work against your aims for attitude and behaviour.
It’s a shame when aims and culture contradict. It’s hypocrisy when aims are ignored or undermined by deliberately contradictory culture.
No shame in reflection. Reflection is learning, and learning’s a virtue.
Also agreed! School education should be “looking beyond the short term and thinking more about what kinds of adults they’re trying to develop.” That’s always been my approach.
Post-secondary, career, parenthood, civic involvement… all these and more will come about, and with guidance, let each person find their own way. But the adult human beings making their life decisions need a virtuous, thoughtful, positive foundation, and that’s what school education should always be about.
On-line comments are not guns; they don’t kill people. And the people who wield them, as in write them, are not having a stand-off at high noon. On-line comments are not deadly but, boy, can they be deadly stupid.
They’re so very often uninformed, superficial, and emotionally driven as well as – frankly – bloody lazy. Plenty of opinions from plenty of people carrying free-speech chips on self-righteous shoulders. On-line comments, these days, are just another sign of the times.
“Just how many people bother to research and draft for a ‘Comments’ section response, anyway?”
Does it show I’m fed up with people trying to win personal pissing matches in the “Comments” section? Does it show? …people clawing their way to the top of some imagined pile of respect, in a community comprising whoever read the article – unless of course they only read the headline. Does it show? …the invective, the insults, the one-liner spree? Commenters affirming, negating, defending, attacking. Pointing out who’s so obviously wrong, what’s so evidently right. Commenters commenting, exercising their democracy, one comment at a time? On-line comments are the Twitter of– er, hmm, I’ll need some time to work on that one.
Of course I’m unable to say on-line comments kill people, but that’s not because they actually don’t kill people. It’s because, in the analogy, on-line comments are just the bullets. Computer keyboards would be the guns. And it’s still people pulling the trigger by pressing send – there’s got to be a triggering joke in here somewhere, I’m sure of it. For now, enough to say that guns don’t post comments, people do.
Time was when a letter-to-the-editor was the main public recourse, but sending one to your chosen publication was no guarantee of being published, or at least not published in full. Then came the Internet, the great equalizer. I can only suspect that, way back when, when that first on-line article permitted readers to leave comments, the author or editor or publisher proudly lifted a glass of wine to celebrate the enabling of the public voice. One step forward for free speech. Here’s to democracy.
How often I’ve read an article, then followed up with the on-line comments, thinking, “I’d like a sense of the broader opinion out there, maybe encounter some different perspectives, pick up a hyperlink or two for this topic.” This does still happen, and it’s what makes on-line comments, for me, worthwhile. It also means I’m relying on the other commenters to offer anything of substance. But, obviously (…is it obvious?), substance doesn’t always just happen. Honestly, though, it’s pretty naïve to expect that it would. And if you thought, given the sheer number of comments for just one single article, that the law of averages would help, then you probably haven’t read too many on-line comments. They can far, far surpass the length of the article and illustrate far, far less than broader opinion or different perspectives or anything useful at all about the topic. Just as often they proliferate because somebody needed to win.
How often is someone’s on-line comment about the article as compared to that commenter seeking personal affirmation or recognition as some kind of uber-reliability source? How often does an on-line comment chain turn into a personal on-line shoving match? And how often has somebody replied along these lines: “You’re pretty tough when it’s not face-to-face…” ?
Nobody thinks they’re even beginning to solve the issue [whatever it is] in the on-line Comments section. Do they? At least, they couldn’t possibly think so when all they’ve written is a sentence or two, right? At least, when they’ve written sentences. But, unquestionably, essay-length on-line comments are the exception to the rule. Aren’t they? At least, they are in the Age of Twitter – wait, sorry, I already slammed Twitter. This time, I’ll go with Google making us stupid (not for the first time). By the way, even shortened attention spans have been called into question (have a look, neither’s a long read). My own sense, for what it’s worth, is that we attend to what stimulates us the most although – egregiously – I have no research to back my opinion, and if any of you trolls call me on that, I’ll comment you back. So just be warned. Gotta be almost time for that trigger joke.
Are people commenting when maybe they should be writing an article of their own? Would that be too much responsibility to bear? to ask? Would writing an article require too much effort? People seem to care enough to leave a comment yet not enough to offer something more substantive than a line or two, or a paragraph the odd time. Even a few paragraphs, that one time by that one person, but anything truly edited for cohesion – are you kidding, what are we, journalists? How many of us are writers, period, much less paid ones? Heaven forbid anyone be expected to offer more than a few lines of opinion masquerading as oh-such-obvious-fact, or a one-liner, or a dogmatic tirade! (Yes, I not only see the irony, I intended it.) Leave all that responsibility crap for whoever else. Whomever, actually, but that would mean caring.
Who are you, anyway, that you’d present yourself in so superficial a manner as on-line comments yet expect to be taken seriously? Who are you, that you’d conflate your real-life person with your on-line persona in such a way where one belies the other? Which one is demonstrating the true you? Who are you, to be taking this so personally right now when, in fact, right now I’m giving you the benefit of the doubt? Cynicism aside, everyone can think – hence my frustration. If on-line comments suggest anything, they suggest that emotion rules, not thinking.
Don’t misconstrue – thinking and emotion both occur, but by default (I’d say), emotion controls thinking more than the other way around. Far more rarely does rationality show up beyond the article itself, if even there.
Below is an edited chain of comments that I cut & pasted from an NPR article posted to Facebook, about the bombing of the Manchester arena following the Ariana Grande concert in 2017.
To be precise, these comments that I cut & pasted are from the Facebook post, not NPR’s website. I also present these comments as a single, focused discussion when, in fact, other people’s semi-related comments had appeared in between some of these, responding to still other people. But the way comments appear is evidently controlled more by their time stamp, when they were posted, than by which person is being replied to in the thread. So, in selecting only these comments here, I tried to maintain the direct discussion between particular people, back and forth. Finally, I’ve published their Facebook names with hyperlinks because all this is publicly published anyway, and nobody’s owed any shelter.
Rather than take sides, see if you can read this thread to understand my point, the futility of trying to solve such grand issues in a Comments section, the pointlessness of on-line comments in general. (Yes, I see the irony in having my own Comments section below. I even intended it.)
Ask of each comment, and each commenter…
what, really, is the motive behind this comment getting not just written but posted?
what, really, is the response that this person…
believes for themselves?
presumes from the other person?
seeks from anyone else (like us) who may be reading?
Read not only with self-awareness but with other-awareness, with empathy. But please resist taking sides on the issues, irrespective of your own feelings, because the point here is the comments having been crafted and shared, not the terror incident or the politics that are introduced. A tangential point is to acknowledge that it’s possible and sometimes productive to keep our feelings and our rationality separate.
James Alford What a great freedom festival! I just don’t know what we’d do without all the freedom that comes with unfettered access to semiautomatic weapons. Thanks for sharing this awesome display of our enviable freedom!
How do other nations cope without our awesome brand of freedom?! I mean, other than longer life expectancy, ultra low crime rates, drastically lower prison populations, and better overall quality of life.
Yep. Would never wanna swap my bullet-y freedom for any of that.
Scott Macleod How are the Ariana Grande concerts in places without freedom?
James Alford Scott, they do have WAY less freedom, don’t they?! They only have 0.23 gun homicides per 100,000. We have almost 11!!! Murkah!
Scott Macleod James, that is a meaningless statistic. Here I’ll show you:
Last year cars killed:
United States 36,166
Deaths from drowning, children under 14:
United States 548
Deaths from alcohol per year:
United States 88,000
The United States is an outlier on all of these. You can do the same breakdown with antibiotics. You can do it with hot water heaters. Or with deaths from bees. And the US will have higher death rates.
Jacqui Parker Percentages based on overall population would make more sense in your example.
Seth Martin Did anyone in this thread actually read the article?
Scott Macleod The numbers don’t change when broken down per capita. The US is still an outlier. Know why? Because deaths from X will always be higher in countries with more X. Determining causality is much more complicated. Would taking X away eliminate those deaths? Or would X just be substituted for something else and what would have been deaths from X become deaths from Y? This is what’s important.
Jim Chan Total death doesn’t equal to death rate. What are you a 2nd grader?
Scott Macleod Jim, see comment above. Per capita break down does not change the analysis.
Amon-Raa Valencia Scott Macleod the replacement theory can be checked by looking at life expectancy.
Do the countries you point out have higher life expectancy than the US?
James Alford I’m afraid you’re unacquainted with how percentages work, Scott.
If I have 10 tomatoes in my garden, and 2 of them are rotten, and you have 10,000 tomatoes, and 200 of them are rotten, then my problem is still 10 times bigger than yours, even though you have 100 times more rotten tomatoes.
Find a local 5th grader. He’ll be happy to provide more illustrations.
Wesley D. Stoner So Scott Macleod, in your example, if X = guns then the US logically has more gun deaths because there are more guns, right? Where do you think I am going next….
Scott Macleod So by that logic, James, those other countries have the same problem the US does from guns? Please respond without insults.
Scott Macleod <<Scott Macleod the replacement theory can be checked by looking at life expectancy.
Do the countries you point out have higher life expectancy than the US?>>
Other factors go into determining life expectancy. Access to healthcare for example.
James Alford Nevermind, Scott. The original stat said it all. The U.K. (who you brought up) has a gun homicide rate 44 times lower than ours. If you can’t grasp that incredibly straightforward piece of empirical data, then we don’t have a starting point.
Chris Toscano James Alford, you have unlocked Master Troll Level 99. Fine work sir! Look at all the ammosexuals that you have up in arms.
Margaret Moore Bennett Scott Macleod, I am a statistics teacher, you show a basic lack of understanding for how statistics work. You are a poster child for why the GOP is successful with the un and under-educated.
Scott Macleod <<Nevermind, Scott. The original stat said it all. The U.K. (who you brought up) has a gun homicide rate 44 times lower than ours. If you can’t grasp that incredibly straightforward piece of empirical data, then we don’t have a starting point.>>
A) Again, this stat is meaningless. It tells us nothing about causality or how public policy changes the death rates.
B) Those are GUN death rates. Of course a country with 330 million GUNS is going to have higher death rates from GUNS. Just like a country with greater access to antibiotics has more deaths from antibiotics. It tells us nothing about whether antibiotics or guns are good or bad for society.
Scott Macleod How about explaining it to me Margaret rather than resorting to ad hominem and appeals to authority?
Scott Macleod I am not uneducated. I am not a republican. There’s two misses. What are the odds your third claim that I demonstrate a lack of understanding for statistics is correct.
David Houghton Well, you led with raw numbers and not per capita numbers. Not exactly putting your best foot forward on the stats front.
Tandy Fitzgerald Scott Macleod does that mean a US citizen is more reckless when it comes to driving that the rest of the world and less aware when it comes to their children swimming or less aware of health issues and oblivious to the affects of alcohol? Man US citizens really do prefer to live on the edge far more than anyone else in the world…I guess freedom has more prices then just serving in the military.
Scott Macleod I led with what I had available to copy and paste to demonstrate what I was getting at. I agree it would have been better to break them down per capita. Alas, I’m on my phone and these comments move quickly.
Normally when you see this argument, though, it involves raw numbers. As I have said, what I was illustrating does not change when broken down per capita.
Nick Lucas Scott Macleod My favorite part about your posts is that you are trying to dismiss data because of your claims of causality but you make your first statement of the Ariana Grande concert without the same rule of thought.
What gun would have somehow stopped that bomb from exploding? Why didn’t a person with a gun stop the OK bombing or Boston bombing?
This is the problem with bias is we tend to not be able to apply the same logic to our own beliefs that we do others we disagree with.
Michael Dugger Scott Macleod no gun would’ve have stopped a silent bomb carrier Scott.
Scott Macleod Nick, my original comment was a quip. It was a snarky counter to the OP. I feel like you are reading too much into it.
Nevertheless, it does illustrate what I mentioned earlier. When X is not available, people will substitute with Y and nearly the same amount of people would likely die anyway. Why commit suicide with a $500 gun when you can do it with $3 of rope? Looking at guns only is a disingenuous way of looking at the problem. To be sincere, we would need to look at all homicides to determine causality.
I have not made the claim that access to guns will stop bombings.
Bill Melton “Would it make you happier, little girl, if they were pushed out of a seventh floor window?” Archie Bunker
Jenny Caldwell Scott Macleod Those aren’t death rates, those are simply the numbers of deaths. Death rates are population-based, i.e. # of deaths by drowning/1000. US death *rates* by gun violence are indeed much higher than other countries.
Paul Errman James Alford go cry yourself a river. When their violent crime rate drops and they actually have a population of over 300 million call us.
Onica Annika Scott Macleod you can’t take a gun into a concert permit or not. Stupid example.
Scott Macleod <<Scott Macleod you can’t take a gun into a concert permit or not. Stupid example.>>
I never said you could or that you should. Why are you bringing this irrelevant insight into the conversation?
Scott Macleod <<Scott Macleod Those aren’t death rates, those are simply the numbers of deaths. Death rates are population-based, i.e. # of deaths by drowning/1000. US death *rates* by gun violence are indeed much higher than other countries.>>
I know this. I never disagreed. US death rates by gun violence are higher. I never claimed otherwise. What I dispute is the significance of this information.
Scott Macleod We also have higher death RATES due to drowning, alcohol consumption, motor vehicle accidents, and a whole host of other phenomena. Why?
Looking at RATES and ignoring all other factors gives people a misleading glimpse into reality.
Onica Annika Scott Macleod YOU WROTR “How are the Ariana Grande concerts in places without freedom? 👌🏻”
By comparing the bombing in Manchester to carrying guns and implying people would be safer at concerts WITH GUNS is how THIS WAS BROUGHT UP.
You cannot shoot a suicide bomber without expecting to have an explosion. It would have made absolutely NO DIFFERENCE!
Next are comments following NPR’s report on two bounty hunters who engaged a fugitive at a car dealership in Texas, also in 2017 and also posted to Facebook.
The Next Thread
Jonathan Fitzgerald So I’m starting to get a little pissed by all this bounty hunter bashing. While a little rough around the edges. Boba Felt was a pretty decent guy. And could tell some great jokes, once he got a few drinks in him. Cad Bane was a generous and loving fellow. He was known to work at the soup kitchens all the time. So chill out. They aren’t ALL bad.
Candy Ellman Johannes That may be but when you see the video it’s quite clear that the two bounty hunters handled the situation very badly. Because they were like that doesn’t mean they were good at their JOB. It doesn’t mean they’re bad. Just that they shouldn’t have been handling this job.
Isaac Unson Wow, what a tragic and visceral story! Should I maybe post a comment to spur discussion about bounty hunting, or the lack of consideration for things going south very badly?
Nahhhh, that’s original content worthy of discussion. Why not just be cynical and predictable instead and make the usual jokes about guns?
Russell Good It’s fitting this happened in a death merchants offices. More people are killed with vehicles, than anything else, and yet anyone can buy a car without a background check. Unlicensed drivers and unregistered vehicles are hurtling past the innocent in their thousands at this very minute. When will we stop the insanity?
Candy Ellman Johannes A death merchant? No, they’re not. They can’t control how someone is going to drive the cars they buy. And a background check will not tell you how they will do that. Just drive around our community in Texas and you will see a lot of idiots on the road. Most of whom would clear any background check you might think they should conduct.
I hardly even know where to begin with either discussion. The responses, pardon the pun, speak for themselves, from the aggressors to the defenders to the cooler heads to the comic relief. I don’t even say “aggressors” and “defenders” with any political bent so much as simply noting name-calling and tone. The fact that one person or another, with [whichever] political beliefs, is the aggressor or the defender, here, is not my point, which is why I cautioned to read without taking sides – everybody can be mean-spirited or good-willed, aggressive or defensive. My point is that everybody can also think and listen and reflect, if only they wish to do so, which means more targeted effort and more controlled emotional reaction.
The futility of quoted statistics, which are then attacked and defended, as are the people themselves, in a forum that is informal and, for the most part, unmonitored (perhaps beyond hate speech or something that Facebook would moderate) …what’s the point of it all? Once these people close their browsers, what does each one feel he or she has accomplished? If little to nothing, then why even participate? If something more, then who besides themselves is measuring their effectiveness and, anyway, to what end? And who besides themselves even has a right to judge their effectiveness, especially since this cast of characters – evidently – would have plenty more to say about being judged, and we just kick-start another thread!
How many of you just now reading saw either of these comment threads before reading them here in my post? Which audience needs to see these threads, and why?
Are Comments sections some kind of exercise of free speech? If so, are they worth the trouble? On-line comments are not always anonymous, but they’re also not face-to-face, and that distance is perhaps most significant to the point of being responsible and thorough before posting something about another person. However, it’s significant in both positive and negative ways – positive because we owe the other person enough dignity to offer them an intelligent reply that respects their point of view, and negative because we can insult the bastard without (likely) ever feeling some physical repercussion. At the opening, I called on-line commenters lazy. Maybe they’re cowardly, too.
Geez, how seriously am I taking this? They’re just blog comments, for goodness’ sake!
Oh please, it’s a comments section not a peer-reviewed journal.
Here’s another partial comment thread, cut & pasted from The Atlantic website, this time without any comments removed from in between – these are consecutive responses to an article about America’s intellectual decline – a topic not too dissimilar from this very post, even though I disagree in detail with a number of the writer’s claims. However, again, the point here is not to debate the issues. It’s to note the motives and tone behind the comments.
Start with Gutenberg. Then move on to education, art, medicine, culture, and philosophy. Don’t forget Martin Luther and Henry VIII.
Yes the Greeks and Arabs and others made their contributions. But how did those contributions find their way to becoming building blocks for Western Civ? Via the Romans (Christians by the end) and the Crusades (Christian holy wars). For centuries it was literate clergymen who preserved the ancient knowledge which would eventually set the stage for the Enlightenment.
Like it or not, Christianity is at least as inextricably entwined with the building of Western Civilization as any other influence one could name.
It’s truly odd that you find this overwhelmingly obvious fact truly odd.
In other news, if your Mom had chosen a different man to be your father, not only can we never know what you’d look like today–it wouldn’t in fact be you. That child might well not have even existed.
Europe’s faced many existential threats over the millennia. Change just a few events, and Western Civilization wouldn’t have survived. Subtract Christianity, and there’s a strong chance the region becomes conquered by neighboring civilizations, and never even develops the thing we now call Western Civilization.
And that’s as far as I need to go chasing after this particularly nonsensical counter-factual.
“exactly, it’s a counter factual, no need to chase it.”
Asking what might have happened if different decisions had been made is often vital to understanding historical events. Although the process is inherently fraught with ambiguity, it’s a valid exercise.
Your reply makes it seem like perhaps you don’t grasp the purpose of a counter-factual.
This counter-factual is nonsensical. Not all of them are.
yes I understand that as well. This is getting a little exasperating. My only point in this was that one cannot attribute western civilization’s existence to christianity. At most, one can say that christianity was instrumental in the history and current state of western civilization.
Had David simply crafted a more thorough reply to begin with, as he indicates in the final response of this chain, he might have pre-empted all these back-and-forth remarks. More complete remarks might have stirred new ideas, better avenues for discussion, alternatives for research, and just a more thorough model for others to consider. And, sure, while his exchange with Duncan Tweedy was essentially civil and pain-free, he still left the potential for a number of negative things to occur…
(i) people might have accepted his cursory remarks, thereby reinforcing (albeit superficially) their own beliefs in that echo-chamber kind of way
(ii) people might have rejected his cursory remarks, thereby reinforcing (albeit superficially) their own beliefs in that polarising kind of way
(iii) people might have misread, misunderstood, misconstrued, or otherwise missed the context of his remarks and, additionally, might have failed to follow up this thread as far as the point where I have cut & pasted it here
(iv) as a result of (ii) or (iii), people might have grown upset or angry with his cursory remarks and taken him on with vitriol or, worse, simply begun insulting him outright. Neither contributes to any constructive progress but, rather, to destructive regress, and both inspire ill feelings that those people then carry into everyday life, and that might later be echoed – on-line or off-line – by still others
(v) some other outcome I haven’t mentioned
Had David afforded more time and thought to (a) the kind of response that could adequately convey his thinking, and also (b) the kinds of responses he might elicit from people who read his remarks if he were to write them this way or that way, then he wouldn’t have responded the way we find here. Yet it hardly seems worth critiquing these, or any, on-line comments at all, they’re so ubiquitous! Yes, I see the irony; in fact, I intended it.
Just how many people bother to research and draft for a “Comments” section response, anyway? The whole concept of the on-line “Comments” section seems tailor-made to evade the vetting and sober second-thought of taking a breath and waiting the requisite 24 hours before responding to messages we don’t like. I’d pay $49 for the t-shirt that reads, “Who took the ‘Editor’ out of ‘Letter to the Editor’? Send me $50 and I’ll tell you.”
Obviously, the question is not how many bother to research and draft. It’s not even a question of whether to bother researching and drafting. It’s a question of whether to bother engaging in the on-line comments, to begin with. And I describe it as “bother” because research and drafting mean “work,” i.e. “What a blasted bother!” as in making a deliberate driven effort versus the blurt of perfunctory emotional reaction, which has always been a human foible and which, these days, seems even that much more common.
All that bother for… what, exactly? For someone to reply with one-line invectives? Who are we trying to reach, on-line? And, in light of that, who are we trying to be?
“Who are you on-line? The person versus the persona – it’s a concept worth considering.”
We’re all still responsible for the things we say, especially when they get published, and especially when “published” now means forever to be seen on-line (a consideration discussed here as well) – a newspaper or a book might at least fall out of print or get tossed in the trash. We might consider being responsible for the writing of an article, so why not for the offering of a comment? All the people I’ve quoted here have plenty to offer, I suspect, given their apparent literacy. But taking the time and resources at their disposal and using them in a more constructive way evidently hasn’t happened. Can that be changed?
What incentive would motivate these, or any, people to offer more when commenting on-line? Do people care about a growing reputation, however much or little it permeates the Cloud of e-culture? Who do they think they are? Who do they think that we think they are? Do they even care what others think they are? Do they care, themselves? There’s so little accountability, no formal editing or vetting as might be found in print publication, aside from moderation, as I’ve said, and that often simply automated. Sometimes there’s a log-in procedure via Facebook or Disqus, say, for whatever assurance that offers. It allowed me to publish selected commenters, here.
“Who are we on-line?” asks Flora the Explorer. It’s a question worth considering before we ever touch a keyboard. So, okay: who are you on-line? The person versus the persona – it’s a concept worth considering since I suspect 99% of on-line commenters will never meet their fellows face-to-face. Yet, precisely because of that physical separation, I suspect few care to consider (or even just bother to actively recognise?) this concept. Yes, that’s ironic, and it’s a shame. If people did care more (or at least actively acknowledge?) the fact that dialogue comprises more than self, how much more might a conversation yield? As it is, on-line dynamics affect our selves so subtly yet profoundly that the Internet, the great democratic equalizer, is proving its ability to take us one step forward and two steps back.
In and of themselves, given their entire context including their culture, article & blog comments tend to run the risk of oversimplifying issues that warrant and deserve far greater diligence and time spent in meaningful appreciation. They deserve… really? Why? Well, for starters, somebody published an article about [whatever it was], so now it’s out there for public consideration. Moreover, somebody decided that publishing [whatever it was] was worth the bother, and like you and me and everyone, that somebody deserves some basic dignity and respect, whether we ultimately agree with their published material or not. At a minimum, that requires reading the article, if not subsequently researching a bit more. Beyond that, it requires crafting responses of your own that do right by the author who invested the time and effort to create an article worthy of your comment – not “worthy” because you agree but “worthy” because you bothered to respond. Boy, all this bother! Why bother?
This entire blog is a response not unlike what I suggest here – like anyone, I can’t cover it all in one go, but at the least, I can offer something more than a one-liner. The rest of you deserve that much, as I deserve likewise from the rest of you. So, in fact, it is about deserving: if one deserves, then all deserve – either no one is above any other, as far as it involves basic respect for human dignity, or we’re all of us bound for war, waged by all upon all.
For the record, this time I’m not trying to be ironic. This time, it’s all too serious.
If people considered article & blog comments as I’ve tried to frame them here, as a matter of respect for human dignity, then comments – and public discourse, altogether – could be a whole lot different, and probably more constructive. Instead of comments, maybe people would compose entire articles of their own, which I remember is what Internet apologists used to boast: “The platform of the Internet is the great equalizer!” and “The Internet gives everyone a voice!” and “The Internet is democracy at its finest!” …that sort of thing. Shame that so many decide, instead, to use it superficially, far beneath both its potential as well as their own.
So, please, follow up on your own, comment and post, publish and be responsible. Contribute constructively. Most importantly, be thoughtful and thorough because that’s respectful of everybody else’s time and effort and bother. No one can cover every single detail, and every person has two cents to add of their own. But don’t be fooled by that minuscule metaphor – two cents refers to humility, so please make an effort to offer more than a reactive outburst.
Thanks, everybody, for leaving whatever considered comments you might have, and don’t let the end of this post be the end of your opinion.
WARNING! This post is an analysis and celebration of Joseph Heller’s novel, Catch-22, and it DOES contain PLOT SPOILERS. If you wish to read the novel for the first time, do not read this post.
“Give the first twelve chapters a chance” has long been my advice to anyone who asks about Catch-22, Joseph Heller’s modernist masterpiece that critiques the absurdity of the military during wartime. If you haven’t read the book, I will hardly spoil things by explaining how eagerly we witless first-timers set out to read such a lauded modern classic, only to be confronted by what might be the most frustrating paragon of show-versus-tell in existence. (However, I will be discussing spoiler details from here on, so be warned.) From the seemingly disparate chapter titles to the disjointed narrative, which repeatedly folds back upon itself, from a maddeningly mirthful plot device, which tempts you to toss the book aside and deny its existence, to an irresolute closing – if you make it that far – the book continually challenges readers to deduce what’s happening and piece together what’s happened. Toss in what seems like an endless cadre of characters, ranging from odder to oddest to perhaps not so odd, and the book is a challenge, no question.
For seven years, I assigned this book as summer reading for returning seniors. Oh, how the students complained about those twelve chapters – excessive! pointless! irritating! – only to feel more aggrieved at hearing, “Exactly,” my necessary reply. Once the venting subsided – usually at least half the first lesson – we’d begin discussing why Heller’s book could only be written this way as compared to some more conventional, accessible way.
For one thing, we need to meet the protagonist, Yossarian, and understand his circumstances so that, at appropriate upcoming times, which of course will have already occurred, we won’t criticise but will instead favour him. To this end, the entire story is told out-of-sequence, opening apparently in medias res during Yossarian’s hospital stay. We have character introductions and letter censoring, foreshadowing how words and language will be manipulated while characters will be isolated, alienated, and demeaned. Subsequently, we learn the logic of Catch-22 from Doc Daneeka. And that Snowden dies. If we’ve navigated the twelve opening chapters and lived to tell about it, we learn that Yossarian, originally a young, excited airman, once needed two passes over a target in order to bomb it successfully, which gets his crewmember, Kraft, killed. Yossarian is further distressed upon returning when he receives a medal for the mission. Meanwhile, Milo opens his syndicate. The tension of tedium, the injustice of fortune. The folly of command, the depravity of humankind. Capping the story is the gruesome account of Snowden’s death, the key incident that incites Yossarian’s fear and lands him in hospital, where we first meet him – naturally, Heller waits until the end to tell us the beginning.
Heller writes with an absurd, illogical narrative style that characterises Yossarian’s internal eternal predicament, wending its way through isolation, alienation, discord, misery, paranoia, fear, senselessness, deception, vice, cruelty, even rape and murder. Catch-22 being what it is, its victims have zero chance to overcome because the antagonists are permitted to do whatever the protagonists are unable to prevent. All along the way, Heller has Yossarian wanting out of the military (fly no missions = live), and he continually ups the ante between Yossarian and all the disturbing confrontations and contradictions that antagonise him, from his enemies and his commanders to his acquaintances and his comrades. But ultimately, and most potently, he has Yossarian suffering from his own self-interest. As the narrative flits and tumbles about, in its own progressive way, Yossarian’s self-interest evolves or, better to say, devolves. What does evolve, inversely to self-interest, is his compassion: he gradually grows more concerned for the men in his squadron and, by Chapter 40, “Catch-22,” for all innocent people beset by oppression, prejudice, and exploitation. So when Colonel Cathcart’s promised deal to send him home safely, definitely, comes ironically (fittingly!) at the expense of the squadron, Yossarian ultimately recovers enough self-reliance to overcome his personal anguish but not enough to remand himself to the cycle of absurdity. Given Heller’s dispersed timeline, describing Yossarian’s character development as a narrative arc or an evolution is less accurate than describing it as the piecing together of a jigsaw or the unveiling of a secret.
Perhaps unsurprisingly, Yossarian’s instinct for self-preservation is the source of his personal torment. His despondency and disgust over the preponderance of all human self-interest finally turn Yossarian toward his decision to go AWOL, at criminal risk but in personal safety. Such a climax works because readers – like Yossarian – are no longer fighting back but giving in, yet even then Heller offers no respite – the story ends ambiguously, leaving readers to satisfy their own vexation. Even so, I suspect that Heller appreciated John Chancellor’s life-imitating-art initiative as one inspired by more than a spirit of fandom. So where some characters have been subjects of compassion, others agents of absurdity, readers’ resultant responses have also undergone a perfectly natural evolution, mirroring Yossarian’s character development and culminating with his terrifying walk through Rome. The horrors of “The Eternal City,” in this light, are not only an essential but an inevitable piece in Heller’s plan.
Yossarian’s shall-we-say militant decision to desert is borne of Snowden’s ugly death during the Avignon mission, only a week after the death of Kraft and the award of Yossarian’s medal. Seeing Snowden’s innards spilling rudely from his body nauseates Yossarian and haunts him throughout the entire story (or, from Yossarian’s perspective, for the rest of it). Yossarian, cast by Heller as a protagonist on behalf of soldiers, has no way of making things better. His futile effort at comfort, “There, there” (p. 166), is comically insincere for its honest helplessness, an understated shriek from all soldiers continually sent to face death – not death without context but death without resonance. However, for Yossarian and his comrades, the context of sacrifice is all too irrationally clear: thanks very much. Catch-22. Soldiers face the dilemma of following orders that entirely devalue their very existence.
Participation as a soldier offends Yossarian to the core, yet it also helps him to reconcile his fear over death: “… man is matter,” finite, mortal and – without spirit – simply “garbage.” In fact, this sentence sums up human worth in a blunt statement: “The spirit gone, man is garbage” (p. 440). Six words of sad, harsh consequence: war, no longer wearing a comic mask. The absolute phrase, a terse syntactical effect, annuls man’s significance – spirited briefly, gone abruptly, an empty corporeal body left over, garbage. Garbage is a harsh image – rotting flesh, buzzing flies, scum, residue, stench. Pessimism, cynicism, worthlessness. On such terms, one wonders whether anyone might willingly die to save themselves, as it were, another troubling revelation engineered by a masterpiece of unprosaic illogic. Yet even on this point, Heller’s genius is flawless. Haunting though it is, Snowden’s death gradually reveals to Yossarian the very path to life and safety that he has pursued ever since the opening chapter in the hospital – which is to say, ever since Snowden’s death drove him there in the first place.
This is why Heller refers to Snowden’s death, specifically his entrails, as a “secret” because to reveal it any earlier would be to end the novel. And he calls it Snowden’s “grim secret” to illustrate Yossarian’s suppressed mental anguish. Heller has Yossarian recall Snowden a number of times, each admitting more detail, each growing more vivid, each driving him a little closer to his final resolution. Heller’s portrayal of Yossarian’s traumatised memories in this way suggests the nightmarish flashbacks that people, particularly soldiers, endure following the horrors of war. His final flashback in Chapter 41, “Snowden”, is prompted when Yossarian wards off the mysterious stranger in – where else? – the hospital. It’s most revelatory for Yossarian – and readers, by extension – because, here at the end of his patchy, appalling flashbacks, he is finally secure enough to divine for himself – or is it to admit to us? – the grim secret found in Snowden’s entrails. In the same way, the climax is most revelatory for readers who – at the mercy of Heller’s dispersed narrative structure – have been made to wait until the closing, when the time is finally ripe.
To get there, we are dragged unwittingly by Heller down a path of frustrating sympathy, illogical absurdity, and agonising anticipation. By the time Yossarian is introduced (in the opening chapter!) censoring letters and conniving a way to escape the war, he is that much nearer to desertion than we can yet know. Certainly, Snowden will convince us to desert as surely as he convinces Yossarian, but that will happen later, after Heller has aggravated our tolerance and mottled our innocence. Heller must drag us down Yossarian’s agonising path, or else he places us at risk of passing premature judgment upon not merely his protagonist but his entire message. Finally, when the moment arrives that we gather full appreciation of Snowden’s death, we have all we need to share in the vindication of Yossarian’s desertion.
So here is our way to grasp the grim secret behind the novel’s dissembling structure as restlessly and imperturbably as Yossarian does: the root of conflict, Snowden’s death, can only occur at the end of Heller’s narrative path, not Yossarian’s. The story simply works no other way.
Sometimes, the hardest part of teaching felt like finding a way to reach students when they just didn’t get it. But if there’s one thing I learned while teaching, it’s that it takes two. In fact, the hardest part of teaching was coming to realise it wasn’t them not getting it; it was me not getting them. In my own defense, I think we just never can know what another person’s motive truly is. It was times like that when I realised the true constructive value of respect and a good rapport. To have any hope of being open-minded, I intentionally needed to respect my students’ dignity, and I needed to be more self-aware as to how open- or closed-minded I was being. Humility has that way of being, well, humbling. These days I’m still fallible but a lot better off for knowing it. And, yes, humility’s easier said than done.
Over sixteen marvellous years teaching secondary English in a high school classroom, I learned that teaching is a relationship. Better still, it’s a rapport. I learned that it takes two, not just hearing and talking but listening and speaking in turn, and willingly. And, because bias is inescapable, I learned to consider a constructive question: what motives and incentives are driving anyone to listen and speak to anyone else? It has an admittedly unscrupulous undertone: what’s in it for me, what’s in it for them, who’s more likely to win out? The thought of incentives in high school likely evokes report cards, which is undeniable. But where listening (maybe speaking, too) to some degree means interpreting, what my students and I valued most was open-minded class discussion. With great respect for our rapport, we found the most positive approach was, “What’s in it for us?” The resulting back-and-forth was a continual quest for clarity, motivated on everyone’s behalf by incentives to want to understand – mutual trust and respect. Looking back, I’m pleased to say that tests and curricula seldom prevented us from pursuing what stimulated us most of all. We enjoyed very constructive lessons.
Of course, we studied through a lens of language and literature. Of particular interest to me was the construction of writing, by which I mean not just words but the grammar and punctuation that fit them together. My fascination with writing has been one of the best consequences of my own education, and I had encouraging – and one very demanding – writing teachers. In the classroom and on my own, I’ve always been drawn to structure as much as content, if not more so, which isn’t unorthodox although maybe not so common. The structure of writing gets me thinking on behalf of others: why has the writer phrased it this certain way? What other ways might be more or less well-suited for this audience? How might I have phrased something differently than this writer, and why? Most English teachers I know would agree that pondering such questions embodies a valuable constructive skill, these days trumpeted as critical thinking. I’d argue further that it’s even a pathway to virtue. Situated in context, such questions are inexhaustible, enabling a lifetime of learning, as literally every moment or utterance might be chosen for study.
In that respect, we loosely defined text beyond writing to include speech, body language, film, painting, music, architecture – literally any human interaction or endeavour. I’ll stick mostly with listening and speaking, reading and writing, just to simplify this discussion. The scope being so wide, really what our class sought to consider were aim and intention. So when students read a text for content, the WHAT, I’d ask them to consider choices made around vocabulary, syntax, arrangement, and so forth, the HOW. That inevitably posed further questions about occasion and motive, the WHY, which obliged varying degrees of empathy, humility, and discernment in reply: for a given writer, how best to write effectively on a topic while, for a given audience, what makes for skillful reading? What motives are inherent to each side of the dialogue? What incentives? These and others were the broader-based “BIG Question” objectives of my courses. They demanded detailed understanding of texts – heaven knows we did plenty of that. More importantly, the BIG Questions widened our context and appreciation even while they gave us focus. When times were frustrating, we had an answer for why studying texts mattered. Questions reflect motivation. Prior to exercising a constructive frame-of-mind, they help create one.
Questions, like everything else, also occur in a particular context. “Context is everything,” I would famously say, to the point where one class had it stencilled for me on a T-Shirt. So much packed into those three plain words – everything, I suppose. And that’s really my thesis here: if we aim to be constructive, and somehow do justice to that over-taxed concept, critical thinking, then we need to be actively considering what we hear and say or read and write alongside other people, and what it all makes us think for ourselves – especially when we disagree. (Is active thinking the same as critical thinking? I’m sure the phrase is hardly original, but I’ll consider the two kinds of thinking synonymous.) During my last 3-4 years in the classroom, all this came to be known by the rallying cry, “Raise the level of discourse!” These days, however, the sentiment is proving far more serious than something emblazoned on a T-Shirt.
I’m referring, of course, to the debacle that has been the 2016 U.S. Presidential election and its aftermath. Specifically, I have in mind two individual remarks, classic teachable moments inspired by current events. The first remark, from an NPR article by Brian Naylor on the fallout over the executive order banning Muslim immigrants, is attributed to the President. The second remark is a response in the comment section that follows Naylor’s article, representative of many commenters’ opinions. To begin, I’ll explain how something as detailed as grammar and punctuation can help raise the level of discourse, especially with such a divisive topic. From there, I’ll consider more broadly how and why we must always accept responsibility for this active use of language – sometimes correct grammar should matter not just to nit-pickers but to everybody.
In the article (February 8, 2017), Brian Naylor writes:
“Trump read parts of the statute that he says gives him authority to issue the ban on travel from seven predominantly Muslim nations, as well as a temporary halt in refugee admissions. ‘A bad high school student would understand this; anybody would understand this,’ he said.”
We all know the 45th U.S. President can be brusque, even bellicose, besides his already being a belligerent blundering buffoon. This comment was received in that light by plenty, me included. For instance, by classifying “bad” (versus “good”), the President appeals at once to familiar opposites: insecurity and self-worth. We’ve all felt the highs and lows of being judged by others, so “bad” versus “good” is an easy comparison and, thereby, a rudimentary emotional appeal. However, more to my point, his choice to compare high school students with lawyers, hyperbole or not, was readily construed as belittling since, rationally, everyone knows the difference between adult judges and teenaged students. That his ire on this occasion was aimed at U.S. District Judge James Robart is not to be misunderstood. Ironically, though, the President invokes the support of minors in a situation where they have neither legal standing nor professional qualification, rendering his remark not just unnecessarily divisive but inappropriate, and ignorant besides – although he must have known kids aren’t judges, right?
To be fair, here’s a slightly longer quotation of the President’s first usage of “bad student”:
“I thought, before I spoke about what we’re really here to speak about, I would read something to you. Because you could be a lawyer– or you don’t have to be a lawyer: if you were a good student in high school or a bad student in high school, you can understand this.”
Notice, in the first place, that I’ve transcribed and punctuated his vocal statement, having watched and listened to video coverage. As a result, I have subtly yet inevitably interpreted his intended meaning, whatever it actually was. Yet my punctuation offers only what I believe the President meant since they’re my punctuation marks.
So here are two other ways to punctuate it, for anyone who feels either is what the President said:
“Because you could be a lawyer, or you don’t have to be a lawyer – if you were a good student in high school or a bad student in high school, you can understand this.”
“Because you could be a lawyer. Or you don’t have to be a lawyer. If you were a good student in high school or a bad student in high school, you can understand this.”
Finally, but not exhaustively, here’s another:
“Because you could be a lawyer… or you don’t have to be a lawyer; if you were a good student in high school or a bad student in high school, you can understand this.”
Other combinations are possible.
Rather than dismiss all this as pedantry, I’d encourage you to see where I’m coming from and consider the semantics of punctuation. I’m hardly the only one to make the claim, and I don’t just refer to Lynne Truss. Punctuation does affect meaning, both what was intended and what was perceived. To interpret the President’s tone-of-voice, or his self-interrupting stream-of-consciousness, or his jarring pattern-of-speech, or whatever else, is to partly infer what he had in mind while speaking. We interpret all the time, listening not only to words but tone and volume, and by watching body language and facial expression. None of that is typically written down as such, except perhaps as narrative prose in some novel. The point here is that, in writing, punctuation fills part of the interpretive gloss.
Note also that a number of news headlines have used the word “even,” an interpretive addition of a word the President did not actually say. Depending upon how we punctuate his statement, inclusive of everything from words to tone to gestures to previous behaviour, perhaps we can conclude that he did imply “even” or, more accurately, perhaps it’s okay to suggest that it’s what he intended to imply. But he didn’t say it.
If we’re going to raise the level of discourse to something constructive, we need to balance whatever the President intended to mean by his statement against what we’ve decided he intended to mean. In the classroom, I put it to students as such: “Ask yourself where his meaning ends and yours begins.” It’s something akin to the difference between assuming (based on out-and-out guesswork because, honestly, who besides himself could possibly know what the President is thinking) and presuming (based on some likelihood from the past because, heaven knows, this President has offered plenty to influence our expectations). Whatever he meant by referring to good and bad students might be enraging, humbling, enlightening – anything. But only if we consider the overlap, where his meaning ends and ours begins, are we any better off ourselves, as analysts. Effective communication takes two sides, and critical thinking accounts for both of them.
Effective, though, is sometimes up for debate, not merely defining it but even deciding why it matters. Anyway, can’t we all generally figure out what somebody means? Isn’t fussing over details like grammar more about somebody’s need to be right? I’d argue that taking responsibility for our language includes details like grammar precisely so that an audience is not left to figure things out, or at least so they have as little ambiguity to figure out as possible. Anything less from a speaker or writer is lazy and irresponsible.
In the Comments section following Naylor’s article, a reader responds as follows:
“Precisely describing Trump’s base…bad high school students who’s [sic] level of education topped out in high school, and poorly at that. This is exactly what Trump and the GOP want, a poorly educated populous [sic] that they can control with lies and bigoted rhetoric.”
Substantively, the commenter – let’s call him Joe – uses words that (a) oversimplify, blanketing his fellow citizens, and (b) presume, placing Joe inside the President’s intentions. Who knows, maybe Joe’s correct, but I doubt he’s clairvoyant or part of the President’s inner circle. On the other hand, we’re all free to draw conclusions, to figure things out. So, on what basis has Joe made his claims? At a word count of 42, what was he aiming to contribute? Some of his diction is charged, yet at a mere two sentences, it’s chancy to discern his motives or incentives, lest we be as guilty as he is by characterising him as he characterises the President. Even if I’m supportive of Joe, it’s problematic defending his remarks for the same reason – they leave such a gap to fill. At 42 words, where he ends is necessarily where the rest of us begin, and maybe I’m simply better off ignoring his comment and starting from scratch. Maybe that’s fine, too, since we should all have our own opinions. In any event, Joe has hardly lived up to any measure of responsibility to anybody, himself included – here I am parsing his words months later in another country. I’d even say Joe loses this fight since his inflammatory diction and sweeping fallacy play to his opponents, if they so choose. Unsurprisingly, Joe’s comment is not at all constructive.
For all its faults, his comment aptly demonstrates the two-way nature of dialogue. On the one side, responsibility falls to each reader or listener to bring their research and experience, then discern for themselves what was meant. In that regard, Joe has left us with a lot of work to do, if we’re motivated enough to bother. Yet I chose his particular comment as mere illustration – literally hundreds of others, just as brief and labour-intensive, scroll by below Naylor’s article… so much work for us to do, or else to dismiss, or perhaps to gain-say, if not insult. On that note, consider the other side: responsibility falls to the speaker or writer to offer substantive claims as well as the evidence that prompted them. In this instance, no matter the justification for offering something at all, what can a two-sentence comment add to issues as complex and long-standing as, say, Presidential politics? Whether or not on-line comments are democracy in action, certainly offering 42 words in two sentences struggles to promote a meaningful, substantive exchange of ideas.
I used to tell my students that such on-line comments are like debating with strangers while standing in line for coffee, before returning to our cars and our lives, none the more informed – perhaps annoyed by some and appreciative of others. With the best intentions, we might excuse people, overlooking that we’re the ones who walked out and drove away – maybe we were late for work that day. We’ve been closed-minded to the degree that we haven’t sought to reach a thorough understanding, and certainly we’ve failed to raise the level of discourse. Would it have been better to say nothing, grab our coffee, and leave?
Yes, I think so, which may not be easy to accept. Consider, though, that reasoning from presumption and enthymeme is not reasoning at all. Consider, further, that two sentences of 42 words, or a few minutes spent chatting in the coffee line, will barely scratch the surface. Who can say what motivates people to contribute so readily yet so sparsely? Recent times are emotional, growing more volatile and, as a result, potentially far more dangerous. We see in Joe’s comment, and so many others like it, that trust and respect are divisively encased in separate echo chambers. By virtue of us versus them, both sides are challenged to be open-minded.
Worse, the so-called era of “post-truth” impedes exactly the constructive dialogue we need right now, raising ire and diatribe in place of substance and equanimity. Satire compounds disagreement, growing that much more venomous, and ridicule has a way of locking doors that are already closed. I don’t support proceeding from pretence or unfounded opinion – there’s nothing whatsoever to show for an exchange of opinion based on falsehood. The burden of post-truth is far too high. A bias and the truth can co-exist, and they do, guaranteed – one truth, objective, and one bias per person, subjective. Bias is an inevitable fact of existence. Left unchecked, bias erodes respect, which is why a constructive approach is so crucial. As I’ve said elsewhere, post-truth is anti-trust, at least for me, and, at its furthest extent, a threat to civil security – which sounds alarmist. Good, let it. We need to attend to this. More than ever now, we need respect or, failing that, at least greater tolerance. That’s for starters.
Worse still, in this post-truth world, fictional claims face no arbiter but the other side, so distrusted and maligned. The kind of polarised situation made infamous in Washington, DC is spreading, realised in a zillion on-line comments like Joe’s with every article published. Hopefully not this one – unless, maybe, someone hasn’t actually read it. On such a perilous path – facts in dispute, emotions inflamed – each side defines “open-minded” as unique to themselves and misappropriated by the rest. That’s deeply divisive, the recipe for unrest that I spy, and it sounds my alarm. In that divided state, with nothing left to discuss, even as reality has its way of catching up, what damage might already be done? Especially when facing fellow citizens, whatever we choose now must accord with what we’re prepared to accept later. Let that sober thought sink to the core, because the less we share common goals, the more we’re set to clash over unshared ones. But it’s within us to converse and to converge.
Let’s be willing to listen with empathy, understand with compassion, research with diligence, and respond with substance. Do your own investigation. Accept responsibility to inform yourself. Yes, take what you find with a grain of salt until you can believe to your own satisfaction what is right and trustworthy. Yet, even then, be tolerant if not respectful of others – too much salt is harmful. We all have our own motives and incentives for listening and participating, so let’s dig deeper than how pissed off we are with the other side: walking the high road with pride or smug assurance is really the low road and a path of hubris. It’s closed-minded, but not in the sense that we haven’t sought to reach a thorough understanding of the other side. It’s closed-minded to the degree that we haven’t sought to understand how and why the other side reached their position to begin with.
None of this is hard to understand. Once upon a time, we decided that education mattered, and it’s no accident that the trivium – grammar, rhetoric, dialectic – was given a central role. These days, its value in niche markets, notably private Christian education, is enough to switch some people off, which sadly exemplifies this entire discussion. I believe classical education is valuable for all. We’ve neglected it to our detriment, perhaps to our peril. We have a lot in common, more than we might credit, with our neighbours and fellow citizens. It’s not like they grew up on Mars. We’re not significantly different – hands up if you’re a human being. Start with that, some basic human dignity.
Rapport offers a lot to our relationships; without it, there’s little to expect. All we can do is understand the other person’s interpretation, and they ours, and go from there – or else not. It’s easy to nod and say, “I already do that while others do not.” But reflect upon yourself anyway, in every conversation, debate, or exchange. Humility is a virtue, even when kept low-key. Everybody bears responsibility for their own participation. The more we live up to being respectful, even of those whom we oppose, the more progress we’re liable to make – however slowly it might happen.
As I said at the outset, yes, humility’s easier said than done. But by the same token, why write this essay if 42 words would do? We must neither hide ourselves away nor proceed prematurely. We must be able to discern flaws of reason, and we must be able to communicate with humility if we aim to deliver – and, more critically, if we hope to be received – from a place of thoughtfully considered understanding. Whether or not we truly trust one another, let’s help put the logos back in dialogue and accept our own responsibility to approach people with intentional self-awareness. Let’s seize the opportunity to be role models – you just never know what somebody else is thinking. Let’s raise the level of discourse. And let’s remember that taking the high road must be open-hearted as well as open-minded.