These are all descriptors I’ve encountered for Canada, from one source or another. I can make of each one something contextual. Yet as each suggests a departure or break from something previous, that’s really just a subtle way of saying, “Here’s what we aren’t.”
Yet describing something with negative terminology is ultimately meaningless because it can end up becoming silly; for instance, “I am not a giant Godzilla-like dragon that breathes fire and enjoys sipping my iced coffee on Tuesdays.” We could literally imagine anything that isn’t the case and say as much, and we’re no further ahead knowing what actually is the case.
So when I see descriptors like these – for Canada but really for anything – I’m unclear and confused about what to think. It’s a concern for me, the citizen, because who I am and what I value have a direct effect on you and everyone else, and me in return all over again.
Ignoring the post-modern fallacy, i.e. nothing is true other than the statement that confirms nothing is true, this description of Canadian identity also falls in line with the negative terminology and serves as the on-ramp to the freeway of silliness upon which no Godzillas sip their Tuesday coffee.
And where the link above was an American take on our Prime Minister’s interpretation of whom he leads, others have taken note of his statement with concern, too, among them some Canadians whom he leads…
On the other hand, and perhaps in response (?), the Government of Canada is now apparently reversing course, telling Canadians and would-be Canadians something awfully more specific about Canadian identity:
I admit, once more, to losing track as a “Canadian,” although at least this time the terminology is positive: “We are indeed ‘this’ and ‘that.’”
Some pretty specific stuff in this Global Affairs guide. For example…
“When lining up in a public place, the bank for instance, Canadians require at least 14 inches of space…”
Right down to the inch? Granted, I’m not the most social-media savvy citizen you could find, but I think a colloquial Canadian response to this – at least on-line – might be “WTF!!!”
Still, please don’t let me speak on your behalf. That said, the guide seems to have been compiled by one person in an interview format with a second person because it’s written with a first-person perspective: it’s uniquely Canadian, you might say.
Now, if your rejoinder is to excuse this guide as merely a helpful list of suggestions for what is “Canadian,” then I counter with the challenge to separate, among these suggestions, the quintessential from the stereotypical. After all, what Canadian does NOT love beer and hockey and The Hip, just as they detest the gesturing of hands and public displays of affection?
We’re approaching another freeway on-ramp, this one a sloped and slippery freeway that circles and loops and arrives at no particular destination because at its terminus interminably works a construction crew, who build it out just a little further than before, apparently with no idea who they are, or what they do, or – perhaps worst of all – why they might want to reflect, with no small concern, upon the work they consider to be of national significance.
Seriously, am I the only one who’s concerned by this?
At the same school, in the same department, for so long… eventually I found what seemed to be some effective teaching strategies and stuck with those, but boy it took a while. More than one teacher has offered something like an apology, half-joking, half-rueful, to all those early students, who were basically guinea pigs while we figured ourselves out in the classroom. I mentioned this in a paper for a graduate course and earned the critique of “triumphalism” – feedback from the professor, which I took as a suggestion to go ahead and “problematize my assumptions,” to use the lingo. In the moment, I bristled, the new kid in town learning how to be part of the academy, wondering what exactly had prompted my professor to claim with such certainty the question of my certainty.
Maybe I’ll just mention, since I’ve brought it up… I’ve since found the academy has an endemic logical pitfall all its own, an oddly hypocritical veneer of uncertainty: “All knowledge is provisional.” Post-modernism at its finest? Indeed, who can really say.
In all seriousness, though, and fairness, I grant the aim of the sceptical outlook. Heck, I try to possess one – healthy scepticism, to guard against arrogance and narrow thinking (… and innovation too, come to think of it, although that one maybe for another time). I value Socratic humility, which I ultimately decided not to call Socratic ignorance, and try to model it although how successfully I can’t say – especially not *joking-slash-rueful* back in those early days. So when someone with expertise in curriculum and teaching theory laid triumphalism at my feet, I thought to myself, Well, at least I ought to consider it. And I did.
And I do, and I still am. That reflective side of critique, the side you get from being on the receiving end, it can help us spot our assumptions and our shortcomings. I suspect the whole point was simply to light a fire within me. And hey, I’ve gone and written this, haven’t I? And hey, if settling into some effective teaching strategies weren’t triumphalist and undesirable, that would probably encourage complacency among teachers, or possibly even stagnation. On the other hand, after so long teaching in the same department at the same school, I suspect there’s more than one teacher who’s ended up feeling like part of the woodwork. Certainly, for me, as I’m sure for the students, there was a marked difference between me, the new guy, and me five, ten, fifteen years later. Then again–
Looking back, now, at what I called “effective”… it rounds out as, well, effective because what happened happened that way – nothing’s perfect, but all considered, my students seemed broadly to have learned what they felt were some useful things. The classroom years I spent, developing as I did to reach the point I reached, came about from the feedback I received each day, each term, as students and I came together lesson upon lesson, class after class. Details along the way, course evaluations I asked students to complete each June, reports back from post-secondary adventuring… there are always issues to address along with encouragements to appreciate, and I admit: no grand theory did I have in mind, as though I were contributing to the historical record. I just wanted to make things better for kids the following year, which eventually I think I was able to do.
Where I gave thought to improving my teaching was (a) relative to myself, (b) on behalf of my students, (c) in the context of my school. At least, that was what I thought when I was teaching. In that respect, what can I possibly say now, looking back, as to what might have been apart from what did be? I had to do something. And my life was never going to be any less full or busy or complicated than it turned out to be, so in all sincerity I did what I could. Eventually, it seemed to work out pretty well. Effectively.
Look, if somebody did celebrate triumphantly, in the classroom, facing the students, day in day out… ? What an ass! As it was, for those students who did find my teaching effective in this way or that, or worse, for those who didn’t – did I leave them with some suggestion that I basked in triumphant glow? I hope not. Like I said, I eventually found and stuck with what I thought worked, and that took years. Meanwhile, that’s the job. Isn’t it?
For me, the professor’s criticism, in whatever light it was offered, reflects more upon her embrace of uncertainty (presumably the academic embrace I described above) than it does upon my curricular relationships when I was teaching. And I heed the lesson, not for the first time in my career, that sitting in judgment of others can be a difficult perch.
Perhaps above all I appreciate Meester’s nuanced intuition about the audiences who judge Curley’s wife which, beyond their relationships to the characters in the story, might suggest something about their own – our own – blind spots and hypocrisies. How often we live with daily nonchalance, oblivious to the interiority of those we encounter, and of those beyond. How much we rely on our affirmed belief of our selves.
If confronting ourselves is art’s great authenticity, then Meester’s perception is spot-on: in Curley’s wife, Steinbeck subverts our conceit – whether he intended to or not. Indeed, the best-laid schemes…
Time and energy… the one infinite, the other hardly so. The one an abstraction, the other all too real. But while time ticks ceaselessly onward, energy forever needs replenishing. We assign arbitrary limits to time, by calendar, by clock, and as the saying goes, there’s only so much time in a day. Energy, too, we can measure, yet often we equate both time and energy monetarily, if not by actual dollars and cents: we can pay attention, spend a day at the beach, save energy – the less you burn, the more you earn! And certainly, as with money, most people would agree that we just never seem to have enough time or energy.
Another way to frame time and energy is as an investment. We might invest our time and energy learning to be literate, or proficient with various tools, or with some device that requires skilful application. Everything, from a keyboard or a forklift or a tennis racquet to a paring knife or an elevator or a golf club to a cell phone or a self-serve kiosk or the new TV remote, everything takes some knowledge and practice. By that measure, there are all kinds of literacies – we might even say, one of every kind. But no matter what it is, or how long it takes to master, or why we’d even bother, we shall reap what we sow, which is an investment analogy I bet nobody expected.
Technology returns efficiency. In fact, like nothing else, it excels at creating surplus time and energy, enabling us to devote ourselves to other things and improve whichever so-called literacies we choose. The corollary, of course, is that some literacies fade as technology advances. Does this matter, with so many diverse interests and only so much time and energy to invest? How many of us even try everything we encounter, much less master it? Besides, for every technological advancement we face, a whole new batch of things must now be learned. So, for all that technological advancement aids our learning and creates surplus time and energy, we as learners remain the central determinant as to how to use our time and energy.
Enter the classroom what’s lately been called Artificial Intelligence (A.I.). Of course, A.I. has received plenty of enthusiastic attention, concern, and critique as a developing technological tool, for learning as well as plenty of other endeavours and industries. A lengthy consideration from The New York Times offers a useful, broad overview of A.I.: a kind of sophisticated computer programming that collates, provides, and predicts information in real time. Silicon Valley designers aim to have A.I. work at least somewhat independently of its users, so they have stepped away from older, familiar input-output modes, what’s called symbolic A.I., a “top down” approach that demands tediously lengthy entry of preparatory rules and data. Instead, they are engineering “from the ground up,” building inside the computer a neural network that mimics a brain – albeit a very small one, rivalling a mouse’s – that can teach itself via trial-and-error to detect and assess patterns found in the data that its computer receives. At these highest echelons, the advancement of A.I. is awe-inspiring.
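To make that “ground-up” idea a little more concrete, here is a toy sketch – my own illustration, not anybody’s actual product – of a single artificial neuron learning by trial and error. It guesses, checks its guess against the pattern in its data, and nudges its connection weights whenever it guesses wrong; nothing here is pre-programmed with the rule it ends up detecting.

```python
# A toy illustration of "ground-up" trial-and-error learning:
# one artificial neuron adjusts its weights whenever its guess is wrong.

def train_neuron(samples, epochs=20, lr=0.1):
    """Train one neuron on (inputs, target) pairs by error correction."""
    w = [0.0, 0.0]   # connection weights, tuned by trial and error
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            guess = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - guess       # zero when the guess was right
            w[0] += lr * error * x1      # nudge the weights toward the
            w[1] += lr * error * x2      # pattern the neuron missed
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    """Apply the trained weights to a fresh pair of inputs."""
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The pattern to detect: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

After a couple of dozen passes over the data, the neuron classifies every pair correctly, never having been told the rule. Real neural networks stack millions of these units, but the guess-and-correct loop is the same idea.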
Now for the polemic.
In the field of education, where I’m trained and most familiar, nothing about A.I. is nearly so clear. Typically, I’ve found classroom A.I. described cursorily, by function or task:
A.I. facilitates individualized learning
A.I. furnishes helpful feedback
A.I. monitors student progress
A.I. highlights possible areas of concern
A.I. lightens the marking load
On it goes… A.I., the panacea. Okay, then, so in a classroom, how should we picture what is meant by “A.I.”?
Specific examples of classroom A.I. are hard to come by, beyond top ten lists and other generalized descriptions. I remember those library film-strip projectors we used in Grade 1, with the tape decks attached. Pressing “Play,” “Stop,” and “Eject” was easy enough for my six-year-old fingers, thanks to engineers who designed the machines and producers who made the film strips, even if the odd time the librarian had to load them for us. (At home, in a similar vein, how many parents ruefully if necessarily consider the T.V. a “babysitter” although, granted, these days it’s probably an iPad. But personification does not make for intelligence… does it? Didn’t we all understand that Max Headroom was just a cartoon?) There’s a trivia game app with the hand-held clickers, and there’s an on-line plagiarism detector – both, apparently, are A.I. For years, I had a Smart Board although I think that kind of branding is just so much capitalism and harshly cynical. Next to the Smart Board was a whiteboard, and I used to wonder if, someday, they’d develop some windshield wiper thing to clean it. I even wondered if someday I wouldn’t use it anymore. For the record, I like whiteboards. I use them, happily, all the time.
Look, I can appreciate this “ground-up” concept as it applies to e-machines. (I taught English for sixteen years, so metaphor’s my thing.) But intelligence? Anyway, there seems no clear definition of classroom A.I., and far from seeming intelligent to me, none of what’s out there even seems particularly dim-witted so much as pre-programmed. As far as I can tell, so-called classroom A.I. is stuff that’s been with us all along, no different these days than any tool we already know and use. So how is “classroom A.I.” A. I. of any kind, symbolic or otherwise?
Symbolic A.I., at least the basis of it, seems not too dissimilar to what I remember about computers and even some video arcade favourites from back in the day. Granted, integrated circuits and micro-processors are a tad smaller and faster these days compared to, say, 1982 (… technology benefitting from its own surplus?). Perhaps more germane to this issue is the learning curve, the literacy, demanded of something “intelligent.” Apparently, a robot vacuum learns the room that it cleans, which as I gather is the “ground-up” alternative to symbolic A.I. Now, for all the respect and awe I can muster for a vacuum cleaner—and setting all “ground-up” puns aside—I still expect slightly less from this robot than passing the written analysis section of the final exam. (I taught English for sixteen years, so written analysis is my thing.) It seems to me that a given tool can be no more effective than its engineering and usage, and for that, isn’t A.I.’s “intelligence” more indicative of its creator’s ingenuity or its user’s aptitude than of itself or its pre-programmed attributes?
By the same token, could proponents of classroom A.I. maybe just ease off a bit from their retcon appropriation of language? I appreciate getting caught up in the excitement, the hype—I mean, it’s 21st century mania out there, candy floss and roller coasters—but that doesn’t mean you can just go about proclaiming things as “A.I.” or, worse, proclaiming A.I. to be some burgeoning technological wonder of classrooms nationwide when… it’s really not. Current classroom A.I. is simply every device that has always already existed in classrooms for decades—that could include living breathing teachers, if the list of functions above is any guide. Okay then, hey! just for fun: if classroom tools can include teachers who live and breathe, by the same turn let’s be more inclusive and call A.I. a “substitute teacher.”
Another similarly common tendency I’ve noted in descriptions of classroom A.I. is to use words like “data,” “algorithm,” and “training” as anthropomorphic proxy for experience, decision-making, and judgment, i.e. for learning. Such connotations are applied as simply as we might borrow a shirt from our sibling’s closet, as liberally as we might shake salt on fries, and they appeal to the like-minded, who share the same excitement. To my mind, judicious intelligence is never so cavalier, and it doesn’t take much horse-sense to know that too much salt is bad for you, or that your sibling might be pissed off after they find their shirt missing. As for actually manufacturing some kind of machine-based intelligence, well… it sure is easy to name something “Artificial Intelligence” – as if we could bestow “intelligence” simply by declaring it! The kind of help I had back in the day, as I see it, was something I just now decided to call “S.I.”: sentient intelligence.
Facetiousness aside, I grant probably every teacher has spent some time flying on auto-pilot, and I’ve definitely had days that left me feeling like an android. And fair enough: something new shakes things up and may require some basic literacy. There’s no proper use of any tool, device, or interface without some learned practical foundation: pencil and paper, protractor, chalk slates, the abacus. How about books, or by ultimate extension, written language, itself? These are all teaching tools, and each has a learning curve. So is A.I. a tool, a device, an interface? All of the above? I draw the line where it comes to classroom tools that don’t coach the basketball team or have kids of their own to pick up by 5pm: the moniker, “A.I.,” seems more than a bit generous. And hey, one more thing, on that note: wouldn’t a truer account of A.I., the tool, honour its overt yet seemingly ignored tag, “artificial”? R2D2 and C-3PO may be the droids we’re looking for, but they’re still just science fiction.
Fantastic tales aside, technological advancements in what is called the field of A.I. have yielded and will continue to yield useful, efficient innovation. And now I mean real Silicon Valley A.I., not retcon classroom A.I. But even so, to what ends? What specifically is this-or-that A.I. for? In a word: why? We’re headed down an ontological road, and even though people can’t agree on whether we can truly consider our self, we’re proceeding with A.I. in the eventual belief that it can. “It will,” some say. Not likely, I suspect. Not ever. But even if I’m wrong, why would anyone hope that A.I. could think for itself?
Hasn’t Heidegger presented us with enough of a challenge, as it is? Speaking of time and energy, let’s talk opportunity costs. Far greater minds than mine have lamented our ominous embrace with technology. Isn’t the time and energy spent on A.I.—every second, every joule of it—a slap in the face of our young people and the investment that could have been made in them? It’s ironic that we teach them to develop the very technology that will eventually wash them away.
Except that it won’t. I may be out on a limb to say so, but I suspect we will sooner fall prey to the Twitterverse and screen-worship than A.I. will fulfil some sentient Rise of the Machines. The Borg make good villains, and even as I watch a lobby full of Senior Band students in Italy, staring at their iPhones, and fear assimilation and, yes, worry for humanity… I reconsider because the Borg are still just a metaphor (… sixteen years, remember?). Anyway, as a teacher I am more driven to reach my students with my own message than I am to snatch that blasted iPhone from their hands, much as I might like to. On the other hand, faced with a dystopian onslaught of Replicants, Westworld Gunslingers, and Decepticons, would we not find ourselves merely quivering under the bed, frantically reading up on Isaac Asimov while awaiting the arrival of Iron Man? Even Luke Skywalker proved susceptible to the Dark Side’s tempting allure of Mechanized Humanity; what possible response could we expect from a mere IB cohort of inquiry-based Grade 12 critical thinkers and problem-solvers?
At the very least, any interruption of learners by teachers with some classroom tool ought to be (i) preceded by a primer on its literacy, i.e. explaining how to use that particular tool in (ii) a meaningful context or future setting, i.e. explaining why to use that particular tool, before anybody (iii) begins rehearsing and/or mastering that particular tool, i.e. successfully executing whatever it does. If technology helps create surplus time and energy, then how and why and what had better be considered because we only have so much time and energy at our disposal. The what, the how, and the why are hardly new concepts, but they aren’t always fully considered or appreciated either. They are, however, a means of helpful focusing that few lessons should be without.
As a teacher, sure, I tend to think about the future. But that means spending time and paying attention to what we’re up to, here and now, in the present. To that end, I have an interest in protecting words like “learning” and “intelligence” from ambiguity and overuse. For all the 21st century hearts thumping over the Cinderella-transformation of ENIAC programmable computation to A.I., and the I.o.T., and whatever lies beyond, our meagre acknowledgement of the ugly step-sister, artificiality, is foreboding. Mimicry is inauthentic, but neither is it without consequence. Let’s take care that the tools we create as means don’t replace the ends we originally had in mind because if any one human trait can match the trumpeting of technology’s sky-high potential—for me at least, not sure for you—I’d say it’s hubris.
Another fantastic tale comes to mind: Frankenstein’s monster. Technological advancement can be as wonderful as it is horrifying, probably usually somewhere in between. However it’s characterised or defined by those who create it, it will be realised in the end by those who use it, if not by those who face it. For most people, the concept of cell phones in 1982 was hardly imagined. Four decades later, faces down and thumbs rapid-fire, the ubiquity of cell phones is hardly noticed.
Before introducing the moral pairing of right and wrong to my students, I actually began with selfish and selfless because I believe morality has a subjective element, even in the context of religion, where we tend to decide for ourselves whether or not we believe or subscribe to a faith.
As I propose them, selfish and selfless are literal, more tangible, even quantifiable: there’s me, and there’s not me. For this reason, I conversely used right and wrong to discuss thinking and bias. For instance, we often discussed Hamlet’s invocation of thinking: “… there is nothing good or bad, but thinking makes it so” (II, ii, 249-250). Good and bad, good and evil, right and wrong… while not exactly synonymous, these different pairings do play in the same ballpark. Still, as I often said to my students about synonyms, “If they meant the same thing, we’d use the same word.” So leaving good and bad to the pet dog, and good and evil to fairy tales, I presently consider the pairing of right and wrong, by which I mean morality, as a means to reconcile Hamlet’s declaration about thinking as some kind of moral authority.
My own thinking is that we have an innate sense of right and wrong, deriving in part from empathy, our capacity to stand in someone else’s shoes and identify with that perspective – look no further than storytelling itself. Being intrinsic and relative to others, empathy suggests an emotional response and opens the door to compassion, what we sometimes call the Golden Rule. Compassion, for Martha Nussbaum, is that means of “[hooking] our imaginations to the good of others… an invaluable way of extending our ethical awareness” (pp. 13-14). Of course, the better the storytelling, the sharper the hook, and the more we can relate; with more to go on, our capacity for empathy, i.e. our compassion, rises. Does that mean we actually will care more? Who knows! But I think the more we care about others, the more we tend to agree with them about life and living. If all this is so, broadly speaking, if our measure for right derives from empathy, then perhaps one measure for what is right is compassion.
And if we don’t care, or care less? After all, empathy’s no guarantee. We might just as reasonably expect to face from other people continued self-interest, deriving from “the more intense and ambivalent emotions of… personal life” (p. 14). Emotions have “history,” Nussbaum decides (p. 175), which we remember in our day-to-day encounters. They are, in general, multifaceted, neither a “special saintly distillation” of positive nor some “dark and selfish” litany of negative, to use the words of Robert Solomon (p. 4). In fact, Solomon claims that we’re not naturally selfish to begin with, and although I disagree with that, on its face, I might accept it with qualification: our relationships can supersede our selfishness when we decide to prioritise them. So if we accept that right and wrong are sensed not just individually but collectively, we might even anticipate where one could compel another to agree. Alongside compassion, then, to help measure right, perhaps coercion can help us to measure wrong: yes, we may care about other people, but if we care for some reason, maybe that’s why we agree with them, or assist them, or whatever. Yet maybe we’re just out to gain for ourselves. Whatever our motive, we treat other people accordingly, and it all gets variously deemed “right” or “wrong.”
I’m not suggesting morality is limited solely to the workings of compassion and coercion, but since I limited this discussion to right and wrong, I hope it’s helping illuminate why I had students begin first with what is selfish and selfless. That matters get “variously deemed,” as I’ve just put it, suggests that people seldom see any-and-all things so morally black and white as to conclude, “That is definitely wrong, and this is obviously right.” Sometimes, of course, but not all people always for all things. That everybody has an opinion – mine being mine, yours being yours, as the case may be – is neither here nor there; the fact remains that every body has an opinion. On some things, we’ll agree while, on some things, we won’t.
At issue is the degree that I’m (un)able to make personal decisions about right and wrong, the degree that I might feel conspicuous, perhaps uneasy, even cornered or fearful – and wrong – as compared to feeling assured, supported, or proud, even sanctimonious – and right. Standing alone from the crowd can be, well… lonely. What’s more, having some innate sense of right and wrong doesn’t necessarily help me act, not if I feel alone, particularly not if I feel exposed. At that point, whether from peer pressure or social custom peering over my shoulder, the moral question about right and wrong can lapse into an ethical dilemma, the moral spectacle of my right confronted by some other right: would I steal a loaf of bread to feed my starving family? For me, morality is mediated (although not necessarily defined, as Hamlet suggests) by where one stands at that moment, by perspective, in which I include experience, education, relationships, and whatever values and beliefs one brings to the decisive moment. I’m implying what amounts to conscience as a personal measure for morality, but there’s that one more consideration that keeps intervening: community. Other people. Besides selfish me, everybody else. Selfless not me.
Since we stand so often as members of communities, we inevitably derive some values and beliefs from those pre-eminent opinions and long-standing traditions that comprise them. Yet I hardly mean to suggest that a shared culture of community is uniform – again, few matters are so black or white. Despite all that might be commonly held, individual beliefs comprising shared culture, if anything, are likely heterogeneous: it’s the proverbial family dinner table on election night. Even “shared” doesn’t rule out some differentiation. Conceivably, there could be as many opinions as people possessing them. What we understand as conscience, then, isn’t limited to what “I believe” because it still may not be so easy to disregard how-many-other opinions and traditions. Hence the need for discussion – to listen, and think – for mutual understanding, in order to determine right from wrong. Morality, in that sense, is concerted self-awareness plus empathy, the realised outcome of combined inner and outer influences, as we actively and intuitively adopt measures that compare how much we care about the things we face everyday.
Say we encounter someone enduring loss or pain. We still might conceivably halt our sympathies before falling too deeply into them: Don’t get too involved, you might tell yourself, you’ve got plenty of your own to deal with. Maybe cold reason deserves a reputation for callousing our decision-making, but evidently, empathy does not preclude our capacity to reason with self. On the other hand, as inconsistent as it might seem, one could not function or decide much of anything, individually, without empathy because, without it, we would have no measure. As we seem able to reason past our own feelings, we also wrestle echoing pangs of conscience that tug from the other side, which sometimes we call compassion or, other times, a guilt trip. Whatever to call it, clearly we hardly live like hermits, devoid of human contact and its resultant emotions. Right and wrong, in that respect, are socially as well as individually determined.
One more example… there’s this argument that we’re desensitized by movies, video games, the TV news cycle, and so forth. For how-many-people, news coverage of a war-torn city warrants hardly more than the glance at the weather report that follows. In fact, for how-many-people, the weather matters more. Does this detachment arise from watching things once-removed, two-dimensionally, on a viewscreen? Surely, attitudes would be different if, instead of rain, it were shells and bombs falling on our heads from above. Is it any surprise, then, as easily as we’re shocked or distressed by the immediacy of witnessing a car accident on the way to our favourite restaurant, that fifteen minutes later we might conceivably feel more annoyed that there’s no parking? Or that, fifteen minutes later again, engrossed by a menu of appetizers and entrees and desserts, we’re exasperated because they’re out of fresh calamari? Are right and wrong more individually than socially determined? Have we just become adept at prioritising them, even diverting them, by whatever is immediately critical to individual well-being? That victim of the car accident isn’t nearly as worried about missing their dinner reservation.
Somewhat aside from all this, but not really… I partially accept the idea that we can’t control what happens, we can only control our response. By “partially” I mean that, given time, yes, we learn to reflect, plan, act, and keep calm carrying on like the greatest of t-shirts. After a while, we grow more accustomed to challenges and learn to cope. But sometimes what we encounter is so sudden, or unexpected, or shocking that we can’t contain a visceral response, no matter how accustomed or disciplined we may be. However, there is a way to take Hamlet’s remark about “thinking” that upends this entire meditation, as if to say our reaction was predisposed, even premeditated, like having a crystal ball that foresees the upcoming shock. Then we could prepare ourselves, rationalise, and control not what happens but our response to it while simply awaiting the playing-out of events.
Is Solomon wise to claim that we aren’t essentially or naturally selfish? Maybe he just travelled in kinder, gentler circles – certainly, he was greatly admired. Alas, though, poor Hamlet… troubled by jealousy, troubled by conscience, troubled by ignorance or by knowledge, troubled by anger and death. Troubled by love and honesty, troubled by trust. Troubled by religion, philosophy, troubled by existence itself. Is there a more selfish character in literature? He’s definitely more selfish than me! Or maybe… maybe Hamlet’s right, after all, and it really is all just how you look at things: good or bad, it’s really just a state of mind. For my part, I just can’t shake the sense that Solomon’s wrong about our innate selfishness, and for that, I guess I’m my own best example. So, for being unable to accept his claim, well, I guess that one’s on me.
All that we know and learn and encounter every single day is like a vast sea of experience. We are but tiny ships bobbing and rolling upon its waves, its currents steering us here and there. How on earth do we discern and decide what we value, what we believe, in order to collaborate with others in meaningful curricular relationships? (I almost wish I could just be waylaid by pirates, or something.) For me, one way to decide is to consider our shared motives, and find incentives to collaborate from there. Notwithstanding the degree to which people are educated, or by whom, everybody has motives.
But we do not all necessarily have a particular destination or a future port-of-call. So the aim for curriculum appears to be that of shaping motives to coincide with the current state of affairs such that, in a broad sense, people can (a) function – a measure of the self-ful – and then (b) contribute – a measure of the selfless. Upon this vast sea, we are not so much bound for any one destination as we are bound to assist each other, each underway to wherever best suits our particular circumstances at that time – yours for you, and mine for me – and let the tangents direct us as they will.
Education, I have come to learn, is learning to have more than a destination or purpose of my own. It is to convoy with others, to have faith that they do the same for others and for me, and to put in to decidedly worthwhile ports-of-call along the way. As we go, we chart our courses, but as similar as the ocean might look any given moment, wave after rolling wave, no two moments are ever exactly alike. To that degree, everyone must chart on their own. How intentionally we aid each other, how much or how little we trust, how sincerely we navigate – it is our shared curricula that will determine how effectively we undertake any particular decision we are ever likely to face, alongside whomever we find ourselves. The more we convoy in earnest, the safer we will be. With that kind of support, what is it that would sink us?
One final cautionary note: if and when some finally do make landfall somewhere, certain in their decision, we must acknowledge that their perspective will shift dramatically from those others who remain at sea, however certain or uncertain they are of remaining. Not everyone wants to remain at sea, and such variances our curricula are obliged to accommodate, if not fully comprehend or appreciate. There on that solid shore might be a tighter homogeneous culture that yields a more one-sided – or dogmatic? prejudiced? – communal certainty all its own. On that shore we might find a trade-off that sets the communal trustworthiness of the bobbing convoy against the stable individual footing of landfall. Yet somehow we all must sustain what we share, no matter the differences that may arise between sailor and landlubber – and why?
Because what remains the same amongst us – indeed, that which makes us who and what we are – is what we have in common. Common to all of us is being alive, being a person, being a human being, someone deserving of a basic respect for human dignity. Each of us, all of us, every one of us. We are all people. In this regard, really all that differs between us is where we are, and when. For people to think in any way differently than this about other people is narrow, delusional, perhaps cruel, and flat-out wrong. That may hardly feel like a satisfactory closing, maybe even anti-climactic, but who ever said learning was meant to be entertainment? Learning’s the thing wherein we catch the conscience of each other.
 Forgive the invention, “self-ful.” I hesitated to use “selfish,” which tends to connote self-seeking and self-aggrandizing behaviour (in that colloquial sense of “No, you can’t have any of my ice cream”), and taking inspiration from the Bard, I just made up a word of my own. Likewise, I do not use “selfless” in some altruistic way so much as simply to counter “self-ful”; as a pair, I intend them to signify simply the notion of there being, for each of us, an intrinsic “me” and plenty of extrinsic “not me’s.” Further, with my students, I would liken self-fulness to each one’s academic efforts and scholarship, and selflessness to voluntary service and community stewardship of whatever kind. The longer-term idea was teaching students to balance these as required by kairos, by circumstance – an appropriate time for each, and the wisdom to know the difference.
 Or maybe, just maybe, there’s a curricular role for those gnarly amphibious surfers, after all.
For all this, what exactly does it mean to be educated? From the sole perspective – yours, mine, anybody’s – free thinking means freedom granted to individuals to believe and behave as they will, then investing proportionate faith that they continue to believe and behave as we do. Of course, anyone’s beliefs might vary, freely, from ours, rather than everyone conforming to the same beliefs and behaviours. Imagine that world, where every inhabitant lived according to self-established morality. In such a world, how would there come about any rule of law? Even real, lived experience here in Canada is tenuous, relying on everyone to rely on everyone else. Whether out of respect for each other, out of gaining some advantage, out of fear of paying a fine or going to jail – on it goes, accountability, but the individual freedom we avouch is as ready to dissipate as the smoke of a powderkeg. For all its enlightenment, free-thinking is quicksand: shifting, uncertain, deceiving, solid ground by mere appearance. Is it any wonder that the liberty and reason of Enlightenment individuation have led us to Post-modernism, relativism, identity politics, and alternative facts? Be careful what you wish for. If there are any true binaries, to trust or not to trust must certainly be one. What need for faith when we trust that we are all alike, that all around is 100% certain?
Such a world is hardly plausible for me. I have learned not to trust everybody I meet. In the world I know, we need discernment and persuasive rhetorical skill to skirt potential conflicts and get others onside. And when others have discernment and persuasive rhetorical skill, too? Seen in that light, the curricular task is competitive, not cooperative. Even so, we might still argue that curriculum is collaborative, and it does not have to be belligerent. Curriculum falls within the scope of some given morality, morality being a question of right and wrong, positive opposing negative: to x, or not to x. However, curriculum itself is an ethical choice between alternatives and is, thereby, an empowering decision. We must therefore ask: to x, or to y? Both are positives – a question of competing rights, not right competing against wrong.
And anywhere right does oppose wrong, curriculum should not permit a choice because wrong is simply wrong and not something that responsible choice can decide. Beyond simply learning about the freedom to think, curriculum is about learning how to make choices that are set within the scope of defined morality. Question the morality, compare it to another morality, and we are Hamlet: we are lost. But decide, and accept the morality, and question only those choices intrinsic to its milieu… now we are educating ourselves and others, however precisely or narrowly, for as long as we care to pursue whatever makes us curious.
For me, someone is educated who thinks, and discerns, and has aims. Admittedly, such aims could be countered or rationalised pragmatically or else, more perversely, aimed beyond oneself to harm others – thinking in itself, after all, is not inherently moral. So if morality is a thing to be taught and also learned, then an educated person, for me, is someone who learns generosity of some kind, hospitality. Being educated means learning to give of oneself, for others or on behalf of others, in positive, constructive ways. This belief, I suppose, reflects my learned morality, which I am pleased, as much in caring as in utility, to pass along. Perhaps your morality differs. To that end, education, in itself, should intentionally be both constructive and benevolent in consideration of that sense of kairos, what is appropriate in the moment for teacher and learner, even as those moments accumulate over the passage of chronos-time, like endless waves upon the shore. Then again, who am I to anybody that the sole importance of my opinion should determine an education? If I am outnumbered, what is this sense of education that I describe but some solitary means of facing an existence nasty, brutish, and short? This thing called school will be the death of me!
See? Recruiting Hamlet’s cycle of misery seems all too easy “‘where the postmodern turn of mind appears to privilege the particular over the general’” (Roberts, 2003, p. 458). Frankly, I think our present culture regards the individual far too much. Naturally, the consequent short-changing of the bigger community picture has been playing out over chronos-time since, with every decision, there has been consequence. However, Roberts continues, “… ‘for Freire both [the particular and the general] depend on each other for their intelligibility’.” So perhaps a good education – by which I mean not just a moral one but an effectual one – is best measured with due consideration for its balance of the particular and the general, the heterogeneous and the homogeneous, the certainty and the ambiguity, the inductive and the deductive. A little healthy scepticism, a little cloud for the silver lining. A little dram in the substance, to paraphrase Hamlet. “A little dab’ll do ya,” quips McMurphy. You can’t have one without the other, sings the primus inter pares.
We defy augury by flouting convention, even law, because we are free agents who do what we please. Some will have more courage than others, and some are just more foolhardy, but no one is literally predictable. We defy augury by being unpredictable, even inscrutable, although maybe the rest of you just never really knew me that well to begin with. Sometimes I even surprise myself. We defy augury by defying our senses, by not comprehending the world that we apprehend, which really is to say we see only what we want to see and recognise only what we already know. If there is special providence in the fall of a sparrow, what matter when we have spent all our time watching the chickadees? I cannot shake free from critiquing our cultural veneration of the individual: the less our shared beliefs converge and reciprocate a healthy community, and the greater our insistence upon personal liberty to go our own way, the more we miss the point of understanding exactly what freedom really is. True freedom results from having choices, and what creates choice is not the persuasive liberty of unequivocal individualism but discipline: to do ‘x’, or ‘y’, or ‘z’.
Shakespeare’s “Let…” statements are not so colloquial as to suggest the fatalism of c’est la vie, or the aimlessness of go with the flow – these, for me, amount to giving up, or else giving in. The tragedy of Hamlet is that the curriculum he really needed – the people he could trust, who would be willing to help him – was already there, at his side the whole time, as ready and willing as ever, so long as he gave a little back, so long as he offered just a dram of willingness to coincide with their beliefs – to his own scandal, maybe, but who in the real world is so selfish as to expect to have their cake and eat it, too? As compared to going it alone, Hamlet might have humbled himself and cast his lot with those to whom he is closest. His education from Wittenberg proved sufficient to challenge his upbringing in Elsinore, as amply suggested by his continued trust in enlisting and confiding in Horatio throughout the play; as far as that went, the rest of us would do well to heed his lesson with due respect: if only Hamlet had not divided his loyalty but decided, once and finally, exactly who he was and whom he trusted, then lived up to his declaration with discipline. With integrity.
The most common criticism aimed his way by my students was essentially, “Get over yourself, and grow up!” Make a decision with the discipline to accept the consequences, which is to say, accept your personal responsibility. To be fair, Hamlet finally, triumphantly, does place his faith in Horatio, whom he entrusts to tell his story. Granted, he only asks once he is terminally poisoned, but hey, better to ask while alive to breathe the words than come back and haunt Horatio as the next in a line of Ghosts. As for Shakespeare, whatever exactly it was that he saw in us, this ethical curricular dilemma, evidently he felt its redemptive quality was worth its cost, as Horatio makes known – or will do – by pledging to tell his dying friend’s tale to Fortinbras. Shakespeare’s appeal by way of Hamlet is not one of giving up or giving in. It is one of giving over, to something bigger than ourselves, to something in which faith placed is faith assured, and “attuned” (Pinar, 2017b, p. 1), and certain beyond our own devices.
What that object of faith might be… perhaps it comes as no surprise, but Shakespeare has a “Let…” statement for that, too: “… let your own discretion be your tutor” (3.2.17). I never included this one in the list for my students because, until writing this essay, I had never fit it in as such a central constituent. Hamlet delivers the line, as any nervous director might do opening night, during the aforementioned lecture to the Players before the Mousetrap performance. All the more ironic, of course, is that his lecture hardly exemplifies the statement, which would be fine if Hamlet, the director, did not assume the stage during the performance but let the actors get on with their craft. Hamlet, by contrast, twice assumes the stage to augment the performance. (Ahh, what to do about such insecurity! At least he sells tickets, you may remember.) Anxious or not, the wisdom of his advisement, taken for all, is easy for a lay audience to misinterpret, particularly as it comes buried within lines of such mundane theatrical detail. Shakespeare does not suggest that we give in to our discretion, carte blanche. He suggests that we give over to our discretion as a kind of teacher-student relationship.
Let curriculum be to trust your own better judgment, to search your feelings, yet to grant with humility that more may exist than meets the eye. Let discretion be a “tutor,” yet while you let it, also think before you act – and think during and after, too – because “… the purpose of playing… was and is, to hold… the mirror up to nature” (3.2.17-23). Whether this amounts to something esoteric or spiritual is down to the beholder, yet if that is true for any one of us, it must be true for all of us. Each one of us is finite and individual, and curriculum is composite, a whole greater than the sum of its parts, as in all of us, transcending time and space. As a force of faith, curriculum is vast indeed.
Click here to read the closing reflection to “A Kind of Certainty”: Pt V. Fleeting Uncertainty
 How often I referred students to Canadian Liberal MP Stephen Owen’s definition for democracy: “the pluralistic respect for citizens empowered to self-govern within the rule of law.” Democracy, so often simplified as “majority rule,” is more accurately understood (in my opinion) as entirely dependent upon its constituents. Democracy works because we all agree to make it work. Every member therefore has a personal responsibility to respect and live up to the standard of the law on behalf of every other member. One disobedient person weakens the system and places everybody, including themselves, at risk. Either we set that person straight, or we jail them, but unless we protect the system, we are only certain to lose it.
 *Sigh… culture precedes law, I would argue, and we endlessly debate and litigate what should be right versus what should be wrong. This is politics and the justice system at work, issue by issue, and with enough lobbying and/or civil disobedience, any given topic might be up for consideration.
 Okay, so I did find a way to toss in some surf.
 aka the Chairman of the Board, aka Ol’ Blue Eyes
 In Canada, we might say that Shakespeare’s appeal to “let go” means don’t grip the stick too tight. “Hold on loosely,” as Donnie Van Zant would sing, or “Give a little bit,” from Roger Hodgson. None fully clarifies the expression, as I gather Shakespeare intended it, but the notion of giving way in deference to others is helpful, for a start.
 Of course, the best rejoinder here would be, “He who dies with the most toys wins,” to which I would reply, “You can’t take it with you.” But dialectical bumper-stickers were never my strong suit, and I digress, even for end-notes.
On second thought, the best rejoinder is to say Hamlet is fictional, not of the real world. All the more reason to admire him as perhaps Shakespeare’s best creative feat, so life-like are he and the rest of the characters who populate the play.
 Between Ophelia and Horatio, he nearly does so twice, and even towards Gertrude he aims some meager hope and sympathy. Alas, yet another essay…
 Shakespeare includes numerous allusions throughout the play to the theatre milieu, its characters and culture, and its place in Elizabethan society, many of which can be construed as humorous and even as insider jokes shared amongst his theatre company and his regular audience.
 I learned, for my own spiritual belief, to distinguish between what many religions have people do, as compared to what God through Christ has already done. The primary reference, here, is to the Resurrection and what Christ has done for all. Whether one chooses to believe or not is up to them, and should be, which is the essence of my belief: what comes down to a matter of personal choice is to believe, or not to believe. Consider Ephesians 2:8-9, for example, in which Paul explains that we are saved not by works but by grace, so that none can boast: justification by grace through faith in God is the essence of Christianity, and I emphasise that part of it left up to us, to have faith in God. Some consider this ridiculous, and that is neither here nor there to me although I wish no ill upon anyone. Upon believing, upon faith, one can grasp how a selfless attitude of giving – giving of oneself – matters as compared to more selfish concerns over what is given or how much is given.
Such concerns do arise since, as I believe, all inherit Original Sin, a concept that one must accept before anything else in Christian doctrine of any stripe will make sense: we all have inherited an imperfection: to believe and have faith in our selves, apart from the God who created us; to go our own way; to obey our own inclinations and not His. This pride-of-self, set in motion by the conniving serpent’s lure that whetted Eve’s curiosity, then Adam’s, enough for them to disobey one simple command… this original “missing of the mark” prompted Adam, Eve, and all their offspring to realise within themselves what had never before even appeared on their radar screens: that obedience was only appreciable once disobedience had been tried. It’s the same binary idea as saying, “You only really understand peace once you experience war,” and so forth. So, for instance, in offering to God (Genesis 4:3-4), where Cain brings some, Abel brings the choicest; yes, each still gives, yet Cain is furious upon seeing the difference in God’s response between their offerings. The sense is that Abel gives in faithful obedience what Cain withholds for himself, Abel trusting God, in a way that Cain does not, that God will give back and look after him. Cain trusts in what he can manage and control for himself; evidently, he does not trust like his brother that God will give back. Perhaps he does not even believe that God created them, although, if he does believe this, how much worse his distrust.
Avenging his own honour by killing his brother is a choice Cain makes, entirely selfish and sinfully predictable. This, for me, opens an explanation as to why God allows evil to prosper: He gave us free will, in His image, out of love, to choose or to not choose His gift of salvation; to believe or not to believe in His Gospel, as a matter of faith; to trust Him or to trust something else. In either case, we, the people, are answerable for all we do. As I say, back then, Cain perhaps did or didn’t know he was God’s creation – he is left to his own account for that. These days, though, how many people hardly even consider God as real, much less as Creator or Benefactor? However, if God offered us no doubt of His existence, then what would necessitate faith? Were He to provide 100% certainty, anyone then would have no choice but to believe, of necessity, or else be a fool not to believe and delude themselves in spite of the certainty. As it is, some think believers are deluded; truly, you can’t convince all the people all the time, and you definitely should not force belief. All this, for me, is consistent with a caring God who has conferred free will. So, where some condemn believers as guilty of the crimes and evils committed in the name of Christianity (or religions altogether), in fact, I fully agree: hateful beliefs and violent acts are an abomination of how God would have us treat each other.
But, again, He has bestowed upon us the free will to decide and behave, and I argue that all such crimes and evils, whether in the name of religions or not, reflect Original Sin, our turning-away from God; they do not reflect God. They cannot reflect the character of God, whose nature is neither criminal nor evil; rather, they reflect the character of our selves, who are selfishly proud. People are responsible for bastardising and usurping doctrine in order to gain for themselves, something akin to Cain, so blatantly, transparently selfish. Further, as that kind of belief and behaviour continues, it roots until generations have perhaps forgotten or lost any other way to believe and behave. We are human, taken for all, and finite in power and awareness. We can do no other than we continue to prove ourselves capable of doing – and in this I include both good and evil that we do – and this, truly, is why we’re in need of salvation. So much gets lost in scriptural debate over details – details that warrant discussion yet, being details, they are also prone to misinterpretation and thereby require careful, long-studied contextual understanding – but the basic doctrine and the loving character of God I find rather straightforward. It’s people who complicate and screw it up, not God. And I’m as guilty, neither better nor worse but just plain equal to every other person trying to live under our circumstances. So I try my best to respect people’s dignity, everyone’s.
My choice has been to believe based on the preponderance of evidence that I’ve learned and studied for many years – the careful, long-studied contextual understanding I mention above. I have plenty more to learn, but my point is that I did have to learn, to begin with. I did not just suddenly have some nuanced supreme understanding of Christian doctrine – indeed, I’m wary that superficial knowledge is so frequently the cause of the crimes and evils people commit in the name of religion. I consider myself blessed to have had the freedom to choose what to study without duress and to have had an education provided by good teachers who understood what makes for good curriculum. I have never felt assaulted or oppressed as far as my education is concerned – or my life, for that matter – and, furthermore, I achingly, mournfully recognise that so, so many others cannot agree. Why not me, I can’t say, but I count myself as blessed for this, if for no other reason in my existence. I know so well that not everyone has enjoyed such Providence.
There is so much abuse and violence out there, person-upon-person, and I suggest that I, or you or anyone, ought to be enabled to read, search, and decide for ourselves whether or not to believe something. And never forced, and never judged. Personally, I’m not a big church-goer – I have done, but I don’t much anymore. But I still quietly personally maintain my faith. Even offering this endnote struck me as bold, but I wanted this post to be thorough and honest. I believe evidence exists – we have only to look for it: “Knock, and the door shall be opened” is God’s encouragement, to be proactive and search for Him rather than sitting idly by awaiting, or else ignoring, His imminent return. Nonsense, this, for some. And I can comprehend the doubt. But I don’t share it. By the same token, I offer my testimony, but I don’t impose it. People today who demand to see evidence – God performing miracles, say – are asking Him to lay foundations all over again. But, by analogy, a building only needs one foundation, so why would God repeat that process? Enough evidence has been documented over time, for me, that I now readily believe and join the church being built on the existing foundation. Again, as I opened this rather long endnote, what matters most is what He has already done: we have only to believe, with no further need to see more miracles, which is really what having faith is all about.