Peter Benson’s article, “Francis Fukuyama and the Perils of Identity,” in Philosophy Now (Issue 136), got me thinking again about multiculturalism. I’ve had plenty to say about multiculturalism – seldom positive – although none of it here on The Rhetorical Why.
If you haven’t yet leaped to conclusions about me, I’ll point out that the full title of this post is “The Bridge to Hell is Paved with Problematic Intention.” It’s meant to be a little satirical, a little disparaging. It’s a wordy mash-up of axioms, cultural and academic. I’m okay with wordy this time.
I’m also conscious of the juxtaposition of my crude title and the wisdom of Dr. King. I’m less okay with this but felt the contrast worth any shock, ambiguity, or misapprehension.
Read on, and take issue as you must or as you will.
The Bridge to Hell is Paved with Problematic Intention
Has the trumpeting of multiculturalism taken itself so literally that even individualism (… multipersonism?) is insufficient?
Taking itself, as I say, more literally, multiculturalism sets one culture at equal stature with the rest – seems fair enough – apparently, a shift in meaning from diversity to inclusion, which implies that diversity wasn’t working on account of exclusion.
So, within Culture X or Culture Y, as we might imagine an individual being equal alongside other members, we can imagine across the two cultures potential impasse: “… unresolvable conflicts between mutually exclusive viewpoints [that] dominate the political landscape” (Benson, 2020). I still grant here individual differences, but I have in mind some divide between distinct communities of individuals, i.e. a divide between cultures.
In relation to Culture Y, for example, Culture X might deem its equality mere lip-service and feel de facto unequal: “How are we in Culture X obliged to consider those in Culture Y as ‘equal’ if our culture is not equal to theirs?
“How can we treat them as equals, much less be treated as equals, if our larger culture is not equal – that is, if Culture Y does not accept us on equal terms?” Culture Y might declare all individuals equal to begin with and counter that Culture X only perceives inequality. Yet this simply compounds the same injustice for Culture X, who will hardly waive their due consideration.
In any case, equality of cultures seems not the same thing and unable to play out to the same effect as equality of individuals – even more so since an individual who identifies with more than one culture might feel strewn across their own intersections. (Curiously, this assumes one’s identity to be chosen as much as bestowed, which echoes individualism as much as collectivism.) In fact, if equating cultures equates individuals, then equality rests further upon equity, a mantle of justice issuing from a superior authority.
Perhaps Culture Y lives by some unproblematic axiom, such as ‘might makes right’, ‘stay the course’, or even just ‘common sense’ while Culture X lives by ‘power to the people’, ‘diversity is strength’, or ‘revolution is no dinner party’. Can they bridge their divide? Is one culture responsible to reach across, as it were, halfway? We might define an obligation to come any distance according to power of authority. To be sure, imbalanced authority does seem a constant throughout history; for exactly this reason, though, would we expect the side with authority to yield?
I turn to Dr. King. In his time, a generation or two before mine, Dr. King sought and fought for equality and “the cause of peace and brotherhood,” there surely being little more equal than “a single garment of destiny” (King, 1963). Woven into that garment as we all are, he claimed, paradoxically yet beautifully this makes us one. Standing upon the authority of centuries, of historical proclamation and practice, and there resting in long-studied philosophy and lived experiences of spiritual belief, Dr. King challenged his brothers to bear witness upon themselves. Such authority remains as stable for those to come as for those preceding – that is, unless or until those to come decide to rest authority someplace else.
In our time, justice supersedes civility, and restitution tinges redress. The zeitgeist these days is emotional, distinctly angry. Individuals possess rights, and cultures bear responsibilities. “The politics of identity,” Benson says, “multiply conflicts and divisions.” As we ostensibly advocate for the equality of all individuals, identity politics fights a culture war, a battle for equity across cultures-of-particular-individuals, which actually precludes a wider equity. Cultural equality has supplanted individual equality because, where there is axiomatic ‘strength in numbers’, multiculturalism can only ever be ‘us vs them’. If so, is it still defensible? Is multiculturalism a way to ensure that our outcomes match our aims? Or are the aims of those with authority forever destined to pre-empt the aims of those without it? Indeed, what is the way to ensure that no one at all will ever be marginalised?
For one final point I turn to Benson (2020), not in comparison to Dr. King but out of respect for all being one: “Only when we stop having identities in the group-defined sense can we return to being individuals” (original emphasis). We may discover too late the folly of burning a bridge-too-far while crossing it.
These are all descriptors I’ve encountered for Canada, from one source or another. I can make of each one something contextual. Yet as each suggests a departure or break from something previous, that’s really just a subtle way of saying, “Here’s what we aren’t.”
Yet describing something in negative terms is ultimately meaningless because it can be carried to absurdity; for instance, “I am not a giant Godzilla-like dragon that breathes fire and enjoys sipping my iced coffee on Tuesdays.” We could literally imagine anything that isn’t the case and say as much, and we’re no further ahead knowing what actually is the case.
So when I see descriptors like these – for Canada but really for anything – I’m unclear and confused about what to think. It’s a concern for me, the citizen, because who I am and what I value have direct effect on you and everyone else, and me in return all over again.
Ignoring the post-modern fallacy, i.e. nothing is true other than the statement that confirms nothing is true, this description of Canadian identity also falls in line with the negative terminology and serves as the on-ramp to the freeway of silliness upon which no Godzillas sip their Tuesday coffee.
And where the link above was an American take on our Prime Minister’s interpretation of whom he leads, others have taken note of his statement with concern, too, among them some Canadians whom he leads…
On the other hand, and perhaps in response (?), the Government of Canada is now apparently reversing course, telling Canadians and would-be Canadians something awfully more specific about Canadian identity:
I admit, once more, to losing track as a “Canadian,” although at least this time the terminology is positive: “We are indeed ‘this’ and ‘that.’”
Some pretty specific stuff in this Global Affairs guide. For example…
“When lining up in a public place, the bank for instance, Canadians require at least 14 inches of space…”
Right down to the inch? Granted, I’m not the most social-media savvy citizen you could find, but I think a colloquial Canadian response to this – at least on-line – might be “WTF!!!”
Still, please don’t let me speak on your behalf. That said, the guide seems to have been compiled by one person interviewing another, because it’s written from a first-person perspective: it’s uniquely Canadian, you might say.
Now, if your rejoinder is to excuse this guide as merely a helpful list of suggestions for what is “Canadian,” then I counter with the challenge to separate, among these suggestions, the quintessential descriptions from the stereotypical ones. After all, what Canadian does NOT love beer and hockey and The Hip, just as they detest the gesturing of hands and public displays of affection?
We’re approaching another freeway on-ramp, this one a sloped and slippery freeway that circles and loops and arrives at no particular destination because at its terminus interminably works a construction crew, who build it out just a little further than before, apparently with no idea who they are, or what they do, or – perhaps worst of all – why they might want to reflect, with no small concern, upon the work they consider to be of national significance.
Seriously, am I the only one who’s concerned by this?
Time and energy… the one infinite, the other hardly so. The one an abstraction, the other all too real. But while time ticks ceaselessly onward, energy forever needs replenishing. We assign arbitrary limits to time, by calendar, by clock, and as the saying goes, there’s only so much time in a day. Energy, too, we can measure, yet often we equate both time and energy monetarily, if not by actual dollars and cents: we can pay attention, spend a day at the beach, save energy – the less you burn, the more you earn! And certainly, as with money, most people would agree that we just never seem to have enough time or energy.
Another way to frame time and energy is as an investment. We might invest our time and energy learning to be literate, or proficient with various tools, or with some device that requires skilful application. Everything, from a keyboard or a forklift or a tennis racquet to a paring knife or an elevator or a golf club to a cell phone or a self-serve kiosk or the new TV remote, everything takes some knowledge and practice. By that measure, there are all kinds of literacies – we might even say, one of every kind. But no matter what it is, or how long it takes to master, or why we’d even bother, we shall reap what we sow, which is an investment analogy I bet nobody expected.
Technology returns efficiency. In fact, like nothing else, it excels at creating surplus time and energy, enabling us to devote ourselves to other things and improve whichever so-called literacies we choose. The corollary, of course, is that some literacies fade as technology advances. Does this matter, with so many diverse interests and only so much time and energy to invest? How many of us even try everything we encounter, much less master it? Besides, for every technological advancement we face, a whole new batch of things must now be learned. So, for all that technological advancement aids our learning and creates surplus time and energy, we as learners remain the central determinant as to how to use our time and energy.
Enter the classroom what’s lately been called Artificial Intelligence (A.I.). Of course, A.I. has received plenty of enthusiastic attention, concern, and critique as a developing technological tool, for learning as well as plenty of other endeavours and industries. A lengthy consideration from The New York Times offers a useful, broad overview of A.I.: a kind of sophisticated computer programming that collates, provides, and predicts information in real time. Silicon Valley designers aim to have A.I. work at least somewhat independently of its users, so they have stepped away from older, familiar input-output modes, what’s called symbolic A.I., a “top down” approach that demands tediously lengthy entry of preparatory rules and data. Instead, they are engineering “from the ground up,” building inside the computer a neural network that mimics a brain – albeit a very small one, rivalling that of a mouse – that can teach itself via trial-and-error to detect and assess patterns found in the data that its computer receives. At these highest echelons, the advancement of A.I. is awe-inspiring.
Now for the polemic.
In the field of education, where I’m trained and most familiar, nothing about A.I. is nearly so clear. Typically, I’ve found classroom A.I. described cursorily, by function or task:
A.I. facilitates individualized learning
A.I. furnishes helpful feedback
A.I. monitors student progress
A.I. highlights possible areas of concern
A.I. lightens the marking load
On it goes… A.I., the panacea. Okay, then, so in a classroom, how should we picture what is meant by “A.I.”?
Specific examples of classroom A.I. are hard to come by, beyond top ten lists and other generalized descriptions. I remember those library film-strip projectors we used in Grade 1, with the tape decks attached. Pressing “Play,” “Stop,” and “Eject” was easy enough for my six-year-old fingers, thanks to engineers who designed the machines and producers who made the film strips, even if the odd time the librarian had to load them for us. (At home, in a similar vein, how many parents ruefully if necessarily consider the T.V. a “babysitter” although, granted, these days it’s probably an iPad. But personification does not make for intelligence… does it? Didn’t we all understand that Max Headroom was just a cartoon?) There’s a trivia game app with the hand-held clickers, and there’s an on-line plagiarism detector – both, apparently, are A.I. For years, I had a Smart Board, although I think that kind of branding is just so much capitalism and harshly cynical. Next to the Smart Board was a whiteboard, and I used to wonder if, someday, they’d develop some windshield wiper thing to clean it. I even wondered if someday I wouldn’t use it anymore. For the record, I like whiteboards. I use them, happily, all the time.
Look, I can appreciate this “ground-up” concept as it applies to e-machines. (I taught English for sixteen years, so metaphor’s my thing.) But intelligence? Anyway, there seems no clear definition of classroom A.I., and far from seeming intelligent to me, none of what’s out there even seems particularly dim-witted so much as pre-programmed. As far as I can tell, so-called classroom A.I. is stuff that’s been with us all along, no different these days than any tool we already know and use. So how is “classroom A.I.” A. I. of any kind, symbolic or otherwise?
Symbolic A.I., at least the basis of it, seems not too dissimilar to what I remember about computers and even some video arcade favourites from back in the day. Granted, integrated circuits and microprocessors are a tad smaller and faster these days compared to, say, 1982 (… technology benefitting from its own surplus?). Perhaps more germane to this issue is the learning curve, the literacy, demanded of something “intelligent.” Apparently, a robot vacuum learns the room that it cleans, which as I gather is the “ground-up” kind, not symbolic A.I. Now, for all the respect and awe I can muster for a vacuum cleaner – and setting all “ground-up” puns aside – I still expect slightly less from this robot than passing the written analysis section of the final exam. (I taught English for sixteen years, so written analysis is my thing.) It seems to me that a given tool can be no more effective than its engineering and usage, and for that, isn’t A.I.’s “intelligence” more indicative of its creator’s ingenuity or its user’s aptitude than of itself or its pre-programmed attributes?
By the same token, could proponents of classroom A.I. maybe just ease off a bit from their retcon appropriation of language? I appreciate getting caught up in the excitement, the hype—I mean, it’s 21st century mania out there, candy floss and roller coasters—but that doesn’t mean you can just go about proclaiming things as “A.I.” or, worse, proclaiming A.I. to be some burgeoning technological wonder of classrooms nationwide when… it’s really not. Current classroom A.I. is simply every device that has always already existed in classrooms for decades—that could include living breathing teachers, if the list of functions above is any guide. Okay then, hey! just for fun: if classroom tools can include teachers who live and breathe, by the same turn let’s be more inclusive and call A.I. a “substitute teacher.”
Another similarly common tendency I’ve noted in descriptions of classroom A.I. is to use words like “data,” “algorithm,” and “training” as anthropomorphic proxy for experience, decision-making, and judgment, i.e. for learning. Such connotations are applied as simply as we might borrow a shirt from our sibling’s closet, as liberally as we might shake salt on fries, and they appeal to the like-minded, who share the same excitement. To my mind, judicious intelligence is never so cavalier, and it doesn’t take much horse-sense to know that too much salt is bad for you, or that your sibling might be pissed off after they find their shirt missing. As for actually manufacturing some kind of machine-based intelligence, well… it sure is easy to name something “Artificial Intelligence” – quite another thing to bestow “intelligence” by simply declaring it! The kind of help I had back in the day, as I see it, was something I just now decided to call “S.I.”: sentient intelligence.
Facetiousness aside, I grant probably every teacher has spent some time flying on auto-pilot, and I’ve definitely had days that left me feeling like an android. And fair enough: something new shakes things up and may require some basic literacy. There’s no proper use of any tool, device, or interface without some learned practical foundation: pencil and paper, protractor, chalk slates, the abacus. How about books, or by ultimate extension, written language, itself? These are all teaching tools, and each has a learning curve. So is A.I. a tool, a device, an interface? All of the above? I draw the line when it comes to classroom tools that don’t coach the basketball team or have kids of their own to pick up by 5pm: the moniker, “A.I.,” seems more than a bit generous. And hey, one more thing, on that note: wouldn’t a truer account of A.I., the tool, honour its overt yet seemingly ignored tag, “artificial”? R2D2 and C-3PO may be the droids we’re looking for, but they’re still just science fiction.
Fantastic tales aside, technological advancements in what is called the field of A.I. have and will continue to yield useful, efficient innovation. And now I mean real Silicon Valley A.I., not retcon classroom A.I. But even so, to what ends? What specifically is this-or-that A.I. for? In a word: why? We’re headed down an ontological road, and even though people can’t agree on whether we truly have a self, we’re proceeding with A.I. in the eventual belief that it can. “It will,” some say. Not likely, I suspect. Not ever. But even if I’m wrong, why would anyone hope that A.I. could think for itself?
Hasn’t Heidegger presented us with enough of a challenge, as it is? Speaking of time and energy, let’s talk opportunity costs. Far greater minds than mine have lamented our ominous embrace of technology. Isn’t the time and energy spent on A.I. – every second, every joule of it – a slap in the face to our young people and the investment that could have been made in them? It’s ironic that we teach them to develop the very technology that will eventually wash them away.
Except that it won’t. I may be out on a limb to say so, but I suspect we will sooner fall prey to the Twitterverse and screen-worship than A.I. will fulfil some sentient Rise of the Machines. The Borg make good villains, and even as I watch a lobby full of Senior Band students in Italy, staring at their iPhones, and fear assimilation and, yes, worry for humanity… I reconsider because the Borg are still just a metaphor (… sixteen years, remember?). Anyway, as a teacher I am more driven to reach my students with my own message than I am to snatch that blasted iPhone from their hands, much as I might like to. On the other hand, faced with a dystopian onslaught of Replicants, Westworld Gunslingers, and Decepticons, would we not find ourselves merely quivering under the bed, frantically reading up on Isaac Asimov while awaiting the arrival of Iron Man? Even Luke Skywalker proved susceptible to the Dark Side’s tempting allure of Mechanized Humanity; what possible response could we expect from a mere IB cohort of inquiry-based Grade 12 critical thinkers and problem-solvers?
At the very least, any interruption of learners by teachers with some classroom tool ought to be (i) preceded by a primer on its literacy, i.e. explaining how to use that particular tool in (ii) a meaningful context or future setting, i.e. explaining why to use that particular tool, before anybody (iii) begins rehearsing and/or mastering that particular tool, i.e. successfully executing whatever it does. If technology helps create surplus time and energy, then how and why and what had better be considered because we only have so much time and energy at our disposal. The what, the how, and the why are hardly new concepts, but they aren’t always fully considered or appreciated either. They are, however, a means of helpful focusing that few lessons should be without.
As a teacher, sure, I tend to think about the future. But that means spending time and paying attention to what we’re up to, here and now, in the present. To that end, I have an interest in protecting words like “learning” and “intelligence” from ambiguity and overuse. For all the 21st century hearts thumping over the Cinderella-transformation of ENIAC programmable computation to A.I., and the I.o.T., and whatever lies beyond, our meagre acknowledgement of the ugly step-sister, artificiality, is foreboding. Mimicry is inauthentic, but it is not without consequence. Let’s take care that the tools we create as means don’t replace the ends we originally had in mind because if any one human trait can match the trumpeting of technology’s sky-high potential – for me at least, not sure for you – I’d say it’s hubris.
Another fantastic tale comes to mind: Frankenstein’s monster. Technological advancement can be as wonderful as it is horrifying, probably usually somewhere in between. However it’s characterised or defined by those who create it, it will be realised in the end by those who use it, if not by those who face it. For most people, the concept of cell phones in 1982 was hardly imagined. Four decades later, faces down and thumbs rapid-fire, the ubiquity of cell phones is hardly noticed.