The Conceit of A.I.


From a technological perspective, I can offer only a lay opinion of A.I. But check out some more technical opinions than mine, too:

MIT: The Seven Deadly Sins

Edge: The Myth of AI

The Guardian: The Discourse is Unhinged

NYT: John Markoff

Futurism: You Have No Idea…

IEET: Is AI a Myth?

Open Mind: Provably Beneficial Artificial Intelligence

Medium: A Critical Reading List

AdWeek: Burger King


The Conceit of A.I.

Time and energy… the one infinite, the other hardly so. The one an abstraction, the other all too real. But while time ticks ceaselessly onward, energy forever needs replenishing. We assign arbitrary limits to time, by calendar, by clock, and as the saying goes, there’s only so much time in a day. Energy, too, we can measure, yet often we equate both time and energy monetarily, if not by actual dollars and cents: we can pay attention, spend a day at the beach, save energy – the less you burn, the more you earn! And certainly, as with money, most people would agree that we just never seem to have enough time or energy.

Another way to frame time and energy is as an investment. We might invest our time and energy learning to be literate, or proficient with various tools, or with some device that requires skilful application. Everything, from a keyboard or a forklift or a tennis racquet to a paring knife or an elevator or a golf club to a cell phone or a self-serve kiosk or the new TV remote, everything takes some knowledge and practice. By that measure, there are all kinds of literacies – we might even say, one of every kind. But no matter what it is, or how long it takes to master, or why we’d even bother, we shall reap what we sow, which is an investment analogy I bet nobody expected.

Technology returns efficiency. In fact, like nothing else, it excels at creating surplus time and energy, enabling us to devote ourselves to other things and improve whichever so-called literacies we choose. The corollary, of course, is that some literacies fade as technology advances. Does this matter, with so many diverse interests and only so much time and energy to invest? How many of us even try everything we encounter, much less master it? Besides, for every technological advancement we face, a whole new batch of things must now be learned. So, for all that technological advancement aids our learning and creates surplus time and energy, we as learners remain the central determinant of how that time and energy get used.

Enter, into the classroom, what’s lately been called Artificial Intelligence (A.I.). Of course, A.I. has received plenty of enthusiasm, attention, concern, and critique as a developing technological tool, for learning as well as for plenty of other endeavours and industries. A lengthy consideration from The New York Times offers a useful, broad overview of A.I.: a kind of sophisticated computer programming that collates, provides, and predicts information in real time. Silicon Valley designers aim to have A.I. work at least somewhat independently of its users, so they have stepped away from older, familiar input-output modes, what’s called symbolic A.I., a “top down” approach that demands tediously lengthy entry of preparatory rules and data. Instead, they are engineering “from the ground up,” building inside the computer a neural network that mimics a brain – albeit a very small one, rivalling a mouse’s – that can teach itself via trial-and-error to detect and assess patterns found in the data that its computer receives. At these highest echelons, the advancement of A.I. is awe-inspiring.
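For the technically curious, here’s a toy sketch of that “ground-up,” trial-and-error idea – my own illustration in Python, I should stress, not anything lifted from the Times piece or from Silicon Valley, and every name and number in it is invented for the occasion. A single artificial “neuron” starts from random settings and nudges them, guess by guess, until it reliably detects a simple pattern (here, “both inputs on”):

```python
import random

# Training examples for one simple pattern: "both inputs on" (logical AND).
# No rules are entered in advance -- only examples and their answers.
examples = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

# One artificial "neuron": two weights and a bias, started as random guesses.
w1 = random.uniform(-1, 1)
w2 = random.uniform(-1, 1)
bias = random.uniform(-1, 1)
step = 0.1  # how far each wrong trial nudges the settings

# Trial and error: guess, compare against the answer, nudge, repeat.
for _ in range(100):
    for (x1, x2), answer in examples:
        guess = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = answer - guess       # 0 when right; +1 or -1 when wrong
        w1 += step * error * x1      # adjust toward the correct answer
        w2 += step * error * x2
        bias += step * error

# After enough trials, the neuron has "taught itself" the pattern.
for (x1, x2), answer in examples:
    guess = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), "->", guess, "expected", answer)
```

That a few hundred nudges can settle on a rule nobody typed in is genuinely clever engineering; real neural networks stack millions of such units. Whether any of it deserves the word “intelligence” is, of course, the question at hand.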

Now for the polemic.

In the field of education, where I’m trained and most familiar, nothing about A.I. is nearly so clear. Typically, I’ve found classroom A.I. described cursorily, by function or task:

  • A.I. facilitates individualized learning
  • A.I. furnishes helpful feedback
  • A.I. monitors student progress
  • A.I. highlights possible areas of concern
  • A.I. lightens the marking load

On it goes… A.I., the panacea. Okay, then, so in a classroom, how should we picture what is meant by “A.I.”?

“Anybody remember Mr. Dukane?”

Specific examples of classroom A.I. are hard to come by, beyond top-ten lists and other generalized descriptions. I remember those library film-strip projectors we used in Grade 1, with the tape decks attached. Pressing “Play,” “Stop,” and “Eject” was easy enough for my six-year-old fingers, thanks to engineers who designed the machines and producers who made the film strips, even if, the odd time, the librarian had to load them for us. (At home, in a similar vein, how many parents ruefully, if necessarily, consider the T.V. a “babysitter”? Although, granted, these days it’s probably an iPad. But personification does not make for intelligence… does it? Didn’t we all understand that Max Headroom was just a cartoon?) There’s a trivia game app with the hand-held clickers, and there’s an on-line plagiarism detector – both, apparently, are A.I. For years, I had a Smart Board, although I think that kind of branding is just so much capitalism, and harshly cynical at that. Next to the Smart Board was a whiteboard, and I used to wonder if, someday, they’d develop some windshield-wiper thing to clean it. I even wondered if, someday, I wouldn’t use it anymore. For the record, I like whiteboards. I use them, happily, all the time.

Look, I can appreciate this “ground-up” concept as it applies to e-machines. (I taught English for sixteen years, so metaphor’s my thing.) But intelligence? Anyway, there seems no clear definition of classroom A.I., and far from seeming intelligent to me, none of what’s out there even seems particularly dim-witted so much as pre-programmed. As far as I can tell, so-called classroom A.I. is stuff that’s been with us all along, no different these days than any tool we already know and use. So how is “classroom A.I.” A.I. of any kind, symbolic or otherwise?

"... so whose the Sub?"
“Hey, so who’s the Sub today?”

Symbolic A.I., at least the basis of it, seems not too dissimilar to what I remember about computers and even some video arcade favourites from back in the day. Granted, integrated circuits and microprocessors are a tad smaller and faster these days compared to, say, 1982 (… technology benefitting from its own surplus?). Perhaps more germane to this issue is the learning curve, the literacy, demanded of something “intelligent.” Apparently, a robot vacuum learns the room that it cleans, which, as I gather, is the “ground-up” kind of A.I. Now, for all the respect and awe I can muster for a vacuum cleaner—and setting all “ground-up” puns aside—I still expect slightly less from this robot than passing the written-analysis section of the final exam. (I taught English for sixteen years, so written analysis is my thing.) It seems to me that a given tool can be no more effective than its engineering and usage, and for that, isn’t A.I.’s “intelligence” more indicative of its creator’s ingenuity or its user’s aptitude than of itself or its pre-programmed attributes?

Press Any Key to Begin

By the same token, could proponents of classroom A.I. maybe just ease off a bit from their retcon appropriation of language? I appreciate getting caught up in the excitement, the hype—I mean, it’s 21st century mania out there, candy floss and roller coasters—but that doesn’t mean you can just go about proclaiming things as “A.I.” or, worse, proclaiming A.I. to be some burgeoning technological wonder of classrooms nationwide when… it’s really not. Current classroom A.I. is simply every device that has always already existed in classrooms for decades—that could include living, breathing teachers, if the list of functions above is any guide. Okay then, hey, just for fun: if classroom tools can include teachers who live and breathe, by the same turn let’s be more inclusive and call A.I. a “substitute teacher.”

Another similarly common tendency I’ve noted in descriptions of classroom A.I. is to use words like “data,” “algorithm,” and “training” as anthropomorphic proxies for experience, decision-making, and judgment, i.e. for learning. Such connotations are applied as simply as we might borrow a shirt from our sibling’s closet, as liberally as we might shake salt on fries, and they appeal to the like-minded, who share the same excitement. To my mind, judicious intelligence is never so cavalier, and it doesn’t take much horse-sense to know that too much salt is bad for you, or that your sibling might be pissed off after they find their shirt missing. As for actually manufacturing some kind of machine-based intelligence, well… it sure is easy to name something “Artificial Intelligence,” but naming it hardly bestows “intelligence” upon it! The kind of help I had back in the day, as I see it, was something I just now decided to call “S.I.”: sentient intelligence.

Facetiousness aside, I grant that probably every teacher has spent some time flying on auto-pilot, and I’ve definitely had days that left me feeling like an android. And fair enough: something new shakes things up and may require some basic literacy. There’s no proper use of any tool, device, or interface without some learned practical foundation: pencil and paper, protractor, chalk slates, the abacus. How about books or, by ultimate extension, written language itself? These are all teaching tools, and each has a learning curve. So is A.I. a tool, a device, an interface? All of the above? I draw the line when it comes to classroom tools that don’t coach the basketball team or have kids of their own to pick up by 5pm: the moniker “A.I.” seems more than a bit generous. And hey, one more thing, on that note: wouldn’t a truer account of A.I., the tool, honour its overt yet seemingly ignored tag, “artificial”? R2-D2 and C-3PO may be the droids we’re looking for, but they’re still just science fiction.

Fantastic tales aside, technological advancements in what is called the field of A.I. have yielded, and will continue to yield, useful, efficient innovation. And now I mean real Silicon Valley A.I., not retcon classroom A.I. But even so, to what ends? What specifically is this-or-that A.I. for? In a word: why? We’re headed down an ontological road, and even though people can’t agree on whether we can truly consider our own selves, we’re proceeding with A.I. in the belief that, eventually, it can. “It will,” some say. Not likely, I suspect. Not ever. But even if I’m wrong, why would anyone hope that A.I. could think for itself?

10 BE “A.I.”   20 GOTO 10   RUN

Hasn’t Heidegger presented us with enough of a challenge as it is? Speaking of time and energy, let’s talk opportunity costs. Far greater minds than mine have lamented our ominous embrace of technology. Isn’t the time and energy spent on A.I.—every second, every joule of it—a slap in the face to our young people and to the investment that could have been made in them? It’s ironic that we teach them to develop the very technology that will eventually wash them away.

Except that it won’t. I may be out on a limb to say so, but I suspect we will sooner fall prey to the Twitterverse and screen-worship than A.I. will fulfil some sentient Rise of the Machines. The Borg make good villains, and even as I watch a lobby full of Senior Band students in Italy, staring at their iPhones, and fear assimilation and, yes, worry for humanity… even then I reconsider, because the Borg are still just a metaphor (… sixteen years, remember?). As a teacher, I am more driven to reach my students with my own message than I am to snatch that blasted iPhone from their hands, much as I might like to. On the other hand, faced with a dystopian onslaught of Replicants, Westworld Gunslingers, and Decepticons, would we not find ourselves merely quivering under the bed, frantically reading up on Isaac Asimov while awaiting the arrival of Iron Man? Even Luke Skywalker proved susceptible to the Dark Side’s tempting allure of Mechanized Humanity; what possible response could we expect from a mere IB cohort of inquiry-based Grade 12 critical thinkers and problem-solvers?

“Resistance is futile.”

At the very least, any interruption of learners by teachers with some classroom tool ought to be…

  1. preceded by a primer on its literacy,
    • i.e. explaining how to use that particular tool in…
  2. a meaningful context or future setting,
    • i.e. explaining why to use that particular tool, before anybody…
  3. begins rehearsing and/or mastering that particular tool,
    • i.e. successfully executing whatever it does

If technology helps create surplus time and energy, then how and why and what had better be considered because we only have so much time and energy at our disposal. The what, the how, and the why are hardly new concepts, but they aren’t always fully considered or appreciated either. They are, however, a means of helpful focusing that few lessons should be without.

As a teacher, sure, I tend to think about the future. But that means spending time and paying attention to what we’re up to, here and now, in the present. To that end, I have an interest in protecting words like “learning” and “intelligence” from ambiguity and overuse. For all the 21st century hearts thumping over the Cinderella transformation of ENIAC programmable computation into A.I., and the I.o.T., and whatever lies beyond… for all that, our meagre acknowledgement of the ugly step-sister, artificiality, is foreboding. Mimicry may be inauthentic, but it is not without consequence. Let’s take care that the tools we create as means don’t replace the ends we originally had in mind because, if any one human trait can match the trumpeting of technology’s sky-high potential—for me at least, not sure for you—I’d say it’s hubris.

Another fantastic tale comes to mind: Frankenstein’s monster. Technological advancement can be as wonderful as it is horrifying, though usually it lands somewhere in between. However it’s characterised or defined by those who create it, though, it will be realised in the end by those who use it, if not by those who face it. For most people in 1982, the concept of a cell phone was hardly imagined. Four decades later, faces down and thumbs rapid-fire, the ubiquity of cell phones is hardly noticed.

I May Be Wrong About This, But…

Before introducing the moral pairing of right and wrong to my students, I actually began with selfish and selfless because I believe morality has a subjective element, even in the context of religion, where we tend to decide for ourselves whether or not we believe or subscribe to a faith.

As I propose them, selfish and selfless are literal, more tangible, even quantifiable: there’s me, and there’s not me. For this reason, I conversely used right and wrong to discuss thinking and bias. For instance, we often discussed Hamlet’s invocation of thinking: “… there is nothing good or bad, but thinking makes it so” (II, ii, 249-250). Good and bad, good and evil, right and wrong… while not exactly synonymous, these different pairings do play in the same ballpark. Still, as I often said to my students about synonyms, “If they meant the same thing, we’d use the same word.” So leaving good and bad to the pet dog, and good and evil to fairy tales, I presently consider the pairing of right and wrong, by which I mean morality, as a means to reconcile Hamlet’s declaration about thinking as some kind of moral authority.

My own thinking is that we have an innate sense of right and wrong, deriving in part from empathy, our capacity to stand in someone else’s shoes and identify with that perspective – look no further than storytelling itself. Being intrinsic and relative to others, empathy suggests an emotional response and opens the door to compassion, what we sometimes call the Golden Rule. Compassion, for Martha Nussbaum, is that means of “[hooking] our imaginations to the good of others… an invaluable way of extending our ethical awareness” (pp. 13-14). Of course, the better the storytelling, the sharper the hook, and the more we can relate; with more to go on, our capacity for empathy, i.e. our compassion, rises.

Does that mean we actually will care more? Who knows! But I think the more we care about others, the more we tend to agree with them about life and living. If all this is so, broadly speaking, if our measure for right derives from empathy, then perhaps one measure for what is right is compassion.

And if we don’t care, or if we care less? After all, empathy’s no guarantee. We might just as reasonably expect to face from other people their continued self-interest, deriving from “the more intense and ambivalent emotions of… personal life” (p. 14). Emotions have “history,” Nussbaum decides (p. 175), which we remember in our day-to-day encounters. They are, in general, multifaceted, neither a “special saintly distillation” of positive nor some “dark and selfish” litany of negative, to use the words of Robert Solomon (p. 4). In fact, Solomon claims that we’re not naturally selfish to begin with, and although I disagree with that, on its face, I might accept it with qualification: our relationships can supersede our selfishness when we decide to prioritise them.

So if we accept that right and wrong are sensed not just individually but collectively, we might even anticipate where one could compel another to agree. Alongside compassion, then, to help measure right, perhaps coercion can help us to measure wrong: yes, we may care about other people, but if we care for some reason, maybe that would be why we agree with them, or assist them, or whatever. Yet maybe we’re just out to gain for ourselves. Whatever our motive, we treat other people accordingly, and it all gets variously deemed “right” or “wrong.”

I’m not suggesting morality is limited solely to the workings of compassion and coercion, but since I limited this discussion to right and wrong, I hope it’s helping illuminate why I had students begin first with what is selfish and selfless. That matters get “variously deemed,” as I’ve just put it, suggests that people seldom see any-and-all things so morally black and white as to conclude, “That is definitely wrong, and this is obviously right.” Sometimes, I suppose, but not all people always for all things.

Everybody having an opinion – mine being mine, yours being yours, as the case may be – that’s still neither here nor there to the fact that every body has an opinion. On some things, we’ll agree while, on some things, we won’t.

At issue is the degree that I’m (un)able to make personal decisions about right and wrong, the degree that I might feel conspicuous, perhaps uneasy, even cornered or fearful – and wrong – as compared to feeling assured, supported, or proud, even sanctimonious – and right. Standing alone from the crowd can be, well… lonely. What’s more, having some innate sense of right and wrong doesn’t necessarily help me act, not if I feel alone, particularly not if I feel exposed. At that point, whether from peer pressure or social custom peering over my shoulder, the moral question about right and wrong can lapse into an ethical dilemma, the moral spectacle of my right confronted by some other right: would I steal a loaf of bread to feed my starving family?

For me, morality is mediated (although not necessarily defined, as Hamlet suggests) by where one stands at that moment, by perspective, in which I include experience, education, relationships, and whatever values and beliefs one brings to the decisive moment. I’m implying what amounts to conscience as a personal measure for morality, but there’s that one more consideration that keeps intervening: Community. Other people. Besides selfish me, everybody else. Selfless not me.

Since we stand so often as members of communities, we inevitably derive some values and beliefs from those pre-eminent opinions and long-standing traditions that comprise them. Yet I hardly mean to suggest that a shared culture of community is uniform – again, few matters are so black or white. If anything, the individual beliefs that comprise shared culture – despite all that might be commonly held – are likely heterogeneous: it’s the proverbial family dinner table on election night.

Even “shared” doesn’t rule out some differentiation. Conceivably, there could be as many opinions as people possessing them. What we understand as conscience, then, isn’t limited to what “I believe” because it still may not be so easy to disregard how-many-other opinions and traditions. Hence the need for discussion – to listen, and think – for mutual understanding, in order to determine right from wrong. Morality, in that sense, is concerted self-awareness plus empathy, the realised outcome of combined inner and outer influences, as we actively and intuitively adopt measures that compare how much we care about the things we face every day.

Say we encounter someone enduring loss or pain. We still might conceivably halt our sympathies before falling too deeply into them: Don’t get too involved, you might tell yourself, you’ve got plenty of your own to deal with. Maybe cold reason deserves a reputation for callousing our decision-making but, evidently, empathy does not preclude our capacity to reason with the self. On the other hand, as inconsistent as it might seem, one could not function or decide much of anything, individually, without empathy because, without it, we would have no measure.

As we seem able to reason past our own feelings, we also wrestle echoing pangs of conscience that tug from the other side, which sometimes we call compassion or, other times, a guilt trip. Whatever we call it, clearly we hardly live like hermits, devoid of human contact and its resultant emotions. Right and wrong, in that respect, are as socially as they are individually determined.

One more example… there’s this argument that we’re desensitized by movies, video games, the TV news cycle, and so forth. For how-many-people, news coverage of a war-torn city warrants hardly more than the glance at the weather report that follows. In fact, for how-many-people, the weather matters more. Does this detachment arise from watching things once-removed, two-dimensionally, on a viewscreen? Surely, attitudes would be different if, instead of rain, it were shells and bombs falling on our heads from above.

Is it any surprise, then, as easily as we’re shocked or distressed by the immediacy of witnessing a car accident on the way to our favourite restaurant, that fifteen minutes later we might conceivably feel more annoyed that there’s no parking? Or that, fifteen minutes later again, engrossed by a menu of appetizers and entrees and desserts, we’re exasperated because they’re out of fresh calamari? Are right and wrong more individually than socially determined? Have we just become adept at prioritising them, even diverting them, by whatever is immediately crucial to individual well-being? That victim of the car accident isn’t nearly as worried about missing their dinner reservation.

Somewhat aside from all this, but not really… I partially accept the idea that we can’t control what happens, we can only control our response. By “partially” I mean that, given time, yes, we learn to reflect, plan, act, and keep calm carrying on like the greatest of t-shirts. After a while, we grow more accustomed to challenges and learn to cope. But sometimes what we encounter is so sudden, or unexpected, or shocking that we can’t contain a visceral response, no matter how accustomed or disciplined we may be. However, there is a way to take Hamlet’s remark about “thinking” that upends this entire meditation, as if to say our reaction was predisposed, even premeditated, like having a crystal ball that foresees the upcoming shock. Then we could prepare ourselves, rationalise, and control not what happens but our response to it while simply awaiting the playing-out of events.

Is Solomon wise to claim that we aren’t essentially or naturally selfish? Maybe he just travelled in kinder, gentler circles – certainly, he was greatly admired. Alas, though, poor Hamlet… troubled by jealousy, troubled by conscience, troubled by ignorance or by knowledge, troubled by anger and death. Troubled by love and honesty, troubled by trust. Troubled by religion, philosophy, troubled by existence itself. Is there a more selfish character in literature? He’s definitely more selfish than me! Or maybe… maybe Hamlet’s right, after all, and it really is all just how you look at things: good or bad, it’s really just a state of mind.

For my part, I just can’t shake the sense that Solomon’s wrong about our innate selfishness, and for that, I guess I’m my own best example. So, for being unable to accept his claim, well, I guess that one’s on me.

Teaching Open-Mindedly in the Post-Truth Era

[Originally published June 16, 2017]

A year on, and this one, sadly, only seems more relevant…


I had brilliant students, can’t say enough about them, won’t stop trying. I happened to be in touch with one alumna – as sharp a thinker as I’ve ever met, and a beautiful writer – in the wake of the 2016 U.S. election campaign and wrote the following piece in response to a question she posed:

How do you teach open-mindedly in the post-truth era?

I was pleased that she asked, doubly so at having a challenging question to consider. And I thoroughly enjoyed the chance to compose a thoughtful reply.

I’ve revised things a little, for a broader audience, but the substance remains unchanged.


How do you teach open-mindedly in the post-truth era?

Good heavens. Hmm… with respect for people’s dignity, is my most immediate response. But such a question.

Ultimately, it takes two because any kind of teaching is a relationship – better still, a rapport, listening and speaking in turn, and willingly. Listening, not just hearing. But if listening (and speaking) is interpreting, then bias is inescapable, and there need to be continual back-and-forth efforts to clarify, motivated by incentives to want to understand: that means mutual trust and respect, and both sides openly committed. So one question I’d pose back to this question pertains to the motives and incentives for teaching (or learning) ‘X’ in the first place. Maybe this question needs a scenario, to really illustrate details, but trust and respect seem generally clear enough.

Without trust and respect, Side ‘A’ is left to say, “Well, maybe some day they’ll come around to our way of thinking” (… that being a kind portrayal) and simply walks away. This, I think, is closed-minded to the degree that ‘A’ hasn’t sought to reach a thorough understanding (although maybe ‘A’ has). Whatever the case, it’s not necessarily mean-spirited that someone might say this. With the best intentions, ‘A’ might conclude that ‘B’ is just not ready for the “truth.” More broadly, I’d consider ‘A’s attitude more akin to quitting than teaching, which is to say a total failure to “teach”, as far as I define it from your question. It would differ somewhat if ‘A’ were the learner saying this vs being the teacher. In that case, we might conclude that the learner lacked motivation or confidence, for some reason, or perhaps felt alone or unsupported, but again… scenarios.

Another thing to say is, “Well, you just can’t argue with stupid,” as in we can’t even agree on facts, but saying this is certainly passing judgment on ol’ stupid over there, and perhaps also less than open-minded. To be clear… personally, I’d never say bias precludes truth, only that we’ll never escape our biases. The real trouble is having bias at all, which I think is what necessitates trust and respect because the less we have of those, the more the turmoil. I figure any person’s incentive to listen arises from whatever they think will be to their own benefit for having listened. But “benefit” you could define to infinity, and that’s where the post-truth bit is really the troublesome bit because all you have is to trust the other person’s interpretation, and they yours, or else not. The more ‘truth’ gets tailored or personalised, the more quickly we run out of things to talk about.

Yeah, I see “post-truth” as “anti-trust,” and that’s a powderkeg, the most ominous outcome to arise of late. People need incentives to listen, but if treating them with dignity and respect isn’t reaching them, then a positive relationship with me wasn’t likely what they wanted to begin with. That’s telling of the one side, if not both sides, which in your question means ‘the teacher’ and ‘the learner’. At the same time, it’s harder to say in my experience that students have no incentives to listen or that, on account of some broader post-truth culture, they don’t trust teachers – that might be changing, who knows, but I hope not.

But I’m leaving some of your question behind, and I don’t want to lose sight of where it’s directed more towards the person doing the teaching (you asked, how do you teach open-mindedly…).

That part of the question was also in my immediate reaction: respect people’s dignity. For me, when I’m teaching, if I’m to have any hope of being open-minded, I intentionally need to respect the other person’s dignity. I need to be more self-aware, on a sliding scale, as to how open- or closed-minded I’m being just now, on this-or-that issue. So even while that’s empathy, it’s also self-aware, and it’s intentional. It’s not “me” and “the other.” It’s “us.”

Me being me, though – irony intended – I’d still be the realist and say you just can never really know what that other person’s motive truly is – whether it’s a pre-truth or post-truth world doesn’t matter. But whether or not you trust the other, or they you, the real valuable skill is being able to discern flaws of reason, which is what I always said about you – you’ve always been one to see through the bullshit and get to the core of something. I’m no guru or icon, I’m just me, but as I see it just now, the zeitgeist is an emotional one more than a rational one. And there’s plenty to understand why that might be the case. And given that emotional dominance, I do think post-truth makes the world potentially far more dangerous.

The incentives people identify for themselves, these days, are pretty distinct, and that’s a hard one for unity. That saying about partisan politics – “We want the same things; we just differ how to get there” – doesn’t apply as widely right now. So, by virtue of the other side being “the other” side, neither side’s even able to be open-minded beyond themselves because trust and respect are encased in the echo chambers. More than I’ve ever known, things have become distinctly divisive – partisan politics, I mean – and I wonder how much more deeply those divisions have room to cut. Selfish incentives cut the deepest. Trust and respect guard us from deep cuts.

So, for instance, lately I find with my Dad that I listen and may not always agree, but where I don’t always agree, he’s still my Dad, and I find myself considering what he says based on his longevity – he’s seen the historic cycle, lived through history repeating itself. And I obviously trust and respect my Dad, figuring, on certain issues, that he must know more than me. On other issues, he claims to know more. On others still, I presume he does. Based on trust and respect, I give him the benefit of the doubt, through and through. One of us has to give, when we disagree, or else we’d just continually argue over every disagreement. If you want peace, someone has to give, right? Better that both share it, but eventually one must acquiesce to their “doubt” and make their own “benefit” finite, stop the cutting, compromise themselves, if they’re to see an end to the debate. Be bigger by making yourself smaller. So should I trust my Dad? I respect him because he’s given me plenty good reason after such a long time. Certainly I’m familiar with his bias, grown accustomed to it – how many times over my life have I simply taken his bias for granted? Too bad the rest of the world doesn’t get along as well as my Dad and I do.

I see it even more clearly with my daughter, now, who trusts me on account of (i) her vulnerability yet (ii) my love. The more she lives and learns alongside me, as time passes by, the more cyclically her outlook is reiterated, a bit like a self-fulfilling prophecy. Other parents have warned me that the day’s coming when she’ll become the cynical teenager, and I’m sure it will – I remember going through it, myself. But I’m older, now, and back to respecting my Dad, so at least for some relationships, the benefit of the doubt returns. My Dad preceded me, kept different circles than me, and lived through two or three very different generations than me. Even as we see the same world, we kind of don’t. So this is what I wonder about that deep cut of division, reaching the level of family – and, further than one given family, right across the entire population. Do I fact-check my Dad, or myself, or maybe both? Should I? Even if I do, neither one of us is infallible, and we’re only as trustworthy as our fact-checking proficiency.

Anyway, the child of the parent, it’s as good an example as I can think of for questioning what it means to learn with an open mind because there’s no such thing as “unbiased.” Yet love, trust, and respect are hardly what we’d call “closed-minded,” except that they are, just in a positive way. Love, trust, and respect leave no room for scepticism, wariness, and such traits as we consider acceptable in healthy proportions (for reasons about motive that I explained above).

But “teaching” with an open mind takes on so much more baggage, I think, because the teacher occupies the de facto as well as the de jure seat-of-power, at least early on – school is not a democracy (although that now seems to be changing, too). Yet teachers are no more or less trustworthy, on the face of it, than any other person. That’s probably most of all why I reduce my response to respecting human dignity: even where it’s closed-minded, for all its “positive,” it’s also a do-no-harm approach.

That jibes with everything I’ve learned about good teaching, as in good teaching ultimately reduces to strong, healthy relationships. Short-term fear vs long-term respect – it’s obvious which has more lasting positive influence. And since influencing others with our bias is inevitable, we ought to take responsibility for pursuing constructive outcomes, or else it’s all just so much gambling. At the core, something has to matter to everybody, or we’re done.