On Free Speech: III. Craft Displacement

Remembering the Information Superhighway… next stop: Democracy!

Featured Image by Radek Kilijanek on Unsplash

Click here to read Pt II. The Speech of Free Speakers – “A Delusion of Certitude”?

On Free Speech: III. Craft Displacement

“I think what we’re learning is that, particularly when they get a choice, a lot of people decide to believe what’s more comfortable for them, even if it’s not the truth.”

– Ellis Cose

In his interview, Ellis Cose attributes a greater allotment of free speech to those with media access and financial clout, such as politicians, corporations, and individuals who may control either or both. Where something can’t be more free than “free,” let’s take his point to mean that more free speech is more opportunity, more prominence, a wider audience – “more” essentially being more accessibility. We might expect more accessibility to translate into more impact, by sheer weight of volume if not vetted credibility. To the larger consideration of free speech, Cose offers this nuance of accessibility, set against an historical standard (below) by which free speech, being free, is a great equalizer:

“Speech may be fought with speech. Falsehoods and fallacies must be exposed, not suppressed, unless there is not sufficient time to avert the evil consequences of noxious doctrine by argument and education. That is the command of the First Amendment.”

American Communications Assn. v. Douds, 339 U.S. 382 (1950)

Against this notion, Cose points particularly at the Internet and its associated media. A consideration of its global spread and interminable flow, put to use by nearly everyone – particularly by those politicians, corporations, and controlling individuals – ought to give everyone pause:

  • voices previously unheard can today access audiences previously unreachable
  • page views can be drummed up algorithmically, or simply go viral all on their own; in either circumstance…
  • the potential for rapid profit now tempts an irresponsible publisher toward more revenue-generating click-bait; thus…
  • nothing less than spectacle, vitriol, or fill-in-the-blank will do
  • all this occurs, as anything must, in the zeitgeist, which these days is decidedly emotional and specifically angry

Let me digress a moment and open a new window on financial incentives, algorithmic or otherwise… the flipside for publishers, media, and really any private business is that something unpopular corresponds to lost revenue. And if that’s an equal yet opposite incentive, it’s also just as mercenary. Not to be forgotten, either, is what the “free” in speech really means according to the First Amendment, specifically that government can take no action, outside a few negotiated exceptions, to deny people their public voice. This defines the boundaries of the freedom to speak in the public sphere. And what defines the boundaries of the public sphere? Questions, questions.

Meanwhile, in the private sphere, there are laws apart from the First Amendment that prohibit injurious and obscene forms of expression. Although, as in the public sphere, what’s injurious and obscene these days is up for negotiation, whether in court or, more and more commonly, pretty much anywhere and everywhere. And meanwhile, what even counts as “the private sphere”? Evidently, that’s subject to debate. Lawsuits, lawsuits. Oh, what a tangled web we’ve woven (… and, incidentally, it’s Freund’s book that provides the subtitles for Part I and Part II of this series).

In any case, despite a somewhat different standard, we still find within this wider scope of the rule of law a context for understanding the accountability of private media and publishing companies. Maybe, being as market-driven as anything else, we could recast the incentive to proffer appropriate free speech more wryly as “profit speech” – nothing less than the popular, trendy flavour-of-the-month will do. I say maybe because not every company has a stellar record of accountability, which is a topic for another day but does implicate all that access and clout. On that score, since some private corporations and individuals have been known to exert influence on politicians, we may also question how this conflation of public and private spheres affects free speech in either one.

For that, let’s go back to the Internet, which has massively amplified and accelerated all that access and clout, on top of the slew of details already mentioned. Engineered for uncomplicated access, rapid dissemination, unprecedented reach, and ubiquitous spread, the worldwide web has since become a relatively lawless e-zone, still a little beyond government regulatory control and lying in the hands of various… privateers? who are open for business. Once upon a time, a privateer was commissioned by a ruling power; today we might argue the reverse or, if we simply eliminate the state, as the Internet has arguably done, we could say that privateers are the ruling power. I’m not so sure they ever really weren’t.

Whatever… we can argue the Internet’s historical precedents. There’s even one vestige that remains a notable rival: talk radio may not have the Internet’s on-line profusion but, spanning decades and geography, it was making waves as the local toxic underbelly long before on-line Comments ever floated to the surface. Talk radio is fully immersed in access and clout as well as, in recent times, free speech – and, while we’re on the topic, how about complicity? Move over, financial incentives, now there’s something meaner: legal exposure. That said, if you think talk radio’s strictly a conservative platform, let me assure you the most dominant station in these parts has long been a news-talk format that is today unabashedly liberal.

But again, I digress, again. Where was I?

Of course, the Internet. Free speech, other people. Curiously, what Cose offers about the Internet in a free speech context is all the more ironic since, once upon a time, the Internet was the great democratic equalizer. I suppose it still is, or else it can be, though like any tool, its effective usage takes some bit of skill.

Who ever thought, driving the ol’ information superhighway, we’d need winter tires? The worldwide web comes at great cost of responsibility as well as consequence, not only for the unprepared but for everyone alongside them.
Image by pasja1000 from Pixabay

For its accessibility and scope, on-line media amplifies and accelerates all our published speech as we’ve never known before: be it truth or falsehood, correct or misleading, accurate or mistaken, it’s all there, instagrammatically. And, apparently, we haven’t really been growing into the role of mastercrafting this tool, even while learning on the job – not quite building the plane while flying it, to use the stale phrase.

Maybe that’s because what this tool we call the Internet imparts, as much as anything, is disembodiment. If there can be a divide between free speakers and the audience in the same room, what on earth could we expect in a chat room? Yet this consequence, like any other, is there to be understood and reckoned as we will, or as we won’t. Hard to blame the tool when it’s the craft.

Click here to read Pt IV. Grounding Movement Control

The Conceit of A.I.


From a technological perspective, I can offer only a lay opinion of A.I. But check out some opinions more technical than mine, too:

MIT: The Seven Deadly Sins

Edge: The Myth of AI

The Guardian: The Discourse is Unhinged

NYT: John Markoff

Futurism: You Have No Idea…

IEET: Is AI a Myth?

Open Mind: Provably Beneficial Artificial Intelligence

Medium: A Critical Reading List

AdWeek: Burger King


The Conceit of A.I.

Time and energy… the one infinite, the other hardly so. The one an abstraction, the other all too real. But while time ticks ceaselessly onward, energy forever needs replenishing. We assign arbitrary limits to time, by calendar, by clock, and as the saying goes, there’s only so much time in a day. Energy, too, we can measure, yet often we equate both time and energy monetarily, if not by actual dollars and cents: we can pay attention, spend a day at the beach, save energy – the less you burn, the more you earn! And certainly, as with money, most people would agree that we just never seem to have enough time or energy.

Another way to frame time and energy is as an investment. We might invest our time and energy learning to be literate, or proficient with various tools, or with some device that requires skilful application. Everything, from a keyboard or a forklift or a tennis racquet to a paring knife or an elevator or a golf club to a cell phone or a self-serve kiosk or the new TV remote, everything takes some knowledge and practice. By that measure, there are all kinds of literacies – we might even say, one of every kind. But no matter what it is, or how long it takes to master, or why we’d even bother, we shall reap what we sow, which is an investment analogy I bet nobody expected.

Technology returns efficiency. In fact, like nothing else, it excels at creating surplus time and energy, enabling us to devote ourselves to other things and improve whichever so-called literacies we choose. The corollary, of course, is that some literacies fade as technology advances. Does this matter, with so many diverse interests and only so much time and energy to invest? How many of us even try everything we encounter, much less master it? Besides, for every technological advancement we face, a whole new batch of things must now be learned. So, for all that technological advancement aids our learning and creates surplus time and energy, we as learners remain the central determinant as to how to use our time and energy.

Enter the classroom, then: what’s lately been called Artificial Intelligence (A.I.). Of course, A.I. has received plenty of enthusiastic attention, concern, and critique as a developing technological tool, for learning as well as plenty of other endeavours and industries. A lengthy consideration from The New York Times offers a useful, broad overview of A.I.: a kind of sophisticated computer programming that collates, provides, and predicts information in real time. Silicon Valley designers aim to have A.I. work at least somewhat independently of its users, so they have stepped away from older, familiar input-output modes – what’s called symbolic A.I., a “top down” approach that demands tediously lengthy entry of preparatory rules and data. Instead, they are engineering “from the ground up,” building inside the computer a neural network that mimics a brain – albeit a very small one, rivalling a mouse’s – that can teach itself via trial-and-error to detect and assess patterns found in the data that its computer receives. At these highest echelons, the advancement of A.I. is awe-inspiring.
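
To make that contrast concrete, here’s a minimal sketch of my own – a toy illustration, mind, not anything from the Times piece or any real product – putting the two approaches side by side: a hand-typed rule standing in for “top down” symbolic A.I., and a single artificial neuron that starts with random settings and nudges itself by trial and error until it reproduces the same behaviour.

    # Toy contrast (my own sketch, not from any real system):
    # "top down" symbolic A.I. = the programmer types in the rule;
    # "ground up" = a tiny neuron discovers the rule by trial and error.
    import random

    # Symbolic, top-down: the rule for logical OR, entered by hand.
    def symbolic_or(a, b):
        return 1 if (a == 1 or b == 1) else 0

    # Ground-up: one artificial neuron with adjustable weights.
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR table

    for _ in range(100):                       # repeated trial and error
        for (a, b), target in examples:
            guess = 1 if weights[0] * a + weights[1] * b + bias > 0 else 0
            error = target - guess             # 0 if right, +/-1 if wrong
            weights[0] += 0.1 * error * a      # nudge toward the right answer
            weights[1] += 0.1 * error * b
            bias += 0.1 * error

    for (a, b), _ in examples:
        guess = 1 if weights[0] * a + weights[1] * b + bias > 0 else 0
        print(f"OR({a},{b}): learned {guess}, hand-typed rule says {symbolic_or(a, b)}")

The arithmetic isn’t the point; the point is that nobody typed in the OR rule the second time. The neuron backed into it from examples which, scaled up by a few million neurons, is about all “from the ground up” really means.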

Now for the polemic.

In the field of education, where I’m trained and most familiar, nothing about A.I. is nearly so clear. Typically, I’ve found classroom A.I. described cursorily, by function or task:

  • A.I. facilitates individualized learning
  • A.I. furnishes helpful feedback
  • A.I. monitors student progress
  • A.I. highlights possible areas of concern
  • A.I. lightens the marking load

On it goes… A.I., the panacea. Okay, then, so in a classroom, how should we picture what is meant by “A.I.”?

Mr. Dukane
“Anybody remember Mr. Dukane?”

Specific examples of classroom A.I. are hard to come by, beyond top ten lists and other generalized descriptions. I remember those library film-strip projectors we used in Grade 1, with the tape decks attached. Pressing “Play,” “Stop,” and “Eject” was easy enough for my six-year-old fingers, thanks to engineers who designed the machines and producers who made the film strips, even if, the odd time, the librarian had to load them for us. (At home, in a similar vein, how many parents ruefully if necessarily consider the T.V. a “babysitter” – although, granted, these days it’s probably an iPad. But personification does not make for intelligence… does it? Didn’t we all understand that Max Headroom was just a cartoon?) There’s a trivia game app with the hand-held clickers, and there’s an on-line plagiarism detector – both, apparently, are A.I. For years, I had a Smart Board, although I think that kind of branding is just so much capitalism, and harshly cynical at that. Next to the Smart Board was a whiteboard, and I used to wonder if, someday, they’d develop some windshield wiper thing to clean it. I even wondered if someday I wouldn’t use it anymore. For the record, I like whiteboards. I use them, happily, all the time.

Look, I can appreciate this “ground-up” concept as it applies to e-machines. (I taught English for sixteen years, so metaphor’s my thing.) But intelligence? Anyway, there seems to be no clear definition of classroom A.I., and far from seeming intelligent to me, none of what’s out there even seems particularly dim-witted so much as pre-programmed. As far as I can tell, so-called classroom A.I. is stuff that’s been with us all along, no different these days from any tool we already know and use. So how is “classroom A.I.” A.I. of any kind, symbolic or otherwise?

"... so whose the Sub?"
“Hey, so who’s the Sub today?”

Symbolic A.I., at least the basis of it, seems not too dissimilar to what I remember about computers and even some video arcade favourites from back in the day. Granted, integrated circuits and micro-processors are a tad smaller and faster these days compared to, say, 1982 (… technology benefitting from its own surplus?). Perhaps more germane to this issue is the learning curve, the literacy, demanded of something “intelligent.” Apparently, a robot vacuum learns the room that it cleans which, as I gather, makes it the “ground-up” kind of A.I. rather than the symbolic kind. Now, for all the respect and awe I can muster for a vacuum cleaner—and setting all “ground-up” puns aside—I still expect slightly less from this robot than passing the written analysis section of the final exam. (I taught English for sixteen years, so written analysis is my thing.) It seems to me that a given tool can be no more effective than its engineering and usage, and for that, isn’t A.I.’s “intelligence” more indicative of its creator’s ingenuity or its user’s aptitude than of itself or its pre-programmed attributes?
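
For what it’s worth, here’s roughly what that room-learning amounts to, again sketched as a toy of my own devising – no actual vacuum firmware consulted, and real robots use far fancier mapping – just a grid, a bump sensor, and a growing table of which cells turned out to be open:

    # Toy "robot vacuum" (my own caricature, not any vendor's firmware):
    # it "learns" the room by recording which grid cells it bumps into.
    import random

    ROOM = [                  # 1 = wall or furniture, 0 = open floor
        [1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 1, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 1, 1],
    ]

    def explore(room, start=(1, 1), steps=200):
        """Random-walk the room, building a map from bump-sensor readings."""
        learned = {}                          # (row, col) -> "open" or "blocked"
        r, c = start
        for _ in range(steps):
            dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            nr, nc = r + dr, c + dc
            if room[nr][nc] == 1:             # bump sensor fires
                learned[(nr, nc)] = "blocked"
            else:
                learned[(nr, nc)] = "open"
                r, c = nr, nc                 # roll into the open cell
        return learned

    mapped = explore(ROOM)
    print(f"cells mapped: {len(mapped)} of {len(ROOM) * len(ROOM[0])}")

Bookkeeping over sensor readings, in other words – dutiful, useful, and about as far from written analysis as a dustpan is from an essay.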

Press Any Key to Begin

By the same token, could proponents of classroom A.I. maybe just ease off a bit from their retcon appropriation of language? I appreciate getting caught up in the excitement, the hype—I mean, it’s 21st century mania out there, candy floss and roller coasters—but that doesn’t mean you can just go about proclaiming things as “A.I.” or, worse, proclaiming A.I. to be some burgeoning technological wonder of classrooms nationwide when… it’s really not. Current classroom A.I. is simply every device that has always already existed in classrooms for decades—that could include living, breathing teachers, if the list of functions above is any guide. Okay then, hey! just for fun: if classroom tools can include teachers who live and breathe, by the same turn let’s be more inclusive and call A.I. a “substitute teacher.”

A similarly common tendency I’ve noted in descriptions of classroom A.I. is to use words like “data,” “algorithm,” and “training” as anthropomorphic proxies for experience, decision-making, and judgment – i.e. for learning. Such connotations are applied as simply as we might borrow a shirt from our sibling’s closet, as liberally as we might shake salt on fries, and they appeal to the like-minded, who share the same excitement. To my mind, judicious intelligence is never so cavalier, and it doesn’t take much horse-sense to know that too much salt is bad for you, or that your sibling might be pissed off to find their shirt missing. As for actually manufacturing some kind of machine-based intelligence, well… it sure is easy to name something “Artificial Intelligence”; bestowing actual intelligence by simply declaring it is another matter entirely! The kind of help I had back in the day, as I see it, was something I just now decided to call “S.I.”: sentient intelligence.

Facetiousness aside, I grant that probably every teacher has spent some time flying on auto-pilot, and I’ve definitely had days that left me feeling like an android. And fair enough: something new shakes things up and may require some basic literacy. There’s no proper use of any tool, device, or interface without some learned practical foundation: pencil and paper, protractor, chalk slates, the abacus. How about books or, by ultimate extension, written language itself? These are all teaching tools, and each has a learning curve. So is A.I. a tool, a device, an interface? All of the above? I draw the line when it comes to classroom tools that don’t coach the basketball team or have kids of their own to pick up by 5pm: the moniker “A.I.” seems more than a bit generous. And hey, one more thing, on that note: wouldn’t a truer account of A.I., the tool, honour its overt yet seemingly ignored tag, “artificial”? R2-D2 and C-3PO may be the droids we’re looking for, but they’re still just science fiction.

Fantastic tales aside, technological advancements in what is called the field of A.I. have yielded, and will continue to yield, useful, efficient innovation. And now I mean real Silicon Valley A.I., not retcon classroom A.I. But even so, to what ends? What specifically is this-or-that A.I. for? In a word: why? We’re headed down an ontological road, and even though people can’t agree on whether we can truly consider our selves, we’re proceeding with A.I. in the eventual belief that it can. “It will,” some say. Not likely, I suspect. Not ever. But even if I’m wrong, why would anyone hope that A.I. could think for itself?

Artificial Intelligence
10. Be “A.I.”    20. Go to 10     Run

Hasn’t Heidegger presented us with enough of a challenge as it is? Speaking of time and energy, let’s talk opportunity costs. Far greater minds than mine have lamented our ominous embrace of technology. Isn’t the time and energy spent on A.I.—every second, every joule of it—a slap in the face to our young people and the investment that could have been made in them? It’s ironic that we teach them to develop the very technology that will eventually wash them away.

Except that it won’t. I may be out on a limb to say so, but I suspect we will sooner fall prey to the Twitterverse and screen-worship than A.I. will fulfil some sentient Rise of the Machines. The Borg make good villains, and even as I watch a lobby full of Senior Band students in Italy, staring at their iPhones, and fear assimilation and, yes, worry for humanity… I reconsider because the Borg are still just a metaphor (… sixteen years, remember?). Anyway, as a teacher I am more driven to reach my students with my own message than I am to snatch that blasted iPhone from their hands, much as I might like to. On the other hand, faced with a dystopian onslaught of Replicants, Westworld Gunslingers, and Decepticons, would we not find ourselves merely quivering under the bed, frantically reading up on Isaac Asimov while awaiting the arrival of Iron Man? Even Luke Skywalker proved susceptible to the Dark Side’s tempting allure of Mechanized Humanity; what possible response could we expect from a mere IB cohort of inquiry-based Grade 12 critical thinkers and problem-solvers?

The Borg
“Resistance is futile.”

At the very least, any interruption of learners by teachers with some classroom tool ought to be…

  1. preceded by a primer on its literacy,
    • i.e. explaining how to use that particular tool in
  2. a meaningful context or future setting,
    • i.e. explaining why to use that particular tool, before anybody
  3. begins rehearsing and/or mastering that particular tool,
    • i.e. successfully executing whatever it does

If technology helps create surplus time and energy, then how and why and what had better be considered because we only have so much time and energy at our disposal. The what, the how, and the why are hardly new concepts, but they aren’t always fully considered or appreciated either. They are, however, a means of helpful focusing that few lessons should be without.

As a teacher, sure, I tend to think about the future. But that means spending time and paying attention to what we’re up to, here and now, in the present. To that end, I have an interest in protecting words like “learning” and “intelligence” from ambiguity and overuse. For all the 21st century hearts thumping over the Cinderella transformation of ENIAC-era programmable computation into A.I., and the I.o.T., and whatever lies beyond, our meagre acknowledgement of the ugly step-sister, artificiality, is foreboding. Mimicry is inauthentic, but it is not without consequence. Let’s take care that the tools we create as means don’t replace the ends we originally had in mind because, if any one human trait can match the trumpeting of technology’s sky-high potential—for me at least, not sure for you—I’d say it’s hubris.

Another fantastic tale comes to mind: Frankenstein’s monster. Technological advancement can be as wonderful as it is horrifying, though probably it’s usually somewhere in between. However it’s characterised or defined by those who create it, it will be realised in the end by those who use it, if not by those who face it. For most people in 1982, the concept of a cell phone was hardly imagined. Four decades later, faces down and thumbs rapid-fire, the ubiquity of cell phones is hardly noticed.

Umm, This.

[Originally published June 11, 2017]

So, it’s interesting, listening to people talk these days, quite frankly, in terms of their words, their language, their speech. I have an issue with what everyone’s saying – not like everyone everyone but, you know, it’s just their actual words when they talk about complex issues and such, or like politics, what with the whole Trump thing, you know, that Russia probe and the Mueller investigation and everything that goes with that. I’m also a bit of a news hound, and that’s really where I started noticing this on-air style of speeching, of making it sound thoughtful and taking them seriously.

And it’s so much out there, like an epidemic or something, which is interesting, which speaks to on-line streaming and TV news, talk radio, and pretty much the whole 24-hour news cycle. I was a high school English teacher for sixteen years, and I also started noticing all this, you know, frankly, during class discussions, too. And there was me, like guilty as anyone.

Here’s the thing, though, because I guess substance will always be up for debate, but that’s just it – it’s so wide-ranging that it’s like people have no idea they’re even doing it, which is interesting. It’s almost like it’s the new normal, which really begs the question – are people getting dumber? Is education failing us? In terms of intelligent debate, that will always be something that probably might be true or false. And let’s have those conversations!

But in terms of intelligible debate, it’s interesting because, when I listen to how people are talking, it gets really interesting because when I listen what they actually say, it’s like they’re making it all up on the spot in the moment as they go, so it’s just that that makes me not as sure it’s intelligent as it’s less intelligible. But it’s all in a sober tone, and they’re just expressing their opinion, which is democracy.

And that’s the thing – if you challenge anybody with all what I’m saying, clarity-wise, it’s interesting, they’ll get all defensive and whatnot, like it’s a personal attack that you’re calling them stupid or whatever, like you’re some kind of Grammar Jedi.

And, I mean, I get that. So that’s where I think people don’t really get it because I totally get where they’re coming from.

Seriously, who would want to be called like not intelligent or anything all like that, whatever, especially if we’re trying to discuss serious world issues like the whole Russia thing that’s been happening or the environment or all the issues in China and the Middle East? Or terrorism and all? I mean, if you look at all that’s happening in the world right now, but you’re going to get that detailed of the way someone talks, maybe you should look in the mirror.

And I mean, SNL did the most amazingggggggggg job with all this, back in the day, with Cecily Strong on Weekend Update as The Girl You Wish You Hadn’t Started A Conversation With At A Party. Comedy-wise, she even like makes a point, but basically, she’s furthering on intelligence, except I’m talking about intelligibility. But still, if you haven’t seen it, what can I tell you? Your missing out, SO FUNNY. She. Is. Amazing.

And that’s the other thing, and this one’s especially interesting, is just how there’s just SO MUCH out there, what with Google and the Internet, and Wikipedia and all, so who could possibly be expected to know like every single detail about all the different political things or the economy and all the stuff that’s out there? And it’s even more with speaking because pretty much most people aren’t like writing a book or something. (W’ll, and that’s just it – nobody speaks the way they write, so… )

Anyway, so yeah, no, it’s interesting. At the end of the day, first and foremost, one of the most interesting things is that everybody deserves to have a say because that’s democracy. And I think that gets really interesting. But the world gets so serious, probs I just need to sit down. See the bright side, like jokey headlines from newsleader, Buzzfeed, or 2017’s “Comey Bingo” from FiveThirtyEight. Gamify, people! News it up! Nothing but love for the national media outlet that helps gets you wasted. Or the one about the viral tweet, for an audience intimately familiar with pop culture? News should be taken seriously, and the world faces serious aspects, for sure. But the thing is, work hard but party harder! I mean, we’re only here for a good time, not a long time!

And it’s interesting ‘cuz people seem to require more frequent, more intense, more repeated engagement, to spice up their attention spans. There’s some good drinking games, too, on that, because politicians! I know, right? But not like drunk drunk, just like happy drunk, you know? Not sure if all this counts as it means we’re getting dumber, per se, but it’s just interesting.

So, yeah, it’s interesting because we’ve come such a long way, and history fought for our freedom and everything, so I just really think going forward we should just really appreciate that, and all, you know?