Wednesday, January 15, 2014

Keeping my promises

I promised my ENGL/DTC 355 class that I would post a model for today's assignment after it was due. Not a helpful model for the assignment, I admit, but I'm hoping to (a) explain an assignment that isn't super clear to begin with and (b) provide a model for future assignments of this type.

(As a sidenote for anyone in the class checking this page out: I try not to provide many guidelines on purpose. A "C" for our class is doing the absolute minimum to accomplish the assignment, so "A" and "B" grades come from making the assignment your creation rather than a response.)


Ha. Word to the wise: if you're ever going to do a post update like this, don't rely on your browser to keep a tab open all day.

So, the pictures to work from in the alternate version of the assignment are below:
Original. That kid looks so freaking happy.

Black and white. Classy.

Sepia. Classic.

High-contrast. Artsy.
 
Those descriptions are only sort of joking. Black & white and sepia pictures have that kind of aged, old time-y look to them. It generates an ethos of something that matters. Matters more than a low-resolution cell phone photo, anyway. It's a really simple change, but the black and white version in particular looks like it could just as easily be 1913 as 2013. The kid isn't wearing any identifying clothing, so looking at the black and white before color is a little like looking at colorized versions of old photos. This might not be what hops into everyone else's mind (a very real danger when we rely on pathos). But, those first two alterations are simple changes that have the power to add a lot of gravity to an image.
 
Which is why the high-contrast one is so interesting to me -- whether we're talking about this image or any other. Changing the contrast is something cameras come equipped to do these days, not to mention cell phones and photo-posting websites like Instagram. High-contrast filters are a great way to bring out colors, even if you lose definition in the picture. The high-contrast version really makes me focus on the kid's hands. They keep the most actual definition, and his face is a close second (the eyes and mouth are dark lines across a very light space). But, there's always been something about people using that filter when they post things to Facebook...
 
It's the apocalypse. Shows like Survivors and movies like The Book of Eli (what I was able to see of it) really abuse that high-contrast lens (or a polarized lens; I'm not 100% up to date on my editing vocabulary) to give the world a harsh, grainy look. And it works. Changing the contrast on that picture really makes me worry about that kid. The background is all faded away. I can't tell if he's smiling. The only clear portion of the picture is below the watermelon: hands that look like they haven't been washed in weeks. The watermelon also has the color and marbling of raw beef. Something terrible has happened, and this watermelon is his only solace.
 
Or, at least that would be my argument if I were using this to create a logo for post-apocalyptic survival melons.
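(Bonus for the tag-curious: these days you don't even need an editor to fake those three looks. Modern browsers will do it with the CSS filter property. A minimal sketch -- kid.jpg is a stand-in filename, and the numbers are eyeballed, so tune to taste for maximum apocalypse:)

  <!-- one source image, three moods; values are guesses, not gospel -->
  <img src="kid.jpg" alt="original">
  <img src="kid.jpg" alt="black and white" style="filter: grayscale(100%);">
  <img src="kid.jpg" alt="sepia" style="filter: sepia(100%);">
  <img src="kid.jpg" alt="survival melon" style="filter: contrast(180%) saturate(60%);">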

Tuesday, September 11, 2012

Digital Humans

Woah. I'm a digital human!
I wanted to use a picture of a bunch of finger puppets arguing over the meaning of Hamlet, but I think Keanu works just as well for digital humanists. I think I like "Humanities Computing" more than "Digital Humanities" (see Kathleen Fitzpatrick's Chronicle article, "The Humanities, Done Digitally"). Humanities Computing sounds like a field within the Humanities -- it's what we normally do, but it happens to take place on the interblog, on a DVD, on a Smart Board or through some medium that's electronic.

Weird thought: Television falls into the realm of digital humanities, but television wasn't required to be digital until just a few years ago. But I guess having an Analog Humanities would've just been silly.

Now that that's out of the way: Fitzpatrick's quick coverage and Kirschenbaum's more in-depth "What Is Digital Humanities and What's It Doing in English Departments?" (ADE Bulletin 150; both fairly recent) bring us to the same point: DH (the field, not the author) is the thing we've always done, but now we're using the power of technology to do it better.

Steve! Stop! We can get the library to PDF our sources to us! You don't have to run there!
We can data mine digital texts (who knew Plato used Justice so much), we can trace the histories and movements of authors and their works (and see where they might have bumped into other interesting folk), we can use Google Scholar to stalk other academics' work. Or we can see how many ways "Literacies Of..." gets abused. Even better, we can weep at the intellectual acclaim something like My Life as a Night Elf Priest gets. Is that better? I don't know.

More importantly, is this new? (Yes, it's a contrived question. I'm trying to answer a prompt here.) Folks like Lanham and Ohmann and Selfe and Cooper and Hawisher and others were building up the wonders of technology ages ago. Even my favorite mad inventor has been telling us how much computers can do for humanity, leading to all sorts of literature and film adaptations of his theories. But, those folks were looking forward at all the possibilities technology could bring. Current conversations seem to look at what we can do now that the potential has solidified a little more.

Woah. My brain can hold more than 250 of the greatest literary works of all time!
Which sort of returns me to that Bauerlein book I mentioned a few weeks ago. Bolter and Grusin's Remediation (2000) details some issues of new media we still haven't exactly figured out how to deal with. Media today is hypermediate (31) -- it's realistic, but it's impossible; it draws attention to the medium even while we look through the medium. But, like we talked about in class last week, it's that transparency that makes the interfaceless interface (32) so powerful. All of a sudden, when I'm composing, I'm not worried that my body paragraphs are going to get a funky alignment because I flubbed an HTML tag.

And, like others have noted before me, the prevailing abilities in the means of production dictate the kinds of things we'll produce. So long as I have a big stand mixer, I'll keep making bread. So long as I'm using this blog service, I'll keep adding pictures, links and spinning GIFs to my reading responses. I refuse to do any of those three in traditional print.

Hey, did you know that in just a few short years, you'll be able to pet your Nintendogs? Yeah, that's messed up. Technology is becoming more immediate every day. Does this mean we'll be able to assign projects where students express themselves using only textures?
This is my report on Antebellum tariff policies in the Ohio Valley.
I really hope that composition instructors recognize the responsibility that comes with this power.

When I'm reading or viewing, especially on this series of tubes, I'm not thinking about the fact that I'm scrolling through text or clicking through links. Bolter and Grusin's notion of replacement (48) explains the greatest strength and the worst flaw of electronic ultramedia (mmm, self citation, makes me feel like Kress). At any time, I can summon up a new page, new line of thought, new anything. The interface isn't exactly invisible (those of you with parents who aren't as adept at navigating the blogosphere know what I mean), but it's transparent enough that I think about it less than I think about turning a page. Especially since I can't turn a page without getting two or three at a time.

With this ease, though, comes an almost insatiable need to keep redoing what's been done with the tools we've got now. It's sort of like getting a new power tool. The world needs another Great Gatsby remake as much as my walls need more holes in them, but darn it, if I've got the power to repurpose that useless wall into a speaker-holder, then DiCaprio should get to do what he wants as well.
I take that back. I draw the line at 3D.

This "aggressive remediation" (48) at first made me think of such masterpieces as Videodrome (1983), in which we are regularly told that "the television screen is the retina of the mind's eye." Of course, Bolter and Grusin are quick to point out that in our era, virtual reality is a preferred mode because it mimics our subjective point-of-view reality (77). I'm getting a little far from composition issues here, but like Hawisher and Selfe ("The Rhetoric of Technology and the Electronic Writing Class," CCC 42:1) said in the early 90s, we need to be wary of where all this stuff can lead us. The internet may well be the new retina of the mind's eye, which might explain why writing instruction has taken such a drastic turn over the past few years (not for better or worse, just a new direction). And it might be "dangerously shortsighted" (Selfe's CCCC Chair's address, 1997) not to think about these affordances of technology, mediation, remediation, and hypermediation in the classroom, but we've got to stay a few steps back to get an idea of the broader picture.

To paraphrase George Carlin, not every Tweet deserves a citation.

Friday, September 7, 2012

As long as I made this...

One of my biggest fears about using any sort of technology more advanced than a burnt stick and a cave wall in the classroom is this: when the tech fails, how do we assess it?

So, since my own advanced technology decided to seppuku its video card all over the place on Tuesday, I decided to post the presentation and my rambling notes on this blog. Also, because I keep trying to remember an H.G. Wells quote about increased literacy and Marie Corelli, but I can't find the presentation I used it in a few years ago, I thought it would be good to keep this stuff in multiple places. That way, the next time I spill coffee directly into my laptop, it'll all work out.



John M. Slatin’s “Reading Hypertext: Order and Coherence in a New Medium” is hard to read with twenty years of hindsight. It’s like that Futurama episode where the slick, 80s businessman with boneitis gets out of the cryogenics lab and joins the Planet Express crew. We all know that his plan of building the company up with classy 80s business tactics won’t end well for the crew (we’ve seen it play out in real life), but we’re stuck watching just to see how he goes about it.


 Likewise, when Slatin opens by describing our “normal” (read: 1980s) reading style as “embarrassingly simple” (870) in contrast with new computerized texts as their own “medium for composition and thought” (Ibid), many of us probably shared the same thought: slow down there, Skippy, no need to get so excited just yet.
But then, it’s easy to be a Debbie Downer with so much experience using hypertext.

Documents with “multiple points of entry, multiple exit points, and multiple pathways” (871) seem fantastic, and I don’t use that word lightly. It’s hard to think of hypertext, especially when discussed at the end of the 80s, as divorced from some of the earliest commercial uses of it, text-based games like Adventure and Zork, or their graphical descendants Myst and L.A. Noire. Even if we leave the realm of the nerdy and just go to non-computerized hypertext books – Choose Your Own Adventure (have we left the realm of the nerdy?) – the mode had quite a bit to offer.
So, in trying to look at this as positively as possible, I’ll drive towards everyone’s favorite question: what do I do on Monday?

[Eesh, it is freaking impossible to get text on one side and an image on the other with any sort of regularity here.]
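[The underlying trick, if you're fighting the same battle, is a CSS float -- assuming your blog service doesn't strip the style out, which is a big assumption. A sketch, with a made-up filename:]

  <img src="slatin.jpg" alt="article screenshot" style="float: right; margin: 0 0 1em 1em;">
  <p>Text flows down the left side until it clears the image.</p>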


Slatin’s basic premises are still sound. Widespread digital literacy is a comparatively recent phenomenon. Hypertexts, for all their power to break the mold, still typically have a clear beginning and clear points of exit. We still have a desire to “fix” a text in space, time and understanding – we want it to stop wiggling around so much. And, hypertexts must still have some combination of predictability and chaos to be successful, and here, I’ll define success as holding a reader’s attention from an intended entrance through an intended exit.


First off, the problem of widespread digital literacy. Back when computers were so new that no one questioned the plot of Hackers, the compositional and intellectual frontiers of the digital world gave digital composition a neat aura. Slatin says that hypertext can only exist in an “online environment” (874), and although I’d point to texts of similar functionality existing off-line, we’ll run with this idea. Going hand-in-glove with the online-only readability of hypertexts, “the organization of memory in the computer and in the mind… makes [hypertext] fundamentally different from conventional text” (Ibid). True. But, over time, we’ve crafted software like OneNote, and we turn Cooper and Selfe’s wonderful, digital discussion board, full of boundless opportunity for equality and democracy, back into the oppressive notebook we tried to get away from. Although Slatin claims that the “rapidly evolving technological environment” forces hypertext authors to work under a different set of assumptions, he is assuming that this departure from the norm will stay intelligible to readers. Widespread literacy and digital literacy do as much to open the doors to new possibility as they do to normalize both sides of the screen. Our blog posts attest to this – how many of us really have the freedom (or desire) within our respective blog services to create something that doesn’t look like a long sheet of paper? Having created something that really is fundamentally different from traditional text, how many of us would find regular readers willing to develop a new literacy?


Slatin offers a challenge along with this premise. “The difficulty [in holding a reader] is compounded because hypertext systems tend to envision three different types of readers…. One function a rhetoric for hypertext will have to serve will be to provide ways of negotiating it” (875). Unfortunately, that rhetoric often mirrors the rhetoric of a traditional, static text, with link-clicking standing in for page-turning.
To keep the text intelligible, though, Slatin does note that hypertexts must have clear entrances and exits. Otherwise, we’re all like little Donnys, wandering into a theater, trying to figure out what’s going on. So, on the upside, although hypertext’s power has been subverted, we can still act more like “browsers” (in Slatin’s terminology) than “readers.” One power that a theory of hypertext grants us (as opposed to the practice of hypertext) is that of design. I would argue that hypertext is ultimately more limited in means of expression than static and non-digital text, if only out of practicality – I’m still reading hypertext from a two-dimensional screen, and my reading-program must share the fonts, templates, languages, etc. of the author’s machine.


But, it does make design easier, especially in terms of Nelson’s “non-sequential writing” (qtd. on 876). If I were a terrible person, I could write all of my notes for this presentation in Comic Sans. I could make the quotations blue, my topic sentences red, and put marching ants around things I wanted to emphasize. I could cut the second paragraph and move it to the end if I felt it was necessary. Most importantly, “non-sequential writing” and its traditional-authoring uncle, freewriting, are incredibly easy. Although the final product for a student, and for most authors, will be something resembling static text, the authoring and design process is aided by the fact that I can explore many avenues of thought in my writing and keep them all without crafting a brand new document every time. Imagine how many times Hemingway would have rewritten the final lines of The Sun Also Rises if he’d been able to just make a new document for each ending, or if he’d been able to put a script into his writing that changed the final lines every few seconds.
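(If I ever do become that terrible person, the whole scheme is a few lines of styling. A sketch with hypothetical class names -- and real marching ants need an animation, so these ants are standing still:)

  <style>
    body       { font-family: "Comic Sans MS", cursive; } /* terrible-person mode */
    blockquote { color: blue; }                           /* quotations */
    .topic     { color: red; }                            /* topic sentences */
    .shiny     { outline: 2px dashed black; }             /* ants, minus the marching */
  </style>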


The final premise and challenge I need to deal with (since I have one count against using hypertext on Monday and one in favor) is that hypertexts, like all texts, must have some combination of predictability and unpredictability to hold their readers. Because we as readers always want to get the “right” reading of a hypertext, or perhaps just a “complete” reading (how many of us bookmarked Choose Your Own Adventure books so we could go back and try every branch in the narrative?), hypertext creates trouble, even though it looks a lot more like natural thought when we consider Slatin’s descriptions of nodes and links. “The freedom of movement and action available to the reader – a freedom including the possibility of co-authorship – means that the hypertext author has to make predictions” (877). How many ways can a reader really imagine what I’m trying to communicate? In terms of practice, I’m imagining students using something like Tomboy Notes – a simple notebook program that uses tools like hashtags to link words to other Tomboy documents. But, again in practice, this often looks like a citation method rather than a new form of composing. At the very least, it’s the ultramodern “ipse dixit” of our society. By giving the reader the power to check the link immediately or continue reading, though, we’ve backtracked even further than referring to the Master; we’ve re-entered Socratic discourse. Yes, author, continue with your thoughts until I find a tricky point.

I’m two-to-one in favor of hypertext composition in the classroom, if only because the theories of why it could be so powerful (theories old enough to take to the bar, mind you) still resonate with what we’re trying to do today. Unfortunately, the power of hypertext has been replaced by the banality of hypertext. Widespread basic digital literacy (the only kind we can expect of students not in a program that develops e-literacy) has normalized it to the point where Wikipedia, the go-to hypertext example, is really nothing more than an encyclopedia with easier-to-turn pages.


Wednesday, September 5, 2012

Very simply, the Internet is not the world.

Maybe not your world, Lester.

Faigley's almost-throwaway line in "Literacy After the Revolution," his CCCC Chair's address in 1996 (CCC 48:1), hurts me. Even though it was the present for him, the sepia light that he casts the late 90s in also hurts me. Really, this was just a painful article all around. And I'm not even going to talk about the "glory days" of adjuncts making up only 35% of faculty (34).
No, just one soul-crushing rhetorician at a time, thank you.
How does it pain me? Let me count the ways.

Faigley, in 1996, when PowerPoint's staggering array of slide transitions blew my mind, noted that "many of [his students'] personal home pages [were] little more than self-advertisements" -- hello, MySpace, Facebook, Twitter and Friendster -- but "the students who made them have experience producing and publishing multimedia forms of literacy" (39). As much as I love the CCCC folks, this claim feels the same as putting "web design" on my CV because I made a Google Site to hold 101 syllabi and materials. Although the statement is true, it's not entirely honest.

But it only feels that way. I remember making websites on Angelfire, back a'fore Web 2.0 came and gave all these young'uns delusions of design. If I could have made a single page in those sites in the combined time it's taken me to make all these blog posts (including time spent doodling in GIMP), I'd've been a happy camper. Although I haven't seen any of these student web pages (conveniently, Faigley provides no way to track them down!), I wouldn't doubt the man too much because I agree with his observation about visual media, that "after years of attempting to teach students to analyze images, they learn much more quickly when they create images of their own" (41). Again, I'm thinking about the software available in the mid-90s to manipulate images. Both processes, unless you had access to professional tools, were long and arduous. I hate to sound like a grumpy old man, but literacy (here meant just as the means to decipher and create material) was earned because they couldn't just drag images from a Google Image search into an empty space on a blog, attach a witty comment and keep going. If you were going to put in the work to insert an image, a link, a change in color, a change in font or any other deviation from the text you were already writing in, you'd better either be a master at HTML tags or really think the change in modality was worth it.
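(For the young'uns: here is roughly what one image plus one witty comment cost you in mid-90s HTML. This is from memory, so treat it as a sketch rather than a spec:)

  <center>
    <img src="camp.jpg" alt="my happy camper photo" width="200" height="150">
    <br>
    <font face="Arial" color="#800000" size="2">A witty comment, earned the hard way.</font>
  </center>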
I don't care what my CV says. I'm not a graphic designer.
And, I'm probably oversimplifying Web 2.0 -- I only know it as the collection of sites, services and technologies that allows powerful text, media, and code editors to exist in browsers. I also know it, from news articles and op-eds that I'm conveniently not going to provide links to, as the thing that killed design. Sure, I could really get into tweaking my blog's layout, background, and all sorts of stuff, but I'm not going to. Not until spinning GIFs come back.

Wheeee! And you wondered why this page wouldn't load.
Maybe I'm just bitter for the same reason H.G. Wells was when widespread literacy became a thing along with the rise of a popular press. It was just too easy to do things we struggled for in the past, and all of a sudden kids these days are off reading Marie Corelli (perish the thought) instead of Dunsany. Or, more aptly, kids these days are just uploading every aspect of their lives into Tumblr feeds and Twitter feeds and Facebook feeds and linking and retweeting and I suddenly sound like my grandpa's imitation of old people. Anyway, without the challenge involved in creation, understanding is a little more difficult to come by.

Which is why I liked Cynthia Selfe's CCCC Chair's address the next year (CCC 50:3). I sense a little remorse when she says "computers are rapidly becoming invisible, which is how we like our technology to be. When we don't have to pay attention to machines, we remain free to focus on the theory and practice of language, the stuff of real intellectual and social concern" (413), but I really would like our technology to be invisible. Unfortunately, it's invisible like the pictures hidden in Magic Eye books -- yeah, I can't see the firetruck but I know it's there and we have to talk about it and I'm pretty sure this was a Seinfeld episode.
Sooner or later, all discussions involving rhetoric come back to Seinfeld.
Also, these things lose a lot of value on a computer screen.
In the abstract, I don't mind if this is a "dangerously shortsighted" view (414). I don't like computers to operate in the middle space they're in now -- some students can make brilliant webpages, others can compose fantastic music with a synth program and Audacity, and the other 90%, well, many of them can open Word without getting WordPad by mistake. In every conversation about technology, I lose someone. But, in every class that's devoted to technology (U of I has a Microsoft Office course for freshmen, but I don't know if WSU does), the mechanics of the software are the content, rather than the potential. Aside from English 300 at WSU, students don't often get a "here's a copy of Photoshop, a pile of copyright-free images, and an hour; go nuts." Faigley's students who created such nice websites had the time, energy and motivation to poke around their respective web-design programs. They probably made some terrible sites before the products Faigley saw. I'd love to "pay attention to technology" (415), and I've tried in the past, but it all comes down to finding something students find useful.

As a fun note, you won't find many students who appreciate things like 750words.com, even though it basically guarantees they won't lose their work.

Maybe it's because I can't collect an all-electronic portfolio. Maybe it's because students really need a composition course based solely on rhetoric (and the many means of persuasion) and another composition course based solely on doing research. Neither of those is innately all-electronic or all-paper, but it would free up a lot of time to really address doing these things well, rather than talking about visual rhetoric for a day or two in class, then assigning a five-page analysis of one 4chan meme.

DISCLAIMER:
Don't send your students to 4chan.
Ever.
Fast forward to the less past and the five assumptions that NCTE makes about "courses that engage students in writing digitally":

First assumption? Yes. Moar please. However, there's an epistemological bias here -- ask a group of 18-year-olds whether knowledge is "constructed." I'll introduce them. I'll try to get them to reflect on it. But, unless I can shake them free of any other epistemological beliefs they've already got, we're going nowhere fast.

Assumption the second? In 2012, that's almost a "duh." But, as Adam Sprague and others have noted, teachers in other fields are asking that we just teach students how to write a paragraph. I'd bet we could teach them to write an effective paragraph while applying digital technologies (meme creation is a good start), but I also think we might be biting off more than we can chew in a 16-week course.

Dritten assumenieren? I don't think you can do the second without it. Unless you have a Photoshop day, then never speak of it again. ... No, wait. I think I did that once. In my defense: 16 weeks!

Fourthly, I think we're already supposed to be addressing information literacy independent of computers. Also, the Wikipedia lesson has gotten so old that when I say the W-word in class, the first response is (chanted in unison by most of the class) "we can use it so long as it isn't our primary resource." We can still get a discussion as to why it shouldn't be a primary resource, but they've already got the gist much of the time. Dealing with news reports (especially "New study reveals that..." reports) is another matter and one I try to tackle.

And, for the Milla Jovovich of these assumptions: I don't think this should be a new assumption for courses that engage students in writing digitally. This is an assumption for courses that teach students writing. No, actually, it's an assumption for courses that teach students.

So, I guess I'm still in some kind of disagreement with Selfe. What we do with computers is what we should be doing all around. The pencil, the typewriter, the computer, and the laser-cannon-for-writing-on-the-moon (in development) all bring different strengths and weaknesses to the table. That fifth assumption is probably the most important -- when should I use which technology? Will my professor appreciate a snarky line pasted onto an image that's related to my topic? What size font is really necessary to make sure everyone remembers I truly am the greatest villain of all time?
Rhetoric.

Monday, September 3, 2012

>get book

There're a thousand variations on the old "the more things change" saying. I've been racking my brain trying to remember where my favorite came from -- that the width of Roman chariot ruts in ancient roads determined the eventual width of common horse-drawn carts, and that those carts would see little change through the industrial revolution, and that the appropriate width of train tracks was deemed to also be the width of carriages, and that, eventually, these specs found their way into car designs. (It was also said more eloquently in the original source, but I think I butchered it enough here for it not to qualify as plagiarism.)

Normally, I'd put a picture of my brain on a rack here. But I left that computer at the office.
Use your imaginations.

The point here (there is one) is that times, empires and technological eras may change, but we're comfortable with what we've got. So, when I read Lanham's thoughts in "The Electronic Word: Literary Study and the Digital Revolution" (ca. 1989), I chuckle, like Aminah. Two decades of hindsight does make him look a little over-excited about this whole computer and interblog thing. His opening alone is yet another way of saying "The more things change, the more they stay the same; but..."
Page 265. Synonymous with "first" in digi-speak.
The question is whether students will read at all. But they've got TV and they still read. Computers are just going to change the way we think about text. OK. But, we must remember that few things in the 80s were worth watching with as much gusto as we watch TV today. (Thanks for making me so productive, Netflix.) Kids had twenty-three hours before the next TMNT episode came on, and you can get through at least a couple of pages in that time. I'll cut Lanham some slack because this article came out in the age of Zork and Oregon Trail. I'm wary of any claims about how awesome new technology can be for writing, though. Mark Bauerlein has some pointed notes on that in The Dumbest Generation:
Page 85. There are better pages, though.
When I read Cooper and Selfe lauding discussion forums for providing students with "freedom from interruption" ("Computer Conferences and Learning: Authority, Resistance, and Internally Persuasive Discourse" 853), as encouraging positive "'disruptive' behavior" (Ibid), and giving students a place to "[set] their own agenda" (857), I can't help but see those olde tyme pictures of kids getting fitted for shoes using a fluoroscope. Discussion forums did help lift the iron fist of the notebook that had oppressed students for so long, but do they help us now?

Seriously. Those of you teaching, take a moment to check those discussion boards you put on Angel. Then tell me the technology still "exist[s] on the intellectual margins of most traditional academic discourse communities" (858).

Ultimately, my beef is with some things raised by Lanham and Slatin (no spoilers here -- you'll hear my rage tomorrow). Hypertext is awesome. Hypertext narratives are amazingly powerful. Hypertext discussions (you know, a Facebook message with 9 people where everyone keeps posting videos and links in response to something said three lines ago) are utterly insane and usually worthwhile. The times are a-changin', but the digital revolution, like the memes it spawned, was old news the day it occurred. If we're going to take these lessons to heart, we can't embrace what worked twenty years ago (I almost said "when our students were in diapers," but that would have been a falsity).

What medium exists now that isn't a part of accepted academic discourse?

Monday, August 27, 2012

It's a Panopticon!

The title of this entry doesn't have much to do with what I'm going to say, but it is a fun phrase.

As fun as that phrase is, though, I hadn't thought of it in terms of education until just now.

Elizabeth and Ti both mention Bentham's hope that his design for the Panoptic prison/work/schoolhouse would be just as good for pedagogical training as for punishment (D&P 206), something that many of us fear about all the Panoptic institutions we can identify in modern life. Granted, part of the trouble in modern life is that everything seems to be a... hold on, have to turn off something.
XBOX SEES ALL
Everything in modern life seems to at least be part of a Panoptic system. I themed a 101 class on Privacy issues last Spring. It was pretty awesome until the second week, when we started talking about all the little ways we are watched and discussed how that watching can change our actions. Then, I hit the wrong button on a classroom computer and turned on the room camera. None of us had noticed the camera before, and that changed the mood of the class until they forgot about it a few weeks later.

I'm coming back to that. Or, I intend to. The fine folks at Vick's might have other plans.
You're welcome for the ad campaign.
Richard Ohmann's "Literacy, Technology, and Monopoly Capital" (College English 47:7) raises the ire of Adam Sprague (note also Jacob's concerns in response to Elizabeth's blog post), and I agree with the primary concerns: in 101, we are expected to teach writing. The expectation might not be from the English department, but (and here's where Foucault re-enters the picture) other departments do expect us to teach the orderly arrangement of words to elicit a planned response. And, thankfully, that's what English scholars study. (Current arrangement of words notwithstanding.)

But, I come back to that camera accidentally turned on last Spring. I come back to the time-stamp on this blog post, woefully behind schedule. And, perhaps most importantly, I come to the greatest advancement and curse that computer technology has brought to writing instruction. Everything is open for viewing and held up to the light (D&P 200-202), ready to be carefully scrutinized at any moment.

Supposedly, student texts make for great readings and course material in a writing classroom (Joe Harris' A Teaching Subject, 1996, makes some good recommendations if you're into that sort of thing). However, as Alfie Kohn has noted, the cooperative, collaborative document construction that these tools allow is rarely the product in the classroom. Instead, borne partly out of our testing culture, the final products are always individual. Like Bentham's factory manager or prison warden (because, really, they're all the same job, amirite?), the teacher is ultimately concerned with how well the individual performs on the given task. I'd love to do collaborative portfolios (if only for the reduction in grading), but as a link in this Panoptic chain, the slight chance of upper administration finding out is enough to scare me away from trying the idea out.

The worst part is, I don't even know if there's a rule against it. I just know a collaborative course project worth at least 50% of the final grade is deviant enough from the norm that I don't want to risk it.
Nope. Thoughts aren't quite organized yet.
The far-too-centralized power in any Foucauldian apparatus always relies on the belief that objectivity is possible. That one could look out through the central tower of the Panopticon, observe an individual in one of the outer cells, and know all relevant information (indeed, all information) based solely on behaviors or other external signals. For me, a major hurdle in teaching any sort of ultramedia is breaking with the old notion that group work is somehow practice for the "real" assessment down the road. The rationale for any media alternative to the traditional essay format is always that the job world expects proficiency in so many other modes. But, appealing to the grown-up world of jobs and business necessarily raises the point that Bentham's isolation chambers are not the norm. Group work is hard to teach, but if we're going to teach alternative media and the rhetorical awareness necessary to use it, then we're also going to have to take our hands off the wheel and let the inmates, er, students spread their contagion, er, ideas much more fluidly.
Yeah. That looks better.

Wednesday, August 22, 2012

Ultramodality

Multimedia, New Media, Hypermedia, Ultramedia. I like "Ultramedia" or "Ultramodal." Lauer's article (linked to Kairos' site, unsure about access) is well done, but Kairos has pretty high standards. Whenever the multimodality discussion rears its head, though, I get wary. Partly, I'm wary of the fact that multimodality gets paired with multimedia, which is just code for "computer things that are more than Word" -- like some horrible lizard monster that seemed OK from afar but has a tendency to become unnecessarily complicated the closer you get to the pedagogy.

Unfortunately, as Alfie Kohn once observed, teachers can be a bright lot with lots of wonderful, progressive ideas for expanding students' minds, but we have a habit of teaching the way we've been taught. It's true, too. My attempts at getting multimodal primarily involve crayons (to add a stronger visualization to quoting, paraphrasing and "original" thought), Legos (to practice clear diction and explanation) and a rainbow of dry erase markers.

I used to joke with other students here about the word "multimodality," because it implies the existence of "monomodality." But if color and language are both modes, then any text visible on a page is inherently multimodal. Somehow we all knew that multimodality meant there were colors or pictures or sounds or scratch-and-sniff panels, so maybe we just need to hit three or four modes before something crosses into real multimodality. Lauer is right, the naming is a pain to deal with.

So, I want to push for ultramodality, just to say we've met some threshold beyond "2+." Also, Ultramodality sounds like a sweet techno band. Also, continuing to apply new terms to the situation helps to avoid answering the real issue of justification. When I first started teaching, I used as many modes as possible thanks to the complex thesis "Multimodality is good because technology." To give a hint at how well I did it, I'd like to share a question I was asked during my last semester before moving to Pullman:
This is a writing class. Why are we always using the computers?
Sure, the student in question was under the impression that it was a handwriting class (I didn't understand the question until last Fall, when it finally clicked). But, my values weren't being clearly transmitted to the class, and it really wasn't the right setting for ultramodality anyway. In the same class, another student demanded that I stop using Blackboard and switch to Twitter, since everyone could do discussion and receive updates on their cell phones. My modality wasn't ultra enough for that one. Wysocki's "Openings & Justifications" in Writing New Media has some good thoughts (below) on handling that situation, but her claim that "writing classes can easily decontextualize writing such that agency and material structures look independent" (4) could just as easily be paraphrased to say "[pedagogical guides] can easily decontextualize [good pedagogy] such that agency and [student psychologies] look independent." Maybe that's too many brackets.

Wysocki's sample assignments and lesson plans help contextualize an ultramodal pedagogy. I might just be grumpy here because I'm focused on first-year composition and transitions from high school to college writing. Any time I deviate from the perceived norm of college work (double-spaced essays in 12-point TNR, black ink, etc., that must go to the final line of the page or else you fail FOREVER!), the validity of the assignment is immediately called into question (echoing Wysocki's self-citation on page 12), as though I were asking students to learn and practice a skill whose only end is a grade in this single class. And focusing on questioning the value of "traditional" media makes it seem like the class exists only to teach and practice those skills. But, I don't see the Engineering department changing its tune any time soon.



As for issues of ultramodal authoring, Lauer mentions the Herculean effort necessary to get these ideas rolling ("A Technological Journey"). Teaching introductory writing is busy enough without also getting into the intermediate features of Word and Word-clones, let alone the fact that there are Word-clones, let alone the basic features of other authoring programs.

So, maybe the problem is that I'm trying to imagine teaching students how to use a hammer before they understand the need for it. It's easy for me to agree with Lauer that the nomenclature is unwieldy because I get that we're all essentially talking about reflective understandings of what it means to compose a text -- specifically, to understand the rhetorical implications of paper, of a blog, of a video or podcast and to play up the strengths of those choices. It's also easy for me to agree with Wysocki that we have to start at a very basic level of questioning how visuality (and by extension aurality or olfactorality) affects (maybe effects) our understanding and persuadedness.
I knew someone whose business cards all smelled like doughnuts. He got a lot of call-backs.
I feel like there is more to say, but too much of it comes down to "But what do I do on Monday?" Wysocki's plans are great, but at this moment I feel my hands are tied by the need to teach research, academic voice, planning, revision and all those other things that go into the size 12 font, arranged so neatly on 8 1/2 x 11 sheets of paper. Maybe if I could figure out how to teach rhetorical awareness, it would be easier to shoehorn into the syllabus.

Hm. I feel like I should fill more space here. I have other thoughts, but they aren't connected or organized.
-----
Under "A Technological Journey," Lauer mentions that she made a "mini-wiki" for folks to comment on different elements of the text to better develop their meanings. That's something I don't trust about new technology. So many things are awesome about what a journal like Kairos can do (especially since it doesn't have a comments feature) when it incorporates every mode the average computer can manage, but so many things go wrong when I expect my texts to give me a space to broadcast my response to the world. Even now, I'm trying to form a coherent, reasoned response to this reading, but I haven't had enough time to reflect and digest it. All I did was read that Lauer wants an almost conversational interactivity to her article, and I flew off the handle because it seemed like she wasn't ready to stand by her convictions about what visual/verbal/auditory elements really meant.

In the words of a great thinker, Mr. Horse from Ren and Stimpy, "No sir, I don't like it." I blame Mark Bauerlein's discussion of the millennials and Web 2.0 (The Dumbest Generation) for this line of thinking.
-----
So here's a weird note. Maybe it's my browser, maybe something else, but when I was reading, the pages wouldn't always start at the top. The top visible line was always the beginning of a paragraph, but it wasn't always the beginning of the section. I have no deeper commentary here, except to note that this is either a pitfall of technology or the coolest way to tell someone they can skip the exposition in each section (which I had to go back and read anyway).
-----