
Category Archives: academia

Maybe I should be blogging about the end of DOMA or the Supreme Court’s awful blow to the universal franchise. But I’ve already said what I have to say about same-sex marriage, the tunnel-vision it has produced in lesbian and gay activism (because, let’s be real, bisexual and trans folks are not invited to the party), and the contemporary undermining of voting rights.

So instead, inspired by a piece I read recently in the Chronicle of Higher Education, I want to talk about the trouble with understanding college as a form of vocational education rather than an academic project.

One of my pet peeves is the rise of administrative language insisting that we need to teach undergraduates things that are “useful”—“real world” “skills” that will supposedly get them jobs.

And, to be sure, there are jobs that require specific training in the field—electricians as much as doctors or lawyers. Vocational training is valuable, and it should not be treated, as it currently is, as substandard or as what people who aren’t intelligent do. Electricians and plumbers should get the same respect as doctors and lawyers.

But the value of a four-year undergraduate degree is something different. It’s a time to learn how to think (not what to think, despite what the one cranky conservative student inevitably in my class each semester seems to believe), how to ask questions, how to assess evidence and make arguments and write. This is the “liberal arts”-type model, but I’d suggest that this intellectual capacity is cultivated by any undergraduate degree that doesn’t share its name with a profession. (And, I suppose, by some that do, like journalism.)

And it turns out that the Chronicle piece, based on surveying employers, shows that the latter type of capacity is exactly what they want.

“93 percent of the employers surveyed said that ‘a demonstrated capacity to think critically, communicate clearly, and solve complex problems is more important than [a candidate’s] undergraduate major.’ They were not saying that a student’s major does not matter, but that, overwhelmingly, the thinking, problem-solving, and communication skills a job candidate has acquired in college are more important than the specific field in which the applicant earned a degree.”

“More than 75 percent of employers say they want more emphasis on five key areas, including critical thinking, complex problem-solving, written and oral communication, and applied knowledge in real-world settings.”

I have been arguing for years that critical thinking and related skills are what make people good employees (one example), so I have to say that I’m glad to have my previous argument validated and to have some ammunition the next time someone tries to make the skills argument.

And, just as more proof, I happened (coincidentally enough) to be talking with a friend from undergrad about how my job as an academic is like or unlike her job at a major Internet company just as I was starting this post, and, lo and behold, her job requires her to be able to write and argue a position!

To propose a new idea at her workplace, employees have to write a paper that lays out the suggestion and provides evidence to back it up. She says, “as a liberal arts graduate, I’ve been in heaven”; she likes doing this writing, she’s been trained for it, and she’s good at it, whereas some of her colleagues shy away from having to stand up to the “intellectual rigor” it requires. This is the work she does, and these are the skills she needs, in a job in corporate America.

What this means is that when we treat college as an extension of high school, as learning “facts” to be regurgitated on multiple choice tests, we’re selling our students short. The survey said that employers felt “college graduates were most lacking in ‘written and oral communication skills, adaptability and managing multiple priorities, and making decisions and problem-solving.’”

It means that when we teach our students how to use a particular piece of software or perform a particular business procedure, we’re selling them short: we set them up for their first job (maybe, if the instructor’s knowledge isn’t out of date) but give them no tools for the next job or the one after. As the Chronicle piece put it, employers “want a student who has learned how to learn and how to adapt flexibly to rapidly changing demands.”

And I think the survey reported in the Chronicle piece is an important step forward, but it only helps readers of the Chronicle (so, faculty, grad students, some administrators) understand what employers really want.

Unless that is widely disseminated, we’re stuck grappling with what students think employers want and the ways they choose their majors and course work and evaluate their classes on that basis. Then there’s what parents think employers want as they help their students choose those things and sometimes foot the bill.

Then there’s what the class of university administrators who are more administrator than educator think employers want, and the requirements that come down the ladder as a result. Perhaps most crucially, there’s what legislatures think employers want, since they have a lot of financial control even as state funding makes up less of higher education budgets than it ever has.

If all of these stakeholders still think skills are where the jobs are, we’re going to be stuck.

I am likely to miss the next 2-3 weeks since I’ll be traveling and using my limited work time to keep up with my dissertation timeline rather than blogging. See you in August!

For its 2013 conference (#IR14 if you’d like to follow along at home October 23-27), the Association of Internet Researchers (AoIR) implemented a new format for submissions. The template went beyond asking for the standard abstract fare of a “description/summary of the work’s intellectual merit with respect to its findings”; it also required a discussion of “its relation to extant research and its broader impacts,” “a description of the methodological approach or the theoretical underpinnings informing the research inquiry” and “conclusions or discussion of findings,” and it wanted all of this in a space of 1000-1200 words (CFP).

This was a departure from the previous template, which allowed submission of either a 500-word abstract or a full paper. It’s also a pretty unusual conference submission format, one I hadn’t seen in the 7 years I’ve been doing this job, and, based on comments about it on AoIR’s mailing list (AIR-L), neither had anyone else. It was challenging for me and my panelists to produce something that sort of explained our work (without, really, the space to do so), but we did it and we were accepted and yay for us.

But as acceptances and rejections came back, AIR-L exploded starting May 30 in something that seems to me to be a paradigm skirmish (like a war, but smaller!), centering on whether the submission process had been tilted toward empiricist work at the expense of the theoretical.

Conflict between paradigms is an area of interest to me in general, but what I found particularly interesting was the incidence of people making incommensurable arguments—using different criteria but not realizing they were on different planes. This is something that I discussed (and attempted to resolve) in the field of Communication in a piece I published last year in Communication Theory, which articulated a model akin to intersectionality for disciplines, allowing similarity and difference on multiple research axes (ontology/epistemology, methodology, axiology) rather than grouping people by a single characteristic a la identity politics.

So what I’d like to do here is explore that disconnect, but also the ways in which the conversation reinforced empiricist projects as “real” research and perpetuated a quite normative definition of rigor. I’m going to do so in a way that names no names and uses no direct quotes. You can go look up the archives if you want—they’re open—but there are way too many people for me to ask permission of all of them and it’s not strictly public, so I’m going to err on the side of caution.

AoIR describes itself as “an academic association dedicated to the advancement of the cross-disciplinary field of Internet studies. It is a member-based support network promoting critical and scholarly Internet research independent from traditional disciplines and existing across academic borders,” but this inclusiveness, cross-disciplinarity, and border-crossing were troubled by the introduction of the new submission format.

First, it was quite clear in the debate that non-social scientists felt alienated by the template. Some said they had trouble cramming what they did into it, and others said they hadn’t submitted at all because they couldn’t figure out how to explain their work on its terms.

And emails to the list suggested that some researchers were in fact not accepted to the conference because the format didn’t accommodate them well. Several noted that theoretical work was rejected for lacking methods (or methodological specificity) where that was not an appropriate criterion of evaluation. Others specifically identified the humanities as what was disadvantaged, with one scholar pointing to the normalizing force of the subheadings, charts, and diagrams built into the conference template.

There were some gestures in the debate toward a hypothetical “qualified” reviewer who could understand disciplinary difference, preserve AoIR’s diversity, and not judge one paradigm by another, but mostly that reviewer seems not to have materialized. Many participants complained about being assessed by inappropriate criteria (like methods/findings in a non-social-scientific paper) or about reviewers just being pedantic about the template rather than making substantive critiques. Some called for better guidelines for reviewers to avoid this.

One thing that was not explicitly recognized is that ultimately a great deal of this is a question of reviewing labor. It is my understanding that conferences reviewed by their own submitters endemically overrepresent junior scholars (especially grad students) among reviewers. Senior scholars are busy, or can’t be bothered, or whatever (in addition to being outnumbered)—but regardless of the reason, this has consequences for review quality.

Many of the people making these judgment calls were likely inexperienced, reviewing based on their (seemingly faulty) sense of the rules or on the paradigm in which they were trained rather than on a developed gut instinct for good work across types of research (which I feel I can say now because I have at least partially developed that instinct). This is the risk of inexperienced reviewers, a risk a couple of participants in the discussion also noted, and it’s particularly dangerous to an internally diverse organization such as AoIR.

The response to the theory/humanities complaint was pushback from other scholars who argued that the conference has not been rigorous enough in the past and that this year’s submission process was an improvement. There was little recognition among these proponents that this conflated rigor with scientistic modes of inquiry and presentation.

The new format was held up as a way to lessen the chances of bad presentations at the conference itself by catching those who can write good abstracts or latch on to a trendy topic but then not deliver, a goal certainly worth attempting. But there was a clear divide around the relationship between incomplete research and bad research.

It was social scientists who raised the specter of the cocktail-napkin presentation or simply argued that it’s hard to assess quality on to-be-completed research. The other camp contended that saying the work had to be complete in February or March to present it in October seemed to exclude a lot of people and types of work. Members of this group pointed out that some presentations are just bad, irrespective of done-ness.

Part of the argument about rigor stemmed from the different “home” disciplines to which AoIR members belong. Social scientists have had the experience that AoIR isn’t taken seriously: they mentioned being unable to get funding to attend, or that attending AoIR wouldn’t “count” for tenure or other evaluations.

In large part, it seems, this has been because AoIR doesn’t require full papers. In previous years, one had the option to submit a paper and then go through a review process to be published in Selected Papers of Internet Research, but one could be accepted without doing so. And indeed, one rationale for the new format was that almost no one was using the full paper option, making clear that AoIR was primarily an abstract-based conference—which, discussion participants noted, some disciplines see as lazy.

That interdisciplinarity can be constrained by one’s “home” discipline was also clear from the disciplinary divide around the subject of conference proceedings. The folks hooked in to science-type conferences like the Association for Computing Machinery noted the lack of proceedings as another source of disrespect and of the conference seeming less rigorous.

(This is interesting to me because I always thought of conference proceedings as what people did when they weren’t good enough for a real, journal publication. But my field doesn’t use them, so I just had to figure out what they were for as I encountered them—and by comparison to the average journal article they’re kind of shoddy.)

Ultimately, though AoIR is founded on inclusiveness of different research modes, it is clear that the template’s language of methods and findings (and charts and subheads and figures) conflated the conference’s push for rigor with a more scientistic mode. People could recast those terms into ones that made sense for their work, and some did, but that recasting wasn’t always accepted in the review process.

It made me wonder what the equivalent humanities/cultural studies-centric template would look like. Can we even imagine it? “Be sure to include your theoretical framing and account for race, class, gender, and sexuality”? Related to this, one participant in the discussion noted that if she had applied her humanities criteria to a social science paper and rejected it for being boring and dated, there would be a huge outcry, but making the same assessment the other direction was totally acceptable.

Thus, it is unsurprising that, while there were certainly statements of valuing other types of research than the one any given participant did, this was an unequal sort of mutual respect. Empiricist research got to stand as “straight” or default or unmarked research (even in some statements by the humanities folks, hello internalized inequality!).

It is, after all, often the case that dominant/more socially valued groups get to stand as normative/universal. When social scientists advocated for including other types of work, they tended to ghettoize it out of normative presentation venues like paper sessions into roundtables, workshops, etc.

Of course, there was also some devaluation going the other way, with the humanities proponents concerned about the danger of producing dated research by talking about something that happened a year ago on a rapidly-changing Internet. One wondered what the point was of watching a paper that is going to be published in the next month or two.

As a whole, the AoIR debate points to two sides of a single concern: if the research is closed (completed), and the structure for participation is closed (restricted), what gets shut out? While some participants were worried about research being boring or stale, others suggested bigger stakes: that this was an anti-interdisciplinary move—perhaps even a betrayal of what AoIR stands for.

This is an important question. Some modes of research are more respected than others—this is something that is currently true about the world, however much we might dislike it and seek to change it in the long term. Doing interdisciplinarity without recognizing the existence of this hierarchy produces circumstances like the scuffle that took place on AIR-L over the IR14 conference template.

I received an email on March 29 from the University of Illinois Gradlinks service announcing “MOOC Monday is almost here,” which feels like it should have an exclamation point but doesn’t.

This strikes me as a bizarre thing to have as the kickoff event for “Grad Student Appreciation Week.” Grad students don’t teach MOOCs, not least since much of the selling point is having free (unfettered and unpaid) access to famous professors. Moreover, my understanding is that the “massive” and “open” parts mean there aren’t really grades and so grad students don’t TA for MOOCs either. And graduate students definitely don’t take MOOCs as students, since graduate education is not suitable for the format (I’ll come back to suitability later).

The email’s own explanation is that “with more and more online courses, future faculty will want to be well versed in the ins and outs of online teaching.” This collapse of all online teaching into MOOCs sounds a bit like a memorable post from Academic Men Explain Things to Me, retweeted to me by I can’t remember who, in which “an older gentleman” mansplained to the poster that “although he had never taught nor taken an online class, he was quite sure that I was wrong…all online classes were MOOC.”

There is a chance that the title was chosen for alliteration (the others are Tax Tuesday, Work Wednesday, Thirsty Thursday, Fitness Friday, Skating Saturday), but the ways in which MOOC stood out to someone as a good idea because it’s a sexy buzzword should not be discounted.

And now for the part of the blog where I argue that seemingly disparate things are related and indicative of a broader phenomenon (I figure I should embrace being predictable). This arrived in the week after Senator Tom Coburn of Oklahoma introduced an amendment to a funding bill that would “prohibit the NSF [National Science Foundation] from funding political science research unless a project is certified as ‘promoting national security or the economic interests of the United States’” (Huffington Post).

Now, I’m agnostic on whether part of the motivation for Coburn’s amendment (and his apparent overall hatred for political science) is a desire to defund research that is potentially lefty and exposing of his party’s machinations. It’s an appealing theory, but I don’t have any basis to assess it. What I think is much more likely (and maybe the two operate in conjunction) is a fiscal conservative outrage at federal funding for research that seems not to benefit the nation.

And given that hunch about fiscal priorities, I think the MOOC-ification of education and the Coburn amendment (though later defeated) both speak to the same set of beliefs about what knowledge is valuable to teach or to discover, respectively.

This is a logic that values only knowledge that is tangible, immediately apparent as useful, and/or applied, at the expense of other sorts: knowledge for its own sake, knowledge that will be applied one day but whose applications are not yet apparent, and the thing I tend to teach my students—in the phrase of a University of Illinois INTERSECT project—“learning to see systems.”

Online courses in general, and massive, open ones in particular, seem to me to lend themselves only to the first sort of knowledge. They’re suitable, as I usually put it, to things that “have a right answer”: introductory math and science, history when the goal is to learn facts, skills-based learning like business or accounting or advertising.

I don’t think those kinds of subjects are unworthy of study (though, as often happens, this division is hierarchical, and those practitioners may not extend me the same courtesy). I also don’t think online teaching is inherently bad. Certainly, the Chronicle of Higher Education piece on online courses I read a while back had suggestions for a successful class that aren’t so different from in-person teaching: “Respond to all student queries within 24 hours”? I do that; “End with a post that sums up the conversation”? Not really different from summing up a class discussion; “constantly be on the prowl for YouTube clips, articles and essays, photos, and even online crossword puzzles that highlight and reinforce themes in your course”? Yep.

But the online course cannot substitute for the work of trying out frameworks of thought and asking “what if?” nor for laboratory or problem-solving activities, and this is the stuff of advanced technical subjects, studying society (contemporary or historical) as a structure, and philosophy/theory.

As Suzanne Scott noted in her comparison of SCMS as a “massively open online conference” to MOO-courses, “they can never fully replicate the social experience of a class, or the social dynamics of a class cohort” either. Of course, this kind of work is devalued in the “useful knowledge” paradigm that says anything that doesn’t teach students “skills” is a waste of tuition and tax dollars.

But it’s this definition of what constitutes a “benefit” to society or to an individual that I want to question. In terms of research, lots of things have had unexpected benefits that weren’t planned when the research was done. Penicillin was discovered by accident (which is common enough knowledge to be a Google autocomplete option), etc. Restricting legitimate, fundable research to only that which already has apparent uses prevents us from ever making those kinds of discoveries again.

Similarly, if we think of “benefit” in terms of teaching, I never learned any job skills in my undergraduate education, but my high-quality liberal arts education made me great at the “real” job that I had before coming back to academia. Because I knew not to take things at face value but to look at the bigger picture, I could ask whether there were better ways to do the work I was assigned. I streamlined processes, and my efficiency was greatly appreciated by our clients, but that’s not anything I was taught in school directly so much as the outcome of learning to ask “why?” or “why not?” and “what if?”

It’s tempting, as with the suspicion of Coburn’s motives, to see this as some sort of class-demarcating move—the rank and file learn skills and not how to question (or rather, they don’t learn that they should question), so they will be docile underlings. The problem is that if this becomes the model, those managers will just learn “management” skills, and not broad thinking either. Moreover, having watched my supervisor at that office job struggle with the fact that the person who replaced me had no ability to problem-solve, I can say that this sort of employee actually makes more work for management.

Ultimately, the MOOC-ification of education and the Coburn Amendment are both salvos in the battle over the meaning of education. And, while I will definitely argue that knowledge is worth learning—and worth paying for—even if it never has a practical application, I don’t even need to make that argument. Because the “squishy,” nebulous, allegedly useless topics without right answers are the key to personal and business success.

Now, if only we could get the powers that be to recognize it.

This post is inspired in large part by Suzanne Scott’s post Distanced Learning: SCMS as MOOC (massively open online conference)?, which (amidst a larger argument) described how the Twitter feed (and livestreaming, but really, there was much more Twitter happening) helped her experience #SCMS13 remotely, and Amanda Ann Klein’s post Turning Twitter into Work: Digital Reporting at SCMS 2013, about how tweeting from an “official” account led her to think differently about how she tweeted and view it more as work.

But it’s also inspired by having attended some professionalization workshops at SCMS and reminding myself of my reasons for having a digital media presence in the first place.

In reflecting on her experience as an official Twitter reporter for the @CJatSCMS account (affiliated with SCMS publication Cinema Journal), Klein asked for a reconsideration of academic labor: “what do we count as labor in the world of digital and social media, what is the ‘value’ of that labor, and how do we document it?”

Klein and Scott identified conference tweeting as a particular kind of work, with Klein noting that “the pseudo anonymity of the @CJatSCMS account made me less concerned with my personal Twitter brand (i.e., snark) and more concerned with the transmission of information” and Scott expressing a similar sentiment from the other direction, wherein people’s attention to things other than transmission of information made her remote conferencing challenging: “I only experience [sessions’] limited digital residues, often filtered through disciplinary lenses or with an intertextual frame I don’t have direct access to.”

This was interesting to me, because I have always seen Twitter, and my blog/website, as work, but I see it as a quite different sort of work than Klein and Scott. I freely, and routinely, admit that I have trouble catching the substance of talks (in fact, I’m so bad at aural processing that I wonder how I ever made it through K-12 and undergrad). Instead, I usually tweet the quippy bits—I can do color commentary, but for the play-by-play you need someone else.

In this sense, I am the problem for someone like Scott, though, when asked, I did my best to swing into reporter mode for a panel she was interested in. I probably did not entirely succeed, but I tried.

I am also the problem for an older generation of scholars unfamiliar or uncomfortable with the idea that what they say in one room, in one physical place, might be transmitted globally, as I discovered last spring when my PhD program had a reunion and the keynote speaker found my tweeting distressing (even though I had been tasked by the department chair to livetweet). Klein notes that “in the weeks leading up to the conference, everyone involved with the @CJatSCMS account agreed on a loose set of Best Practices (including requesting permission before tweeting panel/workshop content),” and perhaps I should have followed something similar in that case.

So, if I don’t see my job tweeting at conferences as the work of reporting, what kind of work is it for me? It’s promotional labor for the Mel Stanfill brand, “Bringing Foucault to Fandom since 2006.” I tweet so that people following the conference hashtag might see it and think I have said something of merit—and maybe retweet, and maybe just remember my name if they come across it again. (This can backfire if I say something that upsets the person who sees it, of course.)

This, for me, is the point of social media. It may be a shocking confession, since it’s so unusual these days, but I don’t have Facebook. (Though, as the response to the formation of a Facebook group as the way to organize the Fan Studies SIG for SCMS showed, I’m part of a committed minority of nonusers. They later added a Google group.)

When people are startled to hear about this abstention, I point out to them that I study digital media and know way too much about Facebook to be on it. That’s partially true, because I’m enough of an anti-capitalist that I’m not in a big hurry to have my personal data and social ties generating any more revenue than I can possibly avoid (with full acknowledgement there’s lots I can’t avoid).

But more than that, my problem with Facebook is its norms of use—one is normatively expected to add everyone one has ever met, bringing high school into contact with family into contact with career in a way that, to me, sounds like a recipe for disaster.

I do, however, use Twitter. I’m not against social media itself, just how Facebook tends to work. I like the way that Twitter has the norm of nonreciprocal following; unlike “friending,” I am not responsible for who I am followed by, only who I follow, such that there can be identity management between spheres.

(Which should not be taken as a critique or a distancing from anyone who happens to follow me that I don’t follow. I haven’t actively done this and I’m also behind on post-SCMS following back. But it’s very comforting to know I could distance myself if I needed to.)

Because of its different relationship to, well, relationships, Twitter can serve as a platform for creating and maintaining my professional identity and visibility, integrated with my website, in a way that Facebook can’t. And I know that LinkedIn exists, but that’s really for a different kind of professionals—let’s be honest.

So that’s what I was doing when tweeting SCMS. I was entirely thrilled to have my Klout score hit the 81st percentile after the conference (it’s declining again now, of course, because I’m not interacting in the same way, but I know that’s how it goes). I was thrilled to get 5x more hits on my blog than my usual good day (and 25x my average day) for my posting of my SCMS presentation, even though that also is not being sustained. These, for me, are the metrics by which my digital media work has been successful.

I have had people express some skepticism that I find time to blog, like it’s a hobby or I’m somehow shirking my “real” work to do it; it is certainly a hobby for some people, but for me it is part of my real work. It’s time I commit to the big picture of my career, to making sure there are lots of (hopefully smart) ideas floating around out there with my name on them. It’s advertising for my intellectual capacity.

It’s also work that helps make sure all the Google results for my name are me. (With quotation marks, they all are. Without, two aren’t. At least when searching from my own computer.)

My Twitter is time I commit daily to keeping track of what scholars I know and respect are doing and maintaining myself as someone they are aware of. Of course, the professionalization workshop reminded me that I have moved away from professionalism on Twitter to some degree and should probably mosey on back, so I’ll be cutting back on observations about the weirdness of life.

All of this is work for me, even if it’s not for everyone. It’s unmeasurable and unpaid. As Klein noted, it’s not counted yet in official ways, like for promotion and tenure. And it’s deeply neoliberal as an act of self-management in the interest of getting ahead. But right now, at this point in my career, I don’t make the rules. I’m left playing this game the best way I know how.

ETA: Some people are being told that they’re “forbidden” to view the Prezi. Not sure what’s going on with that, since it is working for me (even in a different browser than the one that’s logged in). But hopefully this link will work. If not, let me know in the comments or on Twitter @melstanfill and I’ll try something else!

In lieu of a blog post this week, I’m posting the Prezi of the talk I gave on Saturday at the Society for Cinema and Media Studies conference in Chicago, IL, entitled “Between Commodity and Consent: Implications of the Vanishing Distinction between Play and Work in Fandom.” Partially this is because several people have requested it, but partially because I was at a conference for 5 days and had no time to write a post! So, here it is:

A couple of caveats:

First, as was noted by an aggressive audience member in the Q&A, the Marx is oversimplified. That was intentional, because I was trying to explain why we don’t talk about labor in fandom and why fandom doesn’t seem like labor: the everyday idea about labor is grounded in a set of ideas that don’t seem to apply.

Second, there’s a lot of slippage between fans and viewers in the piece that I didn’t really intend, and I know it’s a problem.

And third, the Prezi was intended as a base for me to talk from, so the actual presentation was significantly different. Hopefully it gives you a basic idea anyway.