For its 2013 conference (#IR14 if you’d like to follow along at home October 23-27), the Association of Internet Researchers (AoIR) implemented a new format for submissions. The template went beyond asking for the standard abstract fare of a “description/summary of the work’s intellectual merit with respect to its findings”; it also required a discussion of “its relation to extant research and its broader impacts,” “a description of the methodological approach or the theoretical underpinnings informing the research inquiry” and “conclusions or discussion of findings,” and it wanted all of this in a space of 1000-1200 words (CFP).

This was a departure from the previous template, which allowed submission of either a 500-word abstract or a full paper. It's also a pretty unusual conference submission format, one I hadn't seen in the 7 years I've been doing this job, and based on comments about it on AoIR's mailing list (AIR-L), neither had anyone else. It was challenging for me and my panelists to produce something that kind of explained our work (but didn't have space to, really), but we did it and we were accepted and yay for us.

But as acceptances and rejections came back, AIR-L exploded starting May 30 in something that seems to me to be a paradigm skirmish (like a war, but smaller!), centering on whether the submission process had been tilted toward empiricist work at the expense of the theoretical.

Conflict between paradigms is an area of interest to me in general, but what I found particularly interesting was the incidence of people making incommensurable arguments—using different criteria but not realizing they were on different planes. This is something that I discussed (and attempted to resolve) in the field of Communication in a piece I published last year in Communication Theory, which articulated a model akin to intersectionality for disciplines, allowing similarity and difference on multiple research axes (ontology/epistemology, methodology, axiology) rather than grouping people by a single characteristic a la identity politics.

So what I’d like to do here is explore that disconnect, but also the ways in which the conversation reinforced empiricist projects as “real” research and perpetuated a quite normative definition of rigor. I’m going to do so in a way that names no names and uses no direct quotes. You can go look up the archives if you want—they’re open—but there are way too many people for me to ask permission of all of them and it’s not strictly public, so I’m going to err on the side of caution.

AoIR describes itself as "an academic association dedicated to the advancement of the cross-disciplinary field of Internet studies. It is a member-based support network promoting critical and scholarly Internet research independent from traditional disciplines and existing across academic borders," but this inclusiveness, cross-disciplinarity, and border-crossing were troubled by the introduction of the new submission format.

First, it was quite clear in the debate that non-social scientists felt alienated by the template. Some said they had trouble cramming what they did into it, and others said they hadn’t submitted at all because they couldn’t figure out how to explain their work on its terms.

And emails to the list suggested that some researchers were in fact not accepted to the conference because the format didn't accommodate them very well. Several noted that theoretical work was rejected for lacking methods (or for methods deemed insufficiently specific) when that was not an appropriate criterion of evaluation. Others specifically noted the humanities as what was disadvantaged, with one scholar pointing to the normalizing force of subheadings, charts, and diagrams built into the conference template.

There were some gestures in the debate toward a hypothetical "qualified" reviewer who could understand disciplinary difference, preserve AoIR's diversity, and not judge one paradigm by another, but mostly that reviewer seems not to have materialized. Many participants complained about being assessed on inappropriate criteria (like methods/findings in a non-social-scientific paper) or about reviewers being pedantic about the template rather than making substantive critiques. Some called for better reviewer guidelines to avoid this.

One thing that was not explicitly recognized is that ultimately a great deal of this is a question of reviewing labor. It is my understanding that endemic to conferences reviewed by submitters is an overrepresentation of junior scholars (especially grad students) in the reviewing. Senior scholars are busy or can’t be bothered, or whatever (in addition to being outnumbered)—but regardless of the reason, this has consequences for review quality.

Many of the people making these judgment calls were likely inexperienced, reviewing based on their (seemingly faulty) sense of the rules or on the paradigm in which they were trained rather than on a developed gut instinct for good work across types of research (which I feel like I can say now because I have at least partially developed that instinct). This is the risk of inexperienced reviewers, a risk a couple of participants in the discussion also noted, and it's particularly dangerous to an internally diverse organization such as AoIR.

The response to the theory/humanities complaint was pushback from other scholars who argued that the conference has not been rigorous enough in the past and that this year’s submission process was an improvement. There was little recognition among these proponents that this conflated rigor with scientistic modes of inquiry and presentation.

The new format was held up as a way to lessen the chances of bad presentations at the conference itself by catching those who can write good abstracts or latch on to a trendy topic but then not deliver, a goal certainly worth attempting. But there was a clear divide around the relationship between incomplete research and bad research.

It was social scientists who raised the specter of the cocktail-napkin presentation or simply argued that it’s hard to assess quality on to-be-completed research. The other camp contended that saying the work had to be complete in February or March to present it in October seemed to exclude a lot of people and types of work. Members of this group pointed out that some presentations are just bad, irrespective of done-ness.

Part of the argument about rigor stemmed from the different "home" disciplines to which AoIR members belong. Social scientists reported the experience that AoIR isn't taken seriously: they mentioned being unable to get funding to attend, or that attending AoIR wouldn't "count" for tenure or other evaluations.

In large part, it seems, this has been because AoIR doesn't require full papers. In previous years, one had the option to submit a paper and then go through a review process to be published in Selected Papers of Internet Research, but one could get accepted without doing so. And indeed, one rationale for the new format was that almost no one was using the full-paper option, making it clear that AoIR was primarily an abstract-based conference, which, discussion participants noted, some disciplines see as lazy.

That interdisciplinarity can be constrained by one's "home" discipline was also clear from the disciplinary divide over conference proceedings. The folks hooked into science-type conferences like those of the Association for Computing Machinery noted the lack of proceedings as another source of disrespect and of the conference seeming less rigorous.

(This is interesting to me because I always thought of conference proceedings as what people did when they weren't good enough for a real journal publication. But my field doesn't use them, so I just had to figure out what they were for as I encountered them, and by comparison to the average journal article they're kind of shoddy.)

Ultimately, though AoIR is founded on inclusiveness of different research modes, it is clear that the template's language of methods and findings (and charts and subheads and figures) conflated the conference's push for rigor with a more scientistic mode of inquiry. That is, while people could recast that language into terms that made sense for their work, and some did, those recastings weren't always accepted in the review process.

It made me wonder what the equivalent humanities/cultural studies-centric template would look like. Can we even imagine it? “Be sure to include your theoretical framing and account for race, class, gender, and sexuality”? Related to this, one participant in the discussion noted that if she had applied her humanities criteria to a social science paper and rejected it for being boring and dated, there would be a huge outcry, but making the same assessment the other direction was totally acceptable.

Thus, it is unsurprising that, while there were certainly statements of valuing other types of research than the one any given participant did, this was an unequal sort of mutual respect. Empiricist research got to stand as “straight” or default or unmarked research (even in some statements by the humanities folks, hello internalized inequality!).

It is, after all, often the case that dominant/more socially valued groups get to stand as normative/universal. When social scientists advocated for including other types of work, they tended to ghettoize it out of normative presentation venues like paper sessions into roundtables, workshops, etc.

Of course, there was also some devaluation going the other way, with the humanities proponents concerned about the danger of producing dated research by talking about something that happened a year ago on a rapidly-changing Internet. One wondered what the point was of watching a paper that is going to be published in the next month or two.

As a whole, the AoIR debate points to two sides of a single concern: if the research is closed (completed), and the structure for participation is closed (restricted), what gets shut out? While some participants were worried about research being boring or stale, others suggested bigger stakes: that this was an anti-interdisciplinary move, perhaps even a betrayal of what AoIR stands for.

This is an important question. Some modes of research are more respected than others—this is something that is currently true about the world, however much we might dislike it and seek to change it in the long term. Doing interdisciplinarity without recognizing the existence of this hierarchy produces circumstances like the scuffle that took place on AIR-L over the IR14 conference template.
