

Earlier this week, the hashtag #middleclassbands was trending on Twitter, and since such tags tend to please my love of clever wordplay, I clicked over to see. Linguistic ingenuity did indeed abound, but what I didn’t expect to find was wild disagreement over what constitutes middle-classness.

Granted, some of this may be contortion produced by the need to be punny, but within the general categories of things middle-class folks are imagined to do and be and buy, there were what seemed to me to be downright contradictions.

Tweeters identified this group as consuming Applebee’s (which strikes me as lower on the class spectrum), hummus (which seems about right for bourgie foodie types), and foie gras (pretty upper class).


Middle class folks, the TT contended, drive station wagons and Range Rovers (things which are not like each other). They also shop at IKEA, which I for one know as a place folks go for cheap furniture (sometimes, cheap furniture that looks expensive).


The middle class can also be identified, according to tweeters, by its correct spelling, linking this class status to education.


Except when it’s pretentious.


Or pedantic.


Indeed, the only thing Twitter seemed to agree on about middle-class folks is that they work in office jobs, which I guess does confirm the death of union-enabled blue-collar middle-classness.

What’s interesting about this is not that some of these people are “right” in identifying the middle class and some are “wrong.” It’s that “middle class” doesn’t mean the same thing to everyone.

The hashtag was started by UK-based The Poke, seemingly a humor site. Thus, some of the tweets are likely from Brits, whereas others are from elsewhere (not least because there are no Applebee’s in the UK, according to the company’s site). Indeed, many of them “feel” American to me, though that may be ethnocentrism on my part.

Class varies rather dramatically between nations, to be sure, but, as my parenthetical notes above were intended to suggest, it also varies within nations. I’m not sure why I keep using the Applebee’s example, but it may well be solidly middle-class for certain populations within the US even though it strikes me as trashy (which, let there be no mistake, is not a criticism but what I like about it).

This lack of agreement about what it means to be a member of a particular class is especially important in the contemporary moment, as class is put back on the table in US politics. In the UK, class has always been a term of relevance (and means your class of birth, not your current income status, as I learned while using a demographic sheet designed with American sensibilities to collect data about a Brit), but in the US—likely due to some combination of individualist ideology and decades of anticommunist hysteria—it has not tended to be used in everyday discourse in my lifetime (though I myself see and use it a lot as a Marxist-flavored academic).

However, with the 2011 rise of Occupy Wall Street and its discourse of the 1% vs. the 99%, and with the Obama campaign’s appeals to improving the lot of the middle class rather than giving tax cuts to the wealthy, people are starting to speak the language of class again. The question is, though, which language is that?

If one’s “middle class” appeals are pitched at the foie gras and Range Rover crowd, the Applebee’s and station wagon folks will be alienated. If one calls out folks making over $200,000 a year as wealthy, that will hold for lots of parts of the country, but not for places where the cost of living is sky-high, because there those folks are decidedly middle-class.

There are pitfalls to putting class back into the discussion, then, but that should not be taken to mean we shouldn’t do it. Indeed, these pitfalls are, I think, the result of the fact that class has been off the radar for so long that there has been no opportunity to form a popular consensus about what it means.

It’s this lack of knowledge that enables a state in which, to be flippant, everyone thinks they are middle class and will someday be rich if they work hard enough, and are therefore not only opposed to redistributive policies but also inclined to vote against their own current economic interests.

So this TT’s motley composition, far from being a sign of lazy tweeting, is actually pretty revealing about how class works in the US in the contemporary moment.

Blog posts are going to be a bit patchy for a while. Though there are lots of important and interesting things happening in the world right now, not many are jumping out at me as suitable for this genre. Also, I’m going to be traveling for most of July.

On May 20, Twitter announced some “Updates to Twitter and Our Policies,” which they pushed out to all accounts—indeed, I got it three times across the two accounts I actually use and one I set up for a conference but never ended up using.

The announcement struck me as notably user-friendly compared to other companies’ policies. That is, by comparison to the way these issues are discussed in the couple of pieces from Christian Fuchs on privacy policies that I’ve read in the last few months, this seemed not so bad. Not perfect, clearly, but not as awful as it might be. In light of that, I thought I’d try to work through the changes and tease out what it was that seemed less nefarious than usual.

First, what caught my attention was that the announcement said: “We’ve provided more details about the information we collect and how we use it to deliver our services and to improve Twitter. One example: our new tailored suggestions feature, which is based on your recent visits to websites that integrate Twitter buttons or widgets, is an experiment that we’re beginning to roll out to some users in a number of countries. Learn more here.”

Now, there are some problems here—they frame taking user information as solely making the service better rather than owning up to their self-interest, and they track you when you go elsewhere with your Twitter login, which is a little Big Brother.

However, the interesting part is that they’re providing information about what they’re collecting and what they’re doing with it. I don’t expect that there’s full disclosure on their part, of course, but they are operating from the assumption that users have a right to know these things, or at least that they’ll run into PR or regulatory trouble if they don’t appear to care, and that feels like an advance, even if a small one.

Twitter also actively made information available about “the many ways you can set your preferences to limit, modify or remove the information we collect. For example, we now support the Do Not Track (DNT) browser setting, which stops the collection of information used for tailored suggestions.”
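Mechanically, as I understand it, DNT is just an HTTP request header (DNT: 1) that the browser attaches to every request; it only does anything if the service on the other end chooses to honor it. Here is a minimal sketch of what honoring it might look like server-side—the handler, user IDs, and log are my own hypothetical stand-ins, not Twitter’s actual code:

```python
def tracking_allowed(headers: dict) -> bool:
    """Honor the Do Not Track browser setting: browsers with DNT
    enabled send the HTTP header "DNT: 1" with each request."""
    return headers.get("DNT") != "1"


def handle_page_visit(headers: dict, user_id: str, url: str, log: list) -> None:
    """Record a visit for tailored suggestions only if tracking is allowed."""
    if tracking_allowed(headers):
        log.append((user_id, url))


# A DNT-enabled browser's visit is not logged; a default browser's is.
log: list = []
handle_page_visit({"DNT": "1"}, "user42", "example.com/article", log)
handle_page_visit({}, "user42", "example.com/article", log)
assert log == [("user42", "example.com/article")]
```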

In his March 1 blog post Google’s “New” Terms of Use and Privacy Policy: Old Exploitation and User Commodification in a New Ideological Skin, Fuchs critiques these sorts of moves to let people use settings and preferences to protect privacy, noting that “Opt-out options are always rather unlikely to be used because in many cases they are hidden inside of long privacy and usage terms and are therefore only really accessible to knowledgeable users. Many Internet corporations avoid opt-in advertising solutions because such mechanisms can drastically reduce the potential number of users participating in advertising.”

He is not wrong that opt-out is to the benefit of the company and that it’s often hard to figure out how to actually do so, but this feels different. Twitter pushed this out to all users, told all of them that if they want to alter how they interact with the system they can do so. This, I think, points to a similar shift to the one discussed above—either they recognize that users have a right to a say in how their data gets used, or they feel like they’re supposed to pretend to think so, but in both cases it’s not just companies doing whatever they like with impunity.

There are, of course, some larger problems here. Twitter says, “We’ve clarified the limited circumstances in which your information may be shared with others (for example, when you’ve given us permission to do so, or when the data itself is not private or personal).” This frames the issue as being about personally identifiable information, when such an understanding in fact misses the point.

As Fuchs puts it in his recent article The Political Economy of Privacy on Facebook, this kind of attitude “engages in privacy fetishism by focusing on information disclosures by users” (p. 142). That is, “the main privacy issue is not how much information users make available to the public, but rather: Which user data are used for Facebook for advertising purposes; in which sense users are exploited in this process; and how users can be protected from the negative consequences of economic surveillance on Facebook” (p. 141).

So Twitter, in assuming that everyone agrees that it’s okay to share information when it’s “not private or personal,” is working from this same framework. The set of anonymized user data is fair game to provide vital market research, not only for Twitter’s own service but for any company to whom it sells the data, or for Twitter to use in selling advertising where it can report very specifically what kind of people are getting the ad, which makes for more valuable ads (this is how Google can make money while claiming it doesn’t sell user data; contrary to Fuchs, I don’t think Google is lying about not selling it, but rather profiting from its data in this indirect way). That’s a fairly standard industry-wide assumption that Twitter has not broken from.
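To make that concrete, here is a toy sketch—entirely hypothetical data and field names on my part, not Twitter’s actual pipeline—of how aggregate reporting can tell an advertiser very specifically “what kind of people” saw an ad without handing over any individual identity:

```python
from collections import Counter

# Hypothetical per-impression records; note there are no names or handles.
impressions = [
    {"age_band": "25-34", "region": "US-Midwest", "ad": "ad_1"},
    {"age_band": "25-34", "region": "US-Midwest", "ad": "ad_1"},
    {"age_band": "35-44", "region": "UK-London", "ad": "ad_1"},
]

# The advertiser-facing report: counts per demographic bucket, with no
# personally identifiable information attached.
report = Counter(
    (imp["age_band"], imp["region"]) for imp in impressions if imp["ad"] == "ad_1"
)
print(report)  # Counter({('25-34', 'US-Midwest'): 2, ('35-44', 'UK-London'): 1})
```

The data never has to leave the platform for this to be lucrative: the advertiser pays for the targeting and the report, not for the raw records.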

But Twitter seems not to go as far as Google did in the recent Privacy Policy update that Fuchs, among others, found so objectionable. In that update, Google said: “We may combine personal information from one service with information, including personal information, from other Google services – for example to make it easier to share things with people you know. We will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent.”

Opt-in, of course, is better than opt-out, but the idea that your personal information would ever get combined with data on your searching and click-throughs is pretty scary, and fortunately that’s one road Twitter seems not to have gone down (even if, as is likely, that’s only because their service doesn’t lend itself to collecting the same sorts of data).

Similarly, Google likely thinks of itself as taking a stand for privacy when it says that “When showing you tailored ads, we will not associate a cookie or anonymous identifier with sensitive categories, such as those based on race, religion, sexual orientation or health,” but as Fuchs’s blog post points out, “algorithms can never perfectly analyze the semantics of data. Therefore use of sensitive data for targeted advertising cannot be avoided as long as search queries and other content are automatically analyzed.”
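Fuchs’s point is easy to see with a toy example of my own (not Google’s actual classifier): any automated filter meant to keep “sensitive” categories out of targeting is approximating semantics with rules, and a crude keyword blocklist fails in both directions at once:

```python
# A naive blocklist meant to keep health-related queries out of ad targeting.
SENSITIVE_TERMS = {"cancer", "hiv", "pregnancy"}

def is_sensitive(query: str) -> bool:
    """Flag a query if it contains any blocklisted keyword."""
    return any(term in query.lower() for term in SENSITIVE_TERMS)

print(is_sensitive("chemo side effects"))       # False: health-related, but missed
print(is_sensitive("cancer zodiac horoscope"))  # True: astrology, but blocked
```

Real systems are far more sophisticated than this, of course, but the underlying problem—that meaning doesn’t reduce cleanly to surface features—remains.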

Ultimately, Fuchs makes the bold claim that “the main form of privacy on Facebook is the opacity of capital’s use of personal user data based on its private appropriation” (p. 147), and I think it’s certainly suggestive, even provocative. I’m willing to claim that Twitter, in pushing out information and making it easy to understand what it does with user data, is doing a better job than some of its contemporaries.

But that is only because the bar is so low.

Privacy has been a hot topic in the last few years, due largely to the confluence of digital media that travels easily with social platforms that encourage users to input all the information about their lives. But the term gets thrown around and used to mean keeping all kinds of things private from all kinds of people.

I read something recently that made an offhand remark about privacy and privatization while citing Amitai Etzioni‘s 1999 book The Limits of Privacy (I don’t know what I was reading and I really did go looking but I can’t find it again; however, I’m fairly certain it was either Saskia Sassen or Nick Dyer-Witheford, based on when it was in the semester, and the latter seems the more likely suspect).

That reading, whatever it was, sparked me to think about the relationship between privacy and privatization, public and publicity, and what we talk about when we talk about privacy. (Lapsed English major FTW with the Raymond Carver reference!)

It seems that people are most concerned with interpersonal privacy. They don’t want their mom to know they got totally wasted last weekend, their employer to know they lied to go to a party, or potential stalkers nearby to know their location.


They are, to a lesser degree, concerned with privacy from the government. Post-9/11 surveillance in the name of counterterrorism has gotten some pushback—certainly, SumofUs.org wants me to petition Facebook not to give its members’ information to the government without a warrant, which seems both important and like a drop in the proverbial bucket o’ surveillance—but the sheer trauma of that event was sufficient to convince at least some people of what Etzioni contended in 1999 that Americans generally steadfastly refused to accept: that public goods (like safety and health) sometimes require violating privacy (p. 2).

However, there is markedly less concern about privacy when it comes to corporations. As Etzioni put it, “although our civic culture, public policies, and legal doctrines are attentive to privacy when it is violated by the state, when privacy is threatened by the private sector our culture, policies, and doctrines provide a surprisingly weak defense” (p. 10).

There are exceptions, of course, as shown by discomfort with the fact that Target can figure out that women are pregnant based on their purchases and will send them coupons for pregnancy and baby items, often before they’ve told anyone in their immediate families. By and large, though, protecting privacy from corporations doesn’t generate a lot of attention among the general public.

Likely this is at least somewhat because most people are not aware of how Facebook or Google or any of the other big Internet companies works. They get, to use Dallas Smythe‘s famous terminology, a free lunch, and they think that’s in exchange for the advertisements they can freely ignore, so it seems like a good deal.

However, what the company really gets from them is not their attention but the traces of their lives—demographics, location, social relationships, likes, hobbies, what they click on, what other websites they visit, and so on—left behind every time they do anything, much like footprints, fingerprints, or dead skin cells in the physical world.

(For a detailed critique of Google’s use of data, which forms some of my background knowledge here, see Christian Fuchs’ Google’s “New” Terms of Use and Privacy Policy: Old Exploitation and User Commodification in a New Ideological Skin.)

However, I suspect that even if people did know how it worked, protecting privacy from corporations still wouldn’t get very many people’s dander up, for two reasons:

  1. Privacy is imagined in relation to publicity, such that as long as the information is impersonal, aggregate, and not released to the public, it seems compatible with privacy; and
  2. The strong pro-privatization ethos in much of U.S. public discourse has tended to operate with the assumption that the private sector is in some sense controlled by the public through competition and people voting with their dollars.


Etzioni described this as “the privacy paradox: Although they fear Big Brother most, they need to lean on him to protect privacy better from Big Bucks” (p. 10), but I think that’s no longer true (and indeed I’m skeptical that it ever was). That is, though multinational capital is capable of overpowering any other force on the planet, with the possible exceptions of the U.S., E.U., and Chinese governments should they suddenly decide to stand up to it, there’s a persistent and mistaken belief that “the market” can keep it in check and thereby keep customers in the driver’s seat as companies compete for their dollars.

The real paradox, then, is that an ultimate belief in consumer sovereignty leads consumers to quite freely give up sovereignty over their own data. Or: we don’t want our data to be publicized, and we don’t want the public sector to intervene, but, as Safiya Noble points out, these Web technologies are themselves framed as a “public good,” which constrains how (and how much) they can be critiqued.

To say: “Obviously it’s good! It gives people access to information and social connection! For free! Well yeah, maybe it also takes, but it’s worth it! And my information is still private!” takes some pretty complex mental gymnastics and willful ignorances, and the fact that those contortions have become unremarkable is actually quite remarkable.

This week, I’ve been provoked into critiquing the casual ease with which people who, by all indications, ought to understand how to avoid stereotyping nonetheless reproduce the reduction of groups of people to the lowest-common-denominator stereotype.

Now, I have myself been nailed for this. I wrote in a response paper for my Ethnic Studies 10AC course that “when I think Asian, I don’t think turban” as part of my discussion of the ways Indians get elided. I meant this as a commentary on other people reducing Sikhs to turbans, but that wasn’t clear, and I got a very snarky comment in the margin from my TA. My fault for not being precise. Bad 17-year-old me!

I want to distinguish the objects of my ire here from the Unintentionally Hilarious Figure version of stereotyping, where people had actually found a pattern among groups of people but then just reported it without the necessary commentary or critique:

[Figure from the Atlantic]

(I’m pretty sure this got from the Atlantic to me via @anetv, but Twitter’s extreme non-searchability precludes verifying that. I’ll give her credit anyway.)

Instead, what I’m interested in is the ways that knowledgeable people, generating a name or an image, use extremely loaded iconography that reproduces stereotypes when they don’t have to. They’re starting from scratch—with the acknowledgment that “scratch” is the set of ideas already swirling around in culture about the objects they’re describing—and yet they deploy these stereotyped ideas, seemingly without sufficient thinking-through.

This first came to my attention when I was forwarded a call for papers: From Veiling to Blogging: Women and Media in the Middle East. This was a goodly while ago now, but it sat there in my inbox until quite recently. The subject is not my area of expertise, so I wasn’t going to submit and should probably have just deleted it. But every time I came to it, I just got mad and tempted to fire off a reply to the listserv about it. I didn’t, because I burn enough bridges on a day-to-day basis without resorting to nuclear tactics, but it was really frustrating.

What was particularly problematic about it was that the people doing the special issue should have known better. The general public discourse around this may well be still about “white people saving brown women from brown men”—Gayatri Chakravorty Spivak specified men doing the saving in the case of sati, but the way some feminists have picked up the veil and run with it makes it an equal opportunity formula—but academics should really know better, because “Can the Subaltern Speak?” is as old as I am.

And indeed the organizers did, in fact, know better, because the actual text of the CFP started out by framing the special issue as a critique of precisely the way that “Middle Eastern women have traditionally been viewed as weak and submissive, passively accepting male authority and leadership rather than seeking to be a leader in their own right” as well as how “women of the Middle East have been portrayed as helpless creatures who are often hidden behind the veil, quietly waiting to be liberated.”

It’s therefore baffling to me why on earth they’d frame the topic as “from veiling to blogging.” Why imply that’s a chronological shift in “women and media in the Middle East” rather than (as they probably intended) in the thinking on women and media in that region? Why redeploy the veil at all, given the enormous risks of re-instantiating the very discourse they’re attempting to dispute?

The second entry in the “wow, you didn’t think that through” file, and the one that solidified my determination to write this blog post, comes from reading the article Minimalist posters explain complex philosophical concepts with basic shapes, which I got from @mikemonello.

So there I am, scrolling down, not finding the geometric shapes particularly illuminating—the black and white X for Nihilism, sure, but many of the other Venn diagram-looking ones didn’t strike me as the “surprisingly simple and accessible package” the article’s introduction had promised—when I get to Hedonism and come to a screeching halt.


Really? A pink triangle for Hedonism? What decade is this that we’re still reinforcing the idea that gay sex is about irresponsible pleasure-seeking and that gay folks have a worldview in which, as the poster-makers describe Hedonism, “Pleasure is the only intrinsic good. Actions can be evaluated in terms of how much pleasure they produce”? I mean, yes, clearly that’s the world Rick Santorum and other far-right ideologues live in, but the rest of us get that homos are no more or less irresponsible in their pleasure-seeking than anyone else.

If these are people who have enough grasp on philosophy to make posters summarizing it, they should be no strangers to sophisticated thinking. And they should therefore have enough intelligence and sense of the world to think of something considerably less reductive. Like the veiling example, some people may not know better, but these people should.

And I guess that’s the issue. How will the general public know any better if those of us who do aren’t more careful in how we communicate, to each other (the CFP) and to people in general (the posters)?

This week’s post is also a cross-posting of something I put up elsewhere, this time a dialogue with Rayvon Fouché over at the collaborative blog project Culture Digitally.

Go check it out! (How) Have Technological Shifts Changed Being a Sports Fan?