Internet

Political Data-Mining and Election 2012

by Chris Evans, UMN Law Student, MJLST Managing Editor

In “It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age,” I wrote about the compilation and aggregation of voter data by political campaigns and how data-mining can upset the balance of power between voters and politicians. The Democratic and Republican data operations have evolved rapidly and quietly since my Note went to press, so I’d like to point out a couple of recent articles on data-mining in the 2012 campaign.

In August, the AP ran this exclusive: “Romney uses secretive data-mining.” Romney has hired an analytics firm, Buxton Co., to help his fundraising by identifying untapped wealthy donors. The AP reports:

“The effort by Romney appears to be the first example of a political campaign using such extensive data analysis. President Barack Obama’s re-election campaign has long been known as data-savvy, but Romney’s project appears to take a page from the Fortune 500 business world and dig deeper into available consumer data.”

I’m not sure it’s true Buxton is digging any deeper than the Democrats’ Catalist or Obama’s fundraising operation. Campaigns from both parties have been scouring consumer data for years. As for labeling Romney’s operation “secretive,” the Obama campaign wouldn’t even comment on its fundraising practices for the article, which strikes me as equally if not more secretive. Political data-mining has always been nonpartisanly covert; that’s part of the problem. When voters don’t know they’re being monitored by campaigns, they are at a disadvantage to candidates. (And when they do know they’re being monitored, they may alter their behavior.) This is why I argued in my Note for greater transparency of data-mining practices by candidates.

A more positive spin on political data-mining appeared last week, also by way of the AP: “Voter registration drives using data mining to target their efforts, avoid restrictive laws.” Better, cheaper technology and Republican efforts to restrict voting around the country are inducing interest groups to change how they register voters, swapping their clipboards for motherboards. This is the bright side of political data-mining: being able to identify non-voters, speak to them on the issues they care about, and bring them into the political process.

The amount of personal voter data available to campaigns this fall is remarkable, and the ways data-miners aggregate and sort that data are fascinating. Individuals ought to be let in on the process, though, so they know which candidates and groups are collecting which types of personal information, and so they can opt out of the data-mining.


Obama, Romney probably know what you read, where you shop, and what you buy. Is that a problem?

by Bryan Dooley, UMN Law Student, MJLST Staff

Most voters who use the internet frequently are probably aware of “tracking cookies,” used to monitor online activity and target ads and other materials specifically to individual users. Many may not be aware, however, of the increasing sophistication of such measures and the increasing extent of their use, in combination with other “data-mining” techniques, in the political arena. In “It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age,” published in the Spring 2012 volume of the Minnesota Journal of Law, Science, & Technology, Chris Evans discusses the practice and its implications for personal privacy and voter autonomy.
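
To make the mechanics a bit more concrete, here is a toy, purely illustrative simulation of how a third-party tracker can use a single cookie ID to stitch a user’s visits to unrelated sites into one profile. The site names and interest labels below are invented; real trackers operate through embedded scripts and ad calls rather than anything this simple.

```python
import uuid
from collections import defaultdict

class ToyTracker:
    """A toy stand-in for a third-party ad server that profiles visitors."""

    def __init__(self):
        self.profiles = defaultdict(list)  # cookie ID -> list of (site, topic)

    def serve_ad(self, cookie_id, site, topic):
        """Called whenever a page embedding the tracker loads.

        Returns the cookie ID the 'browser' should store and send back next time.
        """
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())  # first visit anywhere: assign an ID
        self.profiles[cookie_id].append((site, topic))
        return cookie_id

tracker = ToyTracker()

# The same browser visits three unrelated sites that all embed the tracker.
cid = tracker.serve_ad(None, "news-site.example", "politics")
cid = tracker.serve_ad(cid, "shoe-store.example", "running shoes")
cid = tracker.serve_ad(cid, "recipe-blog.example", "gluten-free baking")

# One cookie ID now links interests the user never shared with any single site.
print(tracker.profiles[cid])
```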

Both parties rely extensively on data-mining to identify potentially sympathetic voters and target them, often with messages tailored carefully to the political leanings suggested by detailed individual profiles. Technological developments and the widespread commercial collection of consumer data, of which politicians readily avail themselves, allow political operatives to develop, retain for future campaigns, and share personal voter profiles containing a broad swath of information about online and market activity.

As Evans discusses, this allows campaigns to allocate their resources more efficiently, and likely increases voter turnout by actively engaging those receptive to a certain message. It also has the potential to chill online discourse and violate the anonymity of the voting booth, a central underpinning of modern American democracy. Evans ultimately argues that existing law fails to adequately address the privacy issues stemming from political data-mining. He suggests additional protections are necessary: First, campaigns should be required to disclose information contained in voter profiles upon request. Second, voters should be given an option to be excluded from such profiling altogether.


Censorship, Technology, and Bo Xilai

by Jeremy So, UMN Law Student, MJLST Managing Editor

As China’s Communist Party prepares for its once-a-decade leadership transition, the news has instead been dominated by the fall from power of Bo Xilai, the former head of the Chongqing Communist Party and formerly one of the party’s potential leaders. While such a fall is itself unusual, the dialogue surrounding it is also remarkable: Chinese commentators have been able to express their views while facing only light censorship.

This freedom is remarkable because of the Chinese government’s potential control over the internet, recently outlined by Jyh-An Lee and Ching-Yi Liu in “Forbidden City Enclosed by the Great Firewall: The Law and Power of Internet Filtering in China,” published in the Minnesota Journal of Law, Science & Technology. Lee and Liu explain how, early in the internet’s development, the Chinese government decided to limit users’ ability to access non-approved resources. By building a centralized network architecture, the government has been able to impose strict content-filtering controls. In conjunction with traditional censorship, this gives the Chinese government an unprecedented amount of control over what can be viewed online.

Lee and Liu argue that these technological barriers rise to the level of de facto law. Within this framework, the Chinese government’s history of censorship indicates that there are rules against criticizing the party, its leaders, or its actions.

Chinese internet reactions to the Bo Xilai case are notable because they have included criticism of all three. Posts expressing differing opinions, including those criticizing the government’s reaction and those supporting the disgraced leader, have not been taken down. Such posts have remained online even while commentary on China’s next leader, Xi Jinping, has been quickly removed. Given the Chinese government’s capacity for control and its past use of that capacity, the spread of such dissent must be intentional.

Whether this is part of a broader movement towards more openness, a calculated response by the party, or a failure of Chinese censorship technology remains to be seen. Regardless, the changing nature of the internet and technology will force the Chinese government to adapt.


Digital Privacy: Who Is Tracking You Online?

by Eric Friske, UMN Law Student, MJLST Managing Editor

From one mouse click to the next, internet users knowingly and unknowingly leave a vast array of online data points that reveal something about those users’ identities and preferences. These digital footprints are collected and exploited by websites, advertisers, researchers, and other parties for a multitude of commercial and non-commercial purposes. Despite growing awareness by users that their online activities do not simply evaporate into the ether, many people are unaware of the extent to which their actions may be visible, collected, or used without their knowledge.

Scholars Omer Tene and Jules Polonetsky, in their article “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” discuss the various online tracking technologies that industry has used to document and analyze these digital footprints, and argue that policymakers should address the underlying value question regarding the benefits of online data usage and its inherent privacy costs.

With each new technological advance that seeks to make us more connected with the world around us, our daily lives and our online presence have become increasingly intertwined. Ordinary users have become more aware that their online activities lack the anonymity they once thought existed. Despite this awareness, however, many users may not know what personal information is available online, how it got there, or how to prevent its collection. Moreover, some tracking services are undertaking efforts to prevent users from evading them, even when those users intentionally attempt to keep their online activities private.

Corporations have begun to recognize the importance of providing consumers with the opportunity to choose what information they wish to share while on the internet. For example, last May, Microsoft announced that Internet Explorer 10 will have a “Do Not Track” flag on by default, stating that it believes “consumers should have more control over how information about their online behavior is tracked, shared and used.” Not unexpectedly, the Interactive Advertising Bureau, a global non-profit trade association for the online advertising industry, denounced Microsoft’s move as “a step backwards in consumer choice,” although some have argued that these pervasive tracking practices are actually robbing individuals of free choice. It should perhaps be noted that the popular internet browser Firefox already possesses a Do Not Track feature, though it is not engaged by default, and Google has stated that it will include Do Not Track support for Chrome by the end of the year.
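
For readers curious what the “flag” actually is, Do Not Track amounts to a single HTTP request header. The sketch below is my own illustration, not Microsoft’s code: it attaches the header by hand (using the Python requests library) and sends it to httpbin.org, a public service that echoes request headers back, so you can see exactly what a website receives. A browser with the setting enabled simply adds the same line to every request automatically.

```python
import requests

# The entire "Do Not Track" signal is one request header: "DNT: 1".
# Browsers with the setting enabled attach it automatically; here we add it
# by hand so the echoed response shows what a web server actually sees.
response = requests.get(
    "https://httpbin.org/headers",
    headers={"DNT": "1"},  # 1 = user prefers not to be tracked
)
print(response.json()["headers"])
```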

Regardless, while academic and political discussions on how to address these concerns continue to simmer, internet users who desire privacy must learn how to protect themselves in an online environment replete with corporations relentlessly trying to scavenge every morsel of information they leave behind. That is no easy task when tracking is so prevalent.


Don’t Track Me! – Okay Maybe Just a Little

by Mike Borchardt, UMN Law Student, MJLST Managing Editor

Recent announcements from Microsoft have helped to underscore the current conflict between internet privacy advocates and businesses that rely on online tracking and advertising to generate revenue. Microsoft recently announced that “Do Not Track” settings will be enabled by default in the next version of its web browser, Internet Explorer 10 (IE 10).

As explained by Omer Tene and Jules Polonetsky in their article in the Minnesota Journal of Law, Science & Technology 13.1, “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” the amount and type of data that web services and advertisers collect on users have grown as quickly as the internet itself. (For an excellent overview of the various technologies used to track online behavior, and the variety of information they can obtain, see section II of their article.) The ability of online services to supply their products to users for free depends heavily on this data tracking and the advertising revenue it generates. Yet even though many online services depend on this data collection for revenue, users and privacy advocates remain wary of how much data is being collected, how it is used, and who has access to it.

It is in response to this growing unease about the amount and types of user data being collected that Microsoft has added its new Do Not Track default. (All other major browsers are set to include Do Not Track settings, with Google’s Chrome the last to announce them; those settings, however, will likely not be enabled by default.) Yet this may not be the boon for user privacy that some have hoped for. Do Not Track is a voluntary standard developed by the web industry; it relies on a browser header to tell advertisers not to track the user (for a more in-depth description of how the technology works, see pages 325-26 of Tene and Polonetsky’s article). This is where the problem arises: websites can ignore the header and track users anyway. Part of the industry’s Do Not Track standard is that users must opt in to Do Not Track; it cannot be enabled by default. In response to Microsoft’s default setting, Apache (the most common web server software) has been updated to ignore the Do Not Track header sent by IE 10 users. With one side claiming that “Microsoft deliberately violate[d] the standard,” and the other claiming that the industry is ignoring privacy for profit, the conflict over user data collection seems poised to continue.
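
To see why the standard has no teeth, consider a minimal, hypothetical server-side sketch (written with the Flask framework; the route, cookie name, and logic are invented for illustration). Whether a tracking cookie gets set is just an if-statement the site itself controls, which is why a server patched the way Apache was can simply behave as if the header were not there.

```python
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def page():
    resp = make_response("<html>content</html>")

    # The browser's Do Not Track preference arrives as a plain request header.
    if request.headers.get("DNT") == "1":
        # A cooperative site stops here and sets no tracking cookie,
        # but nothing in the standard can force this branch to run.
        return resp

    # Otherwise assign (or reuse) a pseudonymous ID that ties visits together.
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```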

A variety of alternatives to the industry-implemented Do Not Track settings have been proposed. As the conflict continues, one of the most commonly proposed solutions is legislation. Privacy advocates and web companies, however, have very different views about what Do Not Track legislation should cover. (For differing viewpoints, see “‘Do Not Track’ Internet spat risks legislative crackdown.”) Tene and Polonetsky argue that a value judgment must be made: policymakers must evaluate whether the “information-for-value business model currently prevailing online” is socially acceptable or “a perverse monetization of users’ fundamental rights,” and create Do Not Track standards accordingly. Unfortunately, this choice between the generally free-to-use websites and web services users have come to expect on one hand, and personal privacy on the other, does not seem like much of a choice at all.

There are, however, alternatives to the standard Do Not Track proposals. One of the best is to allow the collection of user data to continue but to limit by law the ways in which that data can be used. Tene and Polonetsky recommend a variety of policies that could help assuage users’ privacy concerns while allowing web services to continue generating targeted advertising revenue. Some of their proposals include limiting the use of user data to advertising and fraud prevention, preventing the use of data collected from children, anonymizing data as much as possible, limiting the retention of user data, limiting transmission of data to third parties, and clearly explaining to users what data is being collected about them and how it is being used. Many of these options have been proposed before, but used in conjunction they could provide an acceptable alternative to the strict Do Not Track approach proposed by privacy advocates, while still allowing the free-to-use, advertising-based web to thrive.
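
As a rough, hypothetical sketch of what two of those proposals, retention limits and anonymization, could look like in code (the record fields and the 90-day window below are my own invention, not drawn from Tene and Polonetsky’s article):

```python
import hashlib
from datetime import datetime, timedelta

RETENTION_WINDOW = timedelta(days=90)  # hypothetical retention limit

def anonymize(record):
    """Replace the direct identifier with a one-way hash and drop raw location."""
    out = dict(record)
    # A keyed hash or tokenization would be stronger in practice; this is a sketch.
    out["user_id"] = hashlib.sha256(record["user_id"].encode()).hexdigest()[:16]
    out.pop("precise_location", None)  # strip a field deemed too sensitive to keep
    return out

def prune_and_anonymize(records, now):
    """Discard records older than the retention window, then anonymize the rest."""
    fresh = [r for r in records if now - r["collected_at"] <= RETENTION_WINDOW]
    return [anonymize(r) for r in fresh]

now = datetime(2012, 10, 1)
records = [
    {"user_id": "alice@example.com", "collected_at": now - timedelta(days=10),
     "precise_location": (44.97, -93.23), "ad_segment": "outdoor gear"},
    {"user_id": "bob@example.com", "collected_at": now - timedelta(days=200),
     "ad_segment": "luxury cars"},
]
# Only the recent record survives, and only in anonymized form.
print(prune_and_anonymize(records, now))
```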


Mashing up Copyright Infringement with the Beastie Boys and Ghostface Killah

by Eric Maloney, UMN Law Student, MJLST Staff

Apparently, Bridgeport Music has never seen the episode of Chappelle’s Show declaring that “Wu-Tang Clan ain’t nothing to [mess] with.” The record label has decided to sue the group, specifically artists Raekwon, Ghostface Killah, Method Man, and producer RZA, for reportedly using a sample of a 1970s recording originally by the Magictones on a 2009 Raekwon album track. The portion of the recording allegedly used in producing the song was sped up to change the sample’s key from E minor to F# minor and constituted four measures of the original tune. The sample was only ten seconds long.
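
As an aside on the “sped up” detail: raising a recording from E minor to F# minor by playback speed alone is a two-semitone shift, and the arithmetic is straightforward. The worked example below is my own illustration, not anything from the complaint, and the reports do not say whether the ten-second figure was measured before or after the shift.

```python
# Speeding up a recording raises its pitch and shortens it by the same factor.
# E to F# is two semitones, and each semitone multiplies frequency by 2 ** (1/12).
semitones = 2
speed_factor = 2 ** (semitones / 12)   # about 1.122, i.e. roughly 12% faster
sample_seconds = 10.0                  # the roughly ten-second sample at issue
print(f"speed-up factor: {speed_factor:.3f}")
print(f"a {sample_seconds:.0f}-second passage plays in about "
      f"{sample_seconds / speed_factor:.1f} seconds after the shift")
```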

Wu-Tang Clan isn’t the only group currently under scrutiny for its use of sampling. The Beastie Boys are also facing an infringement suit for allegedly sampling two songs by a group called Trouble Funk in four of their tracks from the late 1980s. This suit is different in at least one respect from the Bridgeport matter: the record company, Tuf America, will have to not only show infringement but also explain why the suit shouldn’t be barred by the statute of limitations, more than 20 years after the Beasties released these songs.

These lawsuits are hardly novel; hip-hop and electronica artists have been subject to infringement liability for years now due to the rise in their use of digital sampling methods. The Beastie Boys especially have been repeatedly sued for using unauthorized samples. (See, e.g., Newton v. Diamond, 204 F. Supp. 2d 1244 (C.D. Cal. 2002).) For a great summary of the history of sampling in music production and of court cases regarding infringement, see Professor Tracy Reilly’s article “Good Fences Make Good Neighboring Rights” in the Winter 2012 issue of the Minnesota Journal of Law, Science & Technology.

As Professor Reilly indicates in her article, the latest federal appeals court to directly address this issue has taken a hard-line stance: appropriation of any part of a sound recording is a physical taking, no matter how minute the sample may be. That case, Bridgeport Music v. Dimension Films, featured the same plaintiff record company that is now suing Wu-Tang Clan. The Sixth Circuit Court of Appeals held that there is no de minimis protection for the use of small samples; instead, any unauthorized, direct sample of a protected recording subsequently used by an artist constitutes infringement.

The risk courts run in following such a bright-line doctrine is falling behind trends in culture and technology by dealing so harshly with those who choose to sample copyrighted works. So-called “mash-up” artists, such as Greg Gillis of Girl Talk, make a living by exclusively sampling copyrighted works and then distributing the results for free under the penumbra of “fair use.” His sampling is both notorious and fairly obvious; there are websites dedicated to tracking which samples he chooses to use in his productions. Gillis is still able to make a living by touring and selling merchandise, while also speaking out against current copyright infringement standards.

As digital sampling techniques continue to improve and the demand for “mash-up” artists grows, the Bridgeport ruling will start to look dated in the face of the reality of modern-day music production. This is especially true in the case against the Wu-Tang Clan, where it appears somewhat absurd to condition liability on such a small amount of sampled music. For now, though, artists will need to stay on their toes and be sure to license any samples, no matter how minimal, or face the consequences. This doctrine may stifle creativity for the time being, but perhaps all this legal wrangling will give artists emotional fodder for future compositions. Either way, as more of these suits are brought, it is becoming evident that greater clarity on the issue is needed, either from Congress or the courts. A better balance between encouraging creativity and protecting copyrights than the one Bridgeport gives us can hopefully be found as this area of law continues to evolve.


Google Glass: Augmented Reality or ADmented Reality?

by Sarvesh Desai, UMN Law Student, MJLST Staff

Google glasses . . . like a wearable smartphone, but “weighing a few ounces, the sleek electronic device has a tiny embedded camera. The glasses also deploy what’s known as a ‘heads-up display,’ in which data are projected into the user’s field of vision on a small screen above the right eye.”

The glasses are designed to provide an augmented reality experience in which (hopefully useful) information can be displayed to the wearer based on what the wearer is observing in the world at that particular moment. The result could be a stunning and useful achievement, but as one commentator pointed out, Google is an advertising company. The upshot of Google glasses, or “Google Glass” as Google prefers to call them (since they actually have no lenses), is that advertisements that follow you around and update continuously as you move through the world may soon be a reality.

As the digital age advances, more of our movements, preferences, and lives are incessantly tracked. A large portion of the American population carries a mobile phone at all times, and as iPhone users learned in 2011, a smartphone is not only a handy way to keep Facebook up to date but also a potential GPS tracking device.

With technologies like smartphones, movement data is combined with location data to create a detailed profile of each person. Google Glass extends this personal profile even further by recording not only where you are but what you are looking at. This technology makes the kind of advertising displayed in the hit movie Minority Report a reality, while also creating privacy issues that previously could not even be conceptualized outside science fiction.

Wondering what it might look like to wander the world as context-sensitive advertisements flood your field of vision? Jonathan McIntosh, a pop culture hacker, has the answer. He released a video titled ADmented Reality in which he placed ads onto Google’s Project Glass promotional video, demonstrating what the combination of the technology, tracking, and advertising might yield. McIntosh discussed the potential implications of such technology on the ABC News Technology Blog. “Google’s an ad company. I think it’s something people should be mindful of and critical of, especially in the frame of these awesome new glasses,” McIntosh said.

As this technology continues to improve and becomes a more integrated part of our lives, the issue of tracking becomes ever more important. For a thorough analysis of these issues, take a look at Omer Tene and Jules Polonetsky’s article in the Minnesota Journal of Law, Science & Technology, “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising.” The article covers current online tracking technologies, the uses of tracking, and recent developments in the regulation of online tracking. The issues are not simple, and there are many competing interests involved: efficiency vs. privacy, law enforcement vs. individual rights, and reputation vs. freedom of speech, to name a few. As this technology inexorably marches on, it is worth considering whether legislation is needed and, if so, how it will balance those competing interests. In addition, which values do we consider most important and worth preserving, even at the risk of hindering “awesome new” technology?


FBI Face Recognition Concerns Privacy Advocates

by Rebecca Boxhorn, Consortium Research Associate, Former MJLST Staff & Editor

Helen of Troy’s face launched a thousand ships, but yours might provide probable cause. The FBI is developing a nationwide facial recognition database that has privacy experts fretting about the definition of privacy in a technologically advanced society. The $1 billion Next Generation Identification initiative seeks to harness the power of biometric data in the fight against crime. Part of the initiative is the creation of a facial photograph database that will allow officials to match pictures to mug shots, electronically identify suspects in crowds, or even find fugitives on Facebook. The use of biometrics in law enforcement is nothing new, of course. Fingerprint and DNA evidence have led to the successful incarceration of thousands. What privacy gurus worry about is the power of facial recognition technology and the potential destruction of anonymity.

Most facial recognition technology relies on the matching of “face prints” to reference photographs. Your face print is composed of as many as 80 measurements, including nose width, eye socket depth, and cheekbone shape. Sophisticated computer software then matches images or video to a stored face print and any data accompanying that face print. The accuracy of facial recognition programs varies, with estimates ranging from as low as 61% to as high as 95%.
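
To make the matching step a bit more concrete, here is a toy sketch of the basic idea: a face reduced to a short vector of measurements and compared against stored reference prints by distance. The three-number vectors, names, and threshold below are invented for illustration; real systems use far more measurements and far more sophisticated matching.

```python
import math

# A "face print" reduced to a toy vector of normalized measurements
# (e.g., nose width, eye-socket depth, cheekbone spread).
reference_prints = {
    "suspect_017": (0.42, 0.31, 0.77),
    "suspect_093": (0.55, 0.29, 0.64),
}

def distance(a, b):
    """Euclidean distance between two face-print vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, references, threshold=0.10):
    """Return the closest stored print, or None if nothing is close enough."""
    name, ref = min(references.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, ref) <= threshold else None

probe_print = (0.43, 0.30, 0.75)   # measurements taken from a new photo
print(best_match(probe_print, reference_prints))  # -> "suspect_017"
```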

While facial recognition technology may prove useful for suspect identification, your face print could reveal much more than your identity to someone with a cell phone camera and a Wi-Fi connection. Researchers at Carnegie Mellon University were able to link face print data to deeply personal information using the Internet: Facebook pages, dating profiles, even Social Security numbers! Although the FBI has assured the public that it only intends to include criminals in its nationwide database, this has not quieted concerns in the privacy community. The presumption of innocence does not govern the collection of biometrics. Police commonly collect fingerprints from arrestees, and California’s Proposition 69 allows police to collect DNA samples from everyone they arrest, no matter the charge, the circumstances, or eventual guilt or innocence. With the legality of pre-conviction DNA collection largely unsettled, the legal implications of new facial recognition technology are anything but certain.

It is not difficult to understand, then, why facial recognition has captured the attention of the federal government, including Senator Al Franken of Minnesota. During a Judiciary Committee hearing in July, Senator Franken underscored the free speech and privacy implications of the national face print database. From cataloging political demonstration attendees to misidentifications, the specter of facial recognition technology has privacy organizations and Senator Franken concerned.

But is new facial recognition technology worth all the fuss? Instead of tin foil hats, should we don ski masks? The Internet is inundated with deeply private information voluntarily shared by individuals. Thousands of people log on to Patientslikeme.com to describe their diagnoses and symptoms; 23andme.com allows users to connect to previously unknown relatives based on shared genetic information. Advances in technology seem to be chipping away at traditional notions of privacy. Despite all of this sharing, however, many users find solace and protection in the anonymity of the Internet. The ability to hide your identity and, indeed, your face is a defining feature of the Internet and of the utility and chaos it provides. But as Omer Tene and Jules Polonetsky note in their article “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” online advertising “fuels the majority of free content and services online” while amassing enormous amounts of data on users. Facial recognition technology only exacerbates concerns about Internet privacy by making it possible to harvest user-generated data, shared under the guise of anonymity, and give faces to usernames.

Facial recognition technology undoubtedly provides law enforcement officers with a powerful crime-fighting tool. As with all new technology, it is easy to overstate the danger of governmental abuse. Despite FBI assurances to use facial recognition technology only to catch criminals, concerns regarding privacy and domestic spying persist. Need the average American fear the FBI’s facial recognition initiative? Likely not. To be safe, however, it might be time to invest in those oversized sunglasses you have been pining after.


Social Media Evidence: Not Just an Attorney Niche

It is not breaking news that social media, once just the province of Generation Y and high-tech culture, is now as mainstream as . . . well . . . the internet. What is new is that social media issues are no longer just an interesting specialty niche for tech-savvy lawyers, but something that likely touches most attorneys’ practices.

A look at the rapid rise of appellate level cases involving social media evidence gives a hint at just how common social media evidence is becoming in civil litigation and criminal prosecution. The chart accompanying this post, while not a definitive study, shows the results of a Westlaw search for the number of appellate cases that likely involved the admission of evidence related to the major social media outlets — increasing 8-fold since 2008 and doubling in the past two years.

In separate research, eDiscovery firm “X1 Discovery” recently reported finding 674 appellate cases in 2010-2011 that mentioned social media evidence. With that many cases involving social media evidence at the appellate level, it is not unreasonable to conclude that social media evidence must be seen frequently by the lower courts.

Whether it is understanding how to authenticate a Tweet during trial, or avoiding a career-ending discovery sanction for spoliation of Facebook evidence, there is a growing need for litigators and other attorneys to understand the implications of social media for clients.

In issue 13.1 of the Minnesota Journal of Law, Science & Technology, Professor Ira P. Robbins of American University’s Washington College of Law outlines the challenges involved in authenticating social media evidence and proposes an authorship-centric approach to the authentication of such evidence. Read “Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence.”