
What Does It Mean to Be Human?

by Mike Walls, UMN Law Student, MJLST Staff

What does it mean to be human? Where does our conception of life and death come from? Scholars and writers alike have wrestled with these questions for centuries. Our understanding of the term “human” is often embedded in the culture we grew up in. For some, being human may simply mean the physical embodiment of our soul; to them, the human form is no more than a transitory stage in the soul’s celestial journey. For others, the term is defined precisely by our genetic makeup, chromosome for chromosome, allele for allele. Our legal understanding of “human,” however, has been especially difficult to discern, and states have taken many different approaches.

In his article, “Defining the Essence of Being Human,” Professor Efthimios Parasidis discusses various states’ interpretations of “human.” Parasidis discusses Ohio’s interpretation of human life as beginning when a fetal heartbeat can be detected. For Ohio citizens, life begins with the beating heart, but it is unclear what this definition means for heart-related anomalies in adult life, such as a person whose heart temporarily ceases to pump blood. Nebraska focuses on whether pain can be detected in the fetus, although its Act glosses over individuals who are incapable of experiencing pain and fetuses with delayed sensory development. Mississippi’s unsuccessful amendment attempted to define “person” as “every human being from the moment of fertilization, cloning, or the equivalent thereof,” which left the door open to further criticism. Should splicing human genes with animal genes necessarily exclude human-like organisms from our understanding of “human”? Would a “manimal” have been covered under the Mississippi law? Elsewhere around the world, it appears that introducing animal genes into the human form is forbidden, whereas introducing human genes into animals is sometimes permitted.

Parasidis’ discussion of various states’ conceptions of human life involving pain, heartbeats, and fertilization led him to attempt to answer the question: “What distinguishes humans from other species?” His answer fell into two categories: the anthropological record (Homo sapiens, Homo neanderthalensis, etc.) and genetics. He concluded that both explanations are vague (or at least that their boundary lines do more to raise serious questions than to console skeptics).

As I ponder Parasidis’ article as a grown-up, the kid in me has found it difficult to set aside my childhood curiosity for science fiction. What wisdom could I take from science fiction without getting sidetracked? I turned to Dr. Moreau for answers. In 1896, H.G. Wells published The Island of Dr. Moreau, the story of a mad scientist determined to create half-men, half-animals through vivisection. Wells wrote the novel without knowledge of genetics or the modern discoveries in anthropology and human origins. His interpretation of human and animal is both poetic and informative, and his philosophy of scientific inquiry plagued by moral dilemmas speaks through Dr. Moreau.

Astonished by the bestial creatures he saw on Dr. Moreau’s island, the weary naturalist Prendick viewed Moreau as a heartless scientist, detached from the ethical world we live in. In my favorite chapter, “Doctor Moreau Explains,” Moreau lays out his unadorned view of the commonalities between man and animal. After inserting a blade into his own thigh, Dr. Moreau states the following:

“Then I am a religious man, Prendick, as every sane man must be. It may be I fancy I have seen more of the ways of this world’s Maker than you–for I have sought his laws, in my way, all my life, while you, I understand, have been collecting butterflies. And I tell you, pleasure and pain have nothing to do with heaven or hell. Pleasure and pain–Bah! . . . This store which men and women set on pleasure and pain, Prendick, is the mark of the beast upon them, the mark of the beast from which they came. Pain! Pain and pleasure–they are for us, only so long as we wriggle in the dust . . . .”

At one extreme, Dr. Moreau believed that anything fathomable under the laws of science (and thus, to him, created by God) was permissible scientific inquiry. To him, pain is unmistakably detached from religion–rather, it is incidental to experimentation. Moreover, Dr. Moreau believed pain is simply whatever we make of the sensation (“The capacity for pain is not needed in the muscle, and only here and there over the thigh is a spot capable of feeling pain. Pain is simply our intrinsic medical adviser to warn us and stimulate us.”). Moreau’s religion is his scientific inquiry, completely unfettered by ethical obligations. At the other extreme sits Nebraska, for which pain is the dividing line between life and death. But for Nebraska’s legislators, is the criterion of inflicting “pain” on a fetus any more a medical description than a moral obligation? Dr. Moreau and Professor Parasidis would probably argue that simply detecting pain is a smokescreen for anti-abortionists, and that treating pain as a medical factor that necessarily dictates abortion rights only fogs the issue. Like Professor Parasidis, I struggle to understand some states’ preemptive abortion policies, which fail to separate conceptions of human life (when pain is felt) from the legal obligations owed to the individual (informed by science).

The point of all this is rather simple. No matter how we decide what is human, or when life begins, public policy should inform how exacting our legal definition of human is. Dr. Moreau wouldn’t sacrifice scientific inquiry for the ethical norms of others; to him, everyone else must reconcile their moral differences with science. Professor Parasidis argues that when legislators use descriptive characteristics (like pain) in their legal definitions, they should also consider other policy implications. Besides the fetus, who else is affected? By redirecting our focus to science, we may free ourselves from the biases currently clouding our reproductive rights jurisprudence, and potentially answer the question: what is the essence of being human?


Target Data Security Breach: It’s Lawsuit Time!

by Jenny Warfield, UMN Law Student, MJLST Staff

On December 19, 2013, Target announced that it had fallen victim to the second-largest security breach in U.S. retail history. While initial reports indicated the hack compromised only the credit and debit card information (including PINs and CVV codes) of 40 million customers, recent findings revealed that the names, phone numbers, mailing addresses, and email addresses of 70 million shoppers who visited between November 27 and December 15 had also been stolen.

As history has proved time and again, massive data security breaches lead to lawsuits. When Heartland Payment Systems (a payment card processing service for small and mid-sized businesses) had its information on 130 million credit and debit card holders exposed in a 2009 cyber-attack, it faced lawsuits by banks and credit card companies for the costs of replacing cards, extending branch hours, and refunding consumers for fraudulent transactions. These lawsuits have so far cost the company $140 million in settlements (with litigation ongoing). Similarly, when TJX Companies (parent of T.J. Maxx) had its systems hacked in 2007, the breach cost the company $256 million in settlements.

Target currently faces at least 15 lawsuits in state and federal court seeking class action status, as well as several other suits by individuals across the country. Common themes among the claimants are that 1) Target failed to properly secure customer data (more specifically, that Target did not abide by the Payment Card Industry Data Security Standard, “PCI DSS,” promulgated by the PCI Security Standards Council); 2) Target failed to promptly notify customers of the security breach in violation of state notification statutes, preventing customers from taking steps to protect against fraud; 3) Target violated the federal Stored Communications Act; and 4) Target breached its implied contracts with its customers.

A quick review of past data breach cases reveals that these plaintiffs face an uphill battle, especially in the class-action context. While financial institutions and credit card companies can point to pecuniary damages in the form of costs associated with card replacements and customer refunds for fraudulent transactions (as in the TJX and Heartland cases), the damages suffered by consumer plaintiffs in these cases are usually speculative. Not only are customers almost always refunded for transactions they did not make, but it is also unclear how to value the loss of information like home addresses and phone numbers absent evidence that such information has been used to the customer’s detriment. As a result, almost all of the class action suits brought against companies over cyber-attacks have failed.

However, the causes of the cyber-attack on Target are still unclear, and it may be too early to speculate on Target’s liability. Target is currently being investigated by the DOJ (and potentially the FTC) for its role in the data breach while also conducting its own investigation in partnership with the U.S. Secret Service. In any event, affected customers should take advantage of Target’s year-long free credit monitoring while waiting for more facts to unfold.


Can I Keep It Private? Privacy Laws in Various Contexts

by Ude Lu, UMN Law Student, MJLST Articles Editor

Target Corp., the second-largest retailer in the nation, announced to its customers on Dec. 20, 2013, that its payment card data had been breached. About 40 million customers who shopped at Target between Nov. 27 and Dec. 15, 2013, using credit or debit cards are affected. The stolen information includes customers’ names, credit or debit card numbers, and card expiration dates. [Update: The breach may have affected over 100 million customers, and additional kinds of information may have been disclosed.]

This data breach stirred public discussion about data security and privacy protections. Federal Trade Commission (FTC) Commissioner Maureen Ohlhausen said on Jan. 6, during a Twitter chat, that the event highlights the need for consumer and business education on data security.

In the U.S., the FTC’s privacy protection enforcement runs on a “broken promise” framework: the FTC enforces privacy protection according to what a business entity has promised its customers. Privacy laws have taken on increasing importance in the information age.

Readers of this blog are encouraged to explore the following four articles published in MJLST, discussing privacy laws in various contexts:

  1. Constitutionalizing E-mail Privacy by Informational Access, by Manish Kumar. This article highlights the legal analyses of email privacy under the Fourth Amendment.
  2. It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age, by Chris Evans. This article explores the unique threats to privacy protection posed by political data-mining.
  3. Privacy and Public Health in the Information Age: Electronic Health Records and the Minnesota Health Records Act, by Kari Bomash. This article examines the adequacy of the Minnesota Health Records Act (MHRA) that the state passed to meet then-Governor Pawlenty’s 2015 mandate requiring every health care provider in Minnesota to have electronic health records.
  4. An End to Privacy Theater: Exposing and Discouraging Corporate Disclosure of User Data to the Government, by Christopher Soghoian. This article explores how businesses vary in disclosing privacy information of their clients to governmental agencies.


Making It Personal: The Key to Climate Change Action

by Brandon Palmen, UMN Law Student, MJLST Executive Editor

Climate change is the ultimate global governance challenge, right? It’s an intractable problem, demanding a masterfully coordinated international response and a delicate political solution, balancing entrenched economic interests against deeply discounted, diffuse future harms that are still highly uncertain. But what if that approach to the problem were turned on its head? We often hear that the earth will likely warm 3-5 degrees centigrade (+/- 2 degrees), on average, over the next hundred years, and we may wonder whether that harm is really as painful as higher utility bills and the fear of losing business and jobs to free-riding overseas competitors. What if, instead, Americans asking “what’s in it for me?” could just go online and look up their home towns, the lakes where they vacation, the mountains where they ski, and the fields where their crops are grown, and obtain predictions of how climate change is likely to impact the places they actually live and work?

A new climate change viewing tool from the U.S. Geological Survey is a first step toward changing that paradigm. The tool consolidates and averages temperature change predictions based on numerous climate change models and displays them on a map. The result is beautiful in its simplicity; like a weather map, it allows everyday information consumers to begin to understand how climate change will affect their lives on a daily basis, making what had been an abstract concept of “harm” more tangible and actionable. So far, the tool appears to use pre-calculated, regional values and static images (to support high-volume delivery over the internet, no doubt), and switching between models reveals fascinatingly wide predictive discrepancies. But it effectively communicates the central trend of climate change research, and suggests the possibility of developing a similar tool that could provide more granular data, either by incorporating the models and crunching numbers in real time, or by extrapolating missing values from neighboring data points. Google Earth also allows users to view climate change predictions geographically, but the accessibility of the USGS tool may give it greater impact with the general public.
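
To make those two aggregation steps concrete, averaging an ensemble of model outputs and estimating a missing location from neighboring data points, here is a minimal Python sketch. The model names, grid values, and the inverse-distance weighting scheme are all invented for illustration; this post does not describe the USGS tool’s actual computation.

```python
# Illustrative sketch only: all numbers and model names are hypothetical.

def ensemble_mean(predictions):
    """Average the temperature-change predictions of several models."""
    return sum(predictions.values()) / len(predictions)

def idw_estimate(target, known_points, power=2):
    """Estimate a value at `target` (x, y) from (x, y, value) neighbors
    by inverse-distance weighting."""
    num = den = 0.0
    for x, y, value in known_points:
        dist2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if dist2 == 0:
            return value  # exact hit on a known grid point
        weight = 1.0 / dist2 ** (power / 2)
        num += weight * value
        den += weight
    return num / den

# Hypothetical model outputs (degrees C of warming) for one grid cell.
models = {"model_a": 3.1, "model_b": 4.4, "model_c": 2.8}
print(ensemble_mean(models))  # consolidated prediction, ~3.43

# Hypothetical regional grid values, used to fill in an unsampled town.
grid = [(0, 0, 3.0), (0, 1, 3.4), (1, 0, 3.2), (1, 1, 3.9)]
print(idw_estimate((0.5, 0.5), grid))  # interpolated local estimate, ~3.38
```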

There are still challenging bridges to be crossed — translating what “N-degree” temperature changes will likely mean for particular species, and “tagging,” “fencing,” or “painting” specific tracts of land with those species — but it is plausible that within a few years we will be able to obtain tailored predictions of climate change’s impact on the environments that actually matter to us — the ones in which we live. Of course those predictions will be imprecise or even wholly incorrect, but if they’re based on the best-available climate models, coupled with discoverable information about local geographic features, they’ll be no worse than many other prognostications that grip everyday experience, like stock market analysis and diet/nutrition advice. Maybe the problem with the public climate change debate is that it’s too scientific, in the sense that scientists know the limitations of their knowledge and models, and are wary of “defrauding” the public by drawing inductive conclusions that aren’t directly confirmed by evidence. Or maybe there’s just no good way to integrate the best climate models with local environmental and economic knowledge … yet.

Well, so what? Isn’t tackling climate change still an intractable global political problem? Maybe not. The more that people understand about the impacts climate change will have on them personally, the more likely they are to personally take action to ameliorate climate change, even absent meaningful top-down climate change policy. And while global governance may be beyond the reach of most individuals, local and state programs are not so far removed from private participation. In her recent article, Localizing Climate Change Action, Myanna Dellinger examines several such “home-grown” programs, and concludes that they may be an important component of climate change mitigation. Minnesotans are probably most worried about climate change’s impact on snow storms, lake health, and crop yields, while Arizonans might worry more about drought and fragile desert ecosystems, and Floridians might worry about hurricanes and beach tourism. If all of these local groups are motivated by the same fundamental problem, their actions may be self-coordinating in effect, even if they are not coordinated by design.


Worldwide Canned Precooked Meat Product: The Legal Challenges of Combating International Spam

by Nathan Peske, UMN Law Student, MJLST Staff

On May 1, 1978, Gary Thuerk sent the first unsolicited mass e-mail on ARPANET, the predecessor to today’s Internet. Thuerk, a marketing manager for Digital Equipment Corporation (DEC), sent information about DEC’s new line of microcomputers to all 400 users of the ARPANET. Since ARPANET was still run by the government and subject to rules prohibiting commercial use, Thuerk received a stern tongue-lashing from an ARPANET representative. Unfortunately, this failed to deter future senders of unsolicited e-mail, or spam, which has been a growing problem ever since.

From a single, moderately annoying but legitimate advertisement sent by a lone individual in 1978, spam has exploded into a malicious, hydra-headed juggernaut. Trillions of spam e-mails are sent every year, accounting for up to 90% of all e-mail sent. Most spam e-mails are false ads for adult devices or health, IT, finance, or education products. The e-mails routinely harm recipients through money scams like the famous Nigerian scam, phishing attacks that steal the recipient’s credentials, or malware distributed either directly or through linked websites. It is estimated that spammers cost the global economy $20 billion a year in everything from lost productivity to the additional network equipment required to carry the massive increase in e-mail traffic due to spam.

While spam is clearly a major problem, legal steps to combat it face a number of identification and jurisdictional hurdles. Gone are the Gary Thuerk days, when the sender’s address could simply be read off the spam e-mail. Spam today is typically distributed through large networks of malware-infected computers. These networks, or botnets, are controlled by botmasters who send out spam without the infected users’ knowledge, often on behalf of another party. Spam may be created in one jurisdiction, transmitted by a botmaster in another, distributed by bots in the botnet somewhere else, and received by recipients all over the world.

Anti-spam laws generally share several provisions. They usually include one or more of the following: opt-in policies prohibiting bulk e-mail to users who have not subscribed, opt-out policies requiring that users be able to unsubscribe at any time, clear and accurate identification of the sender and of the advertising nature of the message, and a prohibition on e-mail address harvesting (a toy sketch of these rules follows below). While effective against spammers who can be found within the enacting jurisdiction, these laws cannot touch other links in the spam chain beyond its borders. There is also a lack of laws penalizing the legitimate companies, often more easily identified and prosecuted, that pay for spamming services; only the spammers themselves are prosecuted.
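
As a toy illustration of how such provisions translate into mechanical rules, here is a hedged Python sketch of a compliance check a bulk mailer might run before sending. The message fields, the subscriber list, and the “ADV” subject-labeling rule are invented for illustration; real statutes, and real mail systems, are far more nuanced.

```python
# Hypothetical compliance check reflecting the common provisions above:
# opt-in, opt-out, and clear sender/advertising identification.
# Field names and rules are invented for illustration only.

def compliance_problems(message, subscribers):
    """Return a list of ways `message` would violate typical anti-spam rules."""
    problems = []
    # Opt-in: only mail users who subscribed.
    unsubscribed = set(message["recipients"]) - subscribers
    if unsubscribed:
        problems.append(f"opt-in violation: {sorted(unsubscribed)} never subscribed")
    # Opt-out: every message must carry a working unsubscribe mechanism.
    if not message.get("unsubscribe_link"):
        problems.append("opt-out violation: no unsubscribe link")
    # Identification: sender identity and advertising nature must be clear.
    if not message.get("sender_identity"):
        problems.append("identification violation: sender not identified")
    if message.get("is_advertisement") and "ADV" not in message.get("subject", ""):
        problems.append("identification violation: ad not labeled as such")
    return problems

subscribers = {"alice@example.com"}
message = {
    "recipients": ["alice@example.com", "bob@example.com"],
    "subject": "Big sale this week",
    "is_advertisement": True,
    "sender_identity": "Acme Corp, 123 Main St",
    "unsubscribe_link": None,
}
for problem in compliance_problems(message, subscribers):
    print(problem)
```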

Effectively reducing spam will require an international legal framework that mirrors the international nature of spam networks. Increased international cooperation will help identify and prosecute members throughout the spam chain. Changes in the law, such as penalizing those who use spamming services to advertise, will help reduce the demand for spam.

Efforts to reduce spam cannot consist of legal action against spammers and their patrons alone. Much like the international drug trade, as long as spam remains a lucrative market, it will attract participants. Technical and educational efforts must be made to reduce the profit in spam. IT companies and industry groups are working to develop anti-spam techniques, ranging from blocking IP addresses and domains at the network level to analyzing and filtering individual messages, along with a host of other approaches. Spam experts are also experimenting with techniques like spamming the spammers with false responses to reduce their profit margins. Educating users on proper e-mail security and simple behaviors like “if you don’t know the sender, don’t open the attachment” will also help bring down spammers’ profit margins by decreasing the number of responses they get.
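
Message-level filtering of the kind mentioned above often starts with statistical text classification. Below is a minimal Python sketch of one classic approach, a naive Bayes filter. The toy training corpus and crude tokenizer are invented for illustration; production filters layer many more signals (IP reputation, URL analysis, user reports) on top of anything this simple.

```python
import math
from collections import Counter

def tokenize(text):
    # Crude word splitter; real filters use far richer features.
    return text.lower().split()

def train(messages):
    """messages: list of (text, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        for word in tokenize(text):
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return log-odds that text is spam; positive means 'likely spam'."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    # Prior log-odds from class frequencies (smoothed).
    log_odds = math.log((totals["spam"] + 1) / (totals["ham"] + 1))
    for word in tokenize(text):
        # Laplace smoothing so unseen words don't zero out the product.
        p_spam = (counts["spam"][word] + 1) / (sum(counts["spam"].values()) + len(vocab) + 1)
        p_ham = (counts["ham"][word] + 1) / (sum(counts["ham"].values()) + len(vocab) + 1)
        log_odds += math.log(p_spam / p_ham)
    return log_odds

# Invented toy corpus, purely for illustration.
training = [
    ("cheap pills buy now", "spam"),
    ("win money fast click here", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch on friday", "ham"),
]
counts, totals = train(training)
print(score("buy cheap pills", counts, totals))          # positive: flagged as spam
print(score("agenda for friday lunch", counts, totals))  # negative: looks legitimate
```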

Like many issues facing society today, e-mail spam requires a response at every level. National governments must work individually and cooperatively to pass effective anti-spam laws and prosecute spammers. Industry groups must develop ways to detect and destroy spam and the botnets that distribute it. And individual users must be educated on the techniques to defend themselves from the efforts of spammers. Only with a combined, multi-level effort can the battle against international e-mail spam be truly won.


Supreme Court Denies Request to Review FISC Order

by Erin Fleury, UMN Law Student, MJLST Staff

Last week, the Supreme Court denied a petition requesting a writ of mandamus to review a decision that ordered Verizon to turn over domestic phone records to the National Security Agency (“NSA”) (denial available here). The petition alleged that the Foreign Intelligence Surveillance Court (“FISC”) exceeded its authority because the production of these types of records was not “relevant to an authorized investigation . . . to obtain foreign intelligence information not concerning a United States person.” 50 U.S.C. § 1861(b)(2)(A).

The Justice Department filed a brief with the Court challenging the standing of a third party to seek a writ of mandamus from the Supreme Court to review a FISC decision. The concern, however, is that telecommunications companies do not adequately fight to protect their users’ privacy interests. This apprehension certainly seems justified considering that no telecom provider has yet challenged the legality of an order to produce user data. Any motivation to fight these orders is further reduced by the fact that telecommunications companies can obtain statutory immunity from lawsuits by their customers based on turning over data to the NSA. 50 U.S.C. § 1885a. If third parties cannot ask a higher court to review a decision made by the FISC, then the users whose information is being given to the NSA may have their rights limited without any recourse short of legislative overhaul.

Unfortunately, as with most denials of review, the Supreme Court did not provide its reasoning. The question remains, though: if end users cannot object to these orders (and may not even be aware that their data was turned over in the first place), and the telecommunications companies have no reason to, is the system adequately protecting the privacy interests of individual citizens? Or can the FISC operate with impunity as long as the telecom carriers do not object?


Problems With Forensic Expert Testimony in Arson Cases

by Becky Huting, UMN Law Student, MJLST Staff

In MJLST Volume 14, Issue 2, Rachel Diasco-Villa explored the evidentiary standards for arson investigation. Ms. Diasco-Villa, a lecturer at the School of Criminology and Criminal Justice at Griffith University, examined the history of arson-investigation knowledge and how the manner in which it is conveyed in court can mislead, possibly leading to faulty conclusions and wrongful convictions. The article discussed the case of Todd Willingham, who was convicted and sentenced to death for setting fire to his home and killing his three children. Willingham filed numerous unsuccessful appeals and petitions for clemency, and several years after his execution, a commission’s investigation concluded that there were several alternative explanations for the cause of the fire and that neither the investigation nor the expert testimony complied with existing standards.

During the trial, the prosecution’s fire expert, a Deputy Fire Marshal from the State Fire Marshal’s Office, testified as to why he believed the fire was arson. Little science was used in his explanation:

Heat rises. In the winter time when you are going to the bathroom and you don’t have any carpet on the rug. . .the floor is colder than the ceiling. It always is. . . So when I found that floor is hotter than the ceiling, that’s backwards, upside down. . .The only reason that the floor is hotter is because there was an accelerant. That’s the difference. Man made it hotter or woman or whatever.

The expert went on to explain that fire investigation and fire dynamics are logical and common sense, such that jurors could evaluate the evidence with their own senses and experiences and arrive at the same conclusions. All samples taken from “suspicious” areas of the house tested negative for any traces of an accelerant. The expert explained away the chemical results: “And so there won’t be any — anything left; it will burn up.”

Fire and arson investigation has traditionally been experiential knowledge, passed down from mentors to their apprentices without experimental or scientific testing to validate its claims. Fire investigators do not necessarily have scientific training, nor must they hold any educational degree beyond a high school diploma. The National Academy of Sciences released a report in 2009 stating that the forensic sciences needed standardized reporting of findings and testimony, and fire and arson investigation was no exception. The International Association of Arson Investigators has pushed back on such guidance, filing an amicus brief arguing that arson investigation is experience-based rather than novel or scientific, and so should not be subjected to higher evidentiary standards. This argument failed to convince the court, which ruled that fire investigation expertise should be subject to scrutiny under the Daubert standard, which calls for exacting measures of reliability.

Ms. Diasco-Villa’s note also considers the risk of contextual bias and overreach when these experts’ testimony is admitted. In the Willingham case, the expert was given wide latitude as to his opinion on the defendant’s guilt or innocence: he was allowed to testify to his belief that the suspect’s intent “was to kill the little girls” and to identify the defendant by name as the individual who started the fire. Under Federal Rule of Evidence 702, expert witnesses are given a certain degree of latitude in stating their opinions, but the author is concerned that jurors may give extra weight to such arguably overreaching testimony.

She concluded by presenting statistics on the vast number of fires in the United States each year (1.6 million) and the significant number classified as intentionally set (43,000). There is a very real possibility that thousands of arrests and convictions each year have relied on overreaching testimony or on evidence collected and interpreted using a discredited methodology. This state of affairs warrants continued improvements in forensic science techniques and evidentiary standards for arson investigations.


My Body, My Tattoo, My Copyright?

by Jenny Nomura, UMN Law Student, MJLST Managing Editor

A celebrity goes into a tattoo shop and gets an elaborate tattoo on her arm. The celebrity and her tattoo appear on TV and in magazines, and as a result, the tattoo becomes well known. A director decides he wants to copy that tattoo for his new movie and has an actress appear in the film with a copy of the signature tattoo. Not long after, the film company receives notice of a copyright infringement lawsuit filed by the original tattoo artist. Similar situations are actually happening. The artist who created Mike Tyson’s face tattoo sued Warner Bros. for copying the design in “The Hangover Part II”; Warner Bros. settled with him. Another tattoo artist, Christopher Escobedo, designed a large tattoo for mixed martial arts fighter Carlos Condit. Both the tattoo and the fighter appeared in a video game, and Escobedo now wants thousands of dollars for copyright infringement. Most people who get a tattoo never think about potential copyright issues, but these recent events might change that.

These situations leave us with a lot of uncertainties. First, is there a copyright in a tattoo? It seems to meet the basic requirements for copyright, though perhaps only a thin one (most tattoos don’t have a lot of originality). Assuming there is a copyright, who owns it: the wearer or the tattoo artist? Whom can the owner, whoever that is, sue for copyright infringement? Can he or she sue other tattoo artists for violating the right to prepare derivative works? Can he or she sue for violating the reproduction right if another tattoo artist copies the original onto someone else? What about suing a film company for publicly displaying the tattoo? And since there are plenty of tattoos of copyrighted and trademarked material, could tattoo artists and wearers themselves be sued for infringement?

What can be done to avoid copyright infringement lawsuits? Assuming the tattoo artist owns the copyright, the prospective wearer could have the artist sign a release. The tattoo may cost more, but there is no threat of a lawsuit. It has also been argued that the best outcome would be for a court to find an implied license. Sooner or later, someone is going to refuse to settle; then we will have a fully litigated tattoo copyright infringement case and, hopefully, some answers.


Uh-Oh Oreo? The Food and Drug Administration Takes Aim at Trans Fats

by Paul Overbee, UMN Law Student, MJLST Staff

In the near future, foods that are part of your everyday diet may undergo some fundamental changes. From cakes and cookies to french fries and bread, a recent action by the Food and Drug Administration puts these types of products in the spotlight. On November 8, 2013, the FDA filed a notice requesting comments and scientific data on partially hydrogenated oils. The notice states that partially hydrogenated oils, the primary dietary source of artificial trans fat, are no longer considered generally recognized as safe by the FDA.

Partially hydrogenated oils are created during food processing to make vegetable oil more solid, which contributes to a more pleasing texture, longer shelf life, and greater flavor stability. Some trans fat also occurs naturally in certain animal-based foods, including some milks and meats; the FDA’s proposal is meant only to restrict the use of artificial partially hydrogenated oils. According to the FDA’s findings, consuming partially hydrogenated oils raises bad cholesterol levels, which are associated with a higher risk of coronary heart disease.

Some companies have positioned their products so that they should not have to react to these new changes. The FDA incentivized companies in 2006 by putting rules in place to promote trans fat awareness; the regulations allowed companies to label their products as trans fat free if they lowered the level of hydrogenated oils to near zero. Kraft Foods decided to change the recipe of its then 94-year-old product, the Oreo. It took Kraft two and a half years to reformulate the Oreo, after which the trans fat free version was introduced to the market. The Washington Post invited two pastry chefs to taste-test the new trans fat free Oreo against the original, and their conclusion was that the two products were virtually the same. That should reassure consumers worried that their favorite snacks will be pulled off the shelves.

Returning to the FDA’s notice, a few items are worth highlighting. At this stage, the FDA is still formulating its position on how to regulate partially hydrogenated oils, and actual implementation may take years. Once a rule comes into effect, companies seeking to continue using partially hydrogenated oils will still be able to seek FDA approval on a case-by-case basis. The FDA is seeking comment on the following issues: the correctness of its determination that partially hydrogenated oils are no longer generally recognized as safe, ways to approach a limited use of partially hydrogenated oils, and whether any uses of partially hydrogenated oils are covered by a prior sanction.

People interested in weighing in on the FDA’s next steps regarding partially hydrogenated oils can submit comments at http://www.regulations.gov.


Required GMO Food Labels Without Scientific Validation Could Undermine Food Label Credibility

by George David Kidd, UMN Law Student, MJLST Managing Editor

GMO food-label laws on the voting docket in twenty-four states will determine whether food products that contain genetically modified ingredients must be labeled or banned from store shelves. Recent newspaper articles raise the additional concern that states’ voting outcomes may spur similar federal standards. State, and perhaps future federal, regulation might be jumping the gun, however, by attaching stigma to GMO products without any scientific basis. As J.C. Horvath discusses in How Can Better Food Labels Contribute to True Choice?, FDA labeling requirements are generally based upon some scientific support. Yet no study has concluded that genetically modified ingredients are unsafe for human consumption. Requiring labels based upon the belief that we have the right to know what we eat, without scientific support, could further undermine the credibility of food labeling practices as a whole.

The argument for labeling GMO food products is simple: we have a “right to know what we eat.” The upshot is that we should know, or be able to find out, exactly what we are putting into our bodies, and be able to make our own consumer decisions based upon the known consequences of a product’s manufacture and consumption. But not knowing whether our food is synthetic, or its exact origins, might not matter if the product is better for both us and the environment. Indeed, the FDA admits that “some ingredients found in nature can be manufactured artificially and produced more economically, with greater purity and more consistent quality, than their natural counterparts.” If some manufactured products are better than their natural counterparts, why ban or regulate GMO products before we know whether they are good or bad? If we already knew they were bad, GMO products would likely already be banned.

Analysis plays an important part in establishing the credibility of labeling claims on food products; without some regulation of label credibility, there would be an even greater proliferation of bogus health claims on food packaging. Generally, the U.S. Food and Drug Administration has held that health claims on food labels are allowed as long as they are supported by evidence, and that food labeling is required when it discloses information of “material consequence” to consumers deciding whether to purchase a product. For example, the FDA has found that micro- and macro-nutritional content, ingredients, net weight, commonly known allergens, and the use of “imitation” or diluted product must be included on food labeling. The FDA has not, however, required labeling for dairy products from cows treated with synthetic growth hormone (rBST), because extensive studies have determined that rBST has no effect on humans. Just imagine the FDA approving food labeling claims without evaluating whether those claims were supported by evidence.

Premature adoption of new state or federal labeling policy would contradict and undermine the scientific standards underlying current FDA labeling regulation. Deciding whether to require labels on, or to ban, GMOs absent any scientific showing that GMO products are unsafe only perpetuates the problem of “meaningless” food labels. Further, the costs of new labeling requirements and any resulting increases in food prices might ultimately be passed on to consumers without enough information to justify the increase. But now that GMOs are allegedly commonplace ingredients, shouldn’t legislation wait until the verdict is in on whether GMO products are good or bad for human health before taking further action?