Uh-Oh Oreo? The Food and Drug Administration Takes Aim at Trans Fats

by Paul Overbee, UMN Law Student, MJLST Staff

In the near future, foods that are part of your everyday diet may undergo some fundamental changes. From cakes and cookies to french fries and bread, a recent action by the Food and Drug Administration puts these types of products in the spotlight. On November 8th, 2013, the FDA filed a notice requesting comments and scientific data on partially hydrogenated oils. The notice states that partially hydrogenated oils, the primary dietary source of artificial trans fat, are no longer considered to be generally recognized as safe by the Food and Drug Administration.

Partially hydrogenated oils are created during food processing in order to make vegetable oil more solid. This process contributes to a more pleasing texture, longer shelf life, and greater flavor stability. Additionally, some trans fat occurs naturally in certain animal-based foods, including some milks and meats. The FDA's proposal is meant only to restrict the use of artificial partially hydrogenated oils. According to the FDA's findings, consumption of partially hydrogenated oils raises levels of "bad" cholesterol, and elevated cholesterol has in turn been linked to a higher risk of coronary heart disease.

Some companies have positioned their products so that they should not have to react to these new changes. The FDA incentivized companies in 2006 by putting rules in place to promote trans fat awareness. The new regulations allowed companies to label their products as trans fat free if they lowered the level of hydrogenated oils to near zero. Kraft Foods decided to change the recipe of its then 94-year-old product, the Oreo. It took 2 1/2 years for Kraft Foods to reformulate the Oreo, and once that period was over, the trans fat free Oreo was introduced to the market. The Washington Post invited two pastry chefs to taste test the new trans fat free Oreo against the original product. Their conclusion was that the two products were virtually the same. This fact should reassure consumers who are worried that their favorite snacks will be pulled off the shelves.

Returning to the FDA's guidance, there are a few items worth highlighting. At this stage, the FDA is still formulating its position on how to regulate partially hydrogenated oils, and actual implementation may take years. Once the rule comes into effect, products seeking to continue using partially hydrogenated oils will still be able to seek approval from the FDA on a case-by-case basis. The FDA is seeking input on the following issues: the correctness of its determination that partially hydrogenated oils are no longer considered safe, ways to approach a limited use of partially hydrogenated oils, and any other sanctions that have existed for the use of partially hydrogenated oils.

People interested in participating with the FDA in determining the next steps taken against partially hydrogenated oils can submit comments to http://www.regulations.gov.


Required GMO Food Labels Without Scientific Validation Could Undermine Food Label Credibility

by George David Kidd, UMN Law Student, MJLST Managing Editor

GMO food-label laws on the voting docket in twenty-four states will determine whether food products that contain genetically modified ingredients should be either labeled or banned from store shelves. Recent newspaper articles raise the additional concern that states' voting outcomes may spur similar federal standards. State, and perhaps future federal, regulation might be jumping the gun, however, by attaching stigma to GMO products without any scientific basis. As discussed in J.C. Horvath's How Can Better Food Labels Contribute to True Choice?, FDA labeling requirements are generally based upon some scientific support. Yet no study has concluded that genetically modified ingredients are unsafe for human consumption. Requiring labels based solely on the belief that we have a right to know what we eat, without any scientific basis, could serve to further undermine the credibility of food labeling practices as a whole.

The argument for labeling GMO food products is simple: we have a "right to know what we eat." The upshot is that we should know, or be able to find out, exactly what we are putting into our bodies, and be able to make our own consumer decisions based upon the known consequences of a product's manufacture and consumption. But the fact that we do not know whether our food is synthetic, or its exact origins, might not matter if the product is better for both us and the environment. Indeed, the FDA admits that "some ingredients found in nature can be manufactured artificially and produced more economically, with greater purity and more consistent quality, than their natural counterparts." If some manufactured products are better than their natural counterparts, why are we now banning or regulating GMO products before we know whether they are good or bad? If we knew they were bad in the first place, GMO products would likely already be banned.

Analysis is an important part of establishing the underlying credibility of labeling claims on food products. Without some regulation of label credibility, there would be an even greater proliferation of bogus health claims on food packaging. Generally, the U.S. Food and Drug Administration has held that health claims on food labels are allowed as long as they are supported by evidence, and that food labeling is required when it discloses information of "material consequence" to consumers in their choice to purchase a product. For example, the FDA has found that micro- and macro-nutritional content, ingredients, net weight, commonly known allergens, and whether an "imitation" or diluted product is used must be included on food labels. The FDA has not, however, required labeling for dairy products produced from cows treated with synthetic growth hormone (rBST), because extensive studies have determined that rBST has no effect on humans. Just imagine the FDA approving food labeling claims without evaluating whether those claims were supported by evidence.

Premature adoption of new state or federal labeling policy would contradict and undermine the current scientific FDA standards underlying labeling regulation. The decision of whether to require labeling or ban GMOs, absent any scientific rigor as to whether GMO products are safe, only serves to perpetuate the problem of “meaningless” food labels. Further, the possible increases in food cost and labeling requirements might ultimately be passed on to the consumer without enough information to justify the increase. But now that GMOs are allegedly commonplace ingredients, shouldn’t legislation wait until the verdict is in on whether GMO products are good or bad for human health before taking further action?


The Importance of Appropriate Identification Within Social Networking

by Shishira Kothur, UMN Law Student, MJLST Staff

Social networking has become a prominent form of communication and expression for society. Many people continue to post and blog about their personal lives, believing that they are hidden behind separate account names. This supposed anonymity gives a false sense of security, as members of society post and upload incriminating and even embarrassing information about themselves and others. This information, while generally viewed only by an individual's 200 closest friends, has also become a part of the courtroom.

This unique issue is further explained in Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence, published in Volume 13, Issue 1 of the Minnesota Journal of Law, Science & Technology. Professor Ira P. Robbins emphasizes that since social media provides an easy outlet for wrongful behavior, it will inevitably find its way into litigation as evidence. His article focuses on the courts' efforts to authenticate the evidence produced from Facebook, Twitter, and other social media. Very few people take care to set appropriate privacy settings. As a result, anyone can readily find important personal information, which they can use to hack accounts, submit their own postings under a different name, and incriminate others. Similarly, fake accounts have become a prominent tool for harassing and bullying individuals, sometimes with disastrous, even suicidal, results. With consequences as serious as untimely deaths and wrongful convictions, proving the authorship of such postings becomes a critical step when collecting evidence.

Professor Robbins comments that, under current practice, a person can be connected to, and subsequently held legally responsible for, a posting without appropriate proof that the posting is, in fact, theirs. The article critiques the method courts currently apply to identify these individuals, arguing that too much emphasis is placed on testimony about current access, potential outside access, and other factors. It proposes a new method of assigning authorship to the specific posting rather than to the account holder. He suggests a specific focus on the type of evidence when applying Federal Rule of Evidence 901(b)(4), which will raise appropriate questions about account ownership, security, and how the posting relates to the suit. The analysis thoroughly explains how this new method would provide a sufficient link between the claims and the actual author. As social media continues to grow, so do the opportunities to hack, mislead, and ultimately cause harm. This influx of information needs to be filtered well if courts are to find the truth and serve justice on the right person.


All Signs Point Toward New Speed Limits on the Information Superhighway

by Matt Mason, UMN Law Student, MJLST Staff

The net neutrality debate, potentially the greatest hot-button issue surrounding the Internet, may be coming to a (temporary) close. After years of failed attempts to pass net neutrality legislation, the D.C. Circuit will soon rule on whether the FCC possesses the regulatory authority to impose a non-discrimination principle against large corporate ISPs such as Verizon. Verizon, the petitioner in the case, alleges that the FCC exceeded its regulatory authority by promulgating a non-discrimination net neutrality principle. In 2010, the FCC adopted a number of net neutrality provisions, including the non-discrimination principle, in order to prevent ISPs like Verizon from establishing "the equivalents of tollbooths, fast lanes, and dirt roads" on the Internet. Marvin Ammori, an Internet policy expert, believes that based on the court's questions and statements at oral argument, the judges plan to rule in favor of Verizon. Such a ruling would effectively end net neutrality, and perhaps the Internet, as we know it.

The D.C. Circuit is not expected to rule until late this year or early next year. If it rules that the FCC does not have the regulatory power to enforce the non-discrimination principle, companies such as AT&T and Verizon will have the freedom to deliver some sites and services faster and more reliably than others, for any reason at all. As Ammori puts it, web companies (especially start-ups) would then survive based on the deals they can strike with companies like Verizon, rather than on the "merits of their technology and design."

This would be terrible news for almost everyone who uses and enjoys the Internet. The Internet would no longer be neutral, which could significantly hamper online expression and creativity. Additional costs would be imposed on companies seeking to reach users, which would likely result in increased costs for users. Companies that lack the ability to pay the higher fees would end up with lower levels of service and reliability. The Internet would be held hostage and controlled by only a handful of large companies.

How the FCC will respond to the likely court ruling rejecting its non-discrimination principle is uncertain. Additionally, wireless carriers, such as Sprint, have begun to consider granting certain apps or service providers preferential treatment or access to customers. Wireless phone carriers resist the application of net neutrality rules to their networks, and appear poised to continue to do so despite the fact that network speeds are beginning to equal those of traditional broadband services.

In light of the FCC potentially lacking the regulatory authority to institute net neutrality principles, and given Congress's many failed attempts to pass net neutrality legislation, the question of what can be done to protect net neutrality has no easy answers. This uncertainty makes the D.C. Circuit's decision even more critical. Perhaps the outcry from consumers, the media, and web companies will be loud enough to create policy change following the likely elimination of the non-discrimination rule. Maybe Congress will respond by making the passage of net neutrality legislation a priority. Regardless of what happens, it appears as though we will soon see the installation of speed limits on the information superhighway.


The Affordable Care Act “Death Spiral”: Fact or Fiction?

by Bryan Morben, UMN Law Student, MJLST Managing Editor

A major criticism of the Patient Protection and Affordable Care Act of 2010 ("Affordable Care Act" or "ACA") is that it will lead to a premium "death spiral." Because the Affordable Care Act prohibits health insurance companies from discriminating against individuals with preexisting health conditions, some believe that people might simply wait until they're sick before signing up for coverage. If that happens, everyone else's premiums will rise, causing healthy people to drop their coverage. With only sick individuals left paying premiums, the rates go up even more. And so on . . .

On the other hand, supporters of the ACA cite its other provisions to safeguard against this scenario, specifically, the subsidy/cost sharing and “individual mandate” sections. The former helps certain individuals reduce the amount of their premiums. The latter requires individuals who forego buying minimal health insurance to pay a tax penalty. The penalty generally “is capped at an amount equal to the national average premium for qualified health plans which have a bronze level of coverage available through the state Exchange.” Therefore, the idea is that enough young, healthy individuals will sign up if they would have to pay a similar amount anyway.

States that have previously guaranteed coverage for everyone with preexisting conditions have seen mixed results. New York now has some of the highest individual health insurance premiums in the country. Massachusetts, which also has an individual mandate, has claimed more success, but its law still leaves some residents wondering whether breaking it might make more sense.

There are notable differences between the ACA and the Massachusetts law as well. For example, the subsidies are larger in Massachusetts than they are with the ACA, so there’s less of an incentive for healthy people to sign up for the federal version. In addition, the ACA’s individual mandate seems to have less of a “bite” for those who elect to go without insurance. The penalty is enforced by the Treasury, and individuals who fail to pay the penalty will not be subject to any criminal penalties, liens, or levies.

Finally, the unveiling of the HealthCare.gov website, a health insurance exchange where individuals will learn about insurance plans, has been a catastrophe so far. There is also some concern that "only the sickest, most motivated individuals will persevere through the enrollment process." Since high enrollment of young, healthy participants is crucial to the success of the marketplace, the website's problems, and any negative effect they have on enrollment, are just the latest contributors to the possible looming spiral.

In all, it remains to be seen whether the Affordable Care Act will succeed in bringing about a positive health care reform in the United States. For an excellent discussion on the ACA’s “right to health care” and additional challenges the law will face, see Erin C. Fuse Brown’s article Developing a Durable Right to Health Care in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology.


Is the Juice Worth the Squeeze? Fighting Patent Trolls With Fee-Shifting

Troll Warning

by Eric Maloney, UMN Law Student, MJLST Lead Managing Editor

It's a bad time to be a patent troll in the United States. Both the Supreme Court and Congress are taking aim at these widely disparaged "trolls," who buy up a portfolio of patents and proceed to file lawsuits against anyone who may be using or selling inventions covered by those patents, often with disregard for the merits of such suits.

Critics see these patent trolls as contributing nothing but a waste of time and resources to an already-burdened court system. President Obama has echoed this sentiment, accusing these trolls of “hijack[ing] somebody else’s idea and see[ing] if they can extort some money out of them.” On the other hand, legitimate patent holders are concerned that their ability to sue infringers may be limited in this mad rush to curb the patent troll problem.

The Patent Act already has a mechanism in place to deal with frivolous patent lawsuits: 35 U.S.C. § 285. This statute allows courts to award the winner of a patent suit "reasonable attorney fees." There's a catch, though: this fee-shifting isn't available to just any winner. It can only be awarded in "exceptional cases."

The Federal Circuit hears all patent appeals and sets patent precedent that is followed by district courts throughout the country. So far, its interpretation of "exceptional case" has required losing parties to misbehave quite flagrantly: the patent holder's suit must have been "objectively baseless," and the loser must have known it was baseless. Failing that, fees can only be shifted if the loser committed misconduct in the course of the suit or in obtaining the patent. MarcTec, LLC v. Johnson & Johnson, 664 F.3d 907, 916 (Fed. Cir. 2012). This high standard makes it tough for those sued by patent trolls to recover fees spent defending against a frivolous suit.

Two branches of government are taking aim at potentially easing this standard and making fee-shifting more commonplace, or even mandatory. The Supreme Court has decided to hear two appeals for fee-shifting cases, and may be looking to change how courts evaluate what is an “exceptional case” to make it easier for courts to punish frivolous plaintiffs. Rep. Goodlatte (R-VA) introduced the Innovation Act last week, which would change § 285 to mandate that patent suit losers pay fees to the winner, with some exceptions.

This would bring patent suits more in line with how English courts treat losing parties. The American legal system typically does not add insult to injury by forcing losing parties to reimburse the winners. While all the concern about patent trolls may not be misplaced, it may be worthwhile for policymakers (be they Congressional or judicial) to step back and consider the effect this may have on legitimate patent holders, such as inventors wishing to protect their patented products. Is mandatory fee-shifting the answer? All those involved should tread carefully before making groundbreaking changes to the patent litigation system.


Trusting Antitrust Law: Anti-Competitive Agreements in the Technology Industry

by Mayura Iyer, UMN Law Student, MJLST Staff

Recently, the District Court for the Northern District of California certified a group of plaintiffs as an FRCP 23(b)(3) class in the High-Tech Employee Antitrust Litigation. The case is a consolidation of five suits originally instituted by individual plaintiffs against Adobe Systems, Inc., and the class action is now taking on some of the biggest names in Silicon Valley, including Apple, Google, Intel, and Pixar.

The plaintiffs, a group including software and hardware engineers, programmers, and other employees of the high-tech industry, are relying on principles of antitrust law to show that their employers made unlawful, anti-competitive agreements. They are alleging that their employers engaged in a conspiracy to eliminate competition for skilled labor by entering into agreements with each other that prohibited them from poaching each other’s employees. Interestingly, all the companies involved were either associated with Steve Jobs, former Apple CEO, or shared at least one common director with Apple’s Board of Directors, suggesting a concerted effort among executives of these companies.

As a result of these agreements, wages for technical professionals like the plaintiffs have been artificially suppressed, since the employers have created a non-competitive environment for recruiting employees. With the class potentially including 64,000 plaintiffs, these companies are likely to settle. However, if any of these claims do go through to trial, the defendants will likely have large hurdles in their path, since there is electronic documentation of communications between executives acknowledging the existence and potential illegality of their gentlemen’s agreements.

These agreements have stifled competition within the technology industry by limiting the forces of the free market. The best talent was not allowed to be competitively recruited, thus devaluing those employees and consequently, likely suppressing innovation. Regardless of whether these cases are resolved through settlements or through trial, the fact that these back-door agreements have been brought to light is likely to change the landscape of the technology industry in a major way. Breaking the cycle of these anti-competitive practices will likely change the ways in which employees in this sector are recruited and compensated and perhaps will also encourage innovation and the transfer of ideas. While these companies will likely still be able to protect themselves through other safeguards such as non-compete clauses, perhaps now the scales of the technology industry will tip further towards equalizing the power between the employers and their most invaluable intellectual resources, their technical employees.


Mucking Up the Clean Air Act

by David Tibbals, UMN Law Student, MJLST Staff

When does “mobile” mean “stationary”?

Noah Webster’s response should be obvious. But it appears the U.S. Supreme Court is preparing to weigh in on that very question.

Just last week, the Court granted certiorari in the case of Utility Air Regulatory Group v. Environmental Protection Agency, an amalgam of six separate lawsuits questioning the authority of the EPA to broaden its regulation of greenhouse gases. At issue is the EPA’s decision to begin enforcing regulatory and permitting programs against stationary producers of greenhouse gases, such as coal-fired power plants.

The case can be viewed as a direct descendant of 2007's Massachusetts v. EPA, in which the Court held that the EPA can regulate greenhouse gases because they fall within the Clean Air Act's definition of "air pollutants," even though they had not previously been regulated as such. The Court's ruling, however, was limited to greenhouse gases emitted by mobile sources, namely new automobiles.

Although the Court’s grant doesn’t challenge the general characterization of greenhouse gases as “air pollutants,” it poses a single question, the answer to which could effect a dramatic change in agency rulemaking. Is the EPA allowed to “trigger” permitting requirements for stationary sources based solely on its past regulation of mobile sources?

In essence, does “mobile” mean “stationary”?

The only prudent answer to that question is an emphatic “no.” Allowing the EPA–or any agency, for that matter–to premise broadened jurisdiction in such a manner vests an inordinate amount of power in a body well-nigh immune from the political process. Although it’s heretical to mention in a post-Chevron world, Locke and Montesquieu urged the incompatibility of such extra-legislative lawmaking power with democratic principles.

But a more eye-opening reason for answering in the negative is the adverse economic blow such expanded regulation will strike. Expanding regulation to “stationary” sources–an incredibly equivocal characterization–will inevitably result in increased compliance costs. This increase is already being realized by producers and consumers alike; a power company in Mississippi has raised electricity rates by 15% this year to fund a new, fully-compliant plant.

By the way, that new plant has already run $1.4 billion over budget.

The Court is expected to announce its judgment next summer. If it is interested in relying on democratic principles and catalyzing a languid economy, it will overrule expanded regulation and prevent the EPA from further soiling the Clean Air Act.


Making the Case for Public-Private Collaboration in the Fight Against Cybercrime

by Ryan Connell, UMN Law Student, MJLST Lead Articles Editor

In Cyber-Threats and the Limits of Bureaucratic Control, Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, Professor Susan Brenner delivers a thoughtful and compelling analysis of the United States government's current approach to cybercrime. She advocates a new threat-control strategy: rather than the rigid hierarchical structures that currently define our approach, she would support a system that mirrors the lateral, networked structures found in cyberspace itself.

Almost certainly, cybercrime must be at the forefront of our concerns. Hackers across the globe constantly threaten government secrets. In the private sector, corporations’ data also provide lucrative targets for hackers.

As Professor Brenner points out, we, as a country, have given the government complete responsibility for addressing the cybercrime threat. The problem, however, is that the government has distributed its response among its many agencies. This has created a fragmented response in which agencies either needlessly duplicate each other's work or operate in the dark due to a lack of information sharing. Overall, this response has left many, particularly in the corporate world, dissatisfied with the government.

Unfortunately, this dissatisfaction in the corporate world has damaged the government's ability to address cybercrime in the private sector. For instance, although private industry has spent upwards of $300 billion fighting hackers, only one third of companies report cybercrimes to the government. This may suggest that companies think they can solve the problem better than the government can. It bears mentioning that this problem is not unique to the United States. The United Kingdom, for instance, has suffered similar problems. Indeed, in the UK, banks are more likely to simply reimburse most victims of cybercrime than to report it to the government.

Professor Brenner has presented an interesting and plausible solution. She recognizes that the Internet itself is community-based and laterally networked, which makes it difficult to address the problems raised by cybercrime using a vertically networked system. The government should encourage and facilitate civilian participation in the fight against cybercrime, and should recognize that it alone cannot solve this problem. Cybercrime is a problem that takes more than government to solve; it takes a government and its citizens.


Ready or Not, Here It Comes: The FDA’s Attempt to Regulate the E-Cigarette Industry

by Dylan Quinn, UMN Law Student, MJLST Staff

While the United States partial government shutdown created widespread uncertainty for federal employees and the monetary system, some are worried that the shutdown may cause the FDA to miss its self-imposed October 31, 2013 deadline for releasing the highly anticipated e-cigarette regulations. The FDA has already failed to meet its initial, self-imposed deadline of April 2013. While there are clearly no penalties for missing a self-imposed deadline, there are increasing external pressures that may force the FDA into action before the agency has a full grasp of the issues surrounding e-cigarettes.

It is estimated that e-cigarette sales in the U.S. will reach $1.7 billion this year. E-cigarette use by middle and high school students more than doubled from 2011 to 2012, according to the Centers for Disease Control and Prevention. The products have become so popular that using them has been coined "vaping."

While the FDA regulates e-cigarettes that are marketed for therapeutic purposes, it has made clear that it intends to treat e-cigarettes as a "tobacco product" and establish regulatory control over the entire industry. Given that the agency has seemingly had this plan for years, however, the question arises why it is on the brink of missing another deadline. The practical, and probable, answer is that the agency has no idea how to approach (or regulate) e-cigarettes.

Earlier this month the European Parliament took a “permissive approach” to e-cigarettes by shooting down proposals that called for strict regulation. European law makers seem to be influenced by the potential of e-cigarettes to be a healthy alternative to smoking, and are likely hesitant to place constraints on an industry that offers immense potential benefit to public health.

While the U.S. may benefit from taking the same approach, many think that e-cigarettes are making nicotine addiction worse among youth, and there seems to be added pressure on the FDA to tightly regulate the industry. Just last month, Attorneys General from 41 states urged the FDA to issue the promised regulations, and there have been months of talks over a possible ban on online e-cigarette sales. However, the Obama Administration has just recently announced a significant funding program to operate 14 research centers focused on regulatory policy over tobacco products, and the FDA has expressly stated that more research is needed with regard to e-cigarettes.

There is no doubt that the public health impacts of e-cigarettes are not fully understood, and while this may not be a good enough reason to hold off on strict regulation, the FDA may simply not know enough to effectively regulate the industry. Although continuing to miss deadlines while gaining a better understanding may lead to better regulation in the long run, the external pressures facing the FDA will not allow it to put off the regulations much longer.