Tuesday, November 15, 2016

The Sharing Economy and Sec. 230(c) of the Communications Decency Act

The sharing economy is a challenge for local communities. On the plus side, it creates economic opportunity and lowers prices. On the minus side, it circumvents public safety and welfare protections.

Such is the clash between Airbnb and local jurisdictions. San Francisco implemented a local ordinance that permits short-term rentals on the condition that the rental property is registered. To register a property, the resident must provide proof of liability insurance, demonstrate compliance with the local code, report usage, pay taxes, and meet a few other requirements. San Francisco then enacted another ordinance that makes it a misdemeanor crime to collect a booking fee for unregistered properties.

Airbnb and HomeAway sued, arguing that their businesses are protected by 47 U.S.C. § 230(c) of the Communications Decency Act (and some other arguments ignored here). EFF, CDT, The Internet Association, and some other usual suspects intervened; this case is attracting lots of attention. AIRBNB, INC. v. City and County of San Francisco, Dist. Court, ND California 2016.

Before delving into the application of the law to this case, let's review a few key facts. Airbnb is a website where property owners can list available rentals, and guests can arrange for accommodations. Airbnb does not own the properties in question. Airbnb makes its money by charging a service fee to the property owner and the guest.

Sec. 230(c) protects interactive computer services from liability for third party content. Specifically, "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 

Plaintiffs sued, arguing that the San Francisco regulation is preempted by Sec. 230(c).

Plaintiffs lost, which is surprising because the plaintiffs are being held accountable for the actions of a third party. But listen to the rationale of the court, which reflects careful crafting by San Francisco.

First, plaintiffs are not being held liable for a third party's speech. If a property owner publishes an advertisement for a property, that is perfectly fine. The property owner can do so, and plaintiffs have no liability. Plaintiffs are not being called on to "monitor, edit, withdraw, or block" any listings. Hosting those rental announcements, in and of itself, is not actionable.

It's the next step that plaintiffs cannot take. Having received and hosted the content, plaintiffs themselves cannot collect a booking fee for an unregistered property. That is something the plaintiffs are doing, not the third party. The regulation places an obligation on the plaintiffs to confirm that the properties are registered. It is the plaintiffs' own conduct that is actionable, not the content of the property owners' listings.

Plaintiffs and intervenors scramble and cite the long litany of case law establishing that Sec. 230(c) provides broad immunity. Sec. 230(c) is the favorite go-to statute for establishing that online services cannot be held liable for what other people say. The problem, according to the Court, is that in each of those cited cases, the website was being held liable for its role as a publisher of third party content. There was something illegal, offensive, or problematic with the content, and the solution of the government, or of the plaintiff who believed that he or she was being defamed, was to make the website liable for the third-party content. But that is exactly the type of liability Sec. 230(c) was enacted to prevent. Online services are interactive manifestations of the engagements of multiple sources, including the host and third parties. Sec. 230(c) establishes that interactive computer services are not liable for third-party content.

In this case, an obligation is placed on plaintiffs to confirm that rental properties are registered and to collect booking fees only from registered properties. Plaintiffs can host any third party content they want.

Tuesday, November 08, 2016

NIST Cybersecurity Practice Guide, Special Publication 1800-6: “Domain Name Systems-Based Electronic Mail Security”



Business Challenge

"Email has become the dominant method of electronic communication for both private and public sector organizations, fueled by low costs and fast delivery.  Securing these transactions has been less of a priority, which is one reason why email attacks have increased.
"Whether the goal is authentication of the source of an email message or assurance that the message has not been altered by or disclosed to an unauthorized party, organizations must employ some cryptographic protection mechanism. Economies of scale and a need for uniform security implementation drive most enterprises to rely on mail servers and/or Internet service providers (ISPs) to provide security to all members of an enterprise. Many current server-based email security mechanisms are vulnerable to, and have been defeated by, attacks on the integrity of the cryptographic implementations on which they depend. The consequences of these vulnerabilities frequently involve unauthorized parties being able to read or modify supposedly secure information, or introduce malware to gain access to enterprise systems or information. Protocols exist that are capable of providing needed email security and privacy, but impediments such as unavailability of easily implemented software libraries and operational issues stemming from some software applications have limited adoption of existing security and privacy protocols.

Solution

"This project has resulted in NIST Special Publication 1800-6, “Domain Name Systems-Based Electronic Mail Security,” which illustrates how commercially available technologies can meet an organization’s needs to improve email security and defend against email-based attacks such as phishing and man-in-the-middle types of attacks.
"This draft practice guide describes a proof of concept security platform that demonstrates trustworthy email exchanges across organizational boundaries and includes authentication of mail servers, signing and encryption of email, and binding cryptographic key certificates to the servers.
The goal of this project is to help organizations:
  • Encrypt emails between mail servers
  • Allow individual email users to digitally sign and/or encrypt email messages
  • Allow email users to identify valid email senders as well as send digitally signed messages and validate signatures of received messages
"The example solution uses Domain Name System Security Extension (DNSSEC) protocol to authenticate server addresses and certificates used for Transport Layer Security (TLS) to DNS names.
The project's demonstrated security platform can provide organizations with improved privacy and security protection for users' operations and improved support for implementation and use of the protection technologies. The platform also improves the usability of available DNS security applications and encourages wider implementation of DNSSEC, TLS and S/MIME to protect electronic communications.
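The core idea of binding TLS certificates to DNS names relies on DNSSEC-signed DANE (TLSA) records. As a rough illustration of that idea, and not anything prescribed by the guide itself, here is a minimal Python sketch using the third-party dnspython library to fetch the TLSA record for a hypothetical mail server's SMTP port and check whether the response was DNSSEC-validated. The domain name and the reliance on a validating recursive resolver setting the AD flag are assumptions for the example.

```python
# Minimal sketch (not from SP 1800-6): look up a DANE TLSA record for a
# hypothetical mail server and check the DNSSEC-validation (AD) flag.
# Requires the third-party dnspython package: pip install dnspython
import dns.flags
import dns.resolver

# TLSA records for SMTP on port 25 live at _25._tcp.<mail server name>.
# "mail.example.com" is a placeholder, not a real deployment.
QNAME = "_25._tcp.mail.example.com"

resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 1232)  # set the DO bit to request DNSSEC data

answer = resolver.resolve(QNAME, "TLSA")

# The AD (Authenticated Data) flag is only meaningful if the upstream
# recursive resolver validates DNSSEC and the path to it is trusted.
validated = bool(answer.response.flags & dns.flags.AD)
print(f"DNSSEC-validated response: {validated}")

for rr in answer:
    # usage/selector/mtype describe how the record should be matched against
    # the server's TLS certificate; cert holds the certificate or its digest.
    print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())
```

A mail transfer agent following this approach would proceed with (or require) TLS only when the peer's certificate matches a validated TLSA record, which is what ties the TLS handshake back to the DNS name.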
NIST SP 1800-6 is in draft form and open for public comment until December 19, 2016. Please share your comments and feedback on this project and its example solution.

:: RFC :: Copyright Office Seeks Additional Comments for Section 512 Study

Copyright Newsletter: "The U.S. Copyright Office is conducting a study to evaluate the operation of the ISP safe harbor provisions of section 512 of title 17 and has reviewed public input from the first round of written comments and from roundtable participation. You may access the comments and a transcript of the roundtables on the Copyright Office website here.
"To further aid the analysis, the Copyright Office is now soliciting additional written comments on a subset of issues. These include questions relating to the characteristics of the current Internet ecosystem, operation of the current DMCA safe harbor system, potential future evolution of the DMCA safe harbor system, and other developments relevant to this study. The Copyright Office is also seeking submissions of empirical research on any topics that are likely to provide useful data to assess and/or improve the operation of section 512. 
"You may access the Federal Register notice here. Written comments are to be submitted electronically using the regulations.gov system. Specific instructions for submitting comments are available on the Copyright Office website at http://copyright.gov/policy/section512/comment-submission/
"Comments must be received no later than 11:59 p.m. Eastern Time on February 6, 2017. Empirical research studies must be received no later than 11:59 p.m. Eastern Time on March 8, 2017.

When CDA Immunity is not CDA Immunity

Here's a question:  If 47 USC 230(c) (the Good Samaritan provision of the Communications Decency Act) says that online services are not liable for third party content, then can you even sue the online service?  Shouldn't the online service be immune from lawsuit? Because, after all, what would be the point of being sued for something for which you cannot be liable?

This is a question that courts have pondered.  Why does it matter?  With immunity, you can file a Rule 12(c) Motion for Judgment on the Pleadings - saying "Judge, there just ain't nothing here."  With protection from liability, the litigation proceeds a bit further and you file a Rule 12(b)(6) Motion to Dismiss for Failure to State a Claim - saying "Judge, there just ain't nothing here." See the difference? One lets the litigation out of the gates; the other does not.  Both have the same result (potentially).

We visit this question in GENERAL STEEL DOMESTIC SALES, LLC v. Chumley, Court of Appeals, 10th Circuit 2016, where two companies were in the business of prefabricating steel buildings.
PLAINTIFF employed Mr. DEFENDANT until 2005, when he left to start his own competing steel building company. The parties have been engaged in numerous legal disputes ever since. 
The underlying dispute involves DEFENDANT Steel's negative online advertising campaign against PLAINTIFF Steel. When internet users searched for "PLAINTIFF Steel," negative advertisements from DEFENDANT Steel would appear on the results page. Clicking on the advertisements would direct users to DEFENDANT Steel's web page entitled, "Industry Related Legal Matters". The IRLM Page contained thirty-seven posts, twenty of which form the basis of General Steel's complaint. To varying degrees, the twenty posts summarize, quote, and reference lawsuits involving PLAINTIFF Steel. Each lawsuit is listed with a title, a brief description of the case, and a link, by which the reader could access the accompanying court document. The majority of the case descriptions contained quotes that were selectively copied and pasted from the underlying legal documents. 
Plaintiff sued. Defendant moved to dismiss under Sec. 230(c), arguing that "the CDA bars not just liability, but also suit."
The district court found that DEFENDANT Steel was entitled to immunity for three posts because those posts simply contained links to content created by third parties. The court refused, however, to extend CDA immunity to the remaining seventeen posts and the internet search ads. The court found that the "defendants created and developed the content of those ads," and were therefore not entitled to immunity. With respect to the remaining seventeen posts, the court found that the defendants developed the content by selectively quoting and summarizing court documents in a deceiving way.  

So, right away, there's a problem.  Sec. 230(c) protects online services from liability for third party content.  But not from liability for their own content. And not so much from liability when the online service has a hand in the creation of that third party content.  We have seen this in cases like Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1164-65 (9th Cir. 2008) (en banc) (holding a website operator was not entitled to § 230(c)(1) protection where it made users' answers to discriminatory questions a condition of doing business, thereby participating in the "development" of the users' submissions), and Fed. Trade Comm'n v. Accusearch Inc., 570 F.3d 1187, 1198 (10th Cir. 2009) (holding a website could not claim immunity under the CDA where it was "responsible for the development of the specific content that was the source of the alleged liability").

So there is a bit of a question.  Is this third party content or not? And even if it is third party content, what hand did Defendant have in cultivating that content? These are factual questions upon which liability could turn.

But the court seems to want to nix the "immunity" discussion.  It states, "Whether Section 230 provides immunity from suit or liability such that a denial would permit an interlocutory appeal is an issue of first impression for this court."

Um, no.  Sixteen years ago I am pretty sure the 10th Circuit said, "We hold that America Online ... is immune from suit under § 230." Ben Ezra, Weinstein, & Co., Inc. v. Am. Online Inc., 206 F.3d 980, 986 (10th Cir. 2000).

But much to the chagrin of my fellow professionals who can't understand how lawyers write, tucked down in a footnote the Court states "Our description of the CDA as providing immunity from suit in our case of Ben Ezra, Weinstein, & Co. v. America Online Inc., 206 F.3d 980, 983 (10th Cir. 2000), did not resolve this question, as this issue was not before us in that case."

Um.  Okay.  What are we talking about?  What "issue" was not before the court?  Well, in Ben Ezra, the defendant won on a motion for summary judgment.  That is a motion decided on the facts, frequently after discovery has been completed.  Here, by contrast, the defendant's motion is on the pleadings, arguing that the litigation cannot proceed at all.

In other words, what does the word "immune" mean? Does it mean "not liable" because of an affirmative defense?  Or does it mean you cannot even sue the Defendant in the first place?  Same word; two different meanings. 

If the question is whether you cannot even sue the Defendant, that's a pretty high bar, says the Court. The statute in question, Sec. 230(c), must itself contain a statutory or constitutional bar to suit.  We are talking about not being able to sue government officials, or not being able to sue the federal government (unless it gives you permission).  It's not common.  It's normally protection the government grants itself.  And... as the Court points out... Defendant is not the government.  The Court concludes, Defendant "has not identified a historical basis for providing private parties immunity from suit under the CDA."

In short, Sec. 230(c) is not a bar to lawsuit.  Sec. 230(c) does, however, provide an affirmative defense to liability for third party content. Defendants still gotta defend.  

Wednesday, November 02, 2016

1988, Nov. 2: Morris Worm Unleashed on Internet

"In the fall of 1988, Morris was a first-year graduate student in Cornell University's computer science Ph.D. program. Through undergraduate work at Harvard and in various jobs he had acquired significant computer experience and expertise. When Morris entered Cornell, he was given an account on the computer at the Computer Science Division. This account gave him explicit authorization to use computers at Cornell. Morris engaged in various discussions with fellow graduate students about the security of computer networks and his ability to penetrate it.



[Image: Morris Internet Worm source code, photo by Go Boston Card]
"In October 1988, Morris began work on a computer program, later known as the Internet "worm" or "virus." The goal of this program was to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects that Morris had discovered. The tactic he selected was release of a worm into network computers. Morris designed the program to spread across a national network of computers after being inserted at one computer location connected to the network. Morris released the worm into Internet, which is a group of national networks that connect university, governmental, and military computers around the country. The network permits communication and transfer of information between computers on the network.

"Morris sought to program the Internet worm to spread widely without drawing attention to itself. The worm was supposed to occupy little computer operation time, and thus not interfere with normal use of the computers. Morris programmed the worm to make it difficult to detect and read, so that other programmers would not be able to "kill" the worm easily. Morris also wanted to ensure that the worm did not copy itself onto a computer that already had a copy. Multiple copies of the worm on a computer would make the worm easier to detect and would bog down the system and ultimately cause the computer to crash. Therefore, Morris designed the worm to "ask" each computer whether it already had a copy of the worm. If it responded "no," then the worm would copy onto the computer; if it responded "yes," the worm would not duplicate. However, Morris was concerned that other programmers could kill the worm by programming their own computers to falsely respond "yes" to the question. To circumvent this protection, Morris programmed the worm to duplicate itself every seventh time it received a "yes" response. As it turned out, Morris underestimated the number of times a computer would be asked the question, and his one-out-of-seven ratio resulted in far more copying than he had anticipated. The worm was also designed so that it would be killed when a computer was shut down, an event that typically occurs once every week or two. This would have prevented the worm from accumulating on one computer, had Morris correctly estimated the likely rate of reinfection.

"Morris identified four ways in which the worm could break into computers on the network: (1) through a "hole" or "bug" (an error) in SEND MAIL, a computer program that transfers and receives electronic mail on a computer; (2) through a bug in the "finger demon" program, a program that permits a person to obtain limited information about the users of another computer; (3) through the "trusted hosts" feature, which permits a user with certain privileges on one computer to have equivalent privileges on another computer without using a password; and (4) through a program of password guessing, whereby various combinations of letters are tried out in rapid sequence in the hope that one will be an authorized user's password, which is entered to permit whatever level of activity that user is authorized to perform.

"On November 2, 1988, Morris released the worm from a computer at the Massachusetts Institute of Technology. MIT was selected to disguise the fact that the worm came from Morris at Cornell. Morris soon discovered that the worm was replicating and reinfecting machines at a much faster rate than he had anticipated. Ultimately, many machines at locations around the country either crashed or became "catatonic." When Morris realized what was happening, he contacted a friend at Harvard to discuss a solution. Eventually, they sent an anonymous message from Harvard over the network, instructing programmers how to kill the worm and prevent reinfection. However, because the network route was clogged, this message did not get through until it was too late. Computers were affected at numerous installations, including leading universities, military sites, and medical research facilities. The estimated cost of dealing with the worm at each installation ranged from $200 to more than $53,000.

"Morris was found guilty, following a jury trial, of violating 18 U.S.C. Section 1030(a)(5)(A). He was sentenced to three years of probation, 400 hours of community service, a fine of $10,050, and the costs of his supervision."
- U.S. v. Morris, 928 F.2d 504 (2nd Cir. 1991)
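The "one-out-of-seven" duplication logic the court describes above is easy to sketch. The following is a hypothetical Python illustration of that probabilistic re-infection check; the worm itself was written in C, and this is a simplification for explanation, not its actual source.

```python
import random

REINFECTION_RATIO = 7  # the opinion's "one-out-of-seven" duplication rate

def should_install(host_says_infected: bool) -> bool:
    """Decide whether to copy the worm onto a host.

    A host answering "no" always receives a copy. A host answering "yes"
    still receives a duplicate copy about one time in seven, so that a
    forged "yes" cannot reliably kill the worm.
    """
    if not host_says_infected:
        return True
    return random.randrange(REINFECTION_RATIO) == 0

# Rough illustration of why the ratio backfired: roughly every seventh "yes"
# still produces a new copy, so on a busy network the number of copies per
# host keeps growing instead of settling at one, bogging machines down.
if __name__ == "__main__":
    copies = 1
    for _ in range(1000):  # 1,000 "are you already infected?" probes
        if should_install(host_says_infected=True):
            copies += 1
    print(f"copies after 1,000 probes of an already-infected host: {copies}")
```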

See more at Cybertelecom :: Morris Worm