Friday, December 09, 2016

Origins of Settlement Free Peering

Internet culture stood as an antithesis to Bell telephone culture. In the early 1960s, Paul Baran, who was concerned about the vulnerability of U.S. communications, advocated that the Department of Defense migrate to a distributed packet-switched network that could survive failure and still get a “Go; No-Go” command to the field. Bell-trained DOD engineers dismissed packet-switched communications as an idea that could not possibly work. In 1971, Larry Roberts, having successfully demonstrated the viability of his packet-switched network experiment, attempted to give ARPANET to AT&T; AT&T was not interested. In the meantime, AT&T had refused to sell private lines to nascent data networks, refused to allow foreign devices (i.e., modems) on its network, and refused to interconnect its network with nascent rivals. Those seeking telecommunications services in order to build computer networks grew frustrated, and serious questions arose concerning whether AT&T’s telecommunications services would meet the needs of growing computer networks.

Thus emerged a cultural rivalry between Nethead and Bellhead cultures. In the eyes of the Netheads, Bellheads were about perpetuating a century-old communications network monopoly that followed a command-and-control model of central operation, which stunted innovation, while Netheads were all about innovation, building a network that fostered the cool things transpiring at the ends. Netheads wanted nothing of the Bellhead model. [Greenstein 2015 at 38 ("most had an almost visceral dislike for" Ma Bell)] [Mayo (quoting Bob Taylor, "Working with AT&T would be like working with Cro-Magnon man. I asked them if they wanted to be early members so they could learn technology as we went along. They said no. I said, Well, why not? And they said, Because packet switching won’t work. They were adamant.")]

In the 1980s, Netheads had to resolve how to interconnect networks. Netheads wanted to interconnect ARPANET with the NSF-sponsored CSNET, but what should be the terms, and who should pay whom? One model before them was the byzantine Bell accounting scheme, which one engineer described as so complicated it was akin to drinking coffee one molecule at a time; a marvelous feat of engineering if not utterly pointless. [Moore 1999] [Jacobson 1999 (discussing how Internet culture is different than Bellhead culture and the implications for the future of the Internet)] [Kleinrock 1994 at 67 (bemoaning the additional cost and complexity of accounting necessary for usage-based pricing)] [MacKie-Mason 1993 at 14 (describing the cost of phone company style billing and accounting as applied to the packet-switched Internet as “astronomical”).] The Netheads' incentive was to grow network effect and continue innovation at the ends. To that end, they wanted the lowest possible barriers to the expansion of the network. The solution was settlement-free interconnection, devoid of complicated Bell accounting. A paper by Lyman Chapin and Chris Owens describes how the arrangement came to be:
In modern terms, we would say that the customers of one ISP (ARPAnet) could not communicate with the customers of another ISP (CSNet), because no mechanism existed to reconcile the different Acceptable Use Policies of the two networks. This disconnect persisted as both sides assumed that any agreement to exchange traffic would necessarily involve the settlement of administrative, financial, contractual, and a host of other issues, the bureaucratic complexity of which daunted even the most fervent advocates of interconnection—until the CSNet managers came up with the idea that we now call “peering,” or interconnection without explicit accounting or settlement. A landmark agreement between NSF and ARPA allowed NSF grantees and affiliated industry research labs access to ARPAnet, as long as no commercial traffic flowed through ARPAnet.
[Chapin 2005 at 9] [Norton Chap. 8]

NSF followed CSNET's example for interconnection settlements; interconnections between NSFNET and regional networks were on a settlement-free basis. [MacKie-Mason 1993 at 18 (“The full costs of NSFNET have been paid by NSF, IBM, MCI and the State of Michigan”).]

Commercial networks emerged in the early 1990s, but they could not exchange traffic through the academic NSFNET. On the one hand, Advanced Network Services (ANS), the contractor that operated the NSFNET, established a commercial backbone service and offered to sell interconnection to nascent commercial networks. ANS had all of the NSFNET clients as end-users, including regional networks and academic networks; as the largest network, ANS could leverage network effect. But the other commercial networks were not interested in paying ANS for the right to access end-users, or in helping ANS become the AT&T-style monopoly of the Internet. Instead they established the Commercial Internet eXchange (CIX) and exchanged traffic on a settlement-free basis. For these early commercial networks, connectivity was paramount and growth of access services was king. [Brock, Economics of Interconnection at ii ("Commercial Internet service providers agreed that interchange of traffic among them was of mutual benefit and that each should accept traffic from the other without settlements payments or interconnection charges. The CIX members therefore agreed to exchange traffic on a "sender keep all" basis in which each provider charges its own customers for originating traffic and agrees to terminate traffic for other providers without charge.").] First UUNET, PSINET, and CERFNET joined CIX. Then Sprint joined. Soon most of the Internet could be reached through CIX. Connectivity grew network effect, which grew the value of the access service that these commercial networks were selling to end-users. CIX became the model of commercial interconnection, while ANS became isolated. [Greenstein 2015 at 81 ("Just a little less than a year later, CIX essentially had everyone except ANS. By the time Boucher held his hearing, ANS had become isolated, substantially eroding their negotiating leverage with others. By June 1992 ANS's settlement proposals no longer appeared viable. In a very public surrender of its strategy, it agreed to interconnect with the CIX on a seemingly short-term basis and retained the right to leave on a moment's notice.")] [Noam 2001 at 63 ("Soon the relative use by the commercial and nonprofit sectors kept shifting, and the power over interconnection moved to the former. By 1993, approximately 80 percent of all Internet sites could be accessed outside the NSFNET structure. CIX blocked ANS traffic from routing through the CIX router, thus depriving ANS users of connectivity to CIX members. Humbled, ANS joined CIX in 1994.")] [Srinagesh at 143 ("In October 1993, CIX, apparently without warning, blocked ANS traffic from transiting the CIX router. At this point, ANS (through its subsidiary CO-RE) joined the CIX and full connectivity was restored.")] In 1994, ANS' assets were sold off to AOL. [History, Advanced Network Services (2004)] [Salus 1995 at 200]
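The "sender keep all" economics described by Brock can be made concrete with a toy model. The providers, traffic volumes, and rates below are hypothetical illustrations, not drawn from any actual CIX arrangement: under sender-keep-all, each provider bills only its own customers and settles nothing with its peer, whereas Bell-style usage settlement transfers money whenever traffic is imbalanced.

```python
# Toy comparison of "sender keep all" peering vs. usage-based settlement.
# Providers, traffic volumes, and rates are hypothetical illustrations.

def sender_keep_all(traffic_out, retail_rate):
    """Each ISP bills its own customers for originating traffic;
    terminating a peer's traffic is free."""
    return {isp: volume * retail_rate[isp] for isp, volume in traffic_out.items()}

def usage_settlement(traffic_out, settlement_rate):
    """Bell-style accounting between two peers: the originating network
    pays the terminating network per unit of traffic delivered."""
    a, b = list(traffic_out)
    net = (traffic_out[a] - traffic_out[b]) * settlement_rate
    return {a: -net, b: net}  # positive = receives settlement payments

traffic = {"ISP-A": 900, "ISP-B": 400}    # units of originated traffic
retail = {"ISP-A": 0.02, "ISP-B": 0.03}   # what each charges its own customers

print(sender_keep_all(traffic, retail))   # each simply keeps its retail revenue
print(usage_settlement(traffic, 0.01))    # imbalance creates a transfer to track
```

The point of the sketch is that sender-keep-all requires no traffic measurement at the interconnect at all, which is precisely the accounting burden the Netheads wanted to avoid.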

In the late 1990s, academic and government networks focused on research, development, and innovation, not commercial competition; they followed settlement-free peering as a simple accounting scheme for interconnection. Commercial backbone providers adopted settlement-free peering as a means of rapidly growing their businesses.

Access networks, which needed to provide full Internet service to their customers, interconnected with and paid transit to commercial backbone providers. Controversy swirled during the late 1990s as the commercial backbones matured their business plans, converting smaller networks that were dependent on the backbone networks' services from settlement-free peers into paying transit customers. Appeals for intervention reached the FCC, which declined to intercede, finding the Internet backbone market competitive.

1968 :: Douglas Engelbart :: The Mother of All Demos

"The Mother of All Demos is a name given retrospectively to Douglas Engelbart's December 9, 1968, demonstration of experimental computer technologies that are now commonplace. The live demonstration featured the introduction of the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor."

Tuesday, December 06, 2016

📽 Silicon Flatirons :: Privacy :: Lorrie Cranor Keynote

Historic Evolution of Internet Interconnection

In the beginning was ARPANET. ARPANET was an end-to-end packet-switched network. ARPANET was The Network, thus interconnection was not a tremendous concern. 

ARPANET’s success begat ALOHANET, SATNET, PRNET, and other packet-switched networks. It was clear that ARPANET's network protocol would have to be revised in order to facilitate interconnection. In 1974, Vint Cerf and Bob Kahn published A Protocol for Packet Network Intercommunication. The design objective of the Internet was to enable interconnection between otherwise incompatible networks, promoting the research and innovation occurring at the edges, and leveraging "network effect." IP made interconnection easy and coordination unnecessary. According to Bob Kahn,
The idea of the Internet was that you would have multiple networks all under autonomous control. By putting this box in the middle, which we eventually called a gateway, it would allow for the federation of arbitrary numbers of networks without the need for any change made to any particular network. So if BBN had one network and AT&T had another, it would be possible to just plug the two together with a [gateway] box in the middle, and they wouldn't have to do anything to make that work other than to agree to let their networks be plugged in.
[SEGALLER, NERDS 2.0.1: A BRIEF HISTORY OF THE INTERNET at 111]
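Kahn's "box in the middle" can be sketched in a few lines of code. This is a minimal illustration, with hypothetical network and host names; the point is that each network keeps its own internal addressing and delivery, and the gateway forwards between them without either side changing anything internally.

```python
# Minimal sketch of Kahn's "box in the middle": two autonomous networks,
# each with its own hosts, federated by a gateway that forwards packets
# between them. Names and structure are illustrative, not historical code.

class Network:
    def __init__(self, name):
        self.name = name
        self.hosts = {}            # local address -> list of received payloads

    def add_host(self, addr):
        self.hosts[addr] = []

    def deliver(self, addr, payload):
        self.hosts[addr].append(payload)

class Gateway:
    """Connects networks without requiring any change inside them."""
    def __init__(self, *networks):
        self.networks = {n.name: n for n in networks}

    def forward(self, dst_net, dst_addr, payload):
        # The gateway only needs each network's agreement to be plugged in.
        self.networks[dst_net].deliver(dst_addr, payload)

bbn = Network("BBN")
att = Network("ATT")
bbn.add_host("host-1")
att.add_host("host-9")

gw = Gateway(bbn, att)                 # "just plug the two together"
gw.forward("ATT", "host-9", "hello from BBN")
print(att.hosts["host-9"])             # ['hello from BBN']
```

Neither `Network` knows the other exists; only the `Gateway` spans both, which is the federation idea Kahn describes.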

The Internet reflected a culture where connectivity was paramount. With each interconnection of an additional network, the value of the Internet grew. [Carpenter, Architectural Principles of the Internet, IETF RFC 1958,  Sec. 2.1  ("the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network. The current exponential growth of the network seems to show that connectivity is its own reward.")] [THE INTERNET'S COMING OF AGE, COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD, NATIONAL RESEARCH COUNCIL 35 (2001) ("the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity")] [REALIZING THE INFORMATION FUTURE: THE INTERNET AND BEYOND, COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD, NATIONAL RESEARCH COUNCIL 3 (1994) (setting forth vision for Open Data Networks, stating in first principle that network should "permit[] universal connectivity.")] [David Clark, A Cloudy Crystal Ball: Visions of the Future, Presentation at the IETF, Slide 4 (July 1992) ("Our best success was not computing, but hooking people together").]
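The intuition that "connectivity is its own reward" is often summarized, admittedly loosely, by Metcalfe's heuristic: the number of possible pairwise connections among n interconnected parties grows as n(n-1)/2, so each newly attached network raises the potential value of every existing one.

```python
# Illustration of network effect: possible pairwise connections among
# n interconnected networks grow quadratically (Metcalfe's heuristic).

def pairwise_links(n):
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, pairwise_links(n))
# Attaching the 100th network adds 99 new potential connections at once.
```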

In 1985, the National Science Foundation concluded that the Internet was good, and sought to expand its reach to the greater academic community. Instead of providing the entire end-to-end network, NSF elected to supply a crucial piece: the first dedicated nationwide Internet backbone. The NSFNET interconnected with regional networks, and the regional networks interconnected with local networks. In so doing, NSFNET offered long-distance transit as well as traffic exchange between networks.

Commercial networks concluded that the Internet was good. Commercial traffic, however, could not be exchanged across NSFNET. Therefore, the early commercial networks established the Commercial Internet eXchange (CIX) and exchanged traffic on a settlement-free basis. For these early commercial networks, connectivity was paramount and growth of access services was king. [Brock, Economics of Interconnection at ii ("Commercial Internet service providers agreed that interchange of traffic among them was of mutual benefit and that each should accept traffic from the other without settlements payments or interconnection charges. The CIX members therefore agreed to exchange traffic on a "sender keep all" basis in which each provider charges its own customers for originating traffic and agrees to terminate traffic for other providers without charge.").]

The US Government concluded that the Internet was good, and sought to make it available to all. NSF was tasked with privatizing the Internet. In order to be successful, NSF had to establish a means for emerging commercial networks to exchange traffic. Following the CIX model, NSF built four Internet exchange points, known as NAPs, in Washington, D.C., New York, Chicago, and San Jose. [National Science Foundation Solicitation 93-52, Solicitation for Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and NREN Program (May 6, 1993) (setting forth NSF's plan for privatizing NSFNET).] [GREENSTEIN, HOW THE INTERNET BECAME COMMERCIAL at 82 (NAPs were modeled on CIX and "helped the commercial Internet operate as a competitive market after the NSFNET shut down.").]


In 1995, NSF decommissioned the NSFNET, and the commercial Internet was born in its image. There were Tier 1 backbone networks (WANs) that provided nationwide or global service, Tier 2 networks (MANs, Metro) that provided regional service, and Tier 3 networks (LANs, Local) that provided access service. Traffic from one end-user to another end-user would travel up the topology from Tier 3 networks to be exchanged at the Tier 1 level, and then travel back down. Within a network, capacity was robust. [See David Young, Why is Netflix Buffering? Dispelling the Congestion Myth, VERIZON PUBLIC POLICY BLOG (July 10, 2014) (diagramming Verizon network with backbone, metro and local networks)] Between networks, at the points of interconnection, capacity could be constrained. Interconnection capacity carried the aggregate of all traffic from the different end-users, services, and firms to which a network provider offered service; it could become a traffic pinch-point. In order to avoid congestion, interconnection partners would have to cooperate. [See KLEINROCK, REALIZING THE INFORMATION FUTURE: THE INTERNET AND BEYOND at 183 ("Because the network is not implemented as one monolithic entity but is made of parts implemented by different operators, each of these entities must be separately concerned with achieving good loading of its links, avoiding congestion, and making a profit. The issue of sharing and congestion arises particularly at the point of connection between providers. At this point, the offered traffic represents the aggregation of many individual users, and thus cost-effective sharing of the link can be assumed. However, if congestion occurs, one must eventually push back on original sources. Options include pricing and technical controls on congestion; there may be other mechanisms for shaping consumer behavior as well.")]
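The up-and-down traffic flow through the tiers can be sketched as a path computation over a toy three-tier hierarchy. The provider names are hypothetical, and the model assumes, for simplicity, that both access networks ultimately buy transit from the same Tier 1 backbone:

```python
# Toy model of the Tier 1/2/3 hierarchy: each network points at its
# transit provider; traffic climbs from the access network until it
# reaches the shared backbone, then descends the far side. Names are
# hypothetical; assumes both ends reach the same Tier 1 network.

provider = {                      # customer -> transit provider
    "local-east": "metro-east",
    "metro-east": "backbone-1",
    "local-west": "metro-west",
    "metro-west": "backbone-1",
}

def chain(net):
    """Path from an access network up to its backbone."""
    path = [net]
    while net in provider:
        net = provider[net]
        path.append(net)
    return path

def route(src, dst):
    up = chain(src)                # climb the topology...
    down = chain(dst)              # ...and descend the far side
    return up + down[-2::-1]       # join at the shared backbone

print(route("local-east", "local-west"))
# ['local-east', 'metro-east', 'backbone-1', 'metro-west', 'local-west']
```

Every hop in that printed path that crosses from one provider's network to another is an interconnection point, which is exactly where the aggregation and potential congestion described above occurs.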

Early academic and government networks focused on research, development, and innovation, not commercial competition; they developed a simple accounting scheme for interconnection: settlement-free peering. Early commercial backbone providers adopted settlement-free peering as a means of rapidly growing their businesses. Access networks, which needed to provide full Internet service to their customers, interconnected with and paid transit to commercial backbone providers.

The Internet interconnection market would then undergo seismic evolution. By 2010, large access providers had reestablished a strong position in the network ecosystem, and leveraged network effect and interconnection to implement strategic goals. Broadband Internet access service providers were able to reverse the flow of interconnection settlement fees, going from transit customers to gatekeepers, charging paid peering for access to their end-users. The interconnection market had departed from the characteristics of the early commercial Internet, where backbone networks were king and the market was hyper-competitive, and returned to traditional consolidated network market economics.  [See generally WU, THE MASTER SWITCH: THE RISE AND FALL OF INFORMATION EMPIRES (discussing how communications markets go through cycles of innovation and disruption, competition, and then consolidation and market power).]



Tuesday, November 15, 2016

The Sharing Economy and Sec. 230(c) of the Communications Decency Act

The sharing economy is a challenge for local communities. On the good side, it creates economic opportunity and reduces prices. On the bad side, it circumvents public safety and welfare protections.

Such is the clash between Airbnb and local jurisdictions. San Francisco implemented a local ordinance that permits short-term rentals on the condition that the rental property is registered. In order to register the property, the resident must provide proof of liability insurance and compliance with local code, usage reporting, tax payments, and a few other things. San Francisco then enacted another ordinance that makes it a misdemeanor crime to collect a booking fee for unregistered properties.

Airbnb and Homeaway sued, arguing that their businesses are protected by Section 230(c) of the Communications Decency Act, 47 U.S.C. § 230(c) (and some other arguments ignored here). EFF, CDT, The Internet Association and some other usual suspects intervened ~ this case is attracting lots of attention. AIRBNB, INC. v. City and County of San Francisco, Dist. Court, ND California 2016.

Before delving into the application of the law to this case, let's review a few key facts. Airbnb is a website where property owners can list available rentals, and guests can arrange for accommodations. Airbnb does not own the properties in question. Airbnb makes its money by charging a service fee to the property owner and the guest.

Sec. 230(c) protects interactive computer services from liability for third party content. Specifically, "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 

Plaintiffs sued, arguing that the San Francisco regulation is preempted by Sec. 230(c).

Plaintiffs lost, and it's surprising because plaintiffs are here being held accountable for the actions of a third party. But listen to the rationale of the court, which reflects careful crafting by San Francisco.

First, plaintiffs are not being held liable for a third party's speech. If a property owner publishes an advertisement for a property, that is perfectly fine. The property owner can do so, and plaintiffs have no liability. Plaintiffs are not being called on to "monitor, edit, withdraw, or block" any listings. Hosting those rental announcements, in and of itself, is not actionable.

It's the next step that plaintiffs cannot do. Having received the content and hosted it, plaintiffs themselves cannot take the action of collecting a booking fee for an unregistered property. That is something the plaintiffs are doing, not the third party. The regulation is placing an obligation on the plaintiffs to confirm that the properties are registered. The compliance of plaintiffs is what is actionable, not the content of the property owners' listings.

Plaintiffs and intervenors scramble and cite to the long litany of caselaw establishing that Sec. 230(c) provides broad immunity. Sec. 230(c) is the favorite go-to statute for establishing that online services cannot be held liable for what other people say. The problem, according to the Court, is that in each of those cited cases, the website was being held liable for its role as a publisher of third party content. There was something illegal, offensive, or problematic with the content, and the solution of the government, or of the plaintiff who believed that he or she was being defamed, was to make the website liable for the third-party content. But that is exactly the type of liability Sec. 230(c) was enacted to prevent. Online services are interactive manifestations of the engagements of multiple sources, including the host and third parties. Sec. 230(c) establishes that interactive computer services are not liable for third-party content.

In this case, an obligation is placed on plaintiffs to confirm that rental properties are registered, and only collect booking fees from those properties that are registered. Plaintiffs can host any third party content they want.

Tuesday, November 08, 2016

🚴 NIST Cybersecurity Practice Guide, Special Publication 1800-6: “Domain Name Systems-Based Electronic Mail Security”



Business Challenge

"Email has become the dominant method of electronic communication for both private and public sector organizations, fueled by low costs and fast delivery.  Securing these transactions has been less of a priority, which is one reason why email attacks have increased.
"Whether the goal is authentication of the source of an email message or assurance that the message has not been altered by or disclosed to an unauthorized party, organizations must employ some cryptographic protection mechanism. Economies of scale and a need for uniform security implementation drive most enterprises to rely on mail servers and/or Internet service providers (ISPs) to provide security to all members of an enterprise. Many current server-based email security mechanisms are vulnerable to, and have been defeated by, attacks on the integrity of the cryptographic implementations on which they depend. The consequences of these vulnerabilities frequently involve unauthorized parties being able to read or modify supposedly secure information, or introduce malware to gain access to enterprise systems or information. Protocols exist that are capable of providing needed email security and privacy, but impediments such as unavailability of easily implemented software libraries and operational issues stemming from some software applications have limited adoption of existing security and privacy protocols.

Solution

"This project has resulted in NIST Special Publication 1800-6, “Domain Name Systems-Based Electronic Mail Security,” which illustrates how commercially available technologies can meet an organization’s needs to improve email security and defend against email-based attacks such as phishing and man-in-the-middle types of attacks.
"This draft practice guide describes a proof of concept security platform that demonstrates trustworthy email exchanges across organizational boundaries and includes authentication of mail servers, signing and encryption of email, and binding cryptographic key certificates to the servers.
The goal of this project is to help organizations:
  • Encrypt emails between mail servers
  • Allow individual email users to digitally sign and/or encrypt email messages
  • Allow email users to identify valid email senders as well as send digitally signed messages and validate signatures of received messages
"The example solution uses Domain Name System Security Extension (DNSSEC) protocol to authenticate server addresses and certificates used for Transport Layer Security (TLS) to DNS names.
The project's demonstrated security platform can provide organizations with improved privacy and security protection for users' operations and improved support for implementation and use of the protection technologies. The platform also improves the usability of available DNS security applications and encourages wider implementation of DNSSEC, TLS and S/MIME to protect electronic communications.
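The DANE binding that NIST describes, associating a mail server's TLS certificate with its DNS name under DNSSEC protection, is carried in TLSA records. A zone-file fragment for a hypothetical domain might look like the following sketch (the domain is illustrative and the hash is a placeholder, not a real certificate digest; the zone would have to be DNSSEC-signed for any of this to be trustworthy):

```
; Hypothetical DANE record for an SMTP server at mail.example.com.
; Port 25/TCP; usage 3 (DANE-EE: match the end-entity certificate),
; selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256).
_25._tcp.mail.example.com. IN TLSA 3 1 1 ( 0123456789abcdef...placeholder... )
```

A verifying sender would look up this record over DNSSEC and require that the server's presented TLS certificate match the published digest, closing the gap left by unauthenticated opportunistic TLS.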
NIST SP 1800-6 is in draft form and open for public comment until December 19, 2016. Please share your comments and feedback on this project and its example solution.

:: RFC :: Copyright Office Seeks Additional Comments for Section 512 Study

Copyright Newsletter: "The U.S. Copyright Office is conducting a study to evaluate the operation of the ISP safe harbor provisions of section 512 of title 17 and has reviewed public input from the first round of written comments and from roundtable participation. You may access the comments and a transcript of the roundtables on the Copyright Office website here.
"To further aid the analysis, the Copyright Office is now soliciting additional written comments on a subset of issues. These include questions relating to the characteristics of the current Internet ecosystem, operation of the current DMCA safe harbor system, potential future evolution of the DMCA safe harbor system, and other developments relevant to this study. The Copyright Office is also seeking submissions of empirical research on any topics that are likely to provide useful data to assess and/or improve the operation of section 512. 
"You may access the Federal Register notice here. Written comments are to be submitted electronically using the regulations.gov system. Specific instructions for submitting comments are available on the Copyright Office website at http://copyright.gov/policy/section512/comment-submission/
"Comments must be received no later than 11:59 p.m. Eastern Time on February 6, 2017. Empirical research studies must be received no later than 11:59 p.m. Eastern Time on March 8, 2017.

When CDA Immunity is not CDA Immunity

Here's a question:  If 47 USC 230(c) (the Good Samaritan provision of the Communications Decency Act) says that online services are not liable for third party content, then can you even sue the online service?  Shouldn't the online service be immune from lawsuit? Because, after all, what would be the point of being sued for something for which you cannot be liable?

This is a question which courts have pondered.  Why does it matter?  With immunity, you can file a Rule 12(c) Motion for Judgment on the Pleadings - saying "Judge, there just aint nothing here."  With protection from liability, the litigation proceeds a bit further and you file a Rule 12(b)(6) Motion for Failure to State a Claim - saying "Judge, there just aint nothing here." See the difference? One lets the litigation out of the gates; the other does not.  Both have the same result (potentially).

We visit this question in GENERAL STEEL DOMESTIC SALES, LLC v. Chumley, Court of Appeals, 10th Circuit 2016, where two companies were in the business of prefabricating steel buildings.
PLAINTIFF employed Mr. DEFENDANT until 2005, when he left to start his own competing steel building company. The parties have been engaged in numerous legal disputes ever since. 
The underlying dispute involves DEFENDANT Steel's negative online advertising campaign against PLAINTIFF Steel. When internet users searched for "PLAINTIFF Steel," negative advertisements from DEFENDANT Steel would appear on the results page. Clicking on the advertisements would direct users to DEFENDANT Steel's web page entitled "Industry Related Legal Matters". The IRLM Page contained thirty-seven posts, twenty of which form the basis of PLAINTIFF Steel's complaint. To varying degrees, the twenty posts summarize, quote, and reference lawsuits involving PLAINTIFF Steel. Each lawsuit is listed with a title, a brief description of the case, and a link, by which the reader could access the accompanying court document. The majority of the case descriptions contained quotes that were selectively copied and pasted from the underlying legal documents.
Plaintiff sued. Defendant moved to dismiss under Sec. 230(c), arguing that "the CDA bars not just liability, but also suit."
The district court found that DEFENDANT Steel was entitled to immunity for three posts because those posts simply contained links to content created by third parties. The court refused, however, to extend CDA immunity to the remaining seventeen posts and the internet search ads. The court found that the "defendants created and developed the content of those ads," and were therefore not entitled to immunity. With respect to the remaining seventeen posts, the court found that the defendants developed the content by selectively quoting and summarizing court documents in a deceiving way.  

So, right away, there's a problem.  Sec. 230(c) protects online services from liability for third party content.  But not from liability for their own content. And not so much from liability when the online service has a hand in the creation of that third party content. We have seen as much in cases like Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1164-65 (9th Cir. 2008) (en banc) (holding website operator was not entitled to § 230(c)(1) protection where it made users answer discriminatory questions as a condition of doing business, thereby participating in the "development" of the users' submissions), and Fed. Trade Comm'n v. Accusearch Inc., 570 F.3d 1187, 1198 (10th Cir. 2009) (holding a website could not claim immunity under the CDA if it was "responsible for the development of the specific content that was the source of the alleged liability").

So there is a bit of a question.  Is this third party content or not? And even if it is third party content, what hand did Defendant have in cultivating that content? These are factual questions upon which liability could turn.

But the court seems to want to nix the "immunity" discussion.  It states, "Whether Section 230 provides immunity from suit or liability such that a denial would permit an interlocutory appeal is an issue of first impression for this court."

Um, no.  Sixteen years ago I am pretty sure the 10th Circuit said, "We hold that America Online ... is immune from suit under § 230." Ben Ezra, Weinstein, & Co., Inc. v. Am. Online Inc., 206 F.3d 980, 986 (10th Cir. 2000).

But much to the chagrin of my fellow professionals who can't understand how lawyers write, tucked down in a footnote the Court states "Our description of the CDA as providing immunity from suit in our case of Ben Ezra, Weinstein, & Co. v. America Online Inc., 206 F.3d 980, 983 (10th Cir. 2000), did not resolve this question, as this issue was not before us in that case."

Um.  Okay.  What are we talking about?  What "issue" was not before the court?  Well, in Ben Ezra, Defendant won on a Motion for Summary Judgment.  That is a motion decided on the facts, frequently after discovery has been completed.  Here, by contrast, we have a motion on the pleadings asserting that the litigation cannot proceed at all.

In other words, what does the word "immune" mean? Does it mean "not liable" because of an affirmative defense?  Or does it mean you cannot even sue the Defendant in the first place?  Same word; two different meanings. 

If the question is whether you cannot even sue the Defendant, that's a pretty high bar, says the Court. The statute in question, Sec. 230(c), must itself contain a statutory or constitutional bar.  We are talking not being able to sue government officials or not being able to sue the federal government (unless it gives you permission).  It's not common.  It's normally protection government grants itself.  And.... as the Court points out.... Defendant is not the government.  The Court concludes, Defendant "has not identified a historical basis for providing private parties immunity from suit under the CDA."

In short, Sec. 230(c) is not a bar to lawsuit.  Sec. 230(c) does, however, provide an affirmative defense to liability for third party content. Defendants still gotta defend.  

Wednesday, November 02, 2016

1988, Nov. 2: Morris Worm Unleashed on Internet

"In the fall of 1988, Morris was a first-year graduate student in Cornell University's computer science Ph.D. program. Through undergraduate work at Harvard and in various jobs he had acquired significant computer experience and expertise. When Morris entered Cornell, he was given an account on the computer at the Computer Science Division. This account gave him explicit authorization to use computers at Cornell. Morris engaged in various discussions with fellow graduate students about the security of computer networks and his ability to penetrate it.



[Image: Morris Internet Worm Source Code, by Go Boston Card]
"In October 1988, Morris began work on a computer program, later known as the Internet "worm" or "virus." The goal of this program was to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects that Morris had discovered. The tactic he selected was release of a worm into network computers. Morris designed the program to spread across a national network of computers after being inserted at one computer location connected to the network. Morris released the worm into Internet, which is a group of national networks that connect university, governmental, and military computers around the country. The network permits communication and transfer of information between computers on the network.

"Morris sought to program the Internet worm to spread widely without drawing attention to itself. The worm was supposed to occupy little computer operation time, and thus not interfere with normal use of the computers. Morris programmed the worm to make it difficult to detect and read, so that other programmers would not be able to "kill" the worm easily. Morris also wanted to ensure that the worm did not copy itself onto a computer that already had a copy. Multiple copies of the worm on a computer would make the worm easier to detect and would bog down the system and ultimately cause the computer to crash. Therefore, Morris designed the worm to "ask" each computer whether it already had a copy of the worm. If it responded "no," then the worm would copy onto the computer; if it responded "yes," the worm would not duplicate. However, Morris was concerned that other programmers could kill the worm by programming their own computers to falsely respond "yes" to the question. To circumvent this protection, Morris programmed the worm to duplicate itself every seventh time it received a "yes" response. As it turned out, Morris underestimated the number of times a computer would be asked the question, and his one-out-of-seven ratio resulted in far more copying than he had anticipated. The worm was also designed so that it would be killed when a computer was shut down, an event that typically occurs once every week or two. This would have prevented the worm from accumulating on one computer, had Morris correctly estimated the likely rate of reinfection.

"Morris identified four ways in which the worm could break into computers on the network: (1) through a "hole" or "bug" (an error) in SEND MAIL, a computer program that transfers and receives electronic mail on a computer; (2) through a bug in the "finger demon" program, a program that permits a person to obtain limited information about the users of another computer; (3) through the "trusted hosts" feature, which permits a user with certain privileges on one computer to have equivalent privileges on another computer without using a password; and (4) through a program of password guessing, whereby various combinations of letters are tried out in rapid sequence in the hope that one will be an authorized user's password, which is entered to permit whatever level of activity that user is authorized to perform.

"On November 2, 1988, Morris released the worm from a computer at the Massachusetts Institute of Technology. MIT was selected to disguise the fact that the worm came from Morris at Cornell. Morris soon discovered that the worm was replicating and reinfecting machines at a much faster rate than he had anticipated. Ultimately, many machines at locations around the country either crashed or became "catatonic." When Morris realized what was happening, he contacted a friend at Harvard to discuss a solution. Eventually, they sent an anonymous message from Harvard over the network, instructing programmers how to kill the worm and prevent reinfection. However, because the network route was clogged, this message did not get through until it was too late. Computers were affected at numerous installations, including leading universities, military sites, and medical research facilities. The estimated cost of dealing with the worm at each installation ranged from $200 to more than $53,000.

"Morris was found guilty, following a jury trial, of violating 18 U.S.C. Section 1030(a)(5)(A). He was sentenced to three years of probation, 400 hours of community service, a fine of $10,050, and the costs of his supervision."
- U.S. v. Morris, 928 F.2d 504 (2d Cir. 1991)
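The one-in-seven reinfection rule the court describes can be sketched in a few lines of Python. This is purely illustrative of the logic in the opinion, not the worm's actual C source; the function name and structure are my own.

```python
import random

def should_infect(claims_already_infected: bool) -> bool:
    """Decide whether the worm copies itself onto a host.

    Mirrors the behavior the court describes: if the host answers
    "no" (not yet infected), the worm copies itself; if the host
    answers "yes", the worm copies itself anyway roughly one time
    in seven, to defeat hosts programmed to falsely answer "yes".
    """
    if not claims_already_infected:
        return True
    # Duplicate on roughly one out of every seven "yes" answers.
    return random.randrange(7) == 0
```

As the opinion notes, Morris underestimated how often the question would be asked, so even this small 1-in-7 probability produced far more copies, and far more load, than he anticipated.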

See more at Cybertelecom :: Morris Worm

Monday, October 24, 2016

Why the Internet is the way it is (and why it will be very different in ten years) - David Clark

The History of the IANA Transition



NANOG 68: Scott Bradner will discuss the history of Internet Governance leading up to the transition of oversight of the IANA function from NTIA to the internet's multistakeholder community.

Geoff Huston, How Did We Get Here? A Look Back at the History of IANA, CIRCLEID Oct. 23, 2016 ("At the start of the month, the United States Government let its residual oversight arrangements with ICANN (the Internet Corporation for Assigned Names and Numbers) over the operation of the Internet Assigned Numbers Authority (IANA) lapse. No single government now has a unique relationship with the governance of the protocol elements of the Internet, and it is now in the hands of a community of interested parties in a so-called Multi-Stakeholder framework. This is a unique step for the Internet and not without its attendant risks. How did we get here?")