Frontiers in Security & Privacy
The objective of this seminar series is to bring together some of the world’s top researchers and practitioners in security & privacy and, together, construct “the big picture” of where the field stands today and what challenges it faces tomorrow.
We focus on the key ideas shaping this field. As pointed out by TED, there is no greater force for changing the world than a powerful idea: an idea can be created out of nothing except an inspired imagination; an idea weighs nothing; it can be transferred across the world at the speed of light for virtually zero cost; and yet an idea, when received by a prepared mind, can have extraordinary impact; it can reshape that mind’s view of the world; it can dramatically alter the behavior of the mind’s owner; and it can cause the mind to pass on the idea to others.
The talks strive to be accessible to a broad computer science audience, and thus to touch as many prepared minds as possible. The speakers focus less on advertising their latest results and more on bringing out the fundamental insights and ideas behind their work.
Summer 2013 Program
(all talks take place in room BC 420)
Fall 2013 Program
Sept 30: William Binney, The Government Is Profiling You
Protecting Sensitive Data in Web Browsers with ScriptPolice
Prof. Brad Karp (University College London)
Checking the World’s Software for Exploitable Bugs
Prof. David Brumley (CMU)
Attackers only need to find a single exploitable bug in order to install worms, bots, and other malware on vulnerable computers. Unfortunately, developers rarely have the time or resources to fix all bugs. This raises a serious security question: which bugs are exploitable, and thus should be fixed first?
My research team’s vision is to automatically check the world’s software for exploitable bugs. Our approach is based on program verification, but with a twist. Traditional verification takes a program and a specification of safety as inputs, and checks that all execution paths of the program meet the safety specification. The twist in AEG (automatic exploit generation) is that we replace typical safety properties with an “un-exploitability” property, and the “verification” process becomes finding a program path on which the un-exploitability property does not hold. Our analysis generates working control-flow hijack and command-injection exploits for exploitable paths. I’ll discuss our results with a data set of over 1,000 programs and over 370 days of analysis time. Despite the large amount of analysis, there is still much to be done. In the last part of this talk, I’ll describe several of the remaining research challenges.
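The path-search framing above can be sketched in miniature. This is a hedged toy, not the speakers' AEG system: it treats "exploitable" as "some execution violates an un-exploitability property" and brute-forces inputs for a witness, where a real system would use symbolic execution. The program and the 8-byte buffer are hypothetical.

```python
# Toy illustration of exploitability checking as a search for a path that
# violates an "un-exploitability" property -- here, "no out-of-bounds write".

def toy_program(inp: bytes):
    """A toy program with a bounds bug: copies input into an 8-byte buffer."""
    writes = []                        # record (index, byte) writes
    for i, b in enumerate(inp):       # missing length check: the bug
        writes.append((i, b))
    return writes

def find_exploit(max_len=16):
    """Search inputs for an execution where the safety property fails."""
    for n in range(max_len + 1):
        inp = b"A" * n
        writes = toy_program(inp)
        if any(i >= 8 for i, _ in writes):   # un-exploitability violated
            return inp                       # a witness "exploit" input
    return None

print(len(find_exploit()))  # shortest input that overflows the 8-byte buffer
```

A real AEG pipeline explores program paths symbolically and emits a concrete hijack input rather than enumerating byte strings, but the shape of the search is the same.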
Learning a Zonotope and More: Cryptanalysis of NTRUSign Countermeasures
Léo Ducas (ENS)
Lattices have attracted a lot of interest in public-key cryptography, and they are now well-understood tools for building schemes whose security can be reduced to the hardness of lattice problems such as finding the shortest vector. Yet the early signature scheme NTRUSign, from 2003, does not rely on these recent tools, and its security remained an open question. Despite the existence of provably secure schemes, this question remains essential in practice because NTRUSign is far more efficient than provably secure alternatives.
A first step was taken by Nguyen and Regev in 2006, who showed that a “raw” version of NTRUSign is subject to a statistical attack. Precisely, they showed that the signatures belong to a parallelepiped related to the secret key, and that it is possible to learn that parallelepiped given enough signatures.
Yet the full version of NTRUSign contained a preventive countermeasure against this kind of attack, consisting of adding a randomized perturbation in the hope of preventing any statistical attack. In this talk we will first show that this perturbation makes the signatures lie in a zonotope, and that it is still possible to learn that zonotope; the attack was implemented, and the full secret key could be recovered from about 5,000 signatures. We will also tackle alternative perturbation techniques, which interestingly lead to the famous Graph Isomorphism problem.
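A toy numerical sketch of why such "learning" attacks are possible at all (our illustration, not the Nguyen-Regev or zonotope attack themselves, and with a made-up 2x2 "secret" basis): points distributed uniformly over a parallelepiped {Vx : x in [-1,1]^2} have covariance (1/3)VV^T, so enough samples reveal the secret basis V up to rotation. The real attacks go further, using fourth moments to pin down V itself.

```python
import random

# "Secret" basis V (hypothetical). Signatures are modeled as uniform
# samples from the parallelepiped it spans; their second moment leaks VV^T.
random.seed(1)
V = [[3.0, 1.0], [1.0, 2.0]]

def sample():
    x = [random.uniform(-1, 1) for _ in range(2)]
    return [V[0][0]*x[0] + V[0][1]*x[1], V[1][0]*x[0] + V[1][1]*x[1]]

N = 100_000
pts = [sample() for _ in range(N)]

# Empirical 3 * covariance should approximate V V^T.
recovered = [[3 * sum(p[i] * p[j] for p in pts) / N for j in range(2)]
             for i in range(2)]
target = [[10.0, 5.0], [5.0, 5.0]]   # V V^T, computed by hand
ok = all(abs(recovered[i][j] - target[i][j]) < 0.2
         for i in range(2) for j in range(2))
print(ok)
```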
A Mobile Platform and Social Stack for Personal Data: Open Mustard Seed
Dr. John Clippinger (MIT Media Lab)
According to a recent World Economic Forum report, personal data has become a new asset class and the “new oil of the Internet”. As such, personal data needs to be protected, but also shared, analyzed, and monetized. Regulatory practices have been slow to keep pace with the changing nature of data capture, analysis, and use. As a consequence, innovations in digital legal, regulatory, and governance practices and mechanisms are needed to keep pace with advances in sensor, machine learning, and “Big Data” technologies. This talk presents an approach that gives individuals and groups control over their personal data while enabling the trusted exchange of data. Project Open Mustard Seed, a collaboration of ID3 and the MIT Media Lab, provides an open source platform for innovations in governance, authentication, identity management, access control, auditing, analytics, and visualization technologies for highly scalable forms of coordinated action and exchange. The goal is to enable new forms of “data banking”, collective action, and digital institution building and experimentation that are self-governing and self-correcting.
Whole Genome Sequencing: Innovation Dream or Privacy Nightmare?
Dr. Emiliano De Cristofaro (Xerox PARC)
Recent advances in DNA sequencing technologies have put ubiquitous availability of whole human genomes within reach. It is no longer hard to imagine the day when everyone will have the means to obtain and store their own DNA sequence. Widespread and affordable availability of whole genomes immediately opens up important opportunities in a number of health-related fields. In particular, common genomic applications and tests performed in vitro today will soon be conducted computationally, using digitized genomes. New applications will be developed as genome-enabled medicine becomes increasingly preventive and personalized. However, the very same progress also amplifies worrisome privacy concerns, since a genome represents a treasure trove of highly personal and sensitive information.
In this talk, we will overview biomedical advances in genomics and discuss associated privacy, ethical, and security challenges. We begin to address privacy-respecting genomic tests by focusing on some important applications, such as personalized medicine, paternity tests, ancestry testing, and genetic compatibility tests. After carefully analyzing these applications and their requirements, we propose a set of efficient privacy-enhancing techniques based on private set operations. This allows us to implement, in silico and in a secure fashion, some operations that are currently performed via in vitro methods. Experimental results demonstrate that the proposed techniques are both feasible and practical today. Finally, we explore a few alternatives for securely storing human genomes and allowing authorized parties to run tests in such a way that only the required minimum amount of information is disclosed, and present an Android API framework geared for privacy-preserving genomic testing.
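To make "private set operations" concrete, here is a hedged sketch of one classic building block, a Diffie-Hellman-style private set intersection (our illustration in the honest-but-curious model, not necessarily the speaker's protocol). The SNP marker strings and the toy modulus are made up; a real deployment would use a proper elliptic-curve group.

```python
import hashlib
import secrets

def H(item: str, p: int) -> int:
    """Hash an item into the toy group (fine for illustration only)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p

def psi(set_a, set_b, p=2**127 - 1):
    """Toy DH-based PSI: returns the items of set_a that are also in set_b.
    Each side only ever sees the other's items blinded by a secret exponent;
    double-blinded values H(x)^(ab) match exactly when the items match."""
    a = secrets.randbelow(p - 2) + 1          # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1          # Bob's secret exponent
    items = sorted(set_a)                     # fix an order for Alice's items
    a_twice = [pow(pow(H(x, p), a, p), b, p) for x in items]
    b_twice = {pow(pow(H(y, p), b, p), a, p) for y in set_b}
    return {x for x, v in zip(items, a_twice) if v in b_twice}

# e.g. genetic markers shared between two (hypothetical) genomes
print(psi({"rs123:A", "rs456:G", "rs789:T"}, {"rs456:G", "rs999:C"}))
```

A genetic test built on such primitives reveals only the intersection (or just its size), rather than either party's full marker set.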
System-wide Intrusion Recovery and Running Applications over Encrypted Data
Prof. Nickolai Zeldovich (MIT)
Computer systems are routinely compromised – as a result of software vulnerabilities, mis-configuration by administrators, or insecure choices by end users – and compromises seem inevitable in almost any system. This talk will describe two of our recent research projects to provide security despite inevitable compromises. First, for integrity, we have been building systems that provide “system-wide undo”, which allows users or administrators to recover the integrity of a system after an intrusion, by undoing the attacker’s actions and all causal effects thereof, while preserving legitimate user changes. Second, to protect confidentiality, we have been building systems that run applications over encrypted data, so that even if a server is compromised, an adversary learns only encrypted data, and cannot obtain plaintext confidential information.
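The second idea, serving queries over data the server cannot read, can be sketched with a minimal example in the spirit of such systems (our simplification, not the speakers' implementation): a client-side deterministic PRF stands in for deterministic encryption, which is the standard trick for supporting equality queries on ciphertexts. The key, table, and query are all hypothetical.

```python
import hashlib
import hmac

KEY = b"client-side secret key"   # never leaves the client

def seal(value: str) -> str:
    """Deterministic keyed token: equal plaintexts give equal tokens."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The client uploads sealed rows; the server stores only opaque tokens.
server_rows = [seal(v) for v in ["alice", "bob", "alice"]]

# The client issues an equality query by sealing the predicate value;
# the server counts matching tokens without ever learning "alice".
token = seal("alice")
print(server_rows.count(token))
```

Deterministic tokens leak equality patterns by design; real systems layer stronger (randomized, order-preserving, homomorphic) encryptions and peel them off only as far as each query requires.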
Protecting Confidentiality in Cloud Data Processing: Europe’s Keyser Söze Strategy
Strong economic incentives to adopt the Cloud processing paradigm present obvious risks to the confidentiality of data exposed to foreign jurisdictions. Homomorphic encryption seems unlikely to be practical, and current ‘trusted computing’ technology is designed against consumer-grade adversaries. European policymakers have been proposing frameworks for legal certification which would permit unlimited export of personal data, subject to a commercial security audit of the Cloud platform against external threats. However, there appears to be a dissonance between regulator expectations that foreign ‘requests’ for data will be discreet and follow due process, and evidence from whistle-blowers that apparatus for continuous mass surveillance is already systematically deployed. Moreover, the small print of these frameworks appears to have been crafted to turn a blind eye to secret ‘national security’ access to data, even though the relevant foreign laws do not comply with European human rights standards, for example by discriminating by nationality and allowing purely political purposes unrelated to criminality. This talk will describe the recent policy history, from the Safe Harbour Agreement to current controversies over the new draft EU Data Protection Regulation.
A High-Level Language for Secure Distributed Computation
Prof. Andrew Myers (Cornell University)
People exchange code and data increasingly freely across the Internet and the Web, but both code and data are vectors for attacks on confidentiality and integrity. The Fabric project is developing higher-level programming models and programming languages that get us closer to programming the Internet Computer directly. Fabric supports the free exchange of code and data across a decentralized, distributed system. But unlike the Web, Fabric has a principled basis for compositional security: language-based information flow. Fabric raises the level of abstraction for programmers, which simplifies programming, but also makes it easier to reason clearly about security, even in the presence of distrusted mobile code.
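The phrase "language-based information flow" can be illustrated with a small dynamic sketch (a hedged toy of the style of guarantee such languages enforce, not Fabric's actual label model or syntax; the labels and values are invented): data carries confidentiality labels, computed results take the join of their inputs' labels, and flows to less-trusted sinks are rejected.

```python
LEVELS = {"public": 0, "secret": 1}

class Labeled:
    """A value tagged with a confidentiality label."""
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # The result is as confidential as the most confidential input.
        lab = max(self.label, other.label, key=lambda l: LEVELS[l])
        return Labeled(self.value + other.value, lab)

def output(channel_label: str, data: Labeled):
    """Release data on a channel only if the flow is non-increasing in secrecy."""
    if LEVELS[data.label] > LEVELS[channel_label]:
        raise PermissionError(f"illegal flow: {data.label} -> {channel_label}")
    return data.value

salary = Labeled(90_000, "secret")
bonus = Labeled(5_000, "public")
print(output("secret", salary + bonus))   # allowed: secret sink
try:
    output("public", salary + bonus)      # rejected: would leak the salary
except PermissionError:
    print("blocked")
```

Fabric performs this kind of reasoning statically, at the language level, which is what allows security to be checked compositionally even for mobile code.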
Censorship Circumvention: Staying Ahead in a Cat-and-Mouse Game
Prof. Nikita Borisov (UIUC)
The Internet enables access to a wide variety of information sources; many countries and organizations, however, try to restrict such access for political and social reasons. People whose access has been censored make use of a variety of circumvention technologies to find the information they need; in turn, the censors use increasingly sophisticated tools to render these technologies ineffective. One of the most powerful techniques available to the censors has been the insider attack, wherein the censor pretends to be a user of a system in order to learn secret information about its functions. For example, censors continually update a blacklist of IP addresses belonging to circumvention proxies. I will discuss some new techniques designed to specifically resist this insider threat.
rBridge focuses on the distribution of proxy addresses to users. It tracks the reputation score of each user, representing the likelihood of this user revealing a proxy address to the censors, and uses an introduction mechanism to resist Sybil attacks. A particular challenge of rBridge is to preserve the privacy of its users by keeping the knowledge about which users know which proxies secret.
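The reputation mechanic can be caricatured in a few lines (our drastic simplification, not the rBridge scheme; the proxy names, users, and scoring constants are invented): a user earns credit while the proxies they know stay reachable, and loses heavily when one of them gets blocked, since that user is a likely leak.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    reputation: float = 0.0
    proxies: set = field(default_factory=set)

def tick(users, blocked):
    """One scoring round: credit surviving proxies, punish blocked ones."""
    for u in users:
        for p in list(u.proxies):
            if p in blocked:
                u.proxies.discard(p)
                u.reputation -= 10.0   # penalty: likely leaked to the censor
            else:
                u.reputation += 1.0    # credit for each surviving proxy

honest = User(proxies={"proxy1", "proxy2"})
insider = User(proxies={"proxy1", "proxy3"})
tick([honest, insider], blocked={"proxy3"})  # censor blocks a leaked proxy
print(honest.reputation, insider.reputation)
```

New proxy addresses then go preferentially to high-reputation users; rBridge's actual contribution is doing this bookkeeping under cryptographic privacy, so the distributor itself never learns which user knows which proxy.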
Cirripede is an alternate approach that seeks to eliminate the insider threat entirely. It uses redirection proxies that are activated by a special cryptographic signal, which can be generated using only public information but can only be recognized by the proxies. Instead of hiding the location of the proxies, Cirripede places them in highly connected ISPs, such that blocking Cirripede would result in high collateral damage.
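The "cryptographic signal" works like a covert Diffie-Hellman handshake. Below is a hedged sketch of that idea (our simplification of the registration tag, with a toy prime-field group and made-up parameters; a real deployment hides the tag in protocol fields such as TCP initial sequence numbers): anyone can generate the signal from the station's public key, but only the station, holding the private key, can recognize it.

```python
import hashlib
import secrets

p = 2**127 - 1   # toy modulus, far too small for real security
g = 3

# Redirection station: publishes its public key.
station_priv = secrets.randbelow(p - 2) + 1
station_pub = pow(g, station_priv, p)

# Client: embeds (ephemeral, tag) in otherwise normal-looking traffic.
k = secrets.randbelow(p - 2) + 1
ephemeral = pow(g, k, p)
tag = hashlib.sha256(str(pow(station_pub, k, p)).encode()).digest()

# Station (in-network): recomputes the shared secret and recognizes the
# signal; a censor without station_priv sees only random-looking values.
expected = hashlib.sha256(str(pow(ephemeral, station_priv, p)).encode()).digest()
print(tag == expected)
```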
Accountable Key Infrastructure (AKI): A Proposal for a Public-Key Validation Infrastructure
Prof. Adrian Perrig (ETHZ/CMU)
Recent trends in public-key infrastructure research explore the tradeoff between decreased trust in Certificate Authorities (CAs), resilience against attacks, communication overhead (bandwidth and latency) for setting up an SSL/TLS connection, and availability with respect to verifiability of public key information. We propose AKI as a new public-key validation infrastructure that reduces the level of trust in CAs. AKI integrates an architecture for key revocation of all entities (e.g., CAs, domains) with an architecture for accountability of all infrastructure parties through checks and balances. AKI efficiently handles common certification operations, and gracefully handles catastrophic events such as domain key loss or compromise. With AKI we aim to make progress towards a public-key validation infrastructure with key revocation that reduces trust in any single entity.
Detecting Zero-Day Attacks with Cognitive Security
Dr. Michal Pechoucek (Cognitive Security/Cisco)
The current worldwide network infrastructure is under continual attack. Custom-built, sophisticated malware is exploited for economic purposes and is very difficult to detect once it successfully penetrates a perimeter. Based on state-of-the-art research in artificial intelligence, machine learning, game theory, and agent-based computing from the Agent Technology Centre at the Czech Technical University, the company Cognitive Security (COSE) has designed and developed an anomaly detection system providing introspection into the vulnerability of customers’ networks. The Cognitive One (C1) system processes packet headers in the form of NetFlow data, combines several custom-built statistical classifiers, and is optimized to deliver an extremely low false positive rate while maintaining high detection capability. COSE, a startup from the Czech Technical University, was acquired by Cisco Systems in January 2013, and the Prague-based team became the core of a newly formed Cisco R&D laboratory building next-generation threat defence technologies.
Cryptosense: Security Analysis for Cryptographic APIs
Dr. Graham Steel (INRIA)
In practice, most developers use cryptography via an application program interface (API) either to a software library or a hardware device where keys are stored and all cryptographic operations take place. Designing such interfaces so that they offer flexible functionality but cannot be abused to reveal keys or secrets has proved to be extremely difficult, with a number of published vulnerabilities in widely-used APIs appearing over the last decade.
This talk will discuss research on the use of formal methods to specify and verify such interfaces in order to either detect flaws or prove security properties. We will focus on the example of RSA PKCS#11, the most widely used interface for cryptographic devices, and show how research has progressed from initial theoretical results through to a powerful tool, the Cryptosense Analyzer, which can reverse engineer the particular configuration of PKCS#11 in use on some device under test, construct a model of the device’s functionality, and call a model checker to search for attacks. If an attack is found, it can be executed automatically on the device, and advice for secure configuration is given. The talk will conclude with a live demonstration.
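The flavor of flaw these tools hunt for can be shown with the classic PKCS#11 wrap/decrypt conflict (a well-known published attack; our toy model below uses XOR in place of real encryption purely for readability, and the key handles and values are invented): a key holding both the wrap and decrypt attributes can export a sensitive key and then decrypt its own export.

```python
class ToyToken:
    """A drastically simplified model of a PKCS#11-style token."""
    def __init__(self):
        self.keys = {
            "k1": (b"\x13" * 8, {"wrap", "decrypt"}),  # misconfigured key
            "k2": (b"\x42" * 8, {"sensitive"}),        # must never leave in clear
        }

    def wrap(self, wrapping, target):
        kv, attrs = self.keys[wrapping]
        assert "wrap" in attrs
        tv, _ = self.keys[target]
        return bytes(a ^ b for a, b in zip(kv, tv))    # Enc_k1(k2)

    def decrypt(self, key, blob):
        kv, attrs = self.keys[key]
        assert "decrypt" in attrs
        return bytes(a ^ b for a, b in zip(kv, blob))

tok = ToyToken()
blob = tok.wrap("k1", "k2")        # exporting k2 under k1: permitted
stolen = tok.decrypt("k1", blob)   # decrypting the export with k1: also permitted!
print(stolen == tok.keys["k2"][0]) # the sensitive key leaks in the clear
```

A model checker driving such an API model explores sequences of permitted calls and reports exactly this kind of sequence as an attack trace.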
Surveillance, Censorship, and the Tor Network
Jake Appelbaum (T&A)
Surveillance and censorship are increasing concerns for people worldwide. I will describe the Tor network, one of many tools that users may employ to protect themselves: a distributed peer-to-peer system that uses strong cryptography to protect users online.
Unsupervised Network Anomaly Detection
Philippe Owezarski (CNRS)
Network anomaly detection is a critical aspect of network management, for instance for QoS and security. The continual emergence of new anomalies and attacks creates an ongoing challenge of coping with events that put the network’s integrity at risk. Most network anomaly detection systems proposed so far employ a supervised strategy, using either signature-based detection methods or supervised-learning techniques. However, both approaches have major limitations: the former fails to detect and characterize unknown anomalies (leaving the network unprotected for long periods), while the latter requires training on labeled traffic, which is difficult and expensive to produce. These limitations impose a serious bottleneck. We introduce an unsupervised approach to detect and characterize network anomalies, without relying on signatures, statistical training, or labeled traffic, which represents a significant step towards the autonomy of networks. Unsupervised detection is accomplished by means of robust data-clustering techniques, combining Sub-Space Clustering with Evidence Accumulation or Inter-Clustering Results Association, to blindly identify anomalies in traffic flows. Several post-processing techniques, such as correlation, ranking, and characterization, are applied to the extracted anomalies to improve results and reduce operator workload. The detection and characterization performance of the unsupervised approach is evaluated on real network traffic.
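A miniature sketch of the sub-space-plus-voting idea (our caricature of the pipeline described above, not the speakers' system; the flow features and thresholds are invented): cluster the flows separately in each one-dimensional feature sub-space, and accumulate an "outlier vote" for any flow that ends up alone in many of them.

```python
def one_d_clusters(values, gap):
    """Group indices of 1-D values into clusters separated by more than `gap`."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    clusters, current = [], [order[0]]
    for i, j in zip(order, order[1:]):
        if values[j] - values[i] > gap:
            clusters.append(current)
            current = []
        current.append(j)
    clusters.append(current)
    return clusters

# Toy flows as (packets/s, bytes/packet); the last one is a volume anomaly.
flows = [(10, 500), (12, 480), (11, 510), (9, 495), (300, 40)]

votes = [0] * len(flows)
for dim in range(2):                      # each feature = one sub-space
    for cluster in one_d_clusters([f[dim] for f in flows], gap=50.0):
        if len(cluster) == 1:             # singleton cluster: outlier evidence
            votes[cluster[0]] += 1

print(votes)  # the anomalous flow collects a vote in every sub-space
```

Flows whose votes cross a threshold are reported as anomalies, with the sub-spaces that voted for them serving as a first characterization of what is abnormal.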
Internet Privacy: Towards More Transparency
Balachander Krishnamurthy (AT&T Labs)
Internet privacy has become a hot topic recently with the radical growth of Online Social Networks (OSNs) and attendant publicity about various leakages. For the last several years, we have been examining the aggregation of users’ information by a steadily decreasing number of entities as unrelated Web sites are browsed. I will present results from several studies on leakage of personally identifiable information (PII) via Online Social Networks and popular non-OSN sites. Linkage of information gleaned from different sources presents a challenging problem to technologists, privacy advocates, government agencies, and the multi-billion-dollar online advertising industry. Economics might hold the key to increasing the transparency of the largely hidden exchange of data in return for access to so-called free services. I will also talk briefly about doing privacy research at scale.
Developing Security Protocols by Refinement
Prof. David Basin (ETHZ)
We survey recent work of ours on developing security protocols by step-wise refinement. We present a refinement strategy that guides the transformation of abstract security goals into protocols that are secure when operating over an insecure channel controlled by a Dolev-Yao-style intruder. The refinement steps used successively introduce local states, an intruder, communication channels with security properties, and cryptographic operations realizing these channels. The abstractions used provide insights on how the protocols work and foster the development of families of protocols sharing a common structure and properties. In contrast to post-hoc verification methods, protocols are developed together with their correctness proofs. We have implemented our method in Isabelle/HOL and used it to develop a number of entity authentication and key transport protocols. (Joint work with Christoph Sprenger, ETH Zurich)