Guest: Kent Seamons (BYU) - Securing Webmail for the Masses
Abstract:
The need for users to encrypt their email has never been clearer. Recent revelations have shown that users' online communications and data are available not only to malicious insiders, but also to hackers and government-level surveillance. Unfortunately, prior research has shown that existing approaches to secure email are not compatible with the needs and capabilities of the masses. In this talk, we describe our work on building usable, secure email for the masses. First, we discuss user interface designs that enable users to correctly use secure email. Second, we discuss details from the empirical user studies we completed to validate our work, including a novel methodology for recruiting pairs of novice users to install and use several recent secure email systems. Finally, we discuss our plans for future work comparing the usability of various key management schemes for everyday users.
Scott Ruoti is a PhD candidate in Computer Science at Brigham Young University. Scott's research interests are in usable security and protocol design. His research projects include usable, secure e-mail; user-to-user encryption for cloud services; strong password protocols; safe password entry; TLS authentication; and human factors in security. Scott is especially interested in research that creates secure systems that can be successfully used by business and non-security experts. Scott's PhD research is currently funded through an internship at Sandia National Laboratories. He has previously worked full time for a startup (Computer Lab Solutions) and for Brigham Young University, developing lab management technologies deployed to hundreds of universities. Additionally, he has completed internships with Microsoft, Microsoft Research, Google, and Blue Coat Systems, and is currently an intern at Sandia National Laboratories. Scott has accepted a full-time position at MIT Lincoln Laboratory.
NSF WATCH: The Citizen Lab's Mixed Methods Approach to Research on Information Controls (Deibert)
Abstract:
The Citizen Lab is an interdisciplinary research laboratory based at
the Munk School of Global Affairs, University of Toronto, that
investigates the intersection of human rights, global security, and
the digital world. For over a decade, we have used a mixed methods
approach that combines techniques from network measurement,
information security, law, and the social sciences to research and
document information controls (e.g., Internet censorship,
surveillance, targeted digital attacks; commercial spyware) that
impact the openness and security of digital communications and pose
threats to human rights. Director and Founder Professor Ron Deibert
will provide an overview of the Citizen Lab's approach, highlight
several reports and their outcomes, and discuss some of the ways
rigorous, evidence-based, and peer-reviewed research can inform public
policy, advocacy, and human rights, based on the experiences of the Citizen Lab.
Ronald J. Deibert is Professor of Political Science and Director of
the Citizen Lab at the Munk School of Global Affairs, University of
Toronto. The Citizen Lab undertakes interdisciplinary research at the
intersection of global security, ICTs, and human rights. He is a
former founder and principal investigator of the OpenNet Initiative
(2003-2014) and a founder of Psiphon, a world leader in providing open
access to the Internet. Deibert is the author of Black Code:
Surveillance, Privacy, and the Dark Side of the Internet (Random
House: 2013), as well as numerous books, chapters, and articles on
Internet censorship, surveillance, and cyber security. He was one of
the authors of the landmark Tracking GhostNet cyber espionage (2009)
and Great Cannon (2015) reports, and co-editor of three major volumes
with MIT Press on information controls (the "Access" series). He is on
the steering committee for the World Movement for Democracy, the board
of advisors for PEN Canada, Access, and Privacy International, and on
the technical advisory groups for Amnesty International and Human
Rights Watch. He is co-chair of the University of Toronto's
Information Security Council. In 2013, he was appointed to the Order
of Ontario and awarded the Queen Elizabeth II Diamond Jubilee medal,
for being "among the first to recognize and take measures to mitigate
growing threats to communications rights, openness and security."
Guest: Chad Brubaker, Google (nogotofail, Android Network Security Config)
Why should I talk to Chad / what should I talk about with Chad?
A. He works in the Android Security group at Google, concentrating on
hardening the OS.
B. nogotofail - a tool that lets you test your network traffic for
TLS/SSL vulnerabilities and misconfigurations via a client and/or a VPN
C. "there is also the Android Network Security Config I made for Android N (http://developer.android.com/preview/features/security-config.html); it's the tock to the tick-tock of the "find and understand issues"/"kill root cause of issues" cycle that nogotofail started, and it allows developers to do all the customization that we saw people trying to do, but in a way that's hard to get wrong and safe."
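For context, the Network Security Config is a declarative XML file shipped in the app's resources. The sketch below is a minimal example of the kind of per-domain customization item C describes, based on the Android documentation linked above; the domain, expiration date, and pin values are placeholders, not real pins.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml (hypothetical example) -->
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <!-- Pin the expected public keys; the second pin is a backup so a
             key rotation does not lock users out of the app -->
        <pin-set expiration="2018-01-01">
            <pin digest="SHA-256">7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=</pin>
            <pin digest="SHA-256">fwza0LRMXouZHRC8Ei+4PyuldPDcf3UKgO/04cDM1oE=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

The point of the config is exactly the "hard to get wrong" property in the quote: pinning and trust-anchor choices are stated declaratively instead of being hand-rolled in TrustManager code.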
D. Using Frankencerts for Automated Adversarial Testing of Certificate
Validation in SSL/TLS Implementations
Modern network security rests on the Secure Sockets Layer (SSL) and
Transport Layer Security (TLS) protocols. Distributed systems, mobile
and desktop applications, embedded devices, and the secure Web all rely
on SSL/TLS for protection against network attacks. This protection
critically depends on whether SSL/TLS clients correctly validate X.509
certificates presented by servers during the SSL/TLS handshake
protocol. We design, implement, and apply the first methodology for
large-scale testing of certificate validation logic in SSL/TLS
implementations. Our first ingredient is "frankencerts," synthetic
certificates that are randomly mutated from parts of real certificates
and thus include unusual combinations of extensions and constraints.
Our second ingredient is differential testing: if one SSL/TLS
implementation accepts a certificate while another rejects the same
certificate, we use the discrepancy as an oracle for finding flaws in
individual implementations. Differential testing with frankencerts
uncovered 208 discrepancies between popular SSL/TLS implementations
such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many
of them are caused by serious security vulnerabilities. For example,
any server with a valid X.509 version 1 certificate can act as a rogue
certificate authority and issue fake certificates for any domain,
enabling man-in-the-middle attacks against MatrixSSL and GnuTLS.
Several implementations also accept certificate authorities created by
unauthorized issuers, as well as certificates not intended for server
authentication. We also found serious vulnerabilities in how users are
warned about certificate validation errors. When presented with an
expired, self-signed certificate, NSS, Safari, and Chrome (on Linux)
report that the certificate has expired - a low-risk, often ignored
error - but not that the connection is insecure against a
man-in-the-middle attack. These results demonstrate that automated
adversarial testing with frankencerts is a powerful methodology for
discovering security flaws in SSL/TLS implementations.
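The differential-testing oracle described above can be sketched in a few lines: run the same (possibly mutated) certificate through several validators and flag any disagreement. The toy validators below are hypothetical stand-ins for real SSL/TLS implementations, and the "certificate" is just a dictionary of fields.

```python
# Sketch of the differential-testing oracle: feed the same certificate to
# several validators and treat any disagreement as a potential flaw.
from itertools import combinations

def differential_test(cert, validators):
    """Return pairs of validators that disagree on `cert`."""
    verdicts = {name: accepts(cert) for name, accepts in validators.items()}
    return [(a, b) for a, b in combinations(verdicts, 2)
            if verdicts[a] != verdicts[b]]

# Toy validators: one (incorrectly) trusts any version-1 certificate.
validators = {
    "strict":  lambda c: c["version"] == 3 and c["ca"],
    "lenient": lambda c: c["version"] == 1 or c["ca"],
}

# A frankencert-style input: an unusual combination of fields.
frankencert = {"version": 1, "ca": False}
print(differential_test(frankencert, validators))  # -> [('strict', 'lenient')]
```

Each reported pair is only an oracle hit, not a verdict: a human (or further triage) still has to decide which implementation is wrong, as the paper does for the 208 discrepancies it found.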
E. There's free lunch catered from Gourmandise
How to Use Bitcoin to Incentivize Correct Computations (Kumaresan and Bentov), CCS 2014
We study a model of incentivizing correct computations in a variety of cryptographic tasks. For each of these tasks we propose a formal model and design protocols satisfying our model's constraints in a hybrid model where parties have access to special ideal functionalities that enable monetary transactions. We summarize our results:
* Verifiable computation. We consider a setting where a delegator outsources computation to a worker who expects to get paid in return for delivering correct outputs. We design protocols that compile both public and private verification schemes to support incentivizations described above.
* Secure computation with restricted leakage. Building on the recent work of Huang et al. (Security and Privacy 2012), we show an efficient secure computation protocol that monetarily penalizes an adversary that attempts to learn one bit of information but gets detected in the process.
* Fair secure computation. Inspired by recent work, we consider a model of secure computation where a party that aborts after learning the output is monetarily penalized. We then propose an ideal transaction functionality F_ML and show a constant-round realization on the Bitcoin network. Then, in the F_ML-hybrid world, we design a constant-round protocol for secure computation in this model.
* Noninteractive bounties. We provide formal definitions and candidate realizations of noninteractive bounty mechanisms on the Bitcoin network which (1) allow a bounty maker to place a bounty for the solution of a hard problem by sending a single message, and (2) allow a bounty collector (unknown at the time of bounty creation) with the solution to claim the bounty, while (3) ensuring that the bounty maker can learn the solution whenever its bounty is collected, and (4) preventing malicious eavesdropping parties from both claiming the bounty as well as learning the solution.
All our protocol realizations (except those realizing fair secure computation) rely on a special ideal functionality that is not currently supported in Bitcoin due to limitations imposed on Bitcoin scripts. Motivated by this, we propose the validation complexity of a protocol, a formal complexity measure that captures the amount of computational effort required to validate the Bitcoin transactions needed to implement it in Bitcoin. Our protocols are also designed to take advantage of optimistic scenarios where participating parties behave honestly.
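To make the noninteractive-bounty idea concrete, here is a toy hash-lock model: the maker locks funds under a commitment to a solution, and anyone revealing a matching solution claims them, which simultaneously makes the solution public (properties 1-3 above). This deliberately ignores the paper's ideal functionality, Bitcoin's script limits, and the protections against eavesdroppers front-running a claim (property 4); all names and amounts are illustrative.

```python
# Toy model of the claim-by-solution idea behind noninteractive bounties.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Bounty:
    def __init__(self, amount: int, solution_hash: bytes):
        self.amount = amount
        self.solution_hash = solution_hash   # maker's commitment
        self.claimed_by = None
        self.revealed_solution = None

    def claim(self, collector: str, solution: bytes) -> bool:
        """Pay out iff the revealed solution matches the commitment."""
        if self.claimed_by is None and h(solution) == self.solution_hash:
            self.claimed_by = collector
            self.revealed_solution = solution  # maker learns the solution
            return True
        return False

# Maker posts a bounty for the answer to a hard problem.
answer = b"42"
bounty = Bounty(amount=10, solution_hash=h(answer))

assert not bounty.claim("eve", b"41")   # wrong solution: no payout
assert bounty.claim("alice", answer)    # correct solution: paid, answer revealed
```

In the real construction the predicate is enforced by the network rather than by a trusted object, which is exactly where the paper's validation-complexity measure comes in.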
Do You See What I See: Differential Treatment of Anonymous Users (Khattak et al), NDSS 2016
The utility of anonymous communication is undermined
by a growing number of websites treating users of
such services in a degraded fashion. The second-class treatment
of anonymous users ranges from outright rejection to limiting
their access to a subset of the service's functionality or imposing
hurdles such as CAPTCHA-solving. To date, the observation of
such practices has relied upon anecdotal reports catalogued by
frustrated anonymity users. We present a study to methodically
enumerate and characterize, in the context of Tor, the treatment
of anonymous users as second-class Web citizens.
We focus on first-line blocking: at the transport layer, through
reset or dropped connections; and at the application layer,
through explicit blocks served from website home pages. Our
study draws upon several data sources: comparisons of Internet-wide
port scans from Tor exit nodes versus from control hosts;
scans of the home pages of top-1,000 Alexa websites through every
Tor exit; and analysis of nearly a year of historic HTTP crawls
from Tor network and control hosts. We develop a methodology
to distinguish censorship events from incidental failures such as
those caused by packet loss or network outages, and incorporate
consideration of the endemic churn in web-accessible services
over both time and geographic diversity. We find clear evidence
of Tor blocking on the Web, including 3.67% of the top-1,000
Alexa sites. Some blocks specifically target Tor, while others result
from fate-sharing when abuse-based automated blockers trigger
due to misbehaving Web sessions sharing the same exit node.
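The core comparison in the study - the same page fetched from a control host and from a Tor exit - can be caricatured as a small classifier. The labels and status-code heuristics below are invented for illustration; the real methodology repeats measurements over time and across exits precisely to separate deliberate blocking from churn and packet loss.

```python
# Toy classifier for differential treatment of a Tor exit vs. a control host.
# Each view is a dict: 'ok' (fetch succeeded) and 'status' (HTTP code or None).
def classify(control, exit_view):
    if not control["ok"]:
        return "site down for everyone"      # not evidence of Tor blocking
    if exit_view["ok"] and exit_view["status"] == control["status"]:
        return "no differential treatment"
    if exit_view["status"] in (403, 429):
        return "application-layer block"     # explicit block page / rate limit
    if exit_view["status"] is None:
        return "transport-layer block"       # reset or dropped connection
    return "inconclusive"

print(classify({"ok": True, "status": 200},
               {"ok": False, "status": 403}))  # -> application-layer block
```

A single observation like this is only a candidate censorship event; the paper's contribution is the repeated, multi-vantage measurement that turns such candidates into the 3.67% figure above.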
How Polymorphic Warnings Reduce Habituation in the Brain: Insights from an fMRI Study (Anderson et al), CHI 2015
Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the "black box" of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users' habituation to security warnings.
'My Data Just Goes Everywhere:' User Mental Models of the Internet and Implications for Privacy and Security (Kang et al), SOUPS 2015
Many people use the Internet every day yet know little about how it really works. Prior literature diverges on how people's Internet knowledge affects their privacy and security decisions. We undertook a qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks. Lay people, as compared to those with computer science or related backgrounds, had simpler mental models that omitted Internet levels, organizations, and entities. People with more articulated technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. Despite these differences, we did not find a direct relationship between people's technical background and the actions they took to control their privacy or increase their security online. Consistent with other work on user knowledge and experience, our study suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users' security practices.
Accountable Wiretapping -or- I Know They Can Hear You Now (Bates et al), NDSS 2012
In many democratic countries, CALEA wiretaps are
used by law enforcement agencies to perform investigations
and gather evidence for legal procedures. However, existing
CALEA wiretap implementations are often engineered
with the assumption that wiretap operators are trustworthy
and wiretap targets do not attempt to evade the wiretap. Although
it may be possible to construct more robust wiretap
architectures by reengineering significant portions of
the telecommunications infrastructure, such efforts are prohibitively
costly. This paper instead proposes a lightweight
accountable wiretapping system for enabling secure audits
of existing CALEA wiretapping systems. Our proposed system
maintains a tamper-evident encrypted log over wiretap
events, enforces access controls over wiretap records, and
enables privacy-preserving aggregate queries and compliance
checks. We demonstrate using campus-wide telephone
trace data from a large university that our approach provides
efficient auditing functionalities while incurring only
modest overhead. Based on publicly available wiretap reporting
statistics, we conservatively estimate that our architecture
can support tamper-evident logging for all of the
United States' ongoing CALEA wiretaps using three commodity
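The tamper-evident log at the heart of the proposal can be sketched as a hash chain over wiretap events. This is a minimal sketch, assuming SHA-256 links; the actual system additionally encrypts entries, enforces access controls, and supports aggregate queries, all of which are omitted here.

```python
# Minimal hash-chain sketch of a tamper-evident log: each entry's digest
# binds it to all entries before it, so edits to history are detectable.
import hashlib

GENESIS = b"\x00" * 32

def _link(prev_hash: bytes, entry: bytes) -> bytes:
    return hashlib.sha256(prev_hash + entry).digest()

def append(log, entry: bytes):
    prev = log[-1][1] if log else GENESIS
    log.append((entry, _link(prev, entry)))

def verify(log) -> bool:
    prev = GENESIS
    for entry, digest in log:
        if _link(prev, entry) != digest:
            return False
        prev = digest
    return True

log = []
append(log, b"tap started: case 17")
append(log, b"tap ended: case 17")
assert verify(log)

log[0] = (b"tap never happened", log[0][1])  # tamper with a past entry
assert not verify(log)
```

Tamper evidence alone does not stop a malicious operator from tampering; it makes the tampering visible to an auditor, which is the accountability property the paper is after.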
Estimating types in binaries using predictive modeling (Katz et al), POPL 2016
Reverse engineering is an important tool in mitigating vulnerabilities in binaries. As a lot of software is developed in object-oriented languages, reverse engineering of object-oriented code is of critical importance. One of the major hurdles in reverse engineering binaries compiled from object-oriented code is the use of dynamic dispatch. In the absence of debug information, any dynamic dispatch may seem to jump to many possible targets, posing a significant challenge to a reverse engineer trying to track the program flow. We present a novel technique that allows us to statically determine the likely targets of virtual function calls. Our technique uses object tracelets - statically constructed sequences of operations performed on an object - to capture potential runtime behaviors of the object. Our analysis automatically pre-labels some of the object tracelets by relying on instances where the type of an object is known. The resulting type-labeled tracelets are then used to train a statistical language model (SLM) for each type. We then use the resulting ensemble of SLMs over unlabeled tracelets to generate a ranking of their most likely types, from which we deduce the likely targets of dynamic dispatches. We have implemented our technique and evaluated it over real-world C++ binaries. Our evaluation shows that when there are multiple alternative targets, our approach can drastically reduce the number of targets that have to be considered by a reverse engineer.
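A toy version of the per-type language-model ranking described above: train bigram counts over type-labeled tracelets (operation sequences), then score an unlabeled tracelet against each type's model. The opcode and type names are invented for illustration, and real SLMs use proper smoothing rather than raw counts.

```python
# Per-type "language model" over tracelets, caricatured as bigram counts.
from collections import Counter

def bigrams(seq):
    return list(zip(seq, seq[1:]))

def train(labeled):
    """labeled: {type_name: [tracelet, ...]} -> {type_name: bigram Counter}"""
    return {t: Counter(bg for tr in trs for bg in bigrams(tr))
            for t, trs in labeled.items()}

def rank(models, tracelet):
    """Rank candidate types by bigram overlap with the unlabeled tracelet."""
    scores = {t: sum(m[bg] for bg in bigrams(tracelet)) for t, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)

labeled = {
    "Shape":  [["load", "call_area", "store"], ["load", "call_area"]],
    "Stream": [["load", "call_read", "call_close"]],
}
models = train(labeled)
print(rank(models, ["load", "call_area", "store"]))  # -> ['Shape', 'Stream']
```

The top-ranked types then shrink the set of plausible virtual-call targets the reverse engineer must examine, which is the practical payoff the evaluation measures.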
Bringing Software Defined Radio to the Penetration Testing Community (Picod et al), Blackhat 2014
The adoption of wireless technology goes well beyond WiFi networks: smart meters, wearable devices, etc.
The engineers behind these new types of devices may not have a deep security background and it can lead to security and
privacy issues when a particular technology is stressed. However, to assess the security of these devices, the only current
solution would be a dedicated hardware component with an appropriate radio interface for each available technology.
Such components are not easy to engineer and this is why we developed Scapy-radio, a generic wireless monitor/injector
tool based on Software Defined Radio using GNU Radio and the well-known Scapy framework. In this paper, we present
this tool we developed for a wide range of wireless security assessments. The main goal of our tool is to provide effective
penetration testing capabilities to security auditors with little to no knowledge of radio communication systems.
The Spring 2016 offering of CS 7936 will focus on reading and discussing papers from recent security conferences on a variety of topics.
The goal is to increase participants' familiarity with recent and important results in the area of computer security & privacy research. Attendees will read and discuss papers from recent top-tier security conferences. Attendees will typically discuss one paper each week. Papers will be selected by presenters based on their interests.
Class announcements are sent out via firstname.lastname@example.org. You can subscribe at http://mailman.cs.utah.edu/mailman/listinfo/security-privacy.
Students may enroll for one (1) credit. Although the University lists the course as “variable credit,” the two- and three-credit options are not currently available.
Students enrolled in the seminar are expected to read the papers prior to the seminar. Additionally, students are expected to sign up to lead the discussion for one or more seminar meetings. Leading the discussion means:
Upcoming and recent conference proceedings are good sources of papers for discussion. Below are links to some relevant conference series.
And the following is a curated list of papers of possible interest:
It can be useful to look up the video of the presentation (if it was at USENIX, the video was recorded and is available online) and/or the slides (which may be available on the presenting author's page).
The following questions (some of which are pulled from Writing for Computer Science) can be useful to keep in mind when reading a paper (although not all questions will apply to all papers):