Flayer: Exposing Application Internals
Flayer is a tool for dynamically exposing application
innards for security testing and analysis. It is implemented
on the dynamic binary instrumentation framework
Valgrind and its memory error detection plug-in,
Memcheck. This paper focuses on the implementation
of Flayer, its supporting libraries, and their application
to software security.
Flayer provides tainted, or marked, data flow analysis
and instrumentation mechanisms for arbitrarily altering
that flow. Flayer improves upon prior taint tracing
tools with bit-precision. Taint propagation calculations
are performed for each value-creating memory or register
operation. These calculations are embedded in the
target application's running code using dynamic instrumentation.
The same technique has been employed to allow
the user to control the outcome of conditional jumps
and step over function calls.
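The bit-precision taint propagation described above can be sketched in miniature; this is an illustrative Python model, not Flayer's actual implementation (Flayer instruments compiled code through Valgrind):

```python
# Minimal sketch of bit-precision taint tracking: each value carries a
# bitmask marking exactly which of its bits derive from tainted input.

class Tainted:
    def __init__(self, value, taint=0):
        self.value = value   # concrete integer value
        self.taint = taint   # bitmask: set bits are attacker-influenced

    def __or__(self, other):
        # OR combines values; a result bit is tainted if either source bit is
        return Tainted(self.value | other.value, self.taint | other.taint)

    def __lshift__(self, n):
        # shifting moves taint along with the data bits it describes
        return Tainted(self.value << n, self.taint << n)

user_byte = Tainted(0x41, taint=0xFF)   # fully attacker-controlled byte
constant  = Tainted(0x0F00)             # untainted program constant
result = (user_byte << 8) | constant
print(hex(result.taint))  # 0xff00: only the high byte is tainted
```

Real taint tracers must define propagation rules like these for every value-creating operation in the instruction set, which is why dynamic instrumentation frameworks are a natural host.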
Flayer’s functionality provides a robust foundation for
the implementation of security tools and techniques. In
particular, this paper presents an effective fault injection
testing technique and an automation library, LibFlayer.
Alongside these contributions, it explores techniques for
vulnerability patch analysis and guided source code auditing.
Flayer finds errors in real software. In the past year, its
use has yielded the expedient discovery of flaws in security-critical
software including OpenSSH and OpenSSL.
World-Driven Access Control for Continuous Sensing (Roesner et al)
Modern applications increasingly rely on continuous monitoring of video, audio, or other sensor data to provide their functionality, particularly in platforms such as the Microsoft Kinect and Google Glass. Continuous sensing by untrusted applications poses significant privacy challenges for both device users and bystanders. Even honest users will struggle to manage application permissions using existing approaches. We propose a general, extensible framework for controlling access to sensor data on multi-application continuous sensing platforms. Our approach, world-driven access control, allows real-world objects to explicitly specify access policies. This approach relieves the user’s permission management burden while mediating access at the granularity of objects rather than full sensor streams. A trusted policy module on the platform senses policies in the world and modifies applications’ “views” accordingly. For example, world-driven access control allows the system to automatically stop recording in bathrooms or remove bystanders from video frames, without the user being prompted to specify or activate such policies. To convey and authenticate policies, we introduce passports, a new kind of certificate that includes both a policy and optionally the code for recognizing a real-world object. We implement a prototype system and use it to study the feasibility of world-driven access control in practice. Our evaluation suggests that world-driven access control can effectively reduce the user’s permission management burden in emerging continuous sensing systems. Our investigation also surfaces key challenges for future access control mechanisms for continuous sensing applications.
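The trusted policy-module idea can be illustrated with a toy mediator; the frame representation and policy names here are hypothetical (the real system operates on live sensor streams and recognized objects):

```python
# Toy sketch of a policy module mediating application "views": sensed
# policies transform or drop each sensor frame before any app sees it.

def apply_policies(frame, policies):
    """Return the frame an application may see, or None to block it."""
    for matches, transform in policies:
        if frame is None:          # an earlier policy blocked the frame
            break
        if matches(frame):
            frame = transform(frame)
    return frame

policies = [
    # stop recording entirely in bathrooms
    (lambda f: f["location"] == "bathroom", lambda f: None),
    # redact bystanders from frames that contain any
    (lambda f: f["bystanders"] > 0,
     lambda f: {**f, "pixels": "bystanders-removed"}),
]

print(apply_policies({"location": "kitchen", "bystanders": 2, "pixels": "raw"}, policies))
# {'location': 'kitchen', 'bystanders': 2, 'pixels': 'bystanders-removed'}
print(apply_policies({"location": "bathroom", "bystanders": 0, "pixels": "raw"}, policies))
# None
```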
When eHealth Meets the Internet of Things: Pervasive Security and Privacy Challenges (Omoogun et al)
eHealth mobile technologies are becoming increasingly
prevalent in both the personal and medical world, assisting
healthcare professionals to monitor the progress and current
condition of patients. These devices often gather, transmit and
analyse personal data. Healthcare data has rigid requirements for
security, confidentiality, and availability, whilst access traceability
and control, and long-term preservation are also highly desirable,
particularly when exposed to cloud computing environments.
This article explores some of the security and privacy challenges
eHealth devices currently face. Legislative implications of data
breaches are considered, as well as service provider accountability.
The work also provides numerous security and privacy
recommendations, in order to improve future implementations.
Characterizing Social Insider Attacks on Facebook (Usmani et al)
Facebook accounts are secured against unauthorized access
through passwords and device-level security. Those defenses,
however, may not be sufficient to prevent social insider attacks,
where attackers know their victims, and gain access to
a victim’s account by interacting directly with their device.
To characterize these attacks, we ran two MTurk studies. In
the first (n = 1,308), using the list experiment method, we
estimated that 24% of participants had perpetrated social insider
attacks and that 21% had been victims (and knew about
it). In the second study (n = 45), participants wrote stories
detailing personal experiences with such attacks. Using thematic
analysis, we typified attacks around five motivations
(fun, curiosity, jealousy, animosity, and utility), and explored
dimensions associated with each type. Our combined findings
indicate that social insider attacks are common, often
have serious emotional consequences, and have no simple mitigation.
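The list experiment method used in the first study estimates the prevalence of a sensitive behavior without direct admission: a control group counts how many benign statements apply to them, a treatment group counts the benign statements plus the sensitive one, and the difference in mean counts estimates prevalence. A minimal sketch with made-up counts:

```python
# List experiment (unmatched count technique): prevalence of the sensitive
# item is estimated as mean(treatment counts) - mean(control counts).
# The counts below are hypothetical, not the study's data.

control_counts   = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]
treatment_counts = [3, 3, 2, 2, 4, 2, 2, 3, 3, 2]

prevalence = (sum(treatment_counts) / len(treatment_counts)
              - sum(control_counts) / len(control_counts))
print(round(prevalence, 2))  # 0.5
```

Because no respondent reveals their individual answer to the sensitive item, the method elicits more honest reports than direct questioning.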
FALL BREAK - No Seminar
Privacy, Anonymity, and Perceived Risk in Open Collaboration: A Study of Tor Users and Wikipedians
This qualitative study examines privacy practices and concerns among contributors to open collaboration projects. We collected interview data from people who use the anonymity network Tor who also contribute to online projects and from Wikipedia editors who are concerned about their privacy to better understand how privacy concerns impact participation in open collaboration projects. We found that risks perceived by contributors to open collaboration projects include threats of surveillance, violence, harassment, opportunity loss, reputation loss, and fear for loved ones. We explain participants' operational and technical strategies for mitigating these risks and how these strategies affect their contributions. Finally, we discuss chilling effects associated with privacy loss, the need for open collaboration projects to go beyond attracting and educating participants to consider their privacy, and some of the social and technical approaches that could be explored to mitigate risk at a project or community level.
Towards Understanding Differential Privacy: When Do People Trust Randomized Response Technique?
As a consequence of living in a data ecosystem, we often relinquish personal information to be used in contexts in which we have no control. In this paper, we begin to examine the usability of differential privacy, a mechanism that proposes to promise privacy with a mathematical "proof" to the data donor. Do people trust this promise and adjust their privacy decisions if the interfaces through which they interact make differential privacy less opaque? In a study with 228 participants, we measured comfort, understanding, and trust using a variant of differential privacy known as Randomized Response Technique (RRT). We found that allowing individuals to see the amount of obfuscation applied to their responses increased their trust in the privacy-protecting mechanism. However, participants who associated obfuscating privacy mechanisms with deception did not make the "safest" privacy decisions, even as they demonstrated an understanding of RRT. We demonstrate that prudent privacy-related decisions can be cultivated with simple explanations of usable privacy.
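The classic Randomized Response Technique is simple enough to sketch directly, along with the de-biasing step that recovers the population rate; this is an illustrative model only (the study's interfaces and obfuscation displays are not modeled):

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.5) -> bool:
    """With probability p_honest answer truthfully; otherwise report a coin flip.
    Any single "yes" is deniable, which is the source of the privacy guarantee."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_prevalence(responses, p_honest=0.5):
    # E[yes] = p_honest * pi + (1 - p_honest) * 0.5, so solve for pi
    yes_rate = sum(responses) / len(responses)
    return (yes_rate - (1 - p_honest) * 0.5) / p_honest

random.seed(0)
true_answers = [i < 300 for i in range(1000)]            # 30% true "yes"
reported = [randomized_response(t) for t in true_answers]
print(round(estimate_prevalence(reported), 2))           # close to 0.30
```

The aggregate estimate stays accurate even though no individual response can be trusted, which is exactly the trade-off participants in the study were asked to reason about.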
Deriving genomic diagnoses without revealing patient genomes (Jagadeesh et al)
Patient genomes are interpretable only in the context of other genomes; however, genome sharing enables discrimination. Thousands of monogenic diseases have yielded definitive genomic diagnoses and potential gene therapy targets. Here we show how to provide such diagnoses while preserving participant privacy through the use of secure multiparty computation. In multiple real scenarios (small patient cohorts, trio analysis, two-hospital collaboration), we used our methods to identify the causal variant and discover previously unrecognized disease genes and variants while keeping up to 99.7% of all participants’ most sensitive genomic information private.
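Secure multiparty computation can be illustrated with additive secret sharing, a common building block; this is a simplified sketch and not the paper's protocol:

```python
import random

PRIME = 2**61 - 1  # field modulus for additive secret sharing

def share(secret: int, n_parties: int):
    """Split a value into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two hospitals can pool allele counts without revealing individual inputs:
# each secret-shares its count, parties add shares locally, then the sum
# (and only the sum) is reconstructed.
a_shares = share(12, 3)
b_shares = share(30, 3)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```

Addition of shares is local and cheap, which is why aggregate statistics over distributed genomic cohorts are a natural fit for this style of computation.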
Chad Brubaker - Android Platform Hardening
Why should I talk to Chad / what should I talk about with Chad?
A. He works in the Android Security group at Google, concentrating on
hardening the OS.
B. nogotofail - a tool that lets you test your network traffic for
TLS/SSL vulnerabilities and misconfigurations via client and/or a VPN
C. "There is also the Android Network Security Config I made for Android N (http://developer.android.com/preview/features/security-config.html). It's the tock to the tick-tock of the "find and understand issues"/"kill root cause of issues" cycle that nogotofail started, and allows developers to do all the customization that we saw people trying to do, but in a way that's hard to get wrong and safe."
D. Using Frankencerts for Automated Adversarial Testing of Certificate
Validation in SSL/TLS Implementations
Modern network security rests on the Secure Sockets Layer (SSL) and
Transport Layer Security (TLS) protocols. Distributed systems, mobile
and desktop applications, embedded devices, and all of the secure Web rely
on SSL/TLS for protection against network attacks. This protection
critically depends on whether SSL/TLS clients correctly validate X.509
certificates presented by servers during the SSL/TLS handshake
protocol. We design, implement, and apply the first methodology for
large-scale testing of certificate validation logic in SSL/TLS
implementations. Our first ingredient is "frankencerts," synthetic
certificates that are randomly mutated from parts of real certificates
and thus include unusual combinations of extensions and constraints.
Our second ingredient is differential testing: if one SSL/TLS
implementation accepts a certificate while another rejects the same
certificate, we use the discrepancy as an oracle for finding flaws in
individual implementations. Differential testing with frankencerts
uncovered 208 discrepancies between popular SSL/TLS implementations
such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many
of them are caused by serious security vulnerabilities. For example,
any server with a valid X.509 version 1 certificate can act as a rogue
certificate authority and issue fake certificates for any domain,
enabling man-in-the-middle attacks against MatrixSSL and GnuTLS.
Several implementations also accept certificate authorities created by
unauthorized issuers, as well as certificates not intended for server
authentication. We also found serious vulnerabilities in how users are
warned about certificate validation errors. When presented with an
expired, self-signed certificate, NSS, Safari, and Chrome (on Linux)
report that the certificate has expired - a low-risk, often ignored
error - but not that the connection is insecure against a
man-in-the-middle attack. These results demonstrate that automated
adversarial testing with frankencerts is a powerful methodology for
discovering security flaws in SSL/TLS implementations.
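The differential-testing oracle at the heart of the approach can be shown with a toy harness; `validator_a` and `validator_b` below are hypothetical stand-ins for real SSL/TLS implementations, and the certificate "fields" are drastically simplified:

```python
from itertools import product

def validator_a(cert):
    # lenient stand-in: ignores the CA bit on version 1 certificates
    return not cert["expired"]

def validator_b(cert):
    # stricter stand-in: additionally rejects v1 certificates acting as CAs
    return not cert["expired"] and not (cert["version"] == 1 and cert["ca"])

# Enumerate frankencert-like field combinations and use any disagreement
# between the two validators as the bug-finding oracle.
certs = [dict(version=v, ca=ca, expired=e)
         for v, ca, e in product([1, 3], [True, False], [True, False])]
discrepancies = [c for c in certs if validator_a(c) != validator_b(c)]
print(discrepancies)  # [{'version': 1, 'ca': True, 'expired': False}]
```

The real frankencerts work mutates parts of millions of scraped certificates rather than enumerating a tiny field space, but the oracle is the same: no specification of correct behavior is needed, only two implementations that are supposed to agree.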
An Experimental Security Analysis of an Industrial Robot Controller (Quarta et al)
Industrial robots, automated manufacturing, and
efficient logistics processes are at the heart of the upcoming
fourth industrial revolution. While there are seminal studies on
the vulnerabilities of cyber-physical systems in the industry, as
of today there has been no systematic analysis of the security of
industrial robot controllers.
We examine the standard architecture of an industrial robot
and analyze a concrete deployment from a systems security
standpoint. Then, we propose an attacker model and confront
it with the minimal set of requirements that industrial robots
should honor: precision in sensing the environment, correctness
in execution of control logic, and safety for human operators.
Following an experimental and practical approach, we then
show how our modeled attacker can subvert such requirements
through the exploitation of software vulnerabilities, leading to
severe consequences that are unique to the robotics domain.
We conclude by discussing safety standards and security
challenges in industrial robotics.
Design and Evaluation of a Data-Driven Password Meter (Ur et al)
Despite their ubiquity, many password meters provide inaccurate
strength estimates. Furthermore, they do not explain to users what is
wrong with their password or how to improve it. We describe the
development and evaluation of a data-driven password meter that
provides accurate strength measurement and actionable, detailed
feedback to users. This meter combines neural networks and numerous
carefully combined heuristics to score passwords and generate
data-driven text feedback about the user's password. We describe the
meter's iterative development and final design. We detail the security
and usability impact of the meter's design dimensions, examined
through a 4,509-participant online study. Under the more common
password-composition policy we tested, we found that the data-driven
meter with detailed feedback led users to create more secure, and no
less memorable, passwords than a meter with only a bar as a strength indicator.
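A strength meter that pairs a score with actionable text can be sketched with a few heuristics; the rules below are hypothetical illustrations, and the actual meter additionally uses neural networks trained on leaked password data:

```python
import math

# Hypothetical heuristic scorer with text feedback, in the spirit of a
# data-driven meter (not the paper's implementation).

COMMON = {"password", "123456", "qwerty", "letmein"}

def score(pw: str):
    """Return (rough entropy estimate in bits, list of feedback messages)."""
    if pw.lower() in COMMON:
        return 0, ["This is one of the most common passwords."]
    charset = 0
    if any(c.islower() for c in pw): charset += 26
    if any(c.isupper() for c in pw): charset += 26
    if any(c.isdigit() for c in pw): charset += 10
    if any(not c.isalnum() for c in pw): charset += 33
    bits = len(pw) * math.log2(charset) if charset else 0
    feedback = []
    if len(pw) < 12:
        feedback.append("Consider using at least 12 characters.")
    if pw and pw[-1].isdigit():
        feedback.append("Digits at the end are a predictable pattern.")
    return round(bits), feedback

print(score("Tr0ub4dor&3")[0])  # 72
```

Charset-times-length estimates like this one are exactly the kind of inaccurate scoring the paper criticizes, which is why the feedback messages and guess-number models matter more than the raw bar.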
The Fall 2017 offering of CS 7936 will focus on reading and discussing recent papers in security and privacy research from conferences such as:
Class announcements are sent out on email@example.com. You can subscribe at http://mailman.cs.utah.edu/mailman/listinfo/security-privacy.
Students may enroll for one (1) credit. Although the University lists the course as “variable credit,” the two- and three-credit options are not currently available.
Students enrolled in the seminar are expected to read the papers prior to the seminar. Additionally, students are expected to sign up to lead the discussion for one or more seminar meetings. Leading the discussion means:
Some papers are free to access, while others are behind paywalls. The university has a paid subscription to most of the libraries where those papers can be found. There are several ways to access those papers: