July 1, 2021 – February 22, 2024

Cryptography and Privacy Colloquium | Redmond

Location: Microsoft Research Redmond

  • Can we cast a ballot as intended and be receipt free?

    Olivier Pereira, Microsoft Research and UCLouvain

    February 22, 2024 | 10:30 AM | Microsoft Building 99

    Abstract

    We explore the interaction between two important properties of ballot submission in elections: cast-as-intended verifiability and receipt-freeness. The first property, cast-as-intended verifiability, expresses that a ballot submission process should offer voters a way to verify that the ballots they submitted accurately reflect their vote intent, even if they used a corrupted device to prepare their ballot. The second property, receipt-freeness, expresses that the ballot submission process should not produce any evidence that a voter could use to demonstrate the content of her vote to a third party. These two properties have been studied extensively in the past, but mostly in isolation.

    Our first result is negative: we demonstrate that, in the absence of a trusted authority, it is impossible to obtain a receipt-free voting protocol with cast-as-intended verifiability if the vote submission process is non-interactive.

    On the positive side, we demonstrate that, if a trusted voter registration authority is available, or if the ballot submission process can be made interactive, then cast-as-intended verifiability and receipt-freeness can be obtained together. This result is constructive, and we propose examples of ballot submission processes that achieve the desired properties in both settings. These processes are quite demanding, and it is an intriguing open question whether more convenient ones can be found.

    This talk is based on joint work with Henri Devillez, Thomas Peters and Quentin Yang.

    Biography

    Olivier Pereira is a visiting scientist at Microsoft Research during the 2023-2024 academic year and a professor of cryptography at UCLouvain, Belgium. His research explores cryptographic protocols, including their design and analysis. He contributed to the design of several verifiable voting systems, including Helios and STAR-Vote, and to the analysis of several others, including the Swiss and Estonian Internet voting systems. He is now contributing to the design of the ElectionGuard SDK. He has consulted on election technologies for various government and international institutions in Europe and elsewhere. He is also interested in leakage-resilient cryptography and anonymous communication networks.

  • Might I Get Pwned: A Second Generation Compromised Credential Checking Service

    Bijeeta Pal, Snapchat

    Abstract

    Credential stuffing attacks use stolen passwords to log into victim accounts. To defend against these attacks, recently deployed compromised credential checking (C3) services provide APIs that help users and companies check whether a username-password pair is exposed. These services, however, only check whether the exact password is leaked, and therefore do not mitigate credential tweaking attacks: attempts to compromise a user account with variants of a user’s leaked passwords. Recent work has shown that credential tweaking attacks can compromise accounts quite effectively even when credential stuffing countermeasures are in place. We initiate work on C3 services that protect users from credential tweaking attacks. The core underlying challenge is how to identify passwords that are similar to a user’s leaked passwords while preserving honest clients’ privacy and also preventing malicious clients from extracting breach data from the service. We formalize the problem and explore ways to measure password similarity that balance efficacy, performance, and security. Based on this study, we design “Might I Get Pwned” (MIGP), a new kind of breach alerting service. Our simulations show that MIGP reduces the efficacy of state-of-the-art 1000-guess credential tweaking attacks by 94%. MIGP preserves user privacy and limits potential exposure of sensitive breach entries. We show that the protocol is fast, with response time close to existing C3 services. We worked with Cloudflare to deploy MIGP in practice.
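
    The flavor of a bucketized, similarity-aware breach check can be sketched in a few lines of Python. This is a toy illustration, not the deployed protocol: the real MIGP service additionally hides the query and the bucket contents behind an oblivious PRF, and the variant rules studied in the paper are far richer; all names below are hypothetical.

        import hashlib

        def bucket_id(username: str, prefix_bits: int = 16) -> int:
            """Coarse bucket from a short hash prefix of the username."""
            digest = hashlib.sha256(username.encode()).digest()
            return int.from_bytes(digest[:4], "big") >> (32 - prefix_bits)

        def variants(password: str) -> set[str]:
            """Tiny stand-in for MIGP's variant rules (the paper's are richer)."""
            out = {password, password.capitalize(), password + "1", password + "!"}
            if password and password[-1].isdigit():
                out.add(password[:-1])
            return out

        # Server: index breach entries, expanding each leaked password into variants.
        breach = [("alice", "hunter2"), ("bob", "letmein")]
        index: dict[int, set[bytes]] = {}
        for user, pw in breach:
            for v in variants(pw):
                entry = hashlib.sha256((user + ":" + v).encode()).digest()
                index.setdefault(bucket_id(user), set()).add(entry)

        def might_i_get_pwned(username: str, password: str) -> bool:
            """Is this password equal or merely similar to a leaked one?"""
            h = hashlib.sha256((username + ":" + password).encode()).digest()
            return h in index.get(bucket_id(username), set())

        assert might_i_get_pwned("alice", "Hunter2")        # tweaked variant caught
        assert not might_i_get_pwned("alice", "correct horse battery staple")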

    Biography

    Bijeeta Pal is a Privacy Engineer at Snapchat. Pal recently graduated with a Ph.D. in computer science from Cornell University. Her research interest lies in the application of cryptographic techniques to build secure, private, and scalable systems empowering users. Her recent project focused on designing a similarity-aware and privacy-preserving credential checking service, Might I Get Pwned, that warns users against selecting passwords similar to a breached password (deployed at Cloudflare). She is also a recipient of the 2021-22 J.P. Morgan Ph.D. Fellowship.

  • Authentication for Augmented and Virtual Reality

    Sophie Stephenson, U Wisconsin-Madison

    Wednesday, October 5, 2022 | 9:05 AM PT | remote talk

    Abstract

    Augmented reality (AR) and virtual reality (VR) devices are emerging as prominent contenders to today’s personal computers. As with other personal devices, users will use AR and VR to store and access their sensitive data and thus will need secure and usable ways to authenticate. Unfortunately, it is not yet clear which authentication methods are best suited for these novel devices. In this talk, I’ll share how we evaluated the state of the art in authentication methods for AR and VR. First, through a survey of AR/VR users, we defined 20 user- and developer-desired properties for any AR/VR authentication method. We then used these properties to perform a comprehensive evaluation of AR/VR authentication methods both proposed in the literature and currently used in practice. Our work synthesizes the current state of authentication for AR/VR devices and provides advice for designing and evaluating future authentication methods.

    Biography

    Sophie Stephenson (she/her) is a PhD student at the University of Wisconsin–Madison, advised by Rahul Chatterjee. Her work connects security & privacy and human-centered computing by investigating computer security for at-risk user populations. In particular, her recent work aims to understand and mitigate the use of IoT devices in intimate partner violence. Previously, she earned a Bachelor’s degree in Mathematics at Vassar College.

  • The Exact Security of BIP32 Wallets

    Poulami Das, TU Darmstadt

    Wednesday, August 24, 2022 | 10:30 AM PT | hybrid talk

    Abstract

    In many cryptocurrencies, the problem of key management has become one of the most fundamental security challenges. Typically, keys are kept in designated schemes called wallets, whose main purpose is to store these keys securely. One such system is the BIP32 wallet (Bitcoin Improvement Proposal 32), which since its introduction in 2012 has been adopted by countless Bitcoin users and is one of the most frequently used wallet systems today. Surprisingly, very little is known about the concrete security properties offered by this system. In this work, we propose the first formal analysis of the BIP32 system in its entirety and without any modification. Building on the recent work of Das et al. (CCS ’19), we put forth a formal model for hierarchical deterministic wallet systems (such as BIP32) and give a security reduction in this model from the existential unforgeability of the ECDSA signature algorithm that is used in BIP32. We conclude by giving concrete security parameter estimates achieved by the BIP32 standard, and show that by moving to an alternative key derivation method we can achieve a tighter reduction offering an additional 20 bits of security (111 vs. 91 bits of security) at no additional cost.
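
    To make the object of study concrete, here is a minimal Python sketch of BIP32’s hardened child-key derivation, following the steps in the proposal. The non-hardened variant feeds the parent public key into the HMAC instead and needs elliptic-curve code, so it is omitted; the seed below is the first BIP32 test-vector seed, and variable names are ours.

        import hmac, hashlib

        # Order of the secp256k1 group, per the BIP32 specification.
        N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
        HARDENED = 0x80000000

        def ckd_priv_hardened(k_par: int, c_par: bytes, index: int) -> tuple[int, bytes]:
            """Hardened CKDpriv((k_par, c_par), i) for i >= 2^31: the parent private
            key and chain code are stretched with HMAC-SHA512, and the left half
            becomes an additive tweak on the parent key modulo the group order."""
            assert index >= HARDENED
            data = b"\x00" + k_par.to_bytes(32, "big") + index.to_bytes(4, "big")
            I = hmac.new(c_par, data, hashlib.sha512).digest()
            IL = int.from_bytes(I[:32], "big")
            if IL >= N or (IL + k_par) % N == 0:
                raise ValueError("invalid child, skip to next index")  # ~2^-127 chance
            return (IL + k_par) % N, I[32:]  # (child private key, child chain code)

        # Master key from a seed, also per BIP32.
        seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
        I = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
        k_m, c_m = int.from_bytes(I[:32], "big"), I[32:]
        k_child, c_child = ckd_priv_hardened(k_m, c_m, HARDENED + 0)  # path m/0'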

    Biography

    Poulami Das is a final year PhD student at TU Darmstadt, Germany, working under the supervision of Sebastian Faust. Her current research interests are advanced signature schemes, consensus protocols and password-authenticated protocols.

  • Aligning Technical Data Privacy Protections with User Concerns: A Differential Privacy Case Study

    Elissa Redmiles, Max Planck Institute

    Wednesday, July 20, 2022 | 9:05 AM PT | remote talk

    Abstract

    People’s privacy concerns significantly influence their adoption of, engagement with, and ability to benefit from technology. A number of technical data privacy protections (e.g., differential privacy, multi-party computation) have been developed and deployed to address these privacy concerns. However, there has been little effort to meaningfully explain the privacy guarantees offered by these approaches to the people they aim to protect, nor have we evaluated whether such approaches meet the privacy needs of the people whose data is being processed. Perhaps as a result, despite growing deployment of technical data privacy protections, people feel less control over their data than ever.

    In the work presented in this talk, we take a first step toward evaluating the alignment between technical privacy protections and people’s privacy concerns. I will present the results of a rigorous survey-based investigation of: (1) whether users care about the protections afforded by differential privacy, and (2) whether they are therefore more willing to share their data with differentially private systems. I will conclude with a forward look at future work on evaluating the alignment between end-user privacy concerns and other technical privacy protections.

    Biography

    Dr. Elissa M. Redmiles is a faculty member and research group leader at the Max Planck Institute for Software Systems and a Visiting Scholar at the Berkman Klein Center for Internet & Society at Harvard University. She uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been recognized with multiple paper awards at USENIX Security, ACM CCS and ACM CHI and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, Business Insider, and CNET.

  • SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate Proofs

    Michael Rosenberg, University of Maryland

    Friday, July 8, 2022 | 9:05 AM PT | remote talk

    Abstract

    Moderation is an essential tool to fight harassment and prevent spam. The use of strong user identities makes moderation easier, but trends towards strong identity pose serious privacy issues, especially when identities are linked across social media platforms. Zero-knowledge blocklists allow cross-platform blocking of users but, counter-intuitively, do not link users’ identities across or within platforms, or to the fact that they were blocked. Unfortunately, existing approaches (Tsang et al. ’10) require the server to do work linear in the size of the blocklist for each verification of a non-membership proof. We design and implement SNARKBlock, a new protocol for zero-knowledge blocklisting with server-side verification that is logarithmic in the size of the blocklist. SNARKBlock is also the first approach to support ad-hoc, federated blocklisting: websites can mix and match their own blocklists from other blocklists and dynamically choose which identity providers they trust. Our paper can be found at eprint.iacr.org/2021/1577.

    Biography

    Michael Rosenberg is a 4th-year cryptography PhD student at the University of Maryland, living in NYC. His research interests include zero-knowledge proofs, fully homomorphic encryption, secure messaging, privacy-enhancing technologies, and accessibility. Other info about Michael, in order of importance: he is an avid birdwatcher, he can make a Thai curry in 10 minutes flat, he has never done karaoke, and he blows about $35/week on improv shows.

  • Prio: Private, Robust, and Scalable Computation of Aggregate Statistics

    Henry Corrigan-Gibbs, MIT

    Thursday, June 23, 2022 | 11 AM PT | hybrid talk

    Abstract

    This talk will present Prio, a privacy-preserving system for the collection of aggregate statistics. Each Prio client holds a private data value (e.g., its current location), and a small set of servers compute statistical functions over the values of all clients (e.g., the most popular location). As long as at least one server is honest, the Prio servers learn nearly nothing about the clients’ private data, except what they can infer from the aggregate statistics that the system computes. To protect functionality in the face of faulty or malicious clients, Prio uses zero-knowledge proofs on secret-shared data, a new cryptographic technique that yields a hundred-fold performance improvement over conventional zero-knowledge approaches. Prio extends classic private aggregation techniques to enable the collection of a large class of useful statistics. For example, Prio can perform a least-squares regression on high-dimensional client-provided data without ever seeing the data in the clear.
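
    The additive secret sharing at Prio’s core fits in a few lines of Python. This is a minimal sketch showing only the aggregation step; it omits the zero-knowledge validity proofs that protect against malicious clients, and the parameter choices are illustrative.

        import secrets

        P = 2**61 - 1  # prime modulus; all shares and sums live in Z_P

        def share(value: int, n_servers: int) -> list[int]:
            """Split a client value into additive shares that sum to it mod P;
            any n-1 of the shares are uniformly random and reveal nothing."""
            shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        # Three clients, two servers: each server only ever sees random shares.
        client_values = [13, 7, 22]
        accumulators = [0, 0]
        for v in client_values:
            for s, sh in enumerate(share(v, 2)):
                accumulators[s] = (accumulators[s] + sh) % P

        # Servers publish only their local sums; combining reveals just the total.
        assert sum(accumulators) % P == sum(client_values)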

    Biography

    Henry Corrigan-Gibbs (he/him) is an assistant professor at MIT. Henry builds computer systems that provide strong security and privacy properties using ideas from cryptography, computer security, and computer systems. Henry completed his PhD in the Applied Cryptography Group at Stanford, where he was advised by Dan Boneh. After that, he was a postdoc in Bryan Ford’s research group at EPFL in Lausanne, Switzerland.

    Henry has received an honorable mention for the 2020 ACM Doctoral Dissertation Award, three IACR Best Young Researcher Paper Awards (at Eurocrypt 2020, the Theory of Cryptography Conference 2019, and Eurocrypt 2018), the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award. Henry’s work has been cited by the IETF and NIST, and his Prio system for privacy-preserving telemetry data collection is used today in the Firefox web browser, Apple’s iOS, and Google’s Android operating system.

  • Anonymous tokens: single-use anonymous credentials for the web

    Michele Orrù, UC Berkeley

    Friday, June 17, 2022 | 10:05 AM PT | remote talk

    Abstract

    This talk is about anonymous tokens, a cryptographic primitive that enables a server to issue lightweight, single-use credentials to a client. Anonymous tokens satisfy two security properties: unforgeability and unlinkability. The former guarantees that no token can be double-spent, while the latter guarantees that the client can anonymously redeem the received tokens. We will focus on tokens with public and private metadata and how these affect the security guarantees. We will discuss how they are used to limit tracking on the web.
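
    A toy issue-and-redeem flow can illustrate the two properties. The sketch below uses a classic RSA blind signature, our choice for brevity; the schemes discussed in the talk are different constructions, and the parameters here are far too small to be secure. Blinding makes issuance unlinkable to redemption, the signature makes tokens unforgeable, and a spent-token set makes each token single-use.

        import hashlib, secrets

        # Toy RSA key from two Mersenne primes (illustration only, not secure).
        p, q = 2**127 - 1, 2**61 - 1
        n, e = p * q, 65537
        d = pow(e, -1, (p - 1) * (q - 1))
        spent: set[bytes] = set()

        def h(t: bytes) -> int:
            return int.from_bytes(hashlib.sha256(t).digest(), "big") % n

        # Client: pick a random token and blind its hash, so the issuer cannot
        # recognize the token when it is later redeemed.
        t = secrets.token_bytes(16)
        r = secrets.randbelow(n - 2) + 2
        blinded = (h(t) * pow(r, e, n)) % n

        # Issuer: sign the blinded value (sees nothing linkable to t).
        blind_sig = pow(blinded, d, n)

        # Client: unblind; (t, sig) is now a lightweight single-use credential.
        sig = (blind_sig * pow(r, -1, n)) % n

        # Redemption: anyone with the public key can verify, and each token is
        # accepted only once.
        assert pow(sig, e, n) == h(t) and t not in spent
        spent.add(t)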

    Biography

    Michele is a postdoctoral researcher at UC Berkeley. He helped Google develop Trust Tokens for removing third-party cookies from the browser. His research interests include cryptography and privacy-enhancing technologies: he focuses on cryptographic protocols and implementation, zero-knowledge proofs, confidential transactions, and multi-party computation. He holds a Ph.D. in Computer Science from École Normale Supérieure. In the past, he contributed to Python, Debian, and Tor. He helped build GlobaLeaks, an open-source whistleblowing platform.

  • Interoperable Private Attribution

    Erik Taubeneck and Daniel Masny, Meta

    Friday, June 10, 2022 | 11:05 AM PT | hybrid talk

    Abstract

    Erik Taubeneck and Daniel Masny (both Research Scientists at Meta) will present Interoperable Private Attribution (IPA), a proposal in the W3C Private Ad Technology Community Group (PATCG). IPA is a proposal for attribution measurement that enables advertisers and websites to understand the performance of digital advertising in an aggregated and anonymous manner. We will present the motivating use cases, as well as the ideal functionality for the cryptographic protocol that purpose-limits the system to aggregated and anonymous attribution measurement.

    Biography

    Erik and Daniel are both Research Scientists at Meta, working on industry solutions, applying privacy enhancing technologies to digital advertising.

  • Leakage and Protection of Dataset Properties

    Olya Ohrimenko, The University of Melbourne

    Friday, May 20, 2022 | 9:05 AM PT | hybrid talk

    Abstract

    Data privacy in computer science has mostly been concerned with protecting individuals’ data when releasing the result of a computation on a larger dataset (e.g., differential privacy). In this talk, I will depart from individual privacy and consider privacy of dataset properties (e.g., the race or gender distribution in a dataset). First, I will show that global properties of dataset attributes can be leaked when one releases machine learning models computed on this data. Then, I will discuss definitions of privacy for dataset properties and describe mechanisms that can meet these definitions.

    This talk is based on joint work with Michelle Chen (The University of Melbourne), Rachel Cummings (Columbia University), Shruti Tople (Microsoft Research) and Wanrong Zhang (Harvard University).

    Biography

    Olya Ohrimenko is an Associate Professor at The University of Melbourne, which she joined in 2020. Prior to that she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include privacy and integrity of machine learning algorithms, data analysis tools and cloud computing, including topics such as differential privacy, verifiable and data-oblivious computation, trusted execution environments, side-channel attacks and mitigations. Recently Olya has worked with the Australian Bureau of Statistics and National Australia Bank. She has received solo and joint research grants from Facebook and Oracle and is currently a PI on an AUSMURI grant. Olya holds a Ph.D. degree from Brown University and a B.CS. (Hons) degree from the University of Melbourne. See https://people.eng.unimelb.edu.au/oohrimenko/ for more information.

  • Zero-Knowledge Middleboxes

    Paul Grubbs, University of Michigan

    Friday, May 6, 2022 | 9:05 AM PT | remote talk

    Abstract

    This talk will discuss a novel application of cryptography, the zero-knowledge middlebox. There is an inherent tension between ubiquitous encryption of network traffic and the ability of middleboxes to enforce network usage restrictions. An emerging battleground that epitomizes this tension is DNS filtering. Encrypted DNS (DNS-over-HTTPS and DNS-over-TLS) was recently rolled out by default in Firefox, with Google, Cloudflare, Quad9 and others running encrypted DNS resolvers. This is a major privacy win, protecting users from local network administrators observing which domains they are communicating with. However, administrators have traditionally filtered DNS to enforce network usage policies (e.g. blocking access to adult websites). Such filtering is legally required in many networks, such as US schools up to grade 12. As a result, Mozilla was forced to compromise, building a special flag for local administrators to instruct Firefox not to use Encrypted DNS.

    This example points to an open question of general importance, namely: can we resolve such tensions, enabling network policy enforcement while giving users the maximum possible privacy? Prior work has attempted to balance these goals by either revealing client traffic to trusted hardware run by the middlebox (e.g., EndBox) or using special searchable encryption protocols that enable some policy enforcement on encrypted traffic (e.g., BlindBox) at the cost of leaking information to the middlebox. Instead, we propose having clients use zero-knowledge proofs to prove to middleboxes that their encrypted traffic is policy-compliant, without revealing any additional information. Critically, such zero-knowledge middleboxes don’t require trusted hardware or any modifications to existing TLS servers. We implemented a prototype of our protocol using Groth16 proofs which can prove statements about an encrypted TLS 1.3 connection such as “the domain being queried in this encrypted DNS packet is not a member of the specified blocklist.” With current tools, our prototype adds around fifteen seconds of latency to opening a new TLS 1.3 connection, and at least three seconds to produce one proof of policy compliance. While this is too slow for use with interactive web browsing, it is close enough that we consider it a tantalizing target for future optimization.
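
    To make the quoted statement concrete, here is the policy predicate in plain Python. This is a hedged sketch: in the actual system this relation is expressed as a Groth16 circuit that also verifies the TLS 1.3 decryption, so the middlebox checks the proof without ever seeing the query; names are illustrative.

        import hashlib

        def policy_compliant(dns_query: str, blocklist_hashes: set[bytes]) -> bool:
            """The relation the client proves in zero knowledge: the domain in the
            encrypted DNS packet is not a member of the specified blocklist."""
            domain = dns_query.strip().lower().rstrip(".")
            return hashlib.sha256(domain.encode()).digest() not in blocklist_hashes

        blocklist = {hashlib.sha256(d.encode()).digest() for d in ["badsite.example"]}
        assert policy_compliant("news.example.", blocklist)
        assert not policy_compliant("badsite.example", blocklist)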

    This talk will cover the tension between encryption and policy-enforcing middleboxes, including recent developments in Encrypted DNS and the necessity of DNS filtering. It will briefly survey existing solutions before presenting and arguing for the new zero-knowledge middlebox paradigm. Finally, the talk will describe our prototype implementation and several optimizations developed for it, as well as future avenues for improvement and open research questions. Our paper can be found at https://eprint.iacr.org/2021/1022.

    Biography

    Paul Grubbs is an Assistant Professor at the University of Michigan. In his research in applied cryptography and security, he uses a wide array of theoretical and practical tools to help make computer systems more secure and private. He received his PhD from Cornell University and did a postdoc at NYU. He has received an NSF Graduate Research Fellowship, a Cornell CS Dissertation Award, and a Distinguished Paper Award from USENIX Security.

  • Aggregatability and asynchrony in distributed key generation

    Sarah Meiklejohn, Google / University College London

    Tuesday, Mar 29, 2022 | 9:05 AM PT | remote talk

    Abstract

    This talk will cover some recent research in distributed key generation (DKG) protocols, focusing on its definitions of security and on aggregatable DKG, in which the parties can produce an aggregated and publicly verifiable transcript. It will also explore the applications of DKG and, if time permits, how DKG can be achieved in asynchronous environments.

    Biography

    Sarah Meiklejohn is a Professor in Cryptography and Security at University College London and a Research Scientist at Google, where she works on the Certificate Transparency team.

  • Automated Attack Synthesis by Extracting Finite State Machines from Protocol Specification Documents

    Maria Pacheco, Purdue University and Max von Hippel, Northeastern University

    Friday, Mar 18, 2022 | 9:05 AM PDT | remote talk

    Abstract

    Automated attack discovery techniques, such as attacker synthesis or model-based fuzzing, provide powerful ways to ensure network protocols operate correctly and securely. Such techniques, in general, require a formal representation of the protocol, often in the form of a finite state machine (FSM). Unfortunately, many protocols are only described in English prose, and implementing even a simple network protocol as an FSM is time-consuming and prone to subtle logical errors. Automatically extracting protocol FSMs from documentation can significantly contribute to increased use of these techniques and result in more robust and secure protocol implementations.

    In this work, we focus on attacker synthesis as a representative technique for protocol security, and on RFCs as a representative format for protocol prose descriptions. Unlike other works that rely on rule-based approaches or use off-the-shelf NLP tools directly, we suggest a data-driven approach for extracting FSMs from RFC documents. Specifically, we use a hybrid approach consisting of three key steps: (1) large-scale word-representation learning for technical language, (2) focused zero-shot learning for mapping protocol text to a protocol-independent information language, and (3) rule-based mapping from protocol-independent information to a specific protocol FSM. We show the generalizability of our FSM extraction by using the RFCs for six different protocols: BGPv4, DCCP, LTP, PPTP, SCTP and TCP. We demonstrate how automated extraction of an FSM from an RFC can be applied to the synthesis of attacks, with TCP and DCCP as case studies. Our approach shows that it is possible to automate attacker synthesis against protocols by using textual specifications such as RFCs.

    Biography

    Maria Pacheco is a PhD Candidate in the Department of Computer Science at Purdue University. Her research focuses broadly on neural-symbolic methods to model natural language discourse scenarios. Before joining Purdue, she spent a couple of years working as a data scientist for various startups in her hometown of Caracas, Venezuela. She has published in top Natural Language Processing conferences and journals and has delivered tutorials on neural-symbolic modeling for NLP to diverse audiences, including an IJCAI ’21 tutorial and an upcoming COLING ’22 tutorial. Maria is a recipient of the 2021 Microsoft Research Dissertation Grant, and one of the main organizers of the LatinX in AI events in NLP.

    Max von Hippel is a 3rd year PhD student in the Khoury College of Computer Science at Northeastern University, advised by Dr. Cristina Nita-Rotaru.  His research focuses on the application of light-weight formal methods to protocol security, with a particular focus on the automatic discovery of vulnerabilities and attacks.  Max is an NSF GRFP Fellow, a Shelby Davis Scholar, and co-organizer of the Boston Computation Club.  He previously completed a Bachelor of Science in Pure Mathematics at the University of Arizona, with a Minor in Computer Science, and he grew up in Anchorage, Alaska.

  • Differentially Private Resource Allocator

    Joann Qiongna Chen, UC Irvine

    Friday, Mar 11, 2022 | 9:05 AM PST | remote talk

    Abstract

    Despite isolation techniques to enforce clients’ privacy in resource-sharing systems, one kind of side channel, from resource allocation (RA), is frequently ignored. That is, an attacker can request all available resources and use the fulfillment ratio to infer the existence of other clients, which is considered private information in many systems, such as metadata-private messengers. To defend against such attacks, not assigning anything achieves perfect privacy, yet the resources are totally wasted; on the other hand, any greedy resource allocation method that assigns the maximum possible resources is non-private in the worst case.

    We use the formal guarantee of differential privacy (DP) to balance privacy and utility in RA.  To satisfy DP in RA, the server (resource allocator) follows the noisy allocation paradigm by adding some noise to the received requests (e.g., adding dummy requests or deleting some requests), and subsequently randomly assigning resources to the noisy requests.  The intuition is that noisy requests incur some waste, but also confound attackers.

    Prior work draws a number of dummy requests from a biased Laplace distribution. As the Laplace distribution satisfies DP, by a post-processing argument this approach also satisfies DP. In this case, we need a large positive bias to make sure the noise is always positive, but the bias also leads to poor utility. To provide a better privacy-utility tradeoff in RA, we first propose a new notion called Allocation DP (ADP), which follows the traditional DP notion but better models the process of RA. We then present a thorough study of the noisy allocation paradigm by considering different types, scales, and biases of noise.
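
    As a toy rendering of the biased-noise paradigm described above (our illustration, with made-up parameters; real mechanisms would use a discrete noise distribution and carefully calibrated scale and bias), the allocator below pads the real requests with a Laplace-distributed number of dummies and then assigns capacity uniformly at random:

        import math, random

        def biased_laplace(scale: float, bias: float) -> int:
            """Integer Laplace sample shifted by a large positive bias, clipped at 0."""
            u = random.random() - 0.5
            noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
            return max(0, round(noise + bias))

        def noisy_allocate(requests: list[str], capacity: int,
                           scale: float = 2.0, bias: float = 10.0) -> list[str]:
            """Pad real requests with dummies, then assign resources at random.
            Dummies waste some capacity, but an attacker who floods the allocator
            can no longer read other clients' presence off its fulfillment ratio."""
            pool = requests + ["<dummy>"] * biased_laplace(scale, bias)
            random.shuffle(pool)
            return [w for w in pool[:capacity] if w != "<dummy>"]

        granted = noisy_allocate(["alice", "bob", "mallory"], capacity=2)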

    Biography

    Joann Chen is a third-year Ph.D. student in the EECS department at UC Irvine, advised by Zhou Li. Her research interests center around Differential Privacy (DP), privacy-enhancing technologies, and privacy in machine learning. She has experience in working on differentially private DNS resolution, DP for data stream release, differentially private resource allocator, and quantifying privacy risks in machine learning.

  • Account Recovery and Delegation in WebAuthn

    Nick Frymann, University of Surrey

    Friday, Feb 11, 2022 | 9:05 AM PST | remote talk

    Abstract

    WebAuthn, forming part of FIDO2, is a W3C standard for strong authentication, which employs digital signatures to authenticate web users whilst preserving their privacy. Owned by users, WebAuthn authenticators generate attested and unlinkable public-key credentials for each web service to authenticate users. Since the loss of authenticators prevents users from accessing web services, usable recovery solutions preserving the original WebAuthn design choices and security objectives are urgently needed. Additionally, these properties introduce challenges when account owners want to delegate certain rights to a proxy user, such as to access their accounts or perform actions on their behalf, as delegation must not undermine decentralisation, unlinkability, and attestation provided by WebAuthn.

    We first analyse the cryptographic core of Yubico’s recent recovery proposal by modelling a new primitive, called Asynchronous Remote Key Generation (ARKG), which allows a primary authenticator to generate unlinkable public keys for which the backup authenticator may later recover the corresponding private keys. Both processes occur asynchronously, without the need for authenticators to export or share secrets, adhering to WebAuthn’s attestation requirements. We conclude our analysis by discussing concrete instantiations behind Yubico’s ARKG protocol, its integration with the WebAuthn standard, and performance and usability aspects.
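
    The ARKG idea can be sketched in a schoolbook Diffie-Hellman group (our simplification: the toy group below is far too small, and the real protocol runs over an elliptic curve with a proper KDF and a MAC). The primary, holding only the backup’s public key, derives fresh unlinkable credential keys, and the backup later recovers the matching private keys from the stored ephemeral values.

        import hashlib, secrets

        # Toy Schnorr group: p = 2q + 1 with p, q prime; g generates the order-q
        # subgroup. Real ARKG uses an elliptic-curve group instead.
        p, q, g = 10007, 5003, 4

        def kdf(x: int) -> int:
            return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

        # Backup authenticator: one long-term key pair; pk_b goes to the primary.
        sk_b = secrets.randbelow(q - 1) + 1
        pk_b = pow(g, sk_b, p)

        # Primary authenticator: derive a fresh credential public key from pk_b
        # alone; the ephemeral value E is stored next to the credential.
        e = secrets.randbelow(q - 1) + 1
        E = pow(g, e, p)
        tweak = kdf(pow(pk_b, e, p))              # KDF(DH(e, pk_b))
        pk_cred = (pk_b * pow(g, tweak, p)) % p   # unlinkable to pk_b without E

        # Backup authenticator, later and asynchronously: recover the private key.
        sk_cred = (sk_b + kdf(pow(E, sk_b, p))) % q
        assert pow(g, sk_cred, p) == pk_cred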

    We then discuss how this primitive may be extended for use in delegation. This gives two approaches, called remote and direct, to achieve credential-wise delegation in WebAuthn, whilst maintaining the standard’s properties. In remote delegation, the account owner generates delegation credentials using ARKG and stores them at the relying party which provides interfaces to manage the delegation. The direct variant uses a delegation-by-warrant approach, through which the proxy receives delegation credentials from the account owner and presents them later to the relying party. To realise direct delegation, we introduce Proxy Signature with Unlinkable Warrants (PSUW), a new type of proxy signature that extends WebAuthn’s unlinkability property to proxy users and can be constructed generically from ARKG.

    We also describe the implementation considerations of ARKG and both delegation approaches, along with their possible WebAuthn integration, including the extensions required for the CTAP and WebAuthn API. Our discussion extends to additional functionality, such as revocation and permissions management, as well as usability.

    Biography

    Nick Frymann is a postgraduate researcher in the Surrey Centre for Cyber Security at the University of Surrey. His research focuses on multi-factor authentication on the web, with a view to improving security, functionality, and usability. Together with researchers in the SCCS group and partners from Wire and Yubico, he published Asynchronous Remote Key Generation at ACM CCS 2020; it offers more flexible and usable authentication backups in WebAuthn without undermining its security properties.

  • Privacy Pass: Architecture and Protocol Deep Dive

    Christopher Wood, Cloudflare

    Friday, Jan 28, 2022 | 9:05 AM PST | remote talk

    Biography

    Christopher Wood is a Research Lead at Cloudflare Research. Outside of Cloudflare, he is co-chair of the TLS and MASQUE working groups at the IETF, as well as the PEARG research group in the IRTF. Before joining Cloudflare, Christopher worked on transport security, privacy, and cryptography engineering at Apple, as well as future Internet architectures at Xerox PARC. His interests lie at the intersection of network protocol design, communications security, privacy, and applied cryptography. At Cloudflare, he leads projects focused on security and privacy enhancements to a variety of systems, protocols, and applications. Christopher holds a Ph.D. in computer science from UC Irvine.

  • Mitigating Membership Inference Attacks Through Self-Distillation

    Amir Houmansadr, University of Massachusetts Amherst

    Friday, Jan 14, 2022 | 9:05 AM PST | remote talk

    Abstract

    Membership inference attacks (MIAs) are a key measure for evaluating privacy leakage in machine learning (ML) models. These attacks aim to distinguish training members from non-members by exploiting the differential behavior of the models on member and non-member inputs. In this talk, I will start by introducing MIAs and their variations. Then, I will present SELENA, a novel architecture that aims to train ML models that have membership privacy while largely preserving their utility. SELENA relies on an ensemble architecture and leverages self-distillation to train MIA-resistant models. Through extensive experiments on major benchmark datasets, we show that SELENA presents a superior trade-off between membership privacy and utility compared to the state of the art.
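
    A minimal sketch of the split-and-self-distill idea follows (our reading of the high-level description above, using scikit-learn and arbitrary choices for the number of submodels and the data split; the paper’s architecture and training details differ):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        K, rng = 5, np.random.default_rng(0)

        # Train K submodels, each on a random half of the training set.
        masks = [rng.random(len(X)) < 0.5 for _ in range(K)]
        subs = [LogisticRegression(max_iter=1000).fit(X[m], y[m]) for m in masks]

        # Self-distillation: label each training point using only the submodels
        # that did NOT see it, so its label carries no memorized signal about it.
        soft = np.zeros(len(X))
        for i in range(len(X)):
            outs = [s.predict_proba(X[i:i + 1])[0, 1]
                    for s, m in zip(subs, masks) if not m[i]]
            soft[i] = np.mean(outs) if outs else 0.5

        # The released model is trained on the distilled labels only; its behavior
        # on members now mimics its behavior on non-members.
        final = LogisticRegression(max_iter=1000).fit(X, (soft > 0.5).astype(int))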

    Biography

    Amir Houmansadr is an associate professor of computer science at UMass Amherst. He received his Ph.D. from the University of Illinois at Urbana-Champaign in 2012, and spent two years at the University of Texas at Austin as a postdoctoral scholar. Amir is broadly interested in the security and privacy of networked systems. To that end, he designs and deploys privacy-enhancing technologies, analyzes network protocols and services (e.g., messaging apps and machine learning APIs) for privacy leakage, and performs theoretical analysis to derive bounds on privacy (e.g., using game theory and information theory). Amir has received several awards including an NSF CAREER Award in 2016, a Google Faculty Research Award in 2015, and the 2013 IEEE S&P Best Practical Paper Award.

  • What is the Exact Security of the Signal Protocol?

    Alexander Bienstock, New York University

    Friday, Dec 3, 2021 | 10:05 AM PST | remote talk

    Abstract

    In this work, we develop comprehensive definitions in the Universal Composability framework to study the Signal Double Ratchet (Signal for short) protocol. Our definitions enable a more fine-grained and rigorous analysis of the exact security of Signal by explicitly capturing many new security guarantees, in addition to the ones that were already identified in the state-of-the-art work by Alwen, Coretti and Dodis [Eurocrypt 2019]. Moreover, our definitions provide the ability to more easily build on top of Signal, using the UC Composition Theorem. The Signal protocol, as described in the whitepaper, securely realizes our ideal functionality F_Signal. However, as we interpret from the high-level description in the whitepaper, the guarantees of F_Signal seem slightly weaker than those one would expect Signal to satisfy. Therefore we provide a stronger, more natural definition, formalized through the ideal functionality F_Signal+. Based on our definitions, we obtain several important insights:

    • We observe several shortcomings of Alwen et al.’s game-based security notions. To demonstrate them, we construct four different modified versions of the Signal protocol, all of which are insecure according to our weaker F_Signal definition, but remain secure according to their definitions. Among them, one variant was suggested for use by Alwen et al.; another one was suggested for use by the Signal whitepaper itself.
    • We identify the exact assumptions required for the full security of Signal. In particular, our security proofs use the gap-Diffie-Hellman assumption and the random oracle model, as opposed to the DDH assumption and standard model used in Alwen et al.
    • We demonstrate the shortcomings of Signal with respect to our stronger functionality F_Signal+ by showing a non-trivial (albeit minor) weakness with respect to that definition.
    • Finally, we complement the above weakness by providing a minimalistic modification to Signal (that we call the Triple Ratchet) and show that the resulting protocol securely realizes the stronger functionality F_Signal+. Remarkably, the modification incurs no additional communication cost and virtually no additional computational cost.

    Joint work with Jaiden Fairoze, Sanjam Garg, Pratyay Mukherjee, and Srinivasan Raghuraman.

    Biography

    Alexander is a third-year PhD student in the Computer Science department at New York University, where he is part of the Cryptography group and is advised by Yevgeniy Dodis. He is supported by an NSF Graduate Research Fellowship. He received his B.S. in Computer Science from Columbia University in May 2019. His primary interests lie in formally analyzing real-world crypto protocols, specifically Secure (Group) Messaging protocols and private outsourced storage protocols.

  • Attacking the Brain: Security and Privacy Case Studies in Online Advertising, Misinformation, and Augmented Reality

    Franziska Roesner, University of Washington

    Friday, Oct 22, 2021 | 9:05 AM PDT | remote talk

    Abstract

    People who use modern technologies are inundated with content and information from many sources, including advertisements on the web, posts on social media, and (looking to the future) content in augmented or virtual reality. While these technologies are transforming our lives and communications in many positive ways, they also come with serious risks to users’ security, privacy, and the trustworthiness of content they see: the online advertising ecosystem tracks individual users and may serve misleading or deceptive ads, social media feeds are full of potential mis/disinformation, and emerging augmented reality technologies can directly modify users’ perceptions of the physical world in undesirable ways. In this talk, I will discuss several lines of research from our lab that explore these issues from a broad computer security and privacy perspective, leveraging methodologies ranging from qualitative user studies to systematic measurement studies to system design and evaluation. What unites these efforts is a key question: how are our brains “under attack” in today’s and tomorrow’s information environments, and how can we design platforms and ecosystems more robust to these risks?

    Biography

    Franziska (Franzi) Roesner is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses broadly on computer security and privacy for end users of existing and emerging technologies. Her work has studied topics including online tracking and advertising, security and privacy for sensitive user groups, security and privacy in emerging augmented reality (AR) and IoT platforms, and online mis/disinformation. She is the recipient of a Consumer Reports Digital Lab Fellowship, an MIT Technology Review “Innovators Under 35” Award, an Emerging Leader Alumni Award from the University of Texas at Austin, a Google Security and Privacy Research Award, and an NSF CAREER Award. She serves on the USENIX Security and USENIX Enigma Steering Committees. She received her PhD from the University of Washington in 2014 and her BS from UT Austin in 2008. Her website is at https://www.franziroesner.com.

  • MAGE: Nearly Zero-Cost Virtual Memory for Secure Computation

    Sam Kumar, UC Berkeley

    Friday, Oct 8, 2021 | 10:05 AM PDT | remote talk

    Abstract

    Secure Computation (SC) is a family of cryptographic primitives for computing on encrypted data in single-party and multi-party settings. SC is being increasingly adopted by industry for a variety of applications. A significant obstacle to using SC for practical applications is the memory overhead of the underlying cryptography. We develop MAGE, an execution engine for SC that efficiently runs SC computations that do not fit in memory. We observe that, due to their intended security guarantees, SC schemes are inherently oblivious—their memory access patterns are independent of the input data. Using this property, MAGE calculates the memory access pattern ahead of time and uses it to produce a memory management plan. This formulation of memory management, which we call memory programming, is a generalization of paging that allows MAGE to provide a highly efficient virtual memory abstraction for SC. MAGE outperforms the OS virtual memory system by up to an order of magnitude, and in many cases, runs SC computations that do not fit in memory at nearly the same speed as if the underlying machines had unbounded physical memory to fit the entire computation.
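
    Because the access pattern is known in full before execution, the planner can use “clairvoyant” paging policies that an ordinary OS cannot. A minimal sketch of that idea (our illustration, using Belady’s classical farthest-in-future eviction over a precomputed trace; MAGE’s actual planning is considerably more sophisticated):

        def plan_residency(trace: list[int], capacity: int) -> int:
            """Plan page residency for a fully known access trace by evicting the
            page whose next use lies farthest in the future (Belady's OPT).
            Returns the number of misses under the plan."""
            resident: set[int] = set()
            misses = 0
            for t, page in enumerate(trace):
                if page in resident:
                    continue
                misses += 1
                if len(resident) == capacity:
                    def next_use(p: int) -> int:
                        try:
                            return trace.index(p, t + 1)
                        except ValueError:
                            return len(trace)  # never needed again
                    resident.remove(max(resident, key=next_use))
                resident.add(page)
            return misses

        assert plan_residency([1, 2, 3, 1, 2, 4, 1, 2, 3, 4], capacity=3) == 5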

    A paper describing this research appeared at OSDI 2021, where it received a Jay Lepreau Best Paper Award. The paper is available at https://people.eecs.berkeley.edu/~samkumar/papers/mage_osdi2021.pdf.

    Biography

    Sam is a fifth-year PhD student at UC Berkeley, advised by David Culler and Raluca Ada Popa. He is interested broadly in systems, networking, and security. Sam’s research focuses on using system design to manage the overhead of using cryptography. He is supported by an NSF GRFP fellowship. More information is available at his website: https://people.eecs.berkeley.edu/~samkumar/.

  • CanDID: Can-Do Decentralized Identity with Legacy Compatibility, Sybil-Resistance, and Accountability

    Harjasleen Malvai, UIUC

    Friday, Sep 3, 2021 | 9:05 AM PDT | remote talk

    Abstract

    In this talk, I will present CanDID, a platform for practical, user-friendly realization of decentralized identity. Decentralized identity (DID) is the idea of empowering end users with management of their own credentials. While decentralized identity promises to give users greater control over their private data, it burdens them with management of private keys, creating a significant risk of key loss. Existing and proposed approaches also presume the spontaneous availability of a credential-issuance ecosystem, creating a bootstrapping problem. Further, they omit essential functionality, such as resistance to Sybil attacks and the ability to detect misbehaving or sanctioned users, in a privacy-preserving way.

    CanDID addresses these challenges by issuing credentials that draw on data from existing, unmodified web service providers in a secure, private, and user-friendly way. Such legacy compatibility similarly enables CanDID users to leverage their existing online accounts for recovery of lost keys. Using a decentralized committee of nodes, CanDID provides strong confidentiality for users’ keys, real-world identities, and data, while preventing users from spawning multiple identities and allowing identification (and blacklisting) of sanctioned users.

    Biography

    Jasleen is a PhD student at UIUC, working with Prof. Andrew Miller. She is interested in the interplay between traditional cryptography and blockchains. Jasleen recently completed her master’s at Cornell University, prior to which she received a bachelor’s degree from Brown University.

  • Attestation and Isolation Mechanisms of AMD SEV

    Luca Wilke, University of Lübeck

    Friday, Aug 27, 2021 | 9:05 AM PDT | remote talk

    Abstract

    The ongoing trend of moving data and computation to the cloud is met with concerns regarding privacy and protection of intellectual property. To alleviate these concerns, cloud providers offer execution in isolated and trusted environments, removing themselves from the trust base. Trusted Execution Environments are built on two main pillars: attestation and isolation. The variety in the implementation of these two features is broad, ranging from pure hardware- or software-enforced access protection to sophisticated schemes with complex cryptographic protection. However, stronger isolation and protection usually come at the cost of decreased performance. In this talk, we will focus on the attestation and isolation mechanisms of AMD SEV and explore two attacks on SEV, highlighting why integrity protection is paramount for secure TEEs in untrusted environments.

    Biography

    Luca Wilke is a PhD student advised by Thomas Eisenbarth at the University of Lübeck, Germany, where he also received a master’s degree in computer science in 2020. Luca’s research focuses on system security, especially Trusted Execution Environments like AMD SEV, Intel SGX, and Keystone.

  • On Banquet and Rainier – Smaller and Faster Signatures from MPC-in-the-Head

    Daniel Kales, TU Graz

    Friday, Aug 20, 2021 | 9:05 AM PDT | remote talk

    Abstract

    Banquet is a signature scheme built on the MPC-in-the-Head paradigm and is intended to provide security against both classical and quantum attackers. The security against quantum attackers stems from the use of symmetric-key cryptography such as block ciphers and hash functions, without relying on certain number-theoretic hardness assumptions that are broken in the presence of quantum attackers. In contrast to similar designs such as Picnic, Banquet relies only on standardized symmetric-key primitives such as AES and SHAKE. Building on Banquet, we improve certain symmetric-key primitives to better fit signature schemes, and we also propose a new signature scheme by co-designing a proof system and a new block cipher. We show how to provably remove the need to include the key schedule of block ciphers. This simplifies schemes like Picnic and also leads to the fastest and smallest AES-based signatures.

    Second, we investigate a variant of AES with larger S-boxes we call LSAES, for which we can argue that it is very likely at least as strong as AES, further reducing the size of AES-based signatures. Finally, we present a new signature scheme, Rainier, based on a new block cipher called Rain combined with a Banquet-like proof system. To the best of our knowledge, it is the first MPCitH-based signature scheme which can produce signatures that are less than 5 KB in size; it also outperforms previous Picnic and Banquet instances in all performance metrics.

    Biography

    Daniel Kales is a Ph.D. student in the Cryptology and Privacy group at IAIK, TU Graz. His research focus is on secure multiparty computation and post-quantum signatures. He is a contributor to the Picnic post-quantum signature scheme, currently an alternate candidate in the NIST standardization process, and one of the main authors of the optimized implementation of Picnic. One of his additional areas of interest is high-performance implementations for cryptographic protocols such as zero-knowledge proofs based on multiparty computation techniques and private set intersection.

  • DORY: An Encrypted Search System with Distributed Trust

    Emma Dauterman, UC Berkeley

    Friday, Aug 6, 2021 | 9:05 AM PDT | remote talk

    Abstract

    Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happens in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents. The paper appeared at OSDI’20 and is joint work with Eric Feng, Ellen Luo, Raluca Ada Popa, and Ion Stoica.
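
    The distributed-trust trick underlying DORY-style search can be sketched with plain XOR shares (a deliberate simplification chosen for illustration: DORY actually uses distributed point functions over its search index, which compress the query vectors). The client splits its selection vector between two servers so that each sees only a uniformly random vector, yet the XOR of the two answers is exactly the index row it wants.

        import secrets

        # Search index known to both servers: row i is a bitmask over documents
        # marking which ones contain keyword i.
        index = [0b1010, 0b0110, 0b0001, 0b1111]   # 4 keywords over 4 documents
        wanted = 2                                 # the client's secret keyword

        # Client: split the selection vector e_wanted into two XOR shares.
        v0 = [secrets.randbits(1) for _ in index]
        v1 = [bit ^ (1 if i == wanted else 0) for i, bit in enumerate(v0)]

        def answer(share: list[int]) -> int:
            """Each server XORs the rows its share selects; one share alone is
            uniformly random and reveals nothing about the keyword."""
            acc = 0
            for bit, row in zip(share, index):
                if bit:
                    acc ^= row
            return acc

        # As long as the two servers do not collude, neither learns the query.
        assert answer(v0) ^ answer(v1) == index[wanted]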

    Biography

    Emma Dauterman is a third-year Ph.D. student in the UC Berkeley RISELab, where she is advised by Raluca Ada Popa and Ion Stoica. She is broadly interested in building secure systems using cryptography. Emma completed her B.S. and M.S. in computer science at Stanford University, where she was advised by David Mazières. Her work is supported by an NSF GRFP fellowship and a Microsoft Ada Lovelace research fellowship.

  • Machine Learning Integrity in Adversarial Environments

    Alina Oprea, Northeastern University

    Friday, July 23, 2021 | 9:05 AM PDT | remote talk

    Abstract

    Machine learning is increasingly being used for automated decisions in applications such as health care, finance, cyber security, and personalized recommendations. These critical applications require strong guarantees on the integrity of the machine learning models used to make predictions. The area of adversarial machine learning studies the effect of adversarial attacks against machine learning models and aims to design robust defense algorithms. In this talk I will describe our work on poisoning attacks against machine learning at training time. I will introduce several types of poisoning attacks, with different objectives and adversarial capabilities, and show their impact in multiple applications. I will also discuss the challenges of designing machine learning models robust against poisoning attacks.

    Biography

    Alina Oprea is an Associate Professor at Northeastern University in the Khoury College of Computer Sciences. She joined Northeastern University in Fall 2016 after spending 9 years as a research scientist at RSA Laboratories. Her research interests are broadly in cyber security, with a focus on adversarial machine learning, AI for threat detection, cloud security, and applied cryptography. She is the recipient of the Technology Review TR35 award for research in cloud security in 2011 and the Google Security and Privacy Award in 2019. Alina served as Program Committee co-chair of the IEEE Security and Privacy Symposium in 2020 and 2021, and is currently an Associate Editor for the ACM Transactions on Privacy and Security (TOPS) journal and the IEEE Security and Privacy Magazine.

  • Towards Effective and Efficient Privacy Preserving Machine Learning

    Nitin Agrawal, University of Oxford

    Friday, July 16, 2021 | 9:05 AM PDT | remote talk

    Abstract

    Machine learning has proven to be a great asset to the human race in recent times. However, areas such as healthcare and finance have benefited from it in only a very limited sense, if at all. This can primarily be attributed to sensitivity and privacy concerns over the data to be processed. Privacy preserving machine learning mitigates these privacy challenges through private computation over sensitive data. However, the process is not entirely trivial or free of trade-offs.

    In this talk, I will focus on our proposals for designing effective and efficient protocols that facilitate end-to-end privacy preserving machine learning, particularly for neural networks. I will also talk about our user study focused on enhancing usability and efficiency, assisting design, and ensuring equitability in privacy preserving computation frameworks at large [CHI ’21]. This study took the form of semi-structured interviews with various stakeholders in the privacy preserving computation food chain. With this talk, and my work more generally, I intend to motivate design, development, and further research in effective, efficient, and equitable privacy preserving machine learning.

    Biography

    Nitin is a doctoral student in the Department of Computer Science, University of Oxford. His research involves privacy-aware machine learning. He works on designing privacy-preserving and equitable data-driven inference and training architectures, primarily employing Secure Multi-Party Computation protocols and Federated Learning. Nitin is a Commonwealth Scholar and holds a B.Tech in information technology and mathematical innovations from the University of Delhi.

  • Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

    Emily Wenger, University of Chicago

    Friday, June 25, 2021 | 9:05 AM PDT | remote talk

    Abstract

    Today’s proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Our system, Fawkes, helps individuals inoculate their images against unauthorized facial recognition models. This talk will explain the techniques behind Fawkes and its performance. It will then discuss the future of Fawkes-like tools in the complicated fight for individual privacy.

    Biography

    Emily Wenger is a 3rd-year PhD student in computer science at the University of Chicago. Her research explores the limitations, vulnerabilities, and privacy implications of deep neural networks. She is passionate about developing user-centric protections against intrusive machine learning systems. Her work has been reported on by the New York Times, MIT Tech Review, and the Verge, among others. Emily is the recipient of a GFSD Fellowship, a Harvey Fellowship, and a University of Chicago Neubauer Fellowship. She holds a BS in mathematics and physics from Wheaton College (IL).

  • Online Tracking and Inferencing: Using Transparency to Obtain User Perspectives

    Michelle Mazurek, University of Maryland

    Wednesday, June 9, 2021 | 11:10 AM PDT | remote talk

    Abstract

    Online tracking and the associated behavioral advertising are pervasive, but the extent and mechanics of tracking and inferencing are often opaque to users. In this talk, I will discuss a line of research investigating user perspectives on tracking and inferencing. We use transparency — in particular, users’ real data from web browsing and Twitter use — as a prompt for increasing users’ understanding and measuring their opinions of different kinds of tracking and inferencing. Our work sheds light on what users do and don’t understand, as well as what they find useful, creepy, and unacceptable. It also illuminates lesser-known kinds of inferencing and ad targeting that go beyond interests, such as event targeting, follower lookalikes, and tailored audiences. Our results have implications for helping users exercise control (when possible) over the ways their information is used; for platforms that are concerned with user comfort; and for advocates and policymakers who are considering new transparency requirements as well as regulations on when and how users can be tracked.

    Biography

    Michelle Mazurek is an Associate Professor in the Computer Science Department and the Institute for Advanced Computer Studies at the University of Maryland, College Park, where she directs the Maryland Cybersecurity Center. Her research aims to understand and improve the human elements of security- and privacy-related decision making. Recent projects include examining how and why developers make security and privacy mistakes; investigating the vulnerability-discovery process; evaluating the use of threat-modeling in large-scale organizations; and analyzing how users learn about and decide whether to adopt security advice. Her work has been recognized with an NSA Best Scientific Cybersecurity Paper award and three USENIX Security Distinguished Paper awards. She was Program Chair for the Symposium on Usable Privacy and Security (SOUPS) for 2019 and 2020 and will be Program Chair for the Privacy Enhancing Technologies Symposium (PETS) for 2022 and 2023.

  • Full Database Reconstruction in Two Dimensions

    Francesca Falzon, University of Chicago

    Tuesday, Nov 24, 2020 | 9:10 AM PST | remote talk

    Abstract

    In the past few years, we have seen multiple attacks on one-dimensional databases that support range queries. These attacks achieve full database reconstruction by exploiting access pattern leakage along with known query distribution or search pattern leakage. We are the first to go beyond one dimension, exploring this threat in two dimensions. We unveil an intrinsic limitation of reconstruction attacks by showing that there can be an exponential number of distinct databases that produce equivalent leakage. Next, we present a full database reconstruction attack. Our algorithm runs in polynomial time and returns a poly-size encoding of all databases consistent with the given leakage profile. We implement our algorithm and observe real-world databases that admit a large number of equivalent databases, which aligns with our theoretical results.

    Biography

    Francesca Falzon is currently a third-year Ph.D. student in the Department of Computer Science at the University of Chicago. Prior to this, she completed her bachelor’s degree in mathematics at Rutgers University. She is advised by Prof. David Cash and is currently working on projects relating to encrypted databases and searchable encryption.

  • Side-channel-resistant Storage for SGX enclaves

    Adil Ahmad, Purdue University

    Tuesday, Nov 3, 2020 | 9:10 AM PST | remote talk

    Abstract

    Cloud machines today process highly confidential data, including personally identifiable information and healthcare and financial records, yet they are challenging to secure given software complexity, untrusted system administrators, and multi-tenancy. A promising solution to this problem is a hardware feature called Intel SGX, which secures confidential data within isolated execution environments, called enclaves. However, SGX does not protect against memory-based side channels, including page faults, page access bits, and cache attacks. Prior research has shown that these side channels reveal enclave access patterns, which leak confidential enclave data (e.g., cryptographic keys).

    In this talk, I will focus on designing side-channel-resistant yet efficient storage for SGX enclaves. First, I will briefly introduce my prior research in this area: Obliviate [NDSS18] and Obfuscuro [NDSS19], which employ Oblivious RAM (ORAM) to secure filesystem services and to obfuscate enclave program execution, respectively. Then, I will show that scalability issues surrounding ORAM-based systems like Obliviate and Obfuscuro can limit their applicability. Building on these results, I will dive into the specifics of our latest research, TrustOre [CCS20], which outsources security-sensitive data to an external FPGA device, providing strong side-channel protection. My talk will discuss the challenges introduced by a trusted component external to the enclave (i.e., the FPGA device) and introduce new attestation, channel-establishment, and communication designs to tackle these challenges. Furthermore, I will present TrustOre’s evaluation results, which show that it incurs significantly lower data access latency (by almost two orders of magnitude) and constant normalized throughput, unlike ORAM-based SGX systems. Finally, I will end my talk with a discussion of the open side-channel problems yet to be addressed by TrustOre.
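
    As a point of reference for what side-channel-resistant storage must achieve, here is a minimal Python sketch of the baseline linear-scan idea (an illustration, not TrustOre’s design): every slot is touched on every access, so the access sequence is independent of the secret index, and the O(n) cost per access is exactly what ORAM-based systems and TrustOre try to beat.

        # A toy illustration of the baseline behind side-channel-resistant storage:
        # touch every slot on every read so the access pattern is independent of the
        # secret index. (CPython itself is not constant-time; this only shows the
        # access-pattern idea and its inherent O(n) cost per access.)
        def oblivious_read(store, secret_index):
            result = 0
            for i, item in enumerate(store):
                mask = -(i == secret_index)   # -1 (all ones) on the match, else 0
                result |= item & mask         # same work performed for every slot
            return result

        store = [11, 22, 33, 44]
        assert oblivious_read(store, 2) == 33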

    Biography

    Adil Ahmad is a PhD student at Purdue University, advised by Dr. Pedro Fonseca (Purdue University) and Dr. Byoungyoung Lee (Seoul National University). His research interests are broadly in systems and security, with a focus on securing remote data and computation using hardware-assisted trusted computing solutions (e.g., Intel SGX and AMD SEV). His research has been featured at top-tier computer security venues including NDSS, CCS, and USENIX Security.

  • Cryptε: Crypto-Assisted Differential Privacy on Untrusted Servers

    Amrita Roy Chowdhury, University of Wisconsin-Madison

    Tuesday, Oct 13, 2020 | 9:00 AM PDT | remote talk

    Abstract

    Differential privacy (DP) is currently the de facto standard for achieving privacy in data analysis and is typically implemented in either the “central” or the “local” model. The local model has been more popular for commercial deployments, as it does not require a trusted data collector. This increased privacy, however, comes at a cost in utility and algorithmic expressibility compared to the central model.

    In this talk, I will present Cryptε, a system and programming framework that (1) achieves the accuracy guarantees and algorithmic expressibility of the central model (2) without any trusted data collector, as in the local model. Cryptε achieves the “best of both worlds” by employing two non-colluding untrusted servers that run DP programs on encrypted data from the data owners. In theory, straightforward implementations of DP programs using off-the-shelf secure multi-party computation tools can achieve the above goal. In practice, however, they are beset with challenges such as poor performance and tricky security proofs. To this end, Cryptε allows data analysts to author logical DP programs that are automatically translated to secure protocols that work on encrypted data. These protocols ensure that the untrusted servers learn nothing more than the noisy outputs, thereby guaranteeing DP for all Cryptε programs. Cryptε supports a rich class of DP programs that can be expressed via a small set of transformation and measurement operators followed by arbitrary post-processing. Further, I will talk about a novel performance optimization that leverages the fact that the output is noisy. As a result, Cryptε achieves performance that is practical for real-world usage.
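
    For intuition, the measurement step a Cryptε program ultimately performs corresponds to the classic central-model primitive below, sketched in Python in the clear (parameters illustrative; in Cryptε the aggregation and noising happen under encryption between the two servers):

        # A sketch, in the clear, of the epsilon-DP counting query that a DP
        # program's measurement operator releases; in the real system this happens
        # on encrypted data between two non-colluding servers.
        import numpy as np

        def laplace_count(bits, epsilon):
            """Release sum(bits) + Laplace(1/epsilon) noise: a sensitivity-1 count."""
            return sum(bits) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

        print(laplace_count([1, 0, 1, 1], epsilon=0.5))  # true count 3, noised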

    Biography

    Amrita Roy Chowdhury is a PhD student at the University of Wisconsin-Madison, advised by Prof. Somesh Jha. She completed her Bachelor of Engineering in Computer Science at the Indian Institute of Engineering Science and Technology, Shibpur, where she was awarded the President of India Gold Medal. Her research focuses on exploring the associations between differential privacy and cryptography, proposing novel algorithms that bring to light the rich interrelations between the two areas, both in theory and in practice.

  • Reducing Metadata Leakage from Encrypted Files and Communication with PURBs

    Ludovic Barman, EPFL

    Tuesday, Sept 8, 2020 | 9:00 AM PDT | remote talk

    Abstract

    Most encrypted data formats leak metadata via their plaintext headers, such as format version, encryption schemes used, number of recipients who can decrypt the data, and even the recipients’ identities. This leakage can pose security and privacy risks to users, e.g., by revealing the full membership of a group of collaborators from a single encrypted e-mail, or by enabling an eavesdropper to fingerprint the precise encryption software version and configuration the sender used.

    We propose that future encrypted data formats improve security and privacy hygiene by producing Padded Uniform Random Blobs, or PURBs: ciphertexts indistinguishable from random bit strings to anyone without a decryption key. A PURB’s content leaks nothing at all, not even the application that created it, and it is padded such that even its length leaks as little as possible.
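
    The padding function the paper proposes for this, Padmé, can be stated in a few lines; the sketch below follows the paper’s description (variable names are mine), rounding a length up so that only a few of its bits can vary:

        # Padme: round a length L up so that only O(log log L) of its bits may
        # vary, with at most ~12% overhead. (Transcribed from the paper's
        # description; variable names are mine.)
        def padme(L: int) -> int:
            if L <= 1:
                return L
            E = L.bit_length() - 1   # floor(log2(L))
            S = E.bit_length()       # number of bits allowed to carry information
            z = E - S                # low bits to round away
            mask = (1 << z) - 1
            return (L + mask) & ~mask

        for L in (9, 100, 1_000_000):
            print(L, "->", padme(L))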

    Biography

    Ludovic is a 5th-year Ph.D. student at EPFL, working with Prof. Jean-Pierre Hubaux and Prof. Bryan Ford on the systems aspects of security and privacy. His focus has been on anonymous communication systems, quantifying and reducing metadata leakage in communications, and exploiting metadata in traffic-analysis attacks.

  • EnclaveDom: Privilege Separation for Large-TCB Applications in Trusted Execution Environments

    Marcela Melara, Intel Labs

    Tuesday, Sept 1, 2020 | 9:00 AM PDT | remote talk

    Abstract

    Trusted execution environments (TEEs) such as Intel(R) SGX provide hardware-isolated execution areas in memory, called enclaves. Porting existing applications to TEEs often requires considerable refactoring effort, as TEEs provide a restricted interface to standard OS features. To ease development effort, TEE application developers often choose to run their unmodified application in a library OS container that provides a full in-enclave OS interface. Yet, prior research [1] suggests that this large-TCB development approach may leave sensitive in-enclave data exposed to potential bugs or vulnerabilities in third-party code imported into the application. Importantly, because the TEE libOS and the application run in the same enclave address space, previous work indicates that even the libOS management data structures (e.g., the file descriptor table) may be vulnerable to attack, whereas in traditional OSes these data structures may be protected via privilege isolation.

    In this talk, we present EnclaveDom, a privilege separation system for large-TCB TEE applications that partitions an enclave into tagged memory regions and enforces per-region access rules. EnclaveDom is implemented on Intel SGX using Memory Protection Keys (MPK) for memory tagging, and we evaluate EnclaveDom in an experimental integration with the Graphene-SGX library OS. Our prototype helps protect internal libOS management data structures against tampering by application-level code.

    [1] Y. Shen, Y. Chen, K. Chen, H. Tian, and S. Yan. “To Isolate, or to Share? That is a Question for Intel SGX.” In Proc. APSys’ 18, Aug 2018.

    Biography

    Marcela Melara is a Research Scientist in the Security and Privacy Group at Intel Labs. Her current research focuses on trustworthy distributed systems and cloud computing security. Prior to joining Intel, she received her M.S.E. and PhD in Computer Science from Princeton University, and her Bachelor’s degree from Hobart and William Smith Colleges. She is a Siebel Scholar, a member of Phi Beta Kappa, and her research on CONIKS was awarded the Caspar Bowden PET Award in 2017. Marcela is also a mentor for Cientifico Latino and a volunteer for the Future Scientists Program.

  • Ghostor: Toward a Secure Data-Sharing System from Decentralized Trust

    Yuncong Hu, UC Berkeley

    Tuesday, Aug 25, 2020 | 9:00 AM PDT | remote talk

    Abstract

    Data-sharing systems are often used to store sensitive data. Both academia and industry have proposed numerous solutions to protect user privacy and data integrity from a compromised server. Practical state-of-the-art solutions, however, use weak threat models based on centralized trust: they assume that part of the server will remain uncompromised, or that the adversary will not perform active attacks. We propose Ghostor, a data-sharing system that, using only decentralized trust, (1) hides user identities from the server, and (2) allows users to detect server-side integrity violations. To achieve (1), Ghostor avoids keeping any per-user state at the server, requiring us to redesign the system to avoid common paradigms like per-user authentication and user-specific mailboxes. To achieve (2), Ghostor develops a technique called verifiable anonymous history. Ghostor uses a blockchain sparingly, publishing only a single hash to the blockchain for the entire system once every epoch. We measured that Ghostor incurs a 4-5x throughput overhead compared to an insecure baseline. Although significant, Ghostor’s overhead may be worth it for security- and privacy-sensitive applications.

    Based on joint work with Sam Kumar and Raluca Ada Popa: https://eprint.iacr.org/2020/648.pdf
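
    A rough sketch of the flavor of verifiable history (my simplification, not Ghostor’s actual construction): the server hash-chains every operation and publishes only the chain head each epoch, so clients can check the operations they observed against one public value.

        # Minimal hash-chain checkpointing sketch (illustrative, not Ghostor's
        # construction): one SHA-256 digest per epoch commits to the whole history.
        import hashlib

        def extend(head: bytes, op: bytes) -> bytes:
            return hashlib.sha256(head + op).digest()

        head = b"\0" * 32                      # genesis value
        log = [b"PUT k1 v1", b"GET k1", b"PUT k2 v2"]
        for op in log:
            head = extend(head, op)
        checkpoint = head                      # the one value published per epoch

        # A client replays the operations it observed and compares.
        replay = b"\0" * 32
        for op in log:
            replay = extend(replay, op)
        assert replay == checkpoint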

    Biography

    Yuncong Hu is a Ph.D. student at UC Berkeley, advised by Prof. Raluca Ada Popa and Prof. Alessandro Chiesa. He is primarily interested in the area of cryptography and security, including applied cryptography, decentralized systems, and zero-knowledge proofs. His research focuses on building systems based on decentralized trust and developing cryptographic tools for decentralized applications.

  • Secure computation for contact tracing

    Ni Trieu, UC Berkeley/Arizona State University

    Tuesday, Aug 11, 2020 | 9:00 AM PDT | remote talk

    Abstract

    Contact tracing is an essential tool in containing infectious diseases such as COVID-19. It enables users to search over released tokens in order to learn if they have recently been in the proximity of an infectious user. However, prior approaches have several weaknesses: they (1) do not provide end-to-end privacy in the collection and querying of tokens, (2) do not utilize the huge volume of data stored in business databases and individual digital devices, or (3) impose heavy bandwidth or computational demands on client devices. In this talk, I will discuss the cryptographic attacks and privacy concerns that arise in deploying contact tracing applications, and how secure computation can tackle these problems. Moreover, I will describe our system design for improving the security guarantees and performance of existing contact tracing frameworks. Along with this, I will present our new protocol for delegated private set intersection cardinality, which allows clients to delegate their contact tracing computation to cloud servers without compromising privacy.
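
    For intuition, here is a toy sketch of the classic DH-based private set intersection cardinality idea that delegated protocols of this kind build on (my simplification; the modulus and hashing are illustrative and far too weak for real use):

        # Toy DH-based private set intersection cardinality. Each party masks the
        # other's hashed items with its secret exponent; double-masked values match
        # exactly on the intersection, revealing only its size.
        import hashlib, random

        p = 2**127 - 1  # toy prime modulus

        def H(item: str) -> int:
            return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p

        a = random.randrange(2, p - 1)  # Alice's secret exponent
        b = random.randrange(2, p - 1)  # Bob's secret exponent

        A_set = {"alice@x.com", "bob@y.com", "carol@z.com"}
        B_set = {"bob@y.com", "dave@w.com"}

        # Alice sends H(x)^a; Bob returns the batch raised to b (shuffled), along
        # with his own H(y)^b values.
        round1 = [pow(H(x), a, p) for x in A_set]
        random.shuffle(round1)
        doubly_masked = {pow(v, b, p) for v in round1}     # H(x)^(ab)
        bob_msgs = [pow(H(y), b, p) for y in B_set]

        # Alice raises Bob's values to a; matches reveal only the cardinality.
        alice_view = {pow(v, a, p) for v in bob_msgs}      # H(y)^(ba)
        print("intersection cardinality:", len(doubly_masked & alice_view))  # 1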

    Biography

    I am an incoming assistant professor of computer science at Arizona State University this fall. I am currently a postdoc at UC Berkeley. I received my doctorate from Oregon State University. My research interests are in the area of cryptography and security, with a specific focus on secure computation and its applications such as private set intersection, private database queries, and privacy-preserving machine learning.

  • Auditing Differentially Private Machine Learning: How Private is Private SGD?

    Matthew Jagielski, Northeastern University

    Tuesday, Aug 4, 2020 | 9:00 AM PDT | remote

    Abstract

    We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed the connection between differential privacy and data poisoning as a defense against poisoning, our use of it as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, an approach that we believe has the potential to complement and influence analytical work on differential privacy.
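
    The auditing logic rests on simple arithmetic: an (ε, δ)-DP mechanism forces every distinguishing attack to satisfy TPR ≤ e^ε · FPR + δ, so a measured attack certifies a lower bound on the ε actually afforded. A sketch with made-up numbers:

        # Rearranging TPR <= e^eps * FPR + delta gives an empirical lower bound on
        # the epsilon a mechanism really provides. Rates below are illustrative.
        import math

        def empirical_epsilon_lower_bound(tpr: float, fpr: float, delta: float) -> float:
            return math.log((tpr - delta) / fpr)

        print(empirical_epsilon_lower_bound(tpr=0.30, fpr=0.01, delta=1e-5))  # ~3.4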

    Biography

    Matthew Jagielski is a 4th year PhD student at Northeastern University, advised by Cristina Nita-Rotaru and Alina Oprea. He works at the intersection between machine learning, security, and privacy, with work appearing at venues including USENIX Security, IEEE S&P, and ICML. He is also an avid runner, biker, and swimmer.

  • Analyzing the Leaky Cauldron via Membership Inference Attacks

    Bargav Jayaraman, University of Virginia

    Tuesday, July 28, 2020 | 9:10 AM PDT | remote talk

    Abstract

    Machine learning models can leak sensitive information about the training data, ranging from revealing the presence or absence of a particular individual to leaking latent properties of the training data set. Differential privacy provides strong theoretical guarantees that bound the information leaked by a model, and can be achieved by adding randomized noise during the training process. The noise magnitude is controlled by the privacy loss budget parameter: smaller privacy loss budgets give higher privacy, but at a potential loss of model utility. In practice, implementations tend to use high privacy loss budgets to obtain useful models. I’ll first report on experiments we have done using membership inference attacks to measure the privacy leakage of various differentially private training methods. Next, I will talk about how we improve on prior membership inference attacks with a general method for adaptively selecting the confidence threshold used to decide whether a candidate is a member. Finally, I will discuss more realistic scenarios where the prior sampling probability is imbalanced, and report evaluation results in such scenarios.
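
    A minimal sketch of the threshold-selection step (my illustration with synthetic confidences, not the paper’s exact method): predict “member” when the model’s confidence exceeds τ, and pick τ to maximize the attack’s advantage on data with known membership.

        # Threshold-based membership inference with adaptive threshold selection:
        # pick tau to maximize advantage (TPR - FPR) on known members/non-members.
        # The confidences here are synthetic stand-ins for a model's outputs.
        import numpy as np

        member_conf = np.random.beta(8, 2, 1000)     # higher confidence on training data
        nonmember_conf = np.random.beta(5, 3, 1000)

        def best_threshold(members, nonmembers):
            taus = np.linspace(0, 1, 201)
            adv = [(members >= t).mean() - (nonmembers >= t).mean() for t in taus]
            return taus[int(np.argmax(adv))], max(adv)

        tau, advantage = best_threshold(member_conf, nonmember_conf)
        print(f"tau = {tau:.2f}, advantage = {advantage:.2f}")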

    Biography

    Bargav Jayaraman is a PhD student at the Department of Computer Science, University of Virginia. He works in the area of privacy preserving machine learning and is advised by Professor David Evans. His dissertation focuses on evaluating privacy leakage of differentially private machine learning.

  • Silent OT Extension and More via Pseudorandom Correlation Generators

    Lisa Kohl, Technion

    Tuesday, July 21, 2020 | 9:00 AM PDT | remote

    Abstract

    Secure multiparty computation can often utilize a trusted source of correlated randomness, such as random oblivious transfer (OT), to achieve better asymptotic and concrete efficiency. A natural tool to securely realize this source of randomness with only a small amount of communication is a pseudorandom correlation generator (PCG). A PCG is a deterministic function that stretches short correlated seeds into long instances of the target correlation, thereby allowing two or more parties to generate correlated randomness using a cheap, one-time interaction followed by only “silent” local computation.

    In this talk, I will introduce the concept of PCGs and show how to construct a PCG for OT based on the Learning Parity with Noise (LPN) assumption. I will further outline how this approach yields the first efficient protocol for generating n random OTs with o(n) communication and the first concretely efficient 2-round OT extension protocol of any kind. Finally, I will give a brief outlook on how to obtain efficient PCGs for more general correlations such as oblivious linear evaluations over large fields from a variant of the LPN assumption.
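
    For reference, the target correlation and the PCG interface can be written as follows (standard definitions, stated informally; notation mine):

        Random OT: the sender gets $(m_0, m_1) \xleftarrow{\$} \{0,1\}^{2\lambda}$, and the receiver gets $(b, m_b)$ with $b \xleftarrow{\$} \{0,1\}$.

        PCG: $(k_0, k_1) \leftarrow \mathsf{Gen}(1^\lambda)$, $\quad R_\sigma \leftarrow \mathsf{Expand}(k_\sigma)$, $\quad |k_\sigma| \ll |R_\sigma|$,

    where correctness requires $(R_0, R_1)$ to be pseudorandom instances of the target correlation, and security requires that $k_\sigma$ reveal nothing about the other party’s output beyond the correlation itself.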

    Based on joint works with Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Peter Rindal and Peter Scholl.

    Biography

    Lisa Kohl is currently a postdoctoral researcher at Technion hosted by Yuval Ishai. In 2019 she completed her PhD at Karlsruhe Institute of Technology under the supervision of Dennis Hofheinz. During her PhD she spent eight months in the FACT center at IDC Herzliya for a research visit with Elette Boyle. She wrote her master’s thesis in the Cryptology Group at CWI Amsterdam headed by Ronald Cramer. Her research interests lie in the foundations of cryptography with a particular focus on exploring new directions in secure computation.

  • Security and noise growth in homomorphic encryption

    Rachel Player, Royal Holloway, University of London

    Tuesday, July 14, 2020 | 9:00 AM PDT | remote talk

    Abstract

    In November 2018, the HomomorphicEncryption.org consortium published the Homomorphic Encryption Security Standard. The Standard recommends several sets of Learning with Errors (LWE) parameters that application developers can select to achieve a target security level. However, these parameter sets do not always reflect implementation choices. For example, all known implementations of bootstrapping for the CKKS, BFV, and BGV schemes use a sparse secret and a large power-of-two ring dimension.

    In the first part of the talk, we contextualise the selection of parameters for security in LWE-based cryptographic schemes by illustrating choices made in the ongoing NIST post-quantum standardisation process. We then narrow our focus to the homomorphic encryption setting and explore the security of sparse-secret LWE parameter sets, taking into account hybrid attacks.

    In the second part of the talk, we turn our attention to noise growth in homomorphic encryption, which is essential for choosing parameters to ensure correctness and optimal performance. We evaluate the effectiveness of heuristic worst-case noise analyses in homomorphic encryption and find that this inherently leads to a gap between the heuristic estimate of the noise and the observed noise in practice. We also report on an updated comparison of the BGV and FV schemes and find that FV slightly outperforms BGV, even for large plaintext moduli, well beyond the crossover point reported in prior work of Costache and Smart.
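
    For orientation, an LWE sample has the form $(a, b) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ with $b = \langle a, s \rangle + e \bmod q$, and a sparse secret constrains $s$ to a small Hamming weight $h$; the security question above is how much the known attacks improve when $h$ is small (standard definition, stated informally).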

    The talk draws on the following joint works:

    Martin R. Albrecht, Benjamin R. Curtis, Amit Deo, Alex Davidson, Rachel Player, Eamonn W. Postlethwaite, Fernando Virdia, Thomas Wunderer. Estimate all the {LWE,NTRU} schemes! SCN 2018.

    Benjamin R. Curtis, Rachel Player. On the Feasibility and Impact of Standardising Sparse-secret LWE Parameter Sets for Homomorphic Encryption. WAHC@CCS 2019.

    Anamaria Costache, Kim Laine, Rachel Player. Evaluating the effectiveness of heuristic worst-case noise analysis in FHE. To appear at ESORICS 2020.

    Biography

    Dr Rachel Player is a lecturer in the Information Security Group at Royal Holloway, University of London. Previously, she was a postdoc in the same group, and before that a postdoc at LIP6 in Paris. Rachel’s main research interests are in post-quantum cryptography, especially the design and cryptanalysis of lattice-based cryptographic schemes. She is also interested in Privacy Enhancing Technologies and has worked extensively in the area of applied homomorphic encryption. In 2019, Rachel was honoured by ISRG with the Radiant Award for Advancing Internet Security. In Spring 2016, Rachel was an intern in the Cryptography group at MSR, where she contributed to Microsoft SEAL. More information can be found at: https://rachelplayer.github.io

  • Reproducible Codes and Cryptographic Applications

    Edoardo Persichetti, Florida Atlantic University

    Wednesday, August 14, 2019 | 1:00 PM PDT | 99/1927 von Neumann

    Abstract

    In this talk I will present a work in progress on structured linear block codes. The investigation starts from well-known examples and generalizes them to a wide class of codes that we call reproducible codes. These codes have the property that they can be entirely generated from a small number of signature vectors, and consequently admit matrices that can be described in a very compact way. I will show some cryptographic applications of this class of codes and explain why the general framework introduced may pave the way for future developments of code-based cryptography.
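
    The simplest instance of the idea is a circulant matrix, which is determined entirely by its first row; the Python sketch below (my illustration, not the paper’s general framework) shows the compression that makes such codes attractive for compact keys.

        # A circulant generator matrix, the textbook example of a reproducible code:
        # the whole n x n matrix is recovered from one signature vector by cyclic
        # shifts, so a key can store n bits instead of n^2.
        import numpy as np

        def circulant(signature):
            return np.array([np.roll(signature, i) for i in range(len(signature))])

        G = circulant([1, 0, 1, 1, 0])  # 5 stored bits describe a 5x5 matrix
        print(G)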

    Biography

    Edoardo Persichetti is currently Assistant Professor in the Department of Mathematical Sciences at Florida Atlantic University. Before moving to Florida, he was a Postdoc (Adiunkt Naukowy) in the Cryptography and Data Security Group at Warsaw University in Poland. He completed his PhD in Mathematics in late 2012 at the University of Auckland, New Zealand, under the supervision of Steven Galbraith.

    His research interests are public-key cryptography (post-quantum, provable security) and number theory (mainly coding theory), with a particular focus on code-based cryptography.

    Dr. Persichetti is a co-author of 4 submissions to NIST’s Post-Quantum Standardization process and participates actively in the community, serving regularly as a peer reviewer for many internationally recognized conferences and journals.

  • Fully Homomorphic NIZK and NIWI Proofs

    Apoorvaa Deshpande, Brown University

    Tuesday, August 6, 2019 | 1:15 PM PDT | 99/1927 von Neumann

    Abstract

    In this work, we define and construct fully homomorphic non-interactive zero-knowledge (FH-NIZK) and non-interactive witness-indistinguishable (FH-NIWI) proof systems. Specifically, given non-interactive zero-knowledge (NIZK) or witness-indistinguishable (NIWI) proofs for different statements, we give a framework to homomorphically evaluate them and compute proofs for new, arbitrarily inferred statements. Along with soundness and ZK or WI (respectively), the security guarantee is unlinkability: an inferred proof should be indistinguishable from a fresh proof for the combined statement.

    Our first result, under the Decision Linear Assumption (DLIN), is a fully homomorphic NIZK proof system in the common random string model. Our more surprising second result (under a new decisional assumption on groups with bilinear maps) is a fully homomorphic NIWI proof system that requires no setup. [Joint work with Prabhanjan Ananth, Yael Kalai and Anna Lysyanskaya]

    Biography

    Apoorvaa Deshpande is a PhD student at Brown University, advised by Prof. Anna Lysyanskaya. Her research centers on zero-knowledge proofs and their applications to the blockchain space.

  • Resilience and Vulnerability of Ring-LWE Cryptosystems to Leakage

    Dana Dachman-Soled, University of Maryland

    Thursday, June 27, 2019 | 10:30 AM PDT | 99/1915 Hopper

    Abstract

    One of the foremost avenues for viable post-quantum cryptography is lattice-based cryptography, which primarily involves cryptosystems based on (variants of) the learning with errors (LWE) assumption. While formal leakage resilience guarantees have been well-studied for LWE-based cryptosystems since 2009, the techniques do not carry over to the more efficient, Ring-LWE setting, which is the preferred setting in practice.

    In this talk, we will describe information-theoretic and computational approaches for establishing the leakage resilience of Ring-LWE-based cryptosystems. We also present a type of leakage attack known as a partial key exposure attack on Ring-LWE. This attack exploits the algebraic structure of the Ring-LWE setting and is not applicable in the standard LWE setting.

    Based on joint work with Huijing Gong, Mukul Kulkarni, and Aria Shahverdi.

    Biography

    Dana Dachman-Soled is an assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park and is affiliated with the Maryland Cybersecurity Center. She is the recipient of an NSF CAREER award, a Ralph E. Powe Junior Faculty Enhancement award, and a University of Maryland Research and Scholarship (RASA) award. Prior to joining University of Maryland, Dana spent two years as a postdoc at Microsoft Research New England. Before that, she completed her PhD at Columbia University.

  • PASTA: PASsword-based Threshold Authentication

    Peihan Miao, UC Berkeley

    Monday, June 24, 2019 | 11:00 AM PDT | 99/1927 von Neumann

    Abstract

    We introduce and formalize a new notion of password-based threshold token authentication, which protects password-based authentication against single points of failure. Specifically, we distribute the role of a single server among n servers and allow any t servers to collectively verify clients’ passwords and generate tokens, while no coalition of t-1 servers can forge a valid token or mount an offline dictionary attack. We then introduce PASTA, a general framework wherein clients can sign on using a two-round (optimal) protocol that meets our strong security guarantees. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting, where client and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds), due to public-key operations in PASTA. We show, however, that this cost is inherent, by proving a symmetric-key-only solution impossible.

    Based on joint work with Shashank Agrawal, Payman Mohassel, and Pratyay Mukherjee: https://eprint.iacr.org/2018/885.pdf.
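
    A toy sketch of the ingredient that makes t-of-n token generation possible, Shamir secret sharing over a prime field (parameters illustrative; PASTA builds threshold MACs and signatures on top of this kind of sharing rather than ever reconstructing the key):

        # Shamir t-of-n secret sharing: any t shares determine the secret, while
        # t-1 shares reveal nothing about it.
        import random

        P = 2**61 - 1  # prime field modulus

        def share(secret, t, n):
            coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(points):
            # Lagrange interpolation at x = 0
            secret = 0
            for i, (xi, yi) in enumerate(points):
                num = den = 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = share(123456789, t=3, n=5)
        assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice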

    Biography

    Peihan Miao just received her PhD in Cryptography from UC Berkeley (advised by Prof. Sanjam Garg) and will join VISA Research as a research scientist in July. She is primarily interested in the area of secure computation. Her research focuses on building cryptographic tools for achieving optimal computation, communication, and round complexity in secure computation, aiming to bridge the gap between theory and practice.

  • Pathfinding in isogeny graphs and computing endomorphism rings

    Travis Morrison, University of Waterloo

    Friday, June 21, 2019 | 1:30 PM PDT | 99/1915 Hopper

    Abstract

    A large enough quantum computer could run Shor’s algorithm, rendering elliptic curve cryptography insecure. To prepare for a “post-quantum” world, NIST is currently administering a process to standardize one or more cryptographic protocols which should be secure against an attacker with a quantum computer. One submission, SIKE (based on Jao and de Feo’s SIDH) relies on the hardness of pathfinding in supersingular isogeny graphs as an assumption for its security. Supersingular isogeny graphs first appeared in cryptography in the work of Charles, Goren, and Lauter, who used these graphs to build a cryptographic hash function. There is a rich interplay between the geometry of isogenies and the arithmetic of endomorphism rings of elliptic curves, as shown in work of Waterhouse and Kohel, suggesting that one avenue for solving the pathfinding problem is to compute supersingular endomorphism rings.

    In this talk, I will discuss heuristic reductions between the pathfinding and endomorphism ring problems. Then, I will discuss recent work with Eisentraeger, Hallgren, Leonardi and Park in which we give several new algorithms for computing supersingular endomorphism rings. I will also discuss to what extent these algorithms may impact the security of SIDH or the CGL isogeny hash.

    Biography

    Travis Morrison is a post-doctoral fellow working with David Jao and Michele Mosca in the Institute for Quantum Computing at the University of Waterloo. He received his B.A. at the University of California, Santa Cruz. During the summer of 2017, he was an intern at Microsoft Research and was mentored by Kristin Lauter. He received his PhD from the Pennsylvania State University under the supervision of Kirsten Eisentraeger.

  • Multidimensional scalar point multiplication algorithms

    Koray Karabina, Florida Atlantic University

    Tuesday, June 18, 2019 | 1:15 PM PDT | 99/1927 von Neumann

    Abstract

    Elliptic curve groups have been a popular choice in the implementation of traditional and post-quantum cryptographic schemes, including Diffie-Hellman type key-exchange protocols and digital signature algorithms. While some of these applications require performing multidimensional scalar point multiplication (d-Mul), others enjoy extra speed-ups from utilizing d-Mul with precomputation or with efficiently computable endomorphisms. When the underlying scalars in d-Mul are secret, it is crucial that the implementation follow a regular pattern of operations, because otherwise the scalars may be recovered through side-channel attacks. It has therefore been of great interest to design efficient and secure scalar multiplication algorithms. In this talk, I will give a survey of d-Mul algorithms along with some recent theoretical and implementation results.
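
    For concreteness, the classic Straus-Shamir trick computes a two-dimensional multiplication a·P + b·Q in a single double-and-add pass, one doubling per bit. The Python sketch below uses integers mod q as a stand-in for curve points (my illustration; it is not side-channel protected, which is precisely the issue regular algorithms address).

        # Straus-Shamir 2-dimensional scalar multiplication: one shared doubling per
        # bit plus one table lookup, instead of two separate double-and-add passes.
        q = 10007
        P, Q = 1234, 5678       # toy "points" in an additive group
        PQ = (P + Q) % q        # precomputed P + Q

        def shamir_mul(a, b):
            table = {(0, 0): 0, (1, 0): P, (0, 1): Q, (1, 1): PQ}
            R = 0
            for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
                R = (2 * R) % q                                    # one doubling serves both scalars
                R = (R + table[((a >> i) & 1, (b >> i) & 1)]) % q  # joint digit lookup
            return R

        a, b = 3141, 2718
        assert shamir_mul(a, b) == (a * P + b * Q) % q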

    Biography

    Koray Karabina’s research interests are in the areas of design of new cryptographic primitives and algorithms, efficient implementation, and cryptanalysis. He received his Ph.D. degree in Combinatorics and Optimization from the University of Waterloo (UW) in 2010. Koray is currently working as an associate professor in the Department of Mathematical Sciences at Florida Atlantic University (FAU), and he is a member of the Center for Cryptology and Information Security at FAU. Koray’s current research in biometrics is supported by the National Science Foundation, and his research in elliptic curve cryptography is supported by the Army Research Office and the National Institute of Standards and Technology.

  • SIKE in Hardware for IoT

    Reza Azarderakhsh, Florida Atlantic University

    Monday, June 17, 2019 | 3:00 PM PDT | 99/1915 Hopper

    Abstract

    Supersingular Isogeny Key Encapsulation (SIKE) is one of the candidates submitted to NIST’s post-quantum cryptography standardization process. SIKE is the only key encapsulation mechanism based on isogenies of elliptic curves submitted to the process, and it offers the smallest key sizes of all the candidates. In this talk, we will start with the basics of isogeny-based cryptography and the underlying finite field arithmetic, and describe the SIKE algorithm. We will discuss hardware architectures for secret-kernel computations for isogeny maps based on various curves, and give a higher-level arithmetic overview of the isogeny evaluations and isogeny computations that SIKE requires. More specifically, we will discuss the isogeny graphs of both Alice and Bob and show how the two parties walk on supersingular isogeny graphs and end up with a shared secret. We will also present hardware architectures and illustrate how the higher-level protocol can be translated into RTL and programmed onto FPGAs. We will provide performance and area-usage results and compare them to the other quantum-safe candidates on similar platforms. Finally, we will discuss applications in the IoT world and the feasibility of SIKE there.

    Biography

    Dr. Reza Azarderakhsh is an assistant professor and I-SENSE Fellow in the Department of Computer Science and Engineering at Florida Atlantic University. He received his PhD in Computer Engineering from Western University, Canada. After that, he was a post-doctoral research fellow at the Center for Applied Cryptographic Research, Department of Combinatorics and Optimization, at the University of Waterloo, where he is currently affiliated as a supervising member of the CryptoWorks21 program. Dr. Azarderakhsh serves as an Associate Editor of IEEE Transactions on Circuits and Systems (TCAS-I), Cryptographic Engineering track. He is an author or coauthor of more than 80 journal and conference papers in the area of applied cryptography and cryptographic engineering. His research has been supported by NSF, NIST, ARO, the NAVY, and Texas Instruments, among others. He has developed several algorithms and architectures for classical and post-quantum cryptography, including elliptic curve cryptography, isogeny-based cryptography, and lattice-based cryptography. He was a recipient of the prestigious NSERC Post-Doctoral Research Fellowship and the Texas Instruments Faculty Award.

  • Homomorphic Sorting with Better Scalability

    Gizem Cetin, WPI

    Thursday, June 13, 2019 | 10:30 AM PDT | 99/1927 von Neumann

    Abstract

    Homomorphic sorting is an operation that blindly sorts a given set of encrypted numbers without decrypting them (thus, there is no need for the secret key). In this work, we propose a new, efficient, and scalable method for homomorphic sorting of numbers: the polynomial rank sort algorithm. To put the new algorithm in a comparative perspective, we provide an extensive survey of classical sorting algorithms and networks that are not directly suitable for homomorphic computation. We also include in our discussion two of our previous algorithms specifically designed for the homomorphic sorting operation, direct sort and greedy sort, and explain how they evolve from classical sorting networks. We show theoretically that the new algorithm is superior in multiplicative depth to all the other algorithms. When a batched implementation is used, the number of comparisons is reduced from quadratic to linear in the set size, provided that the number of slots is larger than or equal to the number of elements in the set. Our software implementation results confirm that the new algorithm is several orders of magnitude faster than many methods in the literature and, to the best of our knowledge, scales better than the fastest algorithm in the literature, although for small sets the execution times are comparable. The proposed algorithm is also amenable to parallel implementation, as the most time-consuming operations in the algorithm can naturally be performed concurrently.
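
    In the clear, the rank-sort idea looks as follows (my sketch; the homomorphic algorithm evaluates these pairwise comparisons on ciphertexts, which is what keeps the multiplicative depth shallow):

        # Rank sort in the clear: an element's output position is its rank, computed
        # from all pairwise comparisons, with a stable tie-break for equal values.
        def rank_sort(xs):
            n = len(xs)
            ranks = [sum(1 for j in range(n)
                         if xs[j] < xs[i] or (xs[j] == xs[i] and j < i))
                     for i in range(n)]
            out = [None] * n
            for i, r in enumerate(ranks):
                out[r] = xs[i]
            return out

        assert rank_sort([5, 3, 5, 1]) == [1, 3, 5, 5]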

    Biography

    Gizem S. Cetin completed her BS (’12) and MS (’14) degrees in Computer Science and Engineering at Sabanci University. She was a member of Vernam Lab and recently received her PhD from the ECE Department at Worcester Polytechnic Institute. She spent the summer of 2016 as a research intern with the Cryptography Group at Microsoft Research in Redmond, WA. Gizem’s research interests include the software design of cryptosystems and somewhat, leveled, and fully homomorphic encryption, with a special focus on designing efficient algorithms for FHE/SWHE applications and their software implementations.

  • Lattice Attacks for Variants of LWE

    Shi Bai, Florida Atlantic University

    Tuesday, June 4, 2019 | 1:15 PM PDT | 99/1927 von Neumann

    Abstract

    The learning with errors (LWE) problem introduced by Regev (STOC’05) is one of the fundamental problems in lattice-based cryptography. It has been used extensively as a security foundation for public-key encryption, signatures, fully homomorphic encryption (FHE), pseudo-random functions (PRF), and many other primitives. One standard strategy to solve the LWE problem is to reduce it to a unique-SVP (uSVP) problem via Kannan’s embedding and then apply lattice reduction to solve the uSVP problem. In this talk, we will discuss and compare various lattice algorithms for solving LWE, and then give concrete estimates for breaking several variants of LWE (e.g., generic, small secrets, restricted samples). At the end, we will discuss some recent developments in algorithms for solving LWE.
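
    For orientation, Kannan’s embedding reduces an LWE instance $(A, b = As + e \bmod q)$ to uSVP roughly as follows (standard material, stated informally): let $B_\Lambda$ be a basis of the $q$-ary lattice $\Lambda_q(A) = \{ As \bmod q : s \in \mathbb{Z}^n \}$ and run lattice reduction on

        $B' = \begin{pmatrix} B_\Lambda & 0 \\ b & M \end{pmatrix}$,

    whose unusually short vector $(b \mid M) - (As \mid 0) = (e \mid M)$ reveals the error $e$, and hence the secret, once the gap makes it essentially the unique shortest vector; $M$ is the embedding factor.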

    Biography

    Shi Bai is an assistant professor in the Department of Mathematical Sciences at Florida Atlantic University. His research fields are cryptography and computational number theory. He received his PhD degree in Computer Science from the Australian National University in 2012. He worked as a postdoctoral researcher at the University of Auckland (2013-2014) and then at the Ecole Normale Superieure de Lyon (2015-2016). He is currently interested in number-theoretic algorithms for the cryptanalysis of lattice-based cryptography.

  • Towards Developer-Centered Cryptographic Ecosystems

    Sascha Fahl, Leibniz University Hannover

    Thursday, May 23, 2019 | 2:00 PM PDT | 99/1915 Hopper

    Abstract

    Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable.

    In this talk, I will present some of my team’s work that addresses specific challenges around making cryptography easier to use for developers. I will discuss multiple studies with developers that focused on different aspects of cryptographic ecosystems, including API usability, the use of documentation, and a study on supporting developers inside their IDE.

    Biography

    Sascha Fahl is a Professor for Usable Security and Privacy at Leibniz University Hannover in Germany. Previously, he was Professor at Ruhr University Bochum, Germany, held the chair for Information Security at Leibniz University Hannover and was an independent research group leader at CISPA, Saarland University. Prof. Fahl studied Computer Science at Philipps University Marburg and received a Ph.D. in Computer Science. He worked with the Chrome Security team and was a researcher at Fraunhofer FKIE in Bonn. His research won the NSA’s best Scientific Cybersecurity Paper Competition, he received a Google Faculty Research Award and is a recipient of the Heinz Maier-Leibnitz Prize 2018.

  • Efficient Lattice Trapdoor Sampling and Its Applications

    Yuriy Polyakov, NJIT

    Wednesday, April 23, 2019 | 1:00 PM PDT | 99/1927 von Neumann

    Abstract

    Lattice trapdoors are used in a wide array of lattice-based cryptographic primitives, including identity-based encryption, attribute-based encryption, functional encryption, program obfuscation, and many others. The main computational bottleneck for many of these cryptographic primitives is lattice trapdoor sampling, which is the key reason why many of these constructions have been impractical. This talk presents efficient Residue Number System (RNS) algorithms for lattice trapdoor sampling and discusses their implementation in the PALISADE library. Our state-of-the-art performance results for several applications based on these algorithms are also presented, including the Gentry-Peikert-Vaikuntanathan digital signature and identity-based encryption schemes, ciphertext-policy and key-policy attribute-based encryption, and obfuscation of conjunction and general branching programs. Our results suggest that some of these implementations are already practical.

    Biography

    Yuriy Polyakov is an Associate Research Professor at the NJIT Department of Computer Science & Cybersecurity Research Center. His primary research interests are applied lattice-based cryptography, fully homomorphic encryption, program obfuscation, secure genome analysis, and secure computer systems. He is a co-founder of and core contributor to the open-source PALISADE Lattice Cryptography Library. Prior to joining NJIT, he worked as a research scientist at MIT CSAIL and held several senior-level industrial positions. His earlier research dealt with time series analysis using statistical physics methods and mathematical modeling of membrane technology processes. He received an M.S. in Computer Science from NJIT (2003), a PhD in Chemical Engineering from Moscow Polytechnic University (2004), and a D.Sc. (Dr. Habil.) in Physics & Mathematics from the Karpov Institute of Physical Chemistry (2007). He is a recipient of the Moscow Mayor’s Young Scientist Award (2005).

  • Strong Asymmetric PAKE based on Trapdoor CKEM

    Tatiana Bradley, University of California, Irvine

    Wednesday, April 17, 2019 | 11:00 AM – 12:00 PM PDT | 99/1915 Hopper

    Abstract

    Password-Authenticated Key Exchange (PAKE) protocols allow two parties that share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) adapts this notion to the common client-server setting, where the server stores a one-way hash of the password instead of the password itself, and server compromise allows the adversary to recover the password only via the (inevitable) offline dictionary attack. Most aPAKE protocols, however, allow an attacker to pre-compute a dictionary of hashes, thus instantly learning the user password on server compromise.
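
    The pre-computation gap can be seen in a few lines (illustrative Python; real deployments use slow, memory-hard hashes, and in a strong aPAKE the salt itself stays hidden inside the protocol until compromise):

        # With an unsalted hash the dictionary can be hashed before the breach;
        # with a secret salt, all dictionary work must start after compromise.
        import hashlib, os

        dictionary = ["123456", "hunter2", "correct horse"]
        password = "hunter2"

        # Unsalted file H(pw): the attacker's table exists *before* the breach.
        stolen = hashlib.sha256(password.encode()).hexdigest()
        precomputed = {hashlib.sha256(w.encode()).hexdigest(): w for w in dictionary}
        print("instant recovery:", precomputed[stolen])

        # Salted file (salt, H(salt || pw)): no useful table can be built in
        # advance; dictionary hashing can only start once the salt leaks.
        salt = os.urandom(16)
        stolen_salted = hashlib.sha256(salt + password.encode()).hexdigest()
        recovered = next(w for w in dictionary
                         if hashlib.sha256(salt + w.encode()).hexdigest() == stolen_salted)
        print("post-breach recovery:", recovered)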

    Recently, Jarecki, Krawczyk, and Xu formalized a (Universally Composable) strong aPAKE (saPAKE) notion, which requires the password hash to be salted so that all dictionary attack computation must happen after the server compromise leaks the salt and the salted hash. They propose a protocol called OPAQUE, which requires 3 rounds, 3-4 exponentiations per party, and an interactive One-More Diffie-Hellman assumption in the Random Oracle Model (ROM) for security.

    We propose an alternative UC strong asymmetric PAKE construction based on a novel use of the encryption+SPHF paradigm for UC PAKE design. Though it also uses ROM, our protocol compares favorably to OPAQUE: it has only two rounds, uses fewer exponentiations, and is based on different assumptions, namely q-Strong Diffie-Hellman (q-SDH) and the assumptions required to show that the Boneh-Boyen signature function is a Salted Tight One-Way Function (STOWF).

    We formalize a UC model for STOWF and analyze the Boneh-Boyen function as UC STOWF in the generic group model (GGM) and ROM. Our saPAKE also uses an implicit-statement trapdoor Conditional Key Encapsulation Mechanism (CKEM), a generalization of SPHF, which forms an extractable commitment to the hashed statement in ROM. This strengthening of SPHF allows for a UC (sa)PAKE design where only the client commits to its password, and only the server performs an SPHF, compared to the standard UC PAKE design paradigm where the encrypt+SPHF subroutine is used symmetrically by both parties.

    This talk is based on joint work with Stanislaw Jarecki and Jiayu Xu that is currently under review at CRYPTO 2019.

    Biography

    Tatiana Bradley is a PhD candidate in Computer Science at UC Irvine. She received a Bachelor’s degree in Mathematics from Scripps College in 2015. Her research focus is applied cryptography, and since joining Stanislaw Jarecki’s group at UCI she has worked on designing password-authenticated protocols. She has interned at IBM Research in Zurich and will be interning at Google this summer.

  • Fully Bideniable Interactive Encryption

    Oxana Poburinnaya, Boston University

    Tuesday, March 19, 2019 | 1:30 PM – 2:45 PM PDT | 99/1919 Turing

    Abstract

    While standard encryption guarantees secrecy of the encrypted plaintext only against an attacker that has no knowledge of the communicating parties’ keys and encryption randomness, deniable encryption [Canetti et al., Crypto’96] provides the additional guarantee that the plaintext remains secret even in the face of entities that attempt to coerce (or bribe) the communicating parties into exposing their internal states, including the plaintexts, keys, and randomness. To achieve this guarantee, deniable encryption equips the parties with faking algorithms that allow them to generate fake keys and randomness which make the ciphertext appear consistent with any plaintext of the parties’ choice. To date, however, only partial results have been known: either deniability against coercing only the sender, or against coercing only the receiver [Sahai-Waters, STOC ‘14], or schemes satisfying weaker notions of deniability [O’Neill et al., Crypto ‘11].

    In this paper we present the first fully bideniable interactive encryption scheme, thus resolving the 20-year-old open problem. Our scheme also provides a new, additional guarantee: even if the sender claims that one plaintext was used and the receiver claims a different one, the adversary has no way of figuring out who is lying, whether the sender, the receiver, or both. This property, which we call off-the-record deniability, is useful when the parties don’t have the means to agree on what fake plaintext to claim, or when one party defects against the other. Our protocol has three messages, which is optimal [Bendlin et al., Asiacrypt’11], and needs a globally available reference string. We assume subexponential indistinguishability obfuscation (IO) and one-way functions.

    Biography

    Oxana is a 6th year PhD student at Boston University, working in theoretical cryptography with Professor Ran Canetti. Prior to joining BU, she studied mathematics at Moscow State University.