The GNUnet Bibliography | Selected Papers in Meshnetworking



Publications by date

2018

Decentralized Authentication for Self-Sovereign Identities using Name Systems (PDF)
by Christian Grothoff, Martin Schanzenbach, Annett Laube, and Emmanuel Benoist.
Technical report, October 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The GNU Name System (GNS) is a fully decentralized public key infrastructure and name system with private information retrieval semantics. It offers a holistic approach for interacting seamlessly with IoT ecosystems and enables people and their smart objects to prove their identity, membership and privileges while remaining compatible with existing technologies. In this report we demonstrate how a wide range of private authentication and identity management scenarios are addressed by GNS in a cost-efficient, usable and secure manner. This simple, secure and privacy-friendly authentication method is a significant breakthrough wherever cyber peace, privacy and liability are priorities for the benefit of a wide range of the population. After an introduction to GNS itself, we show how GNS can be used to authenticate servers, replacing the Domain Name System (DNS) and X.509 certificate authorities (CAs) with a more privacy-friendly but equally usable protocol that is trustworthy, human-centric and includes group authentication. We also built a demonstrator to highlight how GNS can be used in medical computing to simplify privacy-sensitive data processing in the Swiss health-care system. Combining GNS with attribute-based encryption, we created ReclaimID, a robust and reliable OpenID Connect-compatible authorization system. It includes simple, secure and privacy-friendly single sign-on to seamlessly share selected attributes with Web services and cloud ecosystems. Further, we demonstrate how ReclaimID can be used to solve the problem of addressing, authentication and data sharing for IoT devices. These applications are just the beginning for GNS; the versatility and extensibility of the protocol will lend itself to an even broader range of use cases. GNS is an open standard with a complete free software reference implementation created by the GNU project.
It can therefore be easily audited, adapted, enhanced, tailored, developed and/or integrated, as anyone is allowed to use the core protocols and implementations free of charge, and to adapt them to their needs under the terms of the GNU Affero General Public License, a free software license approved by the Free Software Foundation.
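The name-resolution idea behind GNS can be sketched in a few lines: both the storage lookup key and the record encryption key are derived from the zone key and the label, so the storage layer never sees plaintext names or records. The sketch below is a simplification under stated assumptions — the HMAC and XOR constructions are illustrative stand-ins, not the actual GNS key blinding or record cipher.

```python
import hashlib
import hmac
import os

def derive_query(zone_pub: bytes, label: str) -> bytes:
    # The DHT lookup key depends on both the zone key and the label, so
    # observers of DHT traffic cannot enumerate names they do not already
    # know (illustrative stand-in for GNS's actual key blinding).
    return hmac.new(zone_pub, label.encode("utf-8"), hashlib.sha512).digest()

def derive_record_key(zone_pub: bytes, label: str) -> bytes:
    # Records are encrypted under a key derivable only from (zone, label).
    return hashlib.sha256(b"enc" + zone_pub + label.encode("utf-8")).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher for illustration only -- NOT the real GNS cipher.
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + ctr.to_bytes(4, "big")).digest())
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Publisher: place an encrypted record in the DHT under a blinded key.
zone = os.urandom(32)  # stand-in for a zone public key
dht = {derive_query(zone, "www"): xor_stream(derive_record_key(zone, "www"),
                                             b"A 192.0.2.1")}

# Resolver: knowing (zone, "www"), recompute the query and record key.
record = xor_stream(derive_record_key(zone, "www"),
                    dht[derive_query(zone, "www")])
print(record)  # b'A 192.0.2.1'
```

Note how a resolver that does not know the label cannot even compute the lookup key, which is the private-information-retrieval flavour mentioned in the abstract.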

reclaimID: Secure, Self-Sovereign Identities using Name Systems and Attribute-Based Encryption
by M. Schanzenbach, G. Bramm, and J. Schütte.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present reclaimID: an architecture that allows users to reclaim their digital identities by securely sharing identity attributes without the need for a centralised service provider. We propose a design where user attributes are stored in and shared over a name system under user-owned namespaces. Attributes are encrypted using attribute-based encryption (ABE), allowing the user to selectively authorize and revoke access of requesting parties to subsets of their attributes. We present an implementation based on the decentralised GNU Name System (GNS) in combination with ciphertext-policy ABE using type-1 pairings. To show the practicality of our implementation, we carried out experimental evaluations of selected implementation aspects including attribute resolution performance. Finally, we show that our design can be used as a standard OpenID Connect Identity Provider allowing our implementation to be integrated into standard-compliant services.
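The core mechanism — selective, revocable sharing of encrypted attributes from a user-owned namespace — can be illustrated with a toy model in which each attribute gets its own symmetric key, and an authorization "ticket" is simply the subset of keys handed to the requesting party. This is a deliberate simplification of the paper's ciphertext-policy ABE; the toy XOR cipher and all names below are illustrative only.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher for illustration only -- not real ABE.
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + ctr.to_bytes(4, "big")).digest())
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

class AttributeStore:
    # User-owned namespace of encrypted attributes (stand-in for GNS).
    def __init__(self):
        self._keys = {}      # attribute name -> symmetric key (kept private)
        self.published = {}  # attribute name -> ciphertext (public)

    def publish(self, name: str, value: str) -> None:
        key = os.urandom(32)
        self._keys[name] = key
        self.published[name] = keystream_xor(key, value.encode())

    def authorize(self, names):
        # A "ticket" for a requesting party: keys for the chosen subset only.
        return {n: self._keys[n] for n in names}

    def revoke(self, name: str, value: str) -> None:
        # Rotate the key: previously issued tickets can no longer decrypt.
        self.publish(name, value)

def read_attribute(store, ticket, name):
    return keystream_xor(ticket[name], store.published[name]).decode()

alice = AttributeStore()
alice.publish("email", "alice@example.org")
alice.publish("phone", "+41-00-000-0000")
ticket = alice.authorize(["email"])            # share the e-mail only
print(read_attribute(alice, ticket, "email"))  # alice@example.org
```

Because the requesting party holds keys only for authorized attributes, sharing is asynchronous: the user can be offline while the party resolves and decrypts the published records.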

Toward secure name resolution on the internet
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In Computers & Security, 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) provides crucial name resolution functions for most Internet services. As a result, DNS traffic provides an important attack vector for mass surveillance, as demonstrated by the QUANTUMDNS and MORECOWBELL programs of the NSA. This article reviews how DNS works and describes security considerations for next generation name resolution systems. We then describe DNS variations and analyze their impact on security and privacy. We also consider Namecoin, the GNU Name System and RAINS, which are more radical re-designs of name systems in that they both radically change the wire protocol and also eliminate the existing global consensus on TLDs provided by ICANN. Finally, we assess how the different systems stack up with respect to the goal of improving security and privacy of name resolution for the future Internet

2017

The GNUnet System
by Christian Grothoff.
Habilitation à diriger des recherches, Université de Rennes 1, December 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet. This habilitation provides an overview of the GNUnet architecture, including the development process, the network architecture and the software architecture. The goal of Part 1 is to provide an overview of how the various parts of the project work together today, and to then give ideas for future directions. The text is a first attempt to provide this kind of synthesis, and in turn does not go into extensive technical depth on any particular topic. Part 2 then gives selected technical details based on eight publications covering many of the core components. This is a harsh selection; on the GNUnet website there are more than 50 published research papers and theses related to GNUnet, providing extensive and in-depth documentation. Finally, Part 3 gives an overview of current plans and future work.

Improving Voice over GNUnet (PDF)
by Christian Ulrich.
Bachelor's thesis, TU Berlin, July 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In contrast to ubiquitous cloud-based solutions the telephony application GNUnet conversation provides fully-decentralized, secure voice communication and thus impedes mass surveillance. The aim of this thesis is to investigate why GNUnet conversation currently provides poor Quality of Experience under typical wide area network conditions and to propose optimization measures. After network shaping and the initialization of two isolated GNUnet peers had been automated, delay measurements were done. With emulated network characteristics network delay, cryptography delays and audio codec delays were measured and transmitted speech was recorded. An analysis of the measurement results and a subjective assessment of the speech recordings revealed that extreme outliers occur in most scenarios and impair QoE. Moreover it was shown that GNUnet conversation introduces a large delay that confines the environment in which good QoE is possible. In the measurement environment a minimum delay of 23 ms always occurred, large parts of which were caused by cryptography. It was shown that optimizations in the cryptography part and other components are possible. Finally the conditions for currently reaching good QoE were determined and ideas for further investigations were presented.

Implementing Privacy Preserving Auction Protocols (PDF)
by Markus Teich.
Ph.D. thesis, Technische Universität München, February 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this thesis we translate Brandt's privacy preserving sealed-bid online auction protocol from RSA to elliptic curve arithmetic and analyze the theoretical and practical benefits. With Brandt's protocol, the auction outcome is completely resolved by the bidders and the seller without the need for a trusted third party. Losing bids are not revealed to anyone. We present libbrandt, our implementation of four algorithms with different outcome and pricing properties, and describe how they can be incorporated in a real-world online auction system. Our performance measurements show a reduction of computation time and prospective bandwidth cost of over 90% compared to an implementation of the RSA version of the same algorithms. We also evaluate how libbrandt scales in different dimensions and conclude that the system we have presented is promising with respect to an adoption in the real world.

2016

Enabling Secure Web Payments with GNU Taler (PDF)
by Jeffrey Burdges, Florian Dold, Christian Grothoff, and Marcello Stanisci.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

GNU Taler is a new electronic online payment system which provides privacy for customers and accountability for merchants. It uses an exchange service to issue digital coins using blind signatures, and is thus not subject to the performance issues that plague Byzantine fault-tolerant consensus-based solutions. The focus of this paper is addressing the challenges payment systems face in the context of the Web. We discuss how to address Web-specific challenges, such as handling bookmarks and sharing of links, as well as supporting users that have disabled JavaScript. Web payment systems must also navigate various constraints imposed by modern Web browser security architecture, such as same-origin policies and the separation between browser extensions and Web pages. While our analysis focuses on how Taler operates within the security infrastructure provided by the modern Web, the results partially generalize to other payment systems. We also include the perspective of merchants, as existing systems have often struggled with securing payment information at the merchant's side. Here, challenges include avoiding database transactions for customers that do not actually go through with the purchase, as well as cleanly separating security-critical functions of the payment system from the rest of the Web service
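The blind-signature idea at the heart of Taler's coin issuance can be shown with the classic RSA construction: the customer blinds the coin's hash before the exchange signs it, so the exchange never links the signature it later sees during spending to the withdrawal. The toy key size below is for illustration only; real deployments use large keys and full-domain hashing.

```python
import hashlib
import math
import random

# Toy RSA key (illustration only -- far too small for real use).
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Customer: blind the coin's hash with a random factor r.
coin = b"coin-public-key"
m = h(coin)
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Exchange: signs the blinded value without ever learning m.
blind_sig = pow(blinded, d, n)

# Customer: unblind to obtain a valid ordinary signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
print(pow(sig, e, n) == m)  # True -- anyone can verify the coin
```

The unblinding works because (m·r^e)^d = m^d·r (mod n), so multiplying by r⁻¹ leaves exactly m^d, the standard RSA signature on m.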

Privacy-Preserving Abuse Detection in Future Decentralised Online Social Networks (PDF)
by Álvaro García-Recuero, Jeffrey Burdges, and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Future online social networks need to not only protect sensitive data of their users, but also protect them from abusive behavior coming from malicious participants in the network. We investigate the use of supervised learning techniques to detect abusive behavior and describe privacy-preserving protocols to compute the feature set required by abuse classification algorithms in a secure and privacy-preserving way. While our method is not yet fully resilient against a strong adaptive adversary, our evaluation suggests that it will be useful to detect abusive behavior with a minimal impact on privacy

Managing and Presenting User Attributes over a Decentralized Secure Name System
by Martin Schanzenbach and Christian Banse.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, user attributes are managed at centralized identity providers. However, two centralized identity providers dominate digital identity and access management on the web. This is increasingly becoming a privacy problem in times of mass surveillance and data mining for targeted advertisement. Existing systems for attribute sharing or credential presentation either rely on a trusted third party service or require the presentation to be online and synchronous. In this paper we propose a concept that allows the user to manage and share his attributes asynchronously with a requesting party using a secure, decentralized name system

Byzantine Set-Union Consensus using Efficient Set Reconciliation (PDF)
by Florian Dold and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Applications of secure multiparty computation such as certain electronic voting or auction protocols require Byzantine agreement on large sets of elements. Implementations proposed in the literature so far have relied on state machine replication, and reach agreement on each individual set element in sequence. We introduce set-union consensus, a specialization of Byzantine consensus that reaches agreement over whole sets. This primitive admits an efficient and simple implementation by the composition of Eppstein's set reconciliation protocol with Ben-Or's ByzConsensus protocol. A free software implementation of this construction is available in GNUnet. Experimental results indicate that our approach results in an efficient protocol for very large sets, especially in the absence of Byzantine faults. We show the versatility of set-union consensus by using it to implement distributed key generation, ballot collection and cooperative decryption for an electronic voting protocol implemented in GNUnet
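Eppstein-style set reconciliation, the building block named in the abstract, can be sketched with an invertible Bloom lookup table (IBLT): two peers exchange small tables, subtract them, and "peel" out the symmetric difference, with cost proportional to the difference rather than the set size. The cell counts and segment layout below are illustrative choices, and the Ben-Or-style consensus layer the paper composes this with is omitted.

```python
import hashlib

K = 3  # hash functions; each maps into its own segment of cells

def _cell(key: int, i: int, seg: int) -> int:
    digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
    return i * seg + int.from_bytes(digest[:8], "big") % seg

def _chk(key: int) -> int:
    # Checksum used to recognize "pure" cells during peeling.
    return int.from_bytes(hashlib.sha256(f"c:{key}".encode()).digest()[:8], "big")

class IBLT:
    def __init__(self, m: int = 96):
        assert m % K == 0
        self.m, self.seg = m, m // K
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def _apply(self, key: int, delta: int) -> None:
        for i in range(K):
            c = _cell(key, i, self.seg)
            self.count[c] += delta
            self.key_sum[c] ^= key
            self.chk_sum[c] ^= _chk(key)

    def insert(self, key: int) -> None:
        self._apply(key, 1)

    def subtract(self, other: "IBLT") -> "IBLT":
        # The difference table encodes only the symmetric difference,
        # so its size is independent of the size of the full sets.
        out = IBLT(self.m)
        for c in range(self.m):
            out.count[c] = self.count[c] - other.count[c]
            out.key_sum[c] = self.key_sum[c] ^ other.key_sum[c]
            out.chk_sum[c] = self.chk_sum[c] ^ other.chk_sum[c]
        return out

    def decode(self):
        # Repeatedly peel "pure" cells holding exactly one leftover element.
        only_a, only_b = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] in (1, -1) and _chk(self.key_sum[c]) == self.chk_sum[c]:
                    key, sign = self.key_sum[c], self.count[c]
                    (only_a if sign == 1 else only_b).add(key)
                    self._apply(key, -sign)
                    progress = True
        return only_a, only_b

a, b = IBLT(), IBLT()
for k in (1, 2, 3, 100):
    a.insert(k)
for k in (2, 3, 4):
    b.insert(k)
print(a.subtract(b).decode())  # elements only in a, elements only in b
```

Decoding succeeds with high probability as long as the table has a few cells per element of the symmetric difference, which is what makes the approach efficient for very large but mostly-agreeing sets.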

GNUnet und Informationsmacht: Analyse einer P2P-Technologie und ihrer sozialen Wirkung (PDF)
by Christian Ricardo Kühne.
Diplomarbeit, Humboldt-Universität zu Berlin, April 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis studies the GNUnet project comprising its history, ideas and the P2P network technology. It specifically investigates the question of emancipatory potentials with regard to forms of information power due to a widely deployed new Internet technology and tries to identify essential suspensions of power within the scope of an impact assessment. Moreover, we will see by contrasting the GNUnet project with the critical data protection project, founded on social theory, that both are heavily concerned about the problem of illegitimate and unrestrained information power, giving us additional insights for the assessment. Last but not least I will try to present a scheme of how both approaches may interact to realize their goals.

Zur Idee herrschaftsfreier kooperativer Internetdienste (PDF)
by Christian Ricardo Kühne.
In FIfF-Kommunikation, 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)

2015

Byzantine Fault Tolerant Set Consensus with Efficient Set Reconciliation (PDF)
by Florian Dold.
Master's thesis, Technische Universität München, December 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Byzantine consensus is a fundamental and well-studied problem in the area of distributed systems. It requires a group of peers to reach agreement on some value, even if a fraction of the peers is controlled by an adversary. This thesis proposes set union consensus, an efficient generalization of Byzantine consensus from single elements to sets. This is practically motivated by Secure Multiparty Computation protocols such as electronic voting, where a large set of elements must be collected and agreed upon. Existing practical implementations of Byzantine consensus are typically based on state machine replication and not well-suited for agreement on sets, since they must process individual agreements on all set elements in sequence. We describe and evaluate our implementation of set union consensus in GNUnet, which is based on a composition of Eppstein's set reconciliation protocol with the simple gradecast consensus protocol described by Ben-Or.

A Secure and Resilient Communication Infrastructure for Decentralized Networking Applications (PDF)
by Matthias Wachs.
Ph.D. thesis, Technische Universität München, February 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis provides the design and implementation of a secure and resilient communication infrastructure for decentralized peer-to-peer networks. The proposed communication infrastructure tries to overcome limitations to unrestricted communication on today's Internet and has the goal of re-establishing unhindered communication between users. With the GNU name system, we present a fully decentralized, resilient, and privacy-preserving alternative to DNS and existing security infrastructures

NSA's MORECOWBELL: Knell for DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Le programme MORECOWBELL de la NSA : Sonne le glas du DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Ludovic Courtès.
January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Il programma MORECOWBELL della NSA: Campane a morto per il DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Luca Saiu.
January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

El programa MORECOWBELL de la NSA: Doblan las campanas para el DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

2014

A Decentralized and Autonomous Anomaly Detection Infrastructure for Decentralized Peer-to-Peer Networks (PDF)
by Omar Tarabai.
Master's thesis, October 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In decentralized networks, collecting and analysing information from the network is useful for developers and operators to monitor the behaviour and detect anomalies such as attacks or failures in both the overlay and underlay networks. But such an infrastructure is hard to realize due to the decentralized nature of the network, especially if the anomaly occurs on systems not operated by developers or participants get separated from the collection points. In this thesis a decentralized monitoring infrastructure using a decentralized peer-to-peer network is developed to collect information and detect anomalies in a collaborative way, without coordination by and in the absence of a centralized infrastructure, and to report detected incidents to a monitoring infrastructure. We start by introducing background information about peer-to-peer networks, anomalies and anomaly detection techniques in the literature. We then present related work regarding monitoring decentralized networks, anomaly detection and data aggregation in decentralized networks. Next we analyse the system objectives, the target environment and the desired properties of the system, and design the system in terms of its overall structure and individual components. We follow with details about the system implementation. Lastly, we evaluate the final system implementation against our desired objectives.
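A minimal flavour of the detection step can be given with a robust outlier test over metrics gathered from peers. The thesis's gossip-based, coordinator-free aggregation is replaced here by a plain dictionary of reports, so this is a conceptual sketch only; the metric and threshold are illustrative assumptions.

```python
from statistics import median

def detect_anomalies(reports: dict, k: float = 5.0) -> set:
    # Robust outlier test: flag peers whose metric deviates from the
    # group median by more than k median-absolute-deviations (MAD).
    # Median/MAD resist distortion by the very outliers we hunt for.
    values = list(reports.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0
    return {peer for peer, v in reports.items() if abs(v - med) / mad > k}

# Example: per-peer connection counts; "p5" is failing or under attack.
reports = {"p1": 100, "p2": 104, "p3": 98, "p4": 101, "p5": 990}
print(detect_anomalies(reports))  # {'p5'}
```

In the decentralized setting the median and MAD would themselves be computed collaboratively (e.g. via gossip aggregation) rather than at a single collection point.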

Automatic Transport Selection and Resource Allocation for Resilient Communication in Decentralised Networks (PDF)
by Matthias Wachs, Fabian Oehlmann, and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Making communication more resilient is a main focus for modern decentralised networks. A current development to increase connectivity between participants and to be resilient against service degradation attempts is to support different communication protocols, and to switch between these protocols in case degradation or censorship are detected. Supporting multiple protocols with different properties and having to share resources for communication with multiple partners creates new challenges with respect to protocol selection and resource allocation to optimally satisfy the applications' requirements for communication. This paper presents a novel approach for automatic transport selection and resource allocation with a focus on decentralised networks. Our goal is to evaluate the communication mechanisms available for each communication partner and then allocate resources in line with the requirements of the applications. We begin by detailing the overall requirements for an algorithm for transport selection and resource allocation, and then compare three different solutions using (1) a heuristic, (2) linear optimisation, and (3) machine learning. To show the suitability and the specific benefits of each approach, we evaluate their performance with respect to usability, scalability and quality of the solution found in relation to application requirements
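The heuristic variant (solution 1 in the abstract) can be sketched as: pick the best-scoring address per peer, then split the bandwidth quota in proportion to application-assigned preferences. The data layout, scoring by latency, and all names below are illustrative assumptions, not the paper's actual ATS implementation.

```python
def select_and_allocate(peers: dict, total_kbps: float):
    # For each peer, choose the lowest-latency address among the
    # transports that currently work, then divide the global bandwidth
    # quota proportionally to the application's preference per peer.
    chosen, alloc = {}, {}
    total_pref = sum(p["preference"] for p in peers.values()) or 1.0
    for name, info in peers.items():
        chosen[name] = min(info["addresses"], key=lambda a: a["latency_ms"])
        alloc[name] = total_kbps * info["preference"] / total_pref
    return chosen, alloc

# Hypothetical example: two peers, multiple candidate transports.
peers = {
    "alice": {"preference": 3.0, "addresses": [
        {"transport": "tcp", "latency_ms": 40},
        {"transport": "udp", "latency_ms": 25}]},
    "bob": {"preference": 1.0, "addresses": [
        {"transport": "https", "latency_ms": 90}]},
}
chosen, alloc = select_and_allocate(peers, 1000)
print(chosen["alice"]["transport"], alloc["alice"])  # udp 750.0
```

The linear-optimisation and machine-learning variants from the paper replace this one-shot scoring with a global optimum and with learned strategies, respectively.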

An Approach for Home Routers to Securely Erase Sensitive Data (PDF)
by Nicolas Beneš.
Bachelor Thesis, Technische Universität München, October 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Home routers are always-on low power embedded systems and part of the Internet infrastructure. In addition to the basic router functionality, they can be used to operate sensitive personal services, such as for private web and email servers, secure peer-to-peer networking services like GNUnet and Tor, and encrypted network file system services. These services naturally involve cryptographic operations with the cleartext keys being stored in RAM. This makes router devices possible targets to physical attacks by home intruders. Attacks include interception of unprotected data on bus wires, alteration of firmware through exposed JTAG headers, or recovery of cryptographic keys through the cold boot attack. This thesis presents Panic!, a combination of open hardware design and free software to detect physical integrity attacks and to react by securely erasing cryptographic keys and other sensitive data from memory. To improve auditability and to allow cheap reproduction, the components of Panic! are kept simple in terms of conceptual design and lines of code. First, the motivation to use home routers for services besides routing and the need to protect their physical integrity is discussed. Second, the idea and functionality of the Panic! system is introduced and the high-level interactions between its components explained. Third, the software components to be run on the router are described. Fourth, the requirements of the measurement circuit are declared and a prototype is presented. Fifth, some characteristics of pressurized environments are discussed and the difficulties for finding adequate containments are explained. Finally, an outlook to tasks left for the future is given

Experimental comparison of Byzantine fault tolerant distributed hash tables (PDF)
by Supriti Singh.
Master's thesis, Saarland University, September 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are a key data structure for the construction of peer-to-peer systems. They provide an efficient way to distribute the storage and retrieval of key-data pairs among the participating peers. DHTs should be scalable, robust against churn and resilient to attacks. X-Vine is a DHT protocol which offers security against Sybil attacks. All communication among peers is performed over social network links, with the presumption that a friend can be trusted. This trust can be extended to a friend of a friend. It uses the tested Chord ring topology as an overlay, which has been proven to be scalable and robust. The aim of the thesis is to experimentally compare two DHTs, R5N and X-Vine. GNUnet is a free software secure peer-to-peer framework which uses R5N. In this thesis, we present the implementation of X-Vine on GNUnet, and compare the performance of R5N and X-Vine.
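The Chord ring topology that X-Vine reuses can be sketched in a few lines: peers and keys hash onto the same identifier circle, and each key is stored on its successor. This is a conceptual stand-in only; it shows neither R5N's randomized routing nor X-Vine's routing over social links.

```python
import hashlib
from bisect import bisect_left

def ring_id(name: str, bits: int = 16) -> int:
    # Hash peers and keys onto the same circular identifier space.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big") % (1 << bits)

class ChordRing:
    # Minimal Chord-style overlay: a key lives on its successor, i.e.
    # the first peer clockwise from the key's ring identifier.
    def __init__(self, peers):
        self.by_id = {ring_id(p): p for p in peers}
        self.ids = sorted(self.by_id)

    def successor(self, key: str) -> str:
        i = bisect_left(self.ids, ring_id(key)) % len(self.ids)
        return self.by_id[self.ids[i]]

peers = ["peer-%d" % i for i in range(8)]
ring = ChordRing(peers)
print(ring.successor("some-key"))  # the peer responsible for this key
```

In a real Chord deployment each peer keeps only a logarithmic finger table instead of the full sorted list used here, giving O(log n) lookups.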

Improved Kernel-Based Port-Knocking in Linux (PDF)
by Julian Kirsch.
Master's thesis, August 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Port scanning is used to discover vulnerable services and launch attacks against network infrastructure. Port knocking is a well-known technique to hide TCP servers from port scanners. This thesis presents the design of TCP Stealth, a socket option to realize a new port knocking variant with improved security and usability compared to previous designs. TCP Stealth replaces the traditional random TCP sequence number with a token that authenticates the client and (optionally) the first bytes of the TCP payload. Clients and servers can enable TCP Stealth by explicitly setting a socket option or linking against a library that wraps existing network system calls. This thesis also describes Knock, a free software implementation of TCP Stealth for the Linux kernel, and libknockify, a shared library that wraps network system calls to activate Knock on GNU/Linux systems, allowing administrators to deploy Knock without recompilation. Finally, we present experimental results demonstrating that TCP Stealth is compatible with most existing middleboxes on the Internet.
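The token idea can be sketched as follows: the client derives its TCP initial sequence number from a pre-shared secret and the connection endpoints, and the server silently drops SYN packets whose ISN does not verify, so port scanners see a closed port. The derivation below is an illustrative simplification, not the exact TCP Stealth construction from the thesis.

```python
import hashlib

def stealth_isn(secret: bytes, src, dst, payload_hash: bytes = b"") -> int:
    # Derive a 32-bit authenticator to use as the TCP initial sequence
    # number; binds the knock to the endpoints and (optionally) to a
    # hash of the first payload bytes. Simplified from TCP Stealth.
    material = secret + repr(src).encode() + repr(dst).encode() + payload_hash
    return int.from_bytes(hashlib.sha256(material).digest()[:4], "big")

def server_accepts(secret: bytes, src, dst, isn: int) -> bool:
    # The server recomputes the expected ISN; a mismatch means the SYN
    # is dropped silently, exactly as for a closed port.
    return isn == stealth_isn(secret, src, dst)

secret = b"shared-knock-secret"              # hypothetical pre-shared key
src, dst = ("192.0.2.7", 51000), ("198.51.100.1", 22)
isn = stealth_isn(secret, src, dst)
print(server_accepts(secret, src, dst, isn))      # True
print(server_accepts(secret, src, dst, isn ^ 1))  # False
```

Because the authenticator travels in a field every TCP SYN already carries, the knock adds no extra packets and is hard for middleboxes to distinguish from an ordinary connection attempt.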

Cryptographically Secure, Distributed Electronic Voting (PDF)
by Florian Dold.
Bachelor's thesis, Technische Universität München, August 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Elections are a vital tool for decision-making in democratic societies. The past decade has witnessed a handful of attempts to apply modern technology to the election process in order to make it faster and more cost-effective. Most of the practical efforts in this area have focused on replacing traditional voting booths with electronic terminals, but did not attempt to apply cryptographic techniques able to guarantee critical properties of elections such as secrecy of ballot and verifiability. While such techniques were extensively researched in the past 30 years, practical implementations of cryptographically secure remote electronic voting schemes are not readily available. All existing implementations we are aware of either exhibit critical security flaws, are proprietary black-box systems, or require additional physical assumptions such as a preparatory key ceremony executed by the election officials. The latter makes such systems unusable for purely digital communities. This thesis describes the design and implementation of an electronic voting system in GNUnet, a framework for secure and decentralized networking. We provide a short survey of voting schemes and existing implementations. The voting scheme we implemented makes use of threshold cryptography, a technique which requires agreement among a large subset of the election officials to execute certain cryptographic operations. Since such protocols have applications outside of electronic voting, we describe their design and implementation in GNUnet separately.
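The threshold idea — that a quorum of election officials must cooperate before certain cryptographic operations succeed — rests on secret sharing. Below is a minimal Shamir sketch: any k of n shares reconstruct the secret, fewer reveal nothing. The thesis builds full distributed key generation and cooperative decryption on top of such primitives; the field size and parameters here are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime serving as the field modulus

def share(secret: int, k: int, n: int):
    # Shamir's scheme: hide the secret as the constant term of a random
    # degree-(k-1) polynomial; shares are points on that polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)       # 5 officials, any 3 suffice
print(reconstruct(shares[:3]) == 123456789)   # True
print(reconstruct(shares[1:4]) == 123456789)  # True
```

In a voting context the shared value would be a decryption key, so no single official (or small colluding group below the threshold) can decrypt ballots alone.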

Control Flow Analysis for Event-Driven Programs (PDF)
by Florian Scheibner.
Bachelor's thesis, Technical University of Munich, July 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Static analysis is often used to automatically check for common bugs in programs. Compilers already check for some common programming errors and issue warnings; however, they do not do a very deep analysis because this would slow the compilation of the program down. Specialized tools like Coverity or Clang Static Analyzer look at possible runs of a program and track the state of variables with respect to function calls. This information helps to identify possible bugs. In event-driven programs like GNUnet, callbacks are registered for later execution. Normal static analysis cannot track these function calls. This thesis is an attempt to extend different static analysis tools so that they can handle this case as well. Different solutions were considered and implemented with Coverity and Clang. This thesis describes the theoretical background of model checking and static analysis, the practical usage of widespread static analysis tools, and how these tools can be extended in order to improve their usefulness.

DP5: A Private Presence Service (PDF)
by Nikita Borisov, George Danezis, and Ian Goldberg.
In Centre for Applied Cryptographic Research (CACR), University of Waterloo, May 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The recent NSA revelations have shown that address book and buddy list information are routinely targeted for mass interception. As a response to this threat, we present DP5, a cryptographic service that provides privacy-friendly indication of presence to support real-time communications. DP5 allows clients to register and query the online presence of their list of friends while keeping this list secret. Besides presence, high-integrity status updates are supported, to facilitate key update and rendezvous protocols. While infrastructure services are required for DP5 to operate, they are designed to not require any long-term secrets and provide perfect forward secrecy in case of compromise. We provide security arguments for the indistinguishability properties of the protocol, as well as an evaluation of its performance

Numerical Stability and Scalability of Secure Private Linear Programming (PDF)
by Raphael Arias.
Bachelor's, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Linear programming (LP) has numerous applications in different fields. In some scenarios, e.g. supply chain master planning (SCMP), the goal is solving linear programs involving multiple parties reluctant to sharing their private information. In this case, methods from the area of secure multi-party computation (SMC) can be used. Secure multi-party versions of LP solvers have been known to be impractical due to high communication complexity. To overcome this, solutions based on problem transformation have been put forward. In this thesis, one such algorithm, proposed by Dreier and Kerschbaum, is discussed, implemented, and evaluated with respect to numerical stability and scalability. Results obtained with different parameter sets and different test cases are presented and some problems are exposed. It was found that the algorithm has some unforeseen limitations, particularly when implemented within the bounds of normal primitive data types. Random numbers generated during the protocol have to be extremely small so as to not cause problems with overflows after a series of multiplications. The number of peers participating additionally limits the size of numbers. A positive finding was that results produced when none of the aforementioned problems occur are generally quite accurate. We discuss a few possibilities to overcome some of the problems with an implementation using arbitrary precision numbers

Machine Learning for Bandwidth Management in Decentralized Networks (PDF)
by Fabian Oehlmann.
Master's thesis, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The successful operation of a peer-to-peer network depends on the resilience of its peers' communications. On the Internet, direct connections between peers are often limited by restrictions like NATs and traffic filtering. Addressing such problems is particularly pressing for peer-to-peer networks that do not wish to rely on any trusted infrastructure, which might otherwise help the participants establish communication channels. Modern peer-to-peer networks employ various techniques to address the problem of restricted connectivity on the Internet. One interesting development is that various overlay networks now support multiple communication protocols to improve resilience and counteract service degradation. The support of multiple protocols causes a number of new challenges. A peer should evaluate which protocols fulfill the communication requirements best. Furthermore, limited resources, such as bandwidth, should be distributed among peers and protocols to match application requirements. Existing approaches to this problem of transport selection and resource allocation are rigid: they calculate the solution only from the current state of the environment, and do not adapt their strategy based on failures or successes of previous allocations. This thesis explores the feasibility of using machine learning to improve the quality of the transport selection and resource allocation over current approaches. The goal is to improve the solution process by learning selection and allocation strategies from the experience gathered in the course of many iterations of the algorithm. We compare the different approaches in the field of machine learning with respect to their properties and suitability for the problem. Based on this evaluation and an in-depth analysis of the requirements of the underlying problem, the thesis presents a design how reinforcement learning can be used and adapted to the given problem domain.
The design is evaluated with the help of simulation and a realistic implementation in the GNUnet Peer-to-Peer framework. Our experimental results highlight some of the implications of the multitude of implementation choices, key challenges, and possible directions for the use of reinforcement learning in this domain
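The adaptive selection strategy the thesis argues for can be illustrated with a minimal epsilon-greedy sketch, which learns from the outcomes of previous allocations instead of recomputing only from the current state. The transport names and the fixed reward model below are invented for the example; they are not GNUnet's actual transport list or metrics.

```python
import random

class TransportSelector:
    """Epsilon-greedy selection among transport protocols (illustrative)."""

    def __init__(self, transports, epsilon=0.1, seed=42):
        self.q = {t: 0.0 for t in transports}  # estimated value per transport
        self.n = {t: 0 for t in transports}    # how often each was selected
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, transport, reward):
        # Incremental mean of observed rewards (e.g. achieved goodput).
        self.n[transport] += 1
        self.q[transport] += (reward - self.q[transport]) / self.n[transport]

sel = TransportSelector(["tcp", "udp", "http"])
for _ in range(200):
    t = sel.select()
    # Hypothetical environment: each transport yields a fixed reward.
    sel.update(t, {"tcp": 0.5, "udp": 0.9, "http": 0.3}[t])
```

Over repeated iterations the estimates converge toward the transports that actually performed well, which is the feedback loop rigid one-shot solvers lack.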

[Go to top]

The Internet is Broken: Idealistic Ideas for Building a GNU Network (PDF)
by Christian Grothoff, Bartlomiej Polot, and Carlo von Loesch.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Cryogenic: Enabling Power-Aware Applications on Linux (PDF)
by Alejandra Morales.
Master's, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As a means of reducing power consumption, hardware devices are capable of entering sleep states that have low power consumption. Waking up from those states in order to return to work is typically a rather energy-intensive activity. Some existing applications have non-urgent tasks that currently force hardware to wake up needlessly or prevent it from going to sleep. It would be better if such non-urgent activities could be scheduled to execute when the respective devices are active anyway, to maximize the duration of sleep states. This requires cooperation between applications and the kernel in order to determine when the execution of a task will not be expensive in terms of power consumption. This work presents the design and implementation of Cryogenic, a POSIX-compatible API that enables clustering tasks based on the hardware activity state. Specifically, Cryogenic's API allows applications to defer their execution until other tasks use the device they want to use. As a result, two actions that contribute to reducing the device's energy consumption are achieved: reducing the number of hardware wake-ups and maximizing the idle periods. The energy measurements taken at the end of this thesis demonstrate that, for the specific setup and conditions present during our experimentation, Cryogenic is capable of achieving savings between 1% and 10% for a USB WiFi device. Although we ideally target mobile platforms, Cryogenic has been developed as a new Linux kernel module that integrates with the existing POSIX event loop system calls. This allows Cryogenic to be used on many different platforms, as long as they run a GNU/Linux distribution as the main operating system. Evidence of this can be found in this thesis, where we demonstrate the power savings on a single-board computer.
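The deferral semantics described above can be mimicked in a few lines. This is a toy user-space model only; the real Cryogenic is a Linux kernel module hooked into the POSIX event loop system calls, and the class and method names here are hypothetical.

```python
import heapq

class ToyCryogenic:
    """Toy model of Cryogenic-style deferral (not the real kernel API).

    A task registered with defer() runs early if the device wakes up for
    other work anyway, and no later than its deadline.
    """

    def __init__(self):
        self.pending = []  # min-heap of (deadline, task)

    def defer(self, task, deadline):
        heapq.heappush(self.pending, (deadline, task))

    def device_active(self):
        # Device is awake for urgent work: piggyback all deferred tasks.
        ran = [t for _, t in self.pending]
        self.pending = []
        return ran

    def tick(self, now):
        # Run only tasks whose deadline has expired; keep sleeping otherwise.
        ran = []
        while self.pending and self.pending[0][0] <= now:
            ran.append(heapq.heappop(self.pending)[1])
        return ran

sched = ToyCryogenic()
sched.defer("sync-logs", deadline=100)
sched.defer("check-updates", deadline=50)
expired = sched.tick(60)             # only the overdue task runs
piggybacked = sched.device_active()  # the rest ride along with a wake-up
```

Clustering deferred work onto wake-ups that happen anyway is exactly what reduces the number of transitions out of sleep states.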

[Go to top]

CADET: Confidential Ad-hoc Decentralized End-to-End Transport (PDF)
by Bartlomiej Polot and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes CADET, a new transport protocol for confidential and authenticated data transfer in decentralized networks. This transport protocol is designed to operate in restricted-route scenarios such as friend-to-friend or ad-hoc wireless networks. We have implemented CADET and evaluated its performance in various network scenarios, compared it to the well-known TCP/IP stack and tested its response to rapidly changing network topologies. While our current implementation is still significantly slower in high-speed, low-latency networks, for typical Internet usage our system provides much better connectivity and security with comparable performance to TCP/IP.

[Go to top]

Forward-Secure Distributed Encryption (PDF)
by Wouter Lueks, Jaap-Henk Hoepman, and Klaus Kursawe.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed encryption is a cryptographic primitive that implements revocable privacy. The primitive allows a recipient of a message to decrypt it only if enough senders encrypted that same message. We present a new distributed encryption scheme that is simpler than the previous solution by Hoepman and Galindo (in particular, it does not rely on pairings) and that satisfies stronger security requirements. Moreover, we show how to achieve key evolution, which is necessary to ensure scalability in many practical applications, and prove that the resulting scheme is forward secure. Finally, we present a provably secure batched distributed encryption scheme that is much more efficient for small plaintext domains, but that requires more storage.

[Go to top]

On the Effectiveness of Obfuscation Techniques in Online Social Networks (PDF)
by Terence Chen, Roksana Boreli, Mohamed-Ali Kaafar, and Arik Friedman.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Data obfuscation is a well-known technique for protecting user privacy against inference attacks, and it was studied in diverse settings, including search queries, recommender systems, location-based services and Online Social Networks (OSNs). However, these studies typically take the point of view of a single user who applies obfuscation, and focus on protection of a single target attribute. Unfortunately, while narrowing the scope simplifies the problem, it overlooks some significant challenges that effective obfuscation would need to address in a more realistic setting. First, correlations between attributes imply that obfuscation conducted to protect a certain attribute may influence inference attacks targeted at other attributes. In addition, when multiple users conduct obfuscation simultaneously, the combined effect of their obfuscations may be significant enough to affect the inference mechanism to their detriment. In this work we focus on the OSN setting and use a dataset of 1.9 million Facebook profiles to demonstrate the severity of these problems and explore possible solutions. For example, we show that an obfuscation policy that would limit the accuracy of inference to 45% when applied by a single user would result in an inference accuracy of 75% when applied by 10% of the users. We show that a dynamic policy, which is continuously adjusted to the most recent data in the OSN, may mitigate this problem. Finally, we report the results of a user study, which indicates that users are more willing to obfuscate their profiles using popular and high quality items. Accordingly, we propose and evaluate an obfuscation strategy that satisfies both user needs and privacy protection.

[Go to top]

Do Dummies Pay Off? Limits of Dummy Traffic Protection in Anonymous Communications (PDF)
by Simon Oya, Carmela Troncoso, and Fernando Pérez-González.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communication systems ensure that correspondence between senders and receivers cannot be inferred with certainty. However, when patterns are persistent, observations from anonymous communication systems enable the reconstruction of user behavioral profiles. Protection against profiling can be enhanced by adding dummy messages, generated by users or by the anonymity provider, to the communication. In this paper we study the limits of the protection provided by this countermeasure. We propose an analysis methodology based on solving a least squares problem that permits characterizing the adversary's profiling error with respect to the user behavior, the anonymity provider behavior, and the dummy strategy. Focusing on the particular case of a timed pool mix, we show how, given a privacy target, the performance analysis can be used to design optimal dummy strategies to protect this objective.
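The core idea, characterizing the adversary's error when dummies are added, can be sketched with expected frequencies. The two-recipient profile and the uniform dummy strategy below are invented for illustration; the paper's least-squares formulation is more general, and a real adversary works from noisy finite observations rather than exact expectations.

```python
def observed_profile(p, dummy_rate):
    # Expected observed output frequencies when a fraction `dummy_rate`
    # of the traffic consists of dummies addressed uniformly at random.
    n = len(p)
    return [(1 - dummy_rate) * pi + dummy_rate / n for pi in p]

def ls_estimate(y, dummy_rate):
    # Inverting the expectation recovers the profile exactly; with finite,
    # noisy observations this becomes a least-squares problem as in the paper.
    n = len(y)
    return [(yi - dummy_rate / n) / (1 - dummy_rate) for yi in y]

p = [0.7, 0.3]                           # true profile over two recipients
y = observed_profile(p, dummy_rate=0.5)  # what the adversary observes
naive_error = max(abs(a - b) for a, b in zip(y, p))
corrected = ls_estimate(y, dummy_rate=0.5)
```

The sketch makes the paper's point concrete: dummies flatten the naive view of the profile, but an adversary that models the dummy strategy can undo the bias, so the residual protection comes only from estimation noise.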

[Go to top]

A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names.

[Go to top]

Censorship-Resistant and Privacy-Preserving Distributed Web Search (PDF)
by Michael Herrmann, Ren Zhang, Kai-Chun Ning, and Claudia Diaz.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The vast majority of Internet users rely on centralized search engine providers to conduct their web searches. However, search results can be censored and search queries can be recorded by these providers without the user's knowledge. Distributed web search engines based on peer-to-peer networks have been proposed to mitigate these threats. In this paper we analyze the three most popular real-world distributed web search engines: Faroo, Seeks and Yacy, with respect to their censorship resistance and privacy protection. We show that none of them provides an adequate level of protection against an adversary with modest resources. Recognizing these flaws, we identify security properties a censorship-resistant and privacy-preserving distributed web search engine should provide. We propose two novel defense mechanisms, called the node density protocol and the webpage verification protocol, to achieve censorship resistance and show their effectiveness and feasibility with simulations. Finally, we elaborate on how state-of-the-art defense mechanisms achieve privacy protection in distributed web search engines.

[Go to top]

The Best of Both Worlds: Combining Information-Theoretic and Computational PIR for Communication Efficiency (PDF)
by Casey Devet and Ian Goldberg.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The goal of Private Information Retrieval (PIR) is the ability to query a database successfully without the operator of the database server discovering which record(s) of the database the querier is interested in. There are two main classes of PIR protocols: those that provide privacy guarantees based on the computational limitations of servers (CPIR) and those that rely on multiple servers not colluding for privacy (IT-PIR). These two classes have different advantages and disadvantages that make them more or less attractive to designers of PIR-enabled privacy enhancing technologies. We present a hybrid PIR protocol that combines two PIR protocols, one from each of these classes. Our protocol inherits many positive aspects of both classes and mitigates some of the negative aspects. For example, our hybrid protocol maintains partial privacy when the security assumptions of one of the component protocols are broken, mitigating the privacy loss in such an event. We have implemented our protocol as an extension of the Percy++ library so that it combines a PIR protocol by Aguilar Melchor and Gaborit with one by Goldberg. We show that our hybrid protocol uses less communication than either of these component protocols and that our scheme is particularly beneficial when the number of records in a database is large compared to the size of the records. This situation arises in applications such as TLS certificate verification, anonymous communications systems, private LDAP lookups, and others.
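The IT-PIR half of such a hybrid can be illustrated with the textbook two-server XOR scheme, in which neither server alone learns the queried index as long as the two do not collude. The actual component protocols combined in the paper (Goldberg's IT-PIR and the CPIR scheme of Aguilar Melchor and Gaborit) are considerably more sophisticated than this one-bit-per-record sketch.

```python
import secrets

def it_pir_query(db_size, index):
    # Client: one uniformly random bit-mask, and a copy flipped at the
    # target index. Each mask alone reveals nothing about the index.
    q1 = [secrets.randbits(1) for _ in range(db_size)]
    q2 = q1[:]
    q2[index] ^= 1
    return q1, q2

def server_answer(db, query):
    # Server: XOR together the records selected by its query mask.
    acc = 0
    for bit, rec in zip(query, db):
        if bit:
            acc ^= rec
    return acc

db = [1, 0, 1, 1, 0, 1, 0, 0]  # toy one-bit-per-record database
q1, q2 = it_pir_query(len(db), index=3)
record = server_answer(db, q1) ^ server_answer(db, q2)  # equals db[3]
```

XORing the two answers cancels every record selected by both masks, leaving exactly the record where the masks differ.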

[Go to top]

2013

Speeding Up Tor with SPDY (PDF)
by Andrey Uzunov.
Master's, Technische Universität München, November 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SPDY is a rather new protocol, an alternative to HTTP designed to address inefficiencies in the latter and thereby improve latency and reduce bandwidth consumption. This thesis presents the design and implementation of a setup for utilizing SPDY within the anonymizing Tor network to reduce latency and traffic there. A C library implementing the SPDY server protocol is introduced, together with an HTTP-to-SPDY and a SPDY-to-HTTP proxy, which are the base for the presented design. Furthermore, we focus on the SPDY server push feature, which allows servers to send multiple responses to a single request, reducing latency and traffic when loading web pages. We propose a prediction algorithm for employing push at SPDY servers and proxies. The algorithm makes predictions based on previous requests and responses, and initially knows nothing about the data it will push. This thesis includes extensive measurement data highlighting the possible benefits of using SPDY instead of HTTP and HTTPS (1.0 or 1.1), especially with respect to networks experiencing latency or loss. Moreover, the real gains from using SPDY within the Tor network when loading some of the most popular web sites are presented. Finally, evaluations of the proposed push prediction algorithm are given, emphasizing the possible gain of employing it at SPDY reverse and forward proxies.

[Go to top]

On the Feasibility of a Censorship Resistant Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A central problem on the Internet today is that key infrastructure for security is concentrated in a few places. This is particularly true in the areas of naming and public key infrastructure. Secret services and other government organizations can use this fact to block access to information or monitor communications. One of the most popular and easy to perform techniques is to make information on the Web inaccessible by censoring or manipulating the Domain Name System (DNS). With the introduction of DNSSEC, the DNS is furthermore poised to become an alternative PKI to the failing X.509 CA system, further cementing the power of those in charge of operating DNS. This paper maps the design space and gives design requirements for censorship resistant name systems. We survey the existing range of ideas for the realization of such a system and discuss the challenges these systems have to overcome in practice. Finally, we present the results from a survey on browser usage, which supports the idea that delegation should be a key ingredient in any censorship resistant name system.

[Go to top]

Monkey – Generating Useful Bug Reports Automatically (PDF)
by Markus Teich.
Bachelor's, Technische Universität München, July 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Automatic crash handlers support software developers in finding bugs and fixing the problems in their code. Most of them behave similarly, providing the developer with a (symbolic) stack trace and a memory dump of the crashed application. This introduces some problems that we try to fix with our proposed automatic bug reporting system called "Monkey". In this paper we describe the problems that occur when debugging widely distributed systems and how Monkey handles them. First, we describe our motivation for developing the Monkey system. Afterwards, we present the most common existing automatic crash handlers and how they work. Third, we give an overview of the Monkey system and its components. In the fourth chapter we analyze one report generated by Monkey, evaluate an online experiment we conducted and present some of our findings from the development of the clustering algorithm used to categorize crash reports. Last, we discuss some of Monkey's features and compare them to the existing approaches. Some ideas for the future development of the Monkey system are also presented before we conclude that Monkey's approach is promising, but some work is still left to establish Monkey in the open source community.

[Go to top]

Large Scale Distributed Evaluation of Peer-to-Peer Protocols (PDF)
by Sree Harsha Totakura.
Master's, Technische Universität München, June 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Evaluations of P2P protocols during the system's design and implementation phases are commonly done through simulation and emulation, respectively. While current state-of-the-art simulation allows evaluations with many millions of peers through the use of abstractions, emulation still lags behind, as it involves executing the real implementation of some parts of the system. This difference in scale can make it hard to relate the evaluations made with simulation and emulation during the design and implementation phases, and can result in a limited evaluation of the implementation, which may cause severe problems after deployment. In this thesis, we build upon an existing emulator for P2P applications to push the scales offered by emulation towards the limits set by simulation. Our approach distributes and coordinates the emulation across many hosts. Large deployments are possible by deploying hundreds or thousands of peers on each host. To address the varying needs of an experimenter and the range of available hardware, we make our approach scalable such that it can easily be adapted to run evaluations on a single machine or a large group of hosts. Specifically, the system automatically adjusts the number of overlapping operations to the available resources efficiently using a feedback mechanism, thus relieving the experimenter from the hassles of manual tuning. We specifically target HPC systems like compute clusters and supercomputers and demonstrate how such systems can be used for large scale emulations by evaluating two P2P applications with deployment sizes up to 90k peers on a supercomputer.

[Go to top]

Towards a Personalized Internet: a Case for a Full Decentralization
by Anne-Marie Kermarrec.
In Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 371(1987), March 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Web has become a user-centric platform where users post, share, annotate, comment and forward content, be it text, videos, pictures, URLs, etc. This social dimension creates tremendous new opportunities for information exchange over the Internet, as exemplified by the surprising and exponential growth of social networks and collaborative platforms. Yet, niche content is sometimes difficult to retrieve using traditional search engines because they target the mass rather than the individual. Likewise, relieving users from useless notifications is tricky in a world where there is so much information and so little of interest for each and every one of us. We argue that ultra-specific content could be retrieved and disseminated should search and notification be personalized to fit this new setting. We also argue that users' interests should be implicitly captured by the system rather than relying on explicit classifications, simply because the world is by nature unstructured, dynamic, and users do not want to be hampered in their actions by a tight and static framework. In this paper, we review some existing personalization approaches, most of which are centralized. We then advocate the need for fully decentralized systems because personalization raises two main issues. Firstly, personalization requires information to be stored and maintained at a user granularity, which can significantly hurt the scalability of a centralized solution. Secondly, at a time when the 'big brother is watching you' attitude is prominent, users may be more and more reluctant to give away their personal data to the few large companies that can afford such personalization. We start by showing how to achieve personalization in decentralized systems and conclude with the research agenda ahead.

[Go to top]

WhatsUp: A Decentralized Instant News Recommender (PDF)
by Antoine Boutet, Davide Frey, Rachid Guerraoui, Arnaud Jegou, and Anne-Marie Kermarrec.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present WHATSUP, a collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority. WHATSUP constructs an implicit social network based on user profiles that express the opinions of users about the news items they receive (like-dislike). Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a novel heterogeneous gossip protocol that (1) biases the orientation of its targets towards those with similar interests, and (2) amplifies dissemination based on the level of interest in every news item. We report on an extensive evaluation of WHATSUP through (a) simulations, (b) a ModelNet emulation on a cluster, and (c) a PlanetLab deployment based on real datasets. We show that WHATSUP outperforms various alternatives in terms of accurate and complete delivery of relevant news items while preserving the fundamental advantages of standard gossip: namely, simplicity of deployment and robustness.
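The clustering step relies on a similarity metric over users' like/dislike profiles. A cosine similarity over opinion vectors, as sketched below, is one plausible instance; WhatsUp's actual metric additionally weighs long-standing against emerging interests, which this sketch omits.

```python
import math

def profile_similarity(a, b):
    # Cosine similarity of opinion vectors (+1 like, -1 dislike, 0 unseen).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

alice = [1, -1, 1, 0, 1]
bob = [1, -1, 0, 0, 1]      # broadly agrees with alice
carol = [-1, 1, -1, 0, -1]  # systematically disagrees
```

A gossip peer would bias its targets toward neighbors whose similarity to its own profile is high, which is what clusters like-minded users without any central index.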

[Go to top]

Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization (PDF)
by Alex Biryukov, Ivan Pustogarov, and Ralf-Philipp Weinmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Public Key Pinning for TLS Using a Trust on First Use Model (PDF)
by Gabor X Toth.
In unknown, 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Although the Public Key Infrastructure (PKI) using X.509 is meant to prevent man-in-the-middle attacks on TLS, there are still situations in which such attacks are possible due to the large number of Certification Authorities (CAs) that have to be trusted. Recent incidents involving CA compromises, which led to the issuance of rogue certificates, indicate the weakness of the PKI model. Recently, various public key pinning protocols, such as DANE or TACK, have been proposed to thwart man-in-the-middle attacks on TLS connections. It will take a longer time, however, until any of these protocols reaches wide deployment. We present an approach intended as an interim solution to bridge this gap and provide protection for connections to servers not yet using a pinning protocol. The presented method is based on public key pinning with a trust-on-first-use model, and can be combined with existing notary approaches as well.
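The trust-on-first-use pinning logic itself is simple, as the following sketch shows. The store class, the hash choice and the API are hypothetical, and the paper's system layers notary cross-checks on top of this basic check.

```python
import hashlib

class TofuPinStore:
    """Trust-on-first-use pin store (conceptual sketch, hypothetical API)."""

    def __init__(self):
        self.pins = {}  # hostname -> public-key fingerprint

    def check(self, host, public_key_der):
        fp = hashlib.sha256(public_key_der).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp       # first use: pin the key and accept
            return True
        return self.pins[host] == fp   # later uses: must match the pin

store = TofuPinStore()
first = store.check("example.com", b"server-key-A")  # pinned on first contact
same = store.check("example.com", b"server-key-A")   # matches the pin
mitm = store.check("example.com", b"server-key-B")   # mismatch: possible MITM
```

The model's trade-off is visible here too: the very first connection is trusted blindly, which is exactly the gap notary cross-checks are meant to close.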

[Go to top]

Privacy
by Judith DeCew.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Persea: A Sybil-resistant Social DHT (PDF)
by Mahdi N. Al-Ameen and Matthew Wright.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P systems are inherently vulnerable to Sybil attacks, in which an attacker can have a large number of identities and use them to control a substantial fraction of the system. We propose Persea, a novel P2P system that is more robust against Sybil attacks than prior approaches. Persea derives its Sybil resistance by assigning IDs through a bootstrap tree, the graph of how nodes have joined the system through invitations. More specifically, a node joins Persea when it gets an invitation from an existing node in the system. The inviting node assigns a node ID to the joining node and gives it a chunk of node IDs for further distribution. For each chunk of ID space, the attacker needs to socially engineer a connection to another node already in the system. This hierarchical distribution of node IDs confines a large attacker botnet to a considerably smaller region of the ID space than in a normal P2P system. Persea uses a replication mechanism in which each (key,value) pair is stored in nodes that are evenly spaced over the network. Thus, even if a given region is occupied by attackers, the desired (key,value) pair can be retrieved from other regions. We compare our results with Kad, Whanau, and X-Vine and show that Persea is a better solution against Sybil attacks.
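The hierarchical chunk distribution can be sketched as follows. The halving policy and the 16-bit ID space are simplifications chosen for the example; Persea's actual chunk sizes and replication mechanism differ.

```python
class PerseaNode:
    """Sketch of hierarchical node-ID assignment via a bootstrap tree."""

    def __init__(self, node_id, chunk):
        self.node_id = node_id
        self.lo, self.hi = chunk  # half-open range of IDs this node may assign

    def invite(self):
        # Hand the invitee an ID plus the upper half of our remaining chunk;
        # an attacker must earn a fresh invitation for every chunk it wants.
        if self.hi - self.lo < 2:
            raise RuntimeError("ID chunk exhausted")
        mid = (self.lo + self.hi) // 2
        child = PerseaNode(node_id=mid, chunk=(mid + 1, self.hi))
        self.hi = mid  # keep the lower half for future invitations
        return child

root = PerseaNode(node_id=0, chunk=(1, 1 << 16))
a = root.invite()  # joins with an ID from the upper half of root's chunk
b = root.invite()  # the next invitee gets a chunk from the remaining half
```

Because every chunk descends from one invitation edge in the bootstrap tree, a botnet that socially engineers a single invitation stays confined to that chunk's region of the ID space.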

[Go to top]

FreeRec: An Anonymous and Distributed Personalization Architecture
by Antoine Boutet, Davide Frey, Arnaud Jegou, Anne-Marie Kermarrec, and Heverson B. Ribeiro.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Design of a Social Messaging System Using Stateful Multicast (PDF)
by Gabor X Toth.
Master's, University of Amsterdam, 2013. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This work presents the design of a social messaging service for the GNUnet peer-to-peer framework that offers scalability, extensibility, and end-to-end encrypted communication. The scalability property is achieved through multicast message delivery, while extensibility is made possible by using PSYC (Protocol for SYnchronous Communication), which provides an extensible RPC (Remote Procedure Call) syntax that can evolve over time without having to upgrade the software on all nodes in the network. Another key feature provided by the PSYC layer is stateful multicast channels, which are used to store e.g. user profiles. End-to-end encrypted communication is provided by the mesh service of GNUnet, upon which the multicast channels are built. Pseudonymous users and social places in the system have cryptographic identities, identified by their public keys; these are mapped to human-memorable names using GNS (GNU Name System), where each pseudonym has a zone pointing to its places.

[Go to top]

Broadening the Scope of Differential Privacy Using Metrics (PDF)
by Konstantinos Chatzikokolakis, Miguel E. Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Differential Privacy is one of the most prominent frameworks used to deal with disclosure prevention in statistical databases. It provides a formal privacy guarantee, ensuring that sensitive information relative to individuals cannot be easily inferred by disclosing answers to aggregate queries. If two databases are adjacent, i.e. differ only in the data of a single individual, then the query should not allow them to be told apart by more than a certain factor. This induces a bound also on the distinguishability of two generic databases, which is determined by their distance on the Hamming graph of the adjacency relation. In this paper we explore the implications of differential privacy when the indistinguishability requirement depends on an arbitrary notion of distance. We show that we can naturally express, in this way, (protection against) privacy threats that cannot be represented with the standard notion, leading to new applications of the differential privacy framework. We give intuitive characterizations of these threats in terms of Bayesian adversaries, which generalize two interpretations of (standard) differential privacy from the literature. We revisit the well-known results stating that universally optimal mechanisms exist only for counting queries: we show that, in our extended setting, universally optimal mechanisms exist for other queries too, notably sum, average, and percentile queries. We explore various applications of the generalized definition, for statistical databases as well as for other areas, such as geolocation and smart metering.
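For the standard (Hamming-metric) case, the mechanism in question is the familiar Laplace mechanism, sketched below. The paper's generalization changes how the sensitivity is derived from the chosen metric, not the shape of the mechanism; the function names here are ours.

```python
import random

def laplace_noise(scale, rng):
    # Laplace(0, scale): exponentially distributed magnitude, random sign.
    return rng.choice((-1, 1)) * rng.expovariate(1.0 / scale)

def dp_count(true_count, epsilon, sensitivity=1.0, seed=None):
    # Standard differential privacy takes the sensitivity w.r.t. the Hamming
    # metric on databases (a counting query changes by at most 1 when one
    # individual changes); the generalized setting derives the sensitivity
    # from an arbitrary metric instead.
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return true_count + laplace_noise(scale, rng), scale

noisy, scale = dp_count(1000, epsilon=0.5)  # noise scale is 1 / 0.5 = 2
```

A smaller epsilon (stronger guarantee) means a larger noise scale, which is the accuracy/privacy trade-off the universal-optimality results are about.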

[Go to top]

Answering $n^{2+o(1)}$ Counting Queries with Differential Privacy is Hard
by Jonathan Ullman.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

2012

Decentralized Evaluation of Regular Expressions for Capability Discovery in Peer-to-Peer Networks (PDF)
by Maximilian Szengel.
Master's, Technische Universität München, November 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents a novel approach for decentralized evaluation of regular expressions for capability discovery in DHT-based overlays. The system provides support for announcing capabilities expressed as regular expressions and discovering participants offering adequate capabilities. The idea behind our approach is to convert regular expressions into finite automata and store the corresponding states and transitions in a DHT. We show how locally constructed DFAs are merged in the DHT into an NFA without knowledge of any NFA already present in the DHT and without the need for any central authority. Furthermore, we present options for optimizing the DFA. There exist several possible applications for this general approach of decentralized regular expression evaluation. However, in this thesis we focus on the application of discovering users that are willing to provide network access using a specified protocol to a particular destination. We have implemented the system for our proposed approach and conducted a simulation. Moreover, we present the results of an emulation of the implemented system in a cluster.
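The storage idea can be sketched with a hand-written DFA whose states are keyed by a hash for placement in the DHT. The regular expression, transition table and hash choice below are illustrative; the thesis compiles arbitrary regular expressions and handles the NFA merging in the DHT automatically.

```python
import hashlib

# Hand-written DFA for the toy pattern ab*c; the thesis derives such
# automata from arbitrary regular expressions.
DFA = {
    "start":  {"a": "s1"},
    "s1":     {"b": "s1", "c": "accept"},
    "accept": {},
}

def dht_key(state):
    # Each state becomes a DHT record under a hash-derived key, so peers
    # can walk the transitions without central coordination (the hash
    # choice here is an assumption for the sketch).
    return hashlib.sha512(state.encode()).hexdigest()

def accepts(dfa, word, start="start", accepting=("accept",)):
    # Walking the transition table is what a distributed lookup would do,
    # one DHT GET per state.
    state = start
    for ch in word:
        if ch not in dfa[state]:
            return False
        state = dfa[state][ch]
    return state in accepting
```

A peer announcing a capability publishes its automaton's states under such keys; a searcher evaluates its query string by following the stored transitions record by record.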

[Go to top]

Design and Implementation of a Censorship Resistant and Fully Decentralized Name System (PDF)
by Martin Schanzenbach.
Master's, Technische Universität München, September 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents the design and implementation of the GNU Alternative Domain System (GADS), a decentralized, secure name system providing memorable names for the Internet as an alternative to the Domain Name System (DNS). The system builds on ideas from Rivest's Simple Distributed Security Infrastructure (SDSI) to address a central issue with providing a decentralized mapping of secure identifiers to memorable names: providing a global, secure and memorable mapping is impossible without a trusted authority. SDSI offers an alternative by linking local name spaces; GADS uses the transitivity provided by the SDSI design to build a decentralized and censorship resistant name system without a trusted root based on secure delegation of authority. Additional details need to be considered in order to enable GADS to integrate smoothly with the World Wide Web. While following links on the Web matches following delegations in GADS, the existing HTTP-based infrastructure makes many assumptions about globally unique names; however, proxies can be used to enable legacy applications to function with GADS. This work presents the fundamental goals and ideas behind GADS, provides technical details on how GADS has been implemented and discusses deployment issues for using GADS with existing systems. We discuss how GADS and legacy DNS can interoperate during a transition period and what additional security advantages GADS offers over DNS with Security Extensions (DNSSEC). Finally, we present the results of a survey into surfing behavior, which suggests that the manual introduction of new direct links in GADS will be infrequent.

[Go to top]

Saturn: Range Queries, Load Balancing and Fault Tolerance in DHT Data Systems (PDF)
by Theoni Pitoura, Nikos Ntarmos, and Peter Triantafillou.
In IEEE Transactions on Knowledge and Data Engineering 24, July 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present Saturn, an overlay architecture for large-scale data networks maintained over Distributed Hash Tables (DHTs) that efficiently processes range queries and ensures access load balancing and fault tolerance. Placing consecutive data values in neighboring peers is desirable in DHTs since it accelerates range query processing; however, such a placement is highly susceptible to load imbalances. At the same time, DHTs may be susceptible to node departures/failures, and high data availability and fault tolerance are significant issues. Saturn deals effectively with these problems through the introduction of a novel multiple-ring, order-preserving architecture. The use of a novel order-preserving hash function ensures fast range query processing. Replication across and within data rings (termed vertical and horizontal replication) forms the foundation over which our mechanisms are developed, ensuring query load balancing and fault tolerance, respectively. Our detailed experimentation study shows strong gains in range query processing efficiency, access load balancing, and fault tolerance, with low replication overheads. The significance of Saturn is not only that it effectively tackles all three issues together (i.e., supporting range queries, ensuring load balancing, and providing fault tolerance over DHTs), but also that it can be applied on top of any order-preserving DHT, enabling it to dynamically handle replication and, thus, to trade off replication costs for fair load distribution and fault tolerance.
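The effect of order preservation on range queries can be seen in a minimal model: consecutive keys land on the same or neighboring peers, so a range touches one contiguous run of peers. The class below is a deliberately simplified single-ring sketch without Saturn's replication or load-balancing machinery, and its linear key-to-peer mapping stands in for the paper's order-preserving hash function.

```python
class OrderPreservingRing:
    """Single-ring sketch of order-preserving key placement."""

    def __init__(self, num_peers, key_space=1000):
        self.num_peers = num_peers
        self.key_space = key_space

    def peer_for(self, key):
        # Order-preserving: consecutive keys map to the same or next peer.
        return key * self.num_peers // self.key_space

    def range_query(self, lo, hi):
        # A range is answered by one contiguous run of responsible peers.
        return list(range(self.peer_for(lo), self.peer_for(hi) + 1))

ring = OrderPreservingRing(num_peers=10)
peers = ring.range_query(250, 449)  # touches only a contiguous run of peers
```

The same locality that makes ranges cheap is what concentrates load on popular key regions, which is why Saturn adds replication across rings to rebalance.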

[Go to top]

Recommendation and Visualization Techniques for Large Scale Data
by Afshin Moin.
phd, Université Rennes 1, July 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Monkey: Automated debugging of deployed distributed systems (PDF)
by Safey A. Halim.
Masters, Technische Universität München, July 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Debugging is tedious and time-consuming work that, for certain types of bugs, can and should be automated. Debugging distributed systems is more complex due to time dependencies between interacting processes. Another related problem is duplicate bug reports in bug repositories. Finding bug duplicates is hard and wastes developers' time, which may affect the development team's rate of bug fixes and new releases. In this master thesis we introduce Monkey, a new tool that provides a solution for automated classification, investigation and characterization of bugs, as well as a solution for comparing bug reports and avoiding duplicates. Our tool is particularly suitable for distributed systems due to its autonomy. We present Monkey's key design goals and architecture and give experimental results demonstrating the viability of our approach

[Go to top]

Peek-a-Boo, I Still See You: Why Efficient Traffic Analysis Countermeasures Fail (PDF)
by Kevin P. Dyer, Scott Coull, Thomas Ristenpart, and Thomas Shrimpton.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the setting of HTTP traffic over encrypted tunnels, as used to conceal the identity of websites visited by a user. It is well known that traffic analysis (TA) attacks can accurately identify the website a user visits despite the use of encryption, and previous work has looked at specific attack/countermeasure pairings. We provide the first comprehensive analysis of general-purpose TA countermeasures. We show that nine known countermeasures are vulnerable to simple attacks that exploit coarse features of traffic (e.g., total time and bandwidth). The considered countermeasures include ones like those standardized by TLS, SSH, and IPsec, and even more complex ones like the traffic morphing scheme of Wright et al. As just one of our results, we show that despite the use of traffic morphing, one can use only total upstream and downstream bandwidth to identify with 98% accuracy which of two websites was visited. One implication of what we find is that, in the context of website identification, it is unlikely that bandwidth-efficient, general-purpose TA countermeasures can ever provide the type of security targeted in prior work
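The "simple attacks" over coarse features can be sketched as a nearest-neighbour classifier over total time and up/down bandwidth. This is an illustrative Python sketch with made-up traces, not the paper's classifiers or data.

```python
def coarse_features(trace):
    """Summarize a packet trace [(timestamp, direction, size), ...] into the
    coarse features the attack exploits: total time, total upstream bytes,
    total downstream bytes."""
    times = [t for t, _, _ in trace]
    up = sum(s for _, d, s in trace if d == "up")
    down = sum(s for _, d, s in trace if d == "down")
    return (max(times) - min(times), up, down)

def classify(trace, labeled_traces):
    """1-nearest-neighbour over coarse features (squared Euclidean distance).
    labeled_traces is a list of (website_label, trace) training pairs."""
    x = coarse_features(trace)
    def dist(y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    return min(labeled_traces, key=lambda lt: dist(coarse_features(lt[1])))[0]
```

Even this crude distance measure distinguishes sites whose total bandwidth differs, which is exactly why padding individual packets is not enough.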

[Go to top]

LASTor: A Low-Latency AS-Aware Tor Client (PDF)
by Masoud Akhoondi, Curtis Yu, and Harsha V. Madhyastha.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The widely used Tor anonymity network is designed to enable low-latency anonymous communication. However, in practice, interactive communication on Tor, which accounts for over 90% of connections in the Tor network [1], incurs latencies over 5x greater than on the direct Internet path. In addition, since path selection to establish a circuit in Tor is oblivious to Internet routing, anonymity guarantees can break down in cases where an autonomous system (AS) can correlate traffic across the entry and exit segments of a circuit. In this paper, we show that both of these shortcomings in Tor can be addressed with only client-side modifications, i.e., without requiring a revamp of the entire Tor architecture. To this end, we design and implement a new Tor client, LASTor. First, we show that LASTor can deliver significant latency gains over the default Tor client by simply accounting for the inferred locations of Tor relays while choosing paths. Second, since the preference for low latency paths reduces the entropy of path selection, we design LASTor's path selection algorithm to be tunable. A user can choose an appropriate tradeoff between latency and anonymity by specifying a value between 0 (lowest latency) and 1 (highest anonymity) for a single parameter. Lastly, we develop an efficient and accurate algorithm to identify paths on which an AS can correlate traffic between the entry and exit segments. This algorithm enables LASTor to avoid such paths and improve a user's anonymity, while the low runtime of the algorithm ensures that the impact on end-to-end latency of communication is low. By applying our techniques to measurements of real Internet paths and by using LASTor to visit the top 200 websites from several geographically-distributed end-hosts, we show that, in comparison to the default Tor client, LASTor reduces median latencies by 25% while also reducing the false negative rate of not detecting a potential snooping AS from 57% to 11%
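The single tunable parameter can be sketched roughly as follows: sort candidate paths by estimated latency and pick uniformly at random from the lowest-latency fraction alpha of them. This is an illustrative Python simplification, not LASTor's exact selection rule, and all names are hypothetical.

```python
import random

def choose_path(paths_with_latency, alpha, rng=random.Random(1)):
    """Tunable latency/anonymity path choice (illustrative sketch).

    paths_with_latency is a list of (path_id, estimated_latency_ms).
    alpha = 0 picks the single fastest path (lowest latency, least
    anonymity); alpha = 1 picks uniformly among all paths (highest
    anonymity, since selection entropy is maximal)."""
    ranked = sorted(paths_with_latency, key=lambda p: p[1])
    k = max(1, int(round(alpha * len(ranked))))  # size of the eligible set
    return rng.choice(ranked[:k])[0]
```

Shrinking the eligible set lowers latency but also lowers the entropy of path selection, which is the tradeoff the abstract describes.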

[Go to top]

LAP: Lightweight Anonymity and Privacy (PDF)
by Hsu-Chun Hsiao, Tiffany Hyun-Jin Kim, Adrian Perrig, Akira Yamada, Sam Nelson, Marco Gruteser, and Wei Ming.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Popular anonymous communication systems often require sending packets through a sequence of relays on dilated paths for strong anonymity protection. As a result, increased end-to-end latency renders such systems inadequate for the majority of Internet users who seek an intermediate level of anonymity protection while using latency-sensitive applications, such as Web applications. This paper serves to bridge the gap between communication systems that provide strong anonymity protection but with intolerable latency and non-anonymous communication systems by considering a new design space for the setting. More specifically, we explore how to achieve near-optimal latency while achieving an intermediate level of anonymity with a weaker yet practical adversary model (i.e., protecting an end-host's identity and location from servers) such that users can choose between the level of anonymity and usability. We propose Lightweight Anonymity and Privacy (LAP), an efficient network-based solution featuring lightweight path establishment and stateless communication, by concealing an end-host's topological location to enhance anonymity against remote tracking. To show practicality, we demonstrate that LAP can work on top of the current Internet and proposed future Internet architectures

[Go to top]

Gossip-based counting in dynamic networks (PDF)
by Ruud van de Bovenkamp, Fernando Kuipers, and Piet Van Mieghem.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Efficient and Secure Decentralized Network Size Estimation (PDF)
by Nathan S Evans, Bartlomiej Polot, and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The size of a Peer-to-Peer (P2P) network is an important parameter for performance tuning of P2P routing algorithms. This paper introduces and evaluates a new efficient method for participants in an unstructured P2P network to establish the size of the overall network. The presented method is highly efficient, propagating information about the current size of the network to all participants using O(|E|) operations where |E| is the number of edges in the network. Afterwards, all nodes have the same network size estimate, which can be made arbitrarily accurate by averaging results from multiple rounds of the protocol. Security measures are included which make it prohibitively expensive for a typical active participating adversary to significantly manipulate the estimates. This paper includes experimental results that demonstrate the viability, efficiency and accuracy of the protocol
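The idea of a per-round estimate that sharpens when averaged over rounds can be sketched like this: with n uniformly random peer identifiers, the best match to a random round key agrees in about log2(n) leading bits. This Python sketch loosely illustrates that statistical principle; it is not the protocol's actual messages, security measures, or constants.

```python
import math, random

def round_estimate(peer_ids, round_key, bits=64):
    """One round's network size estimate from proximity to a round key.
    2**best_match approximates the number of peers, since the closest of
    n random IDs matches a random key in about log2(n) leading bits."""
    def leading_match(a, b):
        x = a ^ b
        return bits - x.bit_length() if x else bits
    best = max(leading_match(p, round_key) for p in peer_ids)
    return 2 ** best

def averaged_estimate(per_round):
    """Combine rounds with a geometric mean (averaging in the exponent),
    which smooths the heavy-tailed per-round noise."""
    return 2 ** (sum(math.log2(e) for e in per_round) / len(per_round))
```

More rounds shrink the variance of the combined estimate, matching the abstract's claim that accuracy can be made arbitrarily good by averaging.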

[Go to top]

Efficient and Secure Decentralized Network Size Estimation (PDF)
by Nathan S Evans, Bartlomiej Polot, and Christian Grothoff.
In unknown, May 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The size of a Peer-to-Peer (P2P) network is an important parameter for performance tuning of P2P routing algorithms. This paper introduces and evaluates a new efficient method for participants in an unstructured P2P network to establish the size of the overall network. The presented method is highly efficient, propagating information about the current size of the network to all participants using O(|E|) operations where |E| is the number of edges in the network. Afterwards, all nodes have the same network size estimate, which can be made arbitrarily accurate by averaging results from multiple rounds of the protocol. Security measures are included which make it prohibitively expensive for a typical active participating adversary to significantly manipulate the estimates. This paper includes experimental results that demonstrate the viability, efficiency and accuracy of the protocol

[Go to top]

Koi: A Location-Privacy Platform for Smartphone Apps (PDF)
by Saikat Guha, Mudit Jain, and Venkata Padmanabhan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With mobile phones becoming first-class citizens in the online world, the rich location data they bring to the table is set to revolutionize all aspects of online life including content delivery, recommendation systems, and advertising. However, user-tracking is a concern with such location-based services, not only because location data can be linked uniquely to individuals, but because the low-level nature of current location APIs and the resulting dependence on the cloud to synthesize useful representations virtually guarantees such tracking. In this paper, we propose privacy-preserving location-based matching as a fundamental platform primitive and as an alternative to exposing low-level, latitude-longitude (lat-long) coordinates to applications. Applications set rich location-based triggers and have these be fired based on location updates either from the local device or from a remote device (e.g., a friend's phone). Our Koi platform, comprising a privacy-preserving matching service in the cloud and a phone-based agent, realizes this primitive across multiple phone and browser platforms. By masking low-level lat-long information from applications, Koi not only avoids leaking privacy-sensitive information, it also eases the task of programmers by providing a higher-level abstraction that is easier for applications to build upon. Koi's privacy-preserving protocol prevents the cloud service from tracking users. We verify the non-tracking properties of Koi using a theorem prover, illustrate how privacy guarantees can easily be added to a wide range of location-based applications, and show that our public deployment is performant, being able to perform 12K matches per second on a single core

[Go to top]

A Survey of Monte Carlo Tree Search Methods (PDF)
by Cameron Browne, Edward Powley, Daniel Whitehouse, Simon Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton.
In IEEE Transactions on Computational Intelligence and AI in Games 4, March 2012, pages 1-43. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work

[Go to top]

A Critical Look at Decentralized Personal Data Architectures (PDF)
by Arvind Narayanan, Vincent Toubiana, Solon Barocas, Helen Nissenbaum, and Dan Boneh.
In CoRR abs/1202.4503, February 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While the Internet was conceived as a decentralized network, the most widely used web applications today tend toward centralization. Control increasingly rests with centralized service providers who, as a consequence, have also amassed unprecedented amounts of data about the behaviors and personalities of individuals. Developers, regulators, and consumer advocates have looked to alternative decentralized architectures as the natural response to threats posed by these centralized services. The result has been a great variety of solutions that include personal data stores (PDS), infomediaries, Vendor Relationship Management (VRM) systems, and federated and distributed social networks. And yet, for all these efforts, decentralized personal data architectures have seen little adoption. This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches. We start with a historical discussion of the development of various categories of decentralized personal data architectures. Then we survey the main ideas to illustrate the common themes among these efforts. We tease apart the design characteristics of these systems from the social values that they (are intended to) promote. We use this understanding to point out numerous drawbacks of the decentralization paradigm, some inherent and others incidental. We end with recommendations for designers of these systems for working towards goals that are achievable, but perhaps more limited in scope and ambition

[Go to top]

Congestion-aware Path Selection for Tor (PDF)
by Tao Wang, Kevin Bauer, Clara Forero, and Ian Goldberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor, an anonymity network formed by volunteer nodes, uses the estimated bandwidth of the nodes as a central feature of its path selection algorithm. The current load on nodes is not considered in this algorithm, however, and we observe that some nodes persist in being under-utilized or congested. This can degrade the network's performance, discourage Tor adoption, and consequently reduce the size of Tor's anonymity set. In an effort to reduce congestion and improve load balancing, we propose a congestion-aware path selection algorithm. Using latency as an indicator of congestion, clients use opportunistic and lightweight active measurements to evaluate the congestion state of nodes, and reject nodes that appear congested. Through experiments conducted on the live Tor network, we verify our hypothesis that clients can infer congestion using latency and show that congestion-aware path selection can improve performance
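The latency-based congestion filtering can be sketched as follows. This is an illustrative Python sketch; the relay names and the threshold rule are assumptions, not the paper's exact criteria.

```python
def filter_congested(candidates, rtt_samples, threshold_factor=1.5):
    """Drop candidate relays whose recent latency suggests congestion.

    rtt_samples maps relay -> list of opportunistic RTT measurements (ms).
    A relay is treated as congested when its mean RTT exceeds
    threshold_factor times the median of the candidates' mean RTTs."""
    means = {r: sum(s) / len(s) for r, s in rtt_samples.items()}
    ordered = sorted(means[r] for r in candidates)
    median = ordered[len(ordered) // 2]
    return [r for r in candidates if means[r] <= threshold_factor * median]
```

A client would run this filter before path selection, rejecting congested relays while leaving the rest of the algorithm unchanged, which mirrors the paper's goal of a client-side-only change.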

[Go to top]

Theory and Practice of Bloom Filters for Distributed Systems (PDF)
by S. Tarkoma, C.E. Rothenberg, and E. Lagerspetz.
In Communications Surveys & Tutorials, IEEE 14, January 2012, pages 131-155. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many network solutions and overlay networks utilize probabilistic techniques to reduce information processing and networking costs. This survey article presents a number of frequently used and useful probabilistic techniques. Bloom filters and their variants are of prime importance, and they are heavily used in various distributed systems. This has been reflected in recent research and many new algorithms have been proposed for distributed systems that are either directly or indirectly based on Bloom filters. In this survey, we give an overview of the basic and advanced techniques, reviewing over 20 variants and discussing their application in distributed systems, in particular for caching, peer-to-peer systems, routing and forwarding, and measurement data summarization
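The basic structure the survey builds on can be sketched in a few lines of Python: a minimal Bloom filter with k salted hash functions over an m-bit array. The parameters are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over an m-bit array.
    Membership tests may yield false positives, never false negatives."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The variants the survey reviews (counting, compressed, spectral filters and so on) all start from this set-then-test bit-array core.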

[Go to top]

User Interests Driven Web Personalization Based on Multiple Social Networks (PDF)
by Yi Zeng, Ning Zhong, Xu Ren, and Yan Wang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

User related data indicate user interests in a certain environment. In the context of massive data from the Web, if an application wants to provide more personalized service (e.g. search) for users, an investigation on user interests is needed. User interests are usually distributed in different sources. In order to provide a more comprehensive understanding, user related data from multiple sources need to be integrated together for deeper analysis. Web based social networks have become typical platforms for extracting user interests. In addition, there are various types of interests from these social networks. In this paper, we provide an algorithmic framework for retrieving semantic data based on user interests from multiple sources (such as multiple social networking sites). We design several algorithms to deal with interest-based retrieval based on single and multiple types of interests. We utilize publication data from Semantic Web Dog Food (which can be considered as an academic collaboration based social network), and microblogging data from Twitter to validate our framework. The Active Academic Visit Recommendation Application (AAVRA) is developed as a concrete use case to show the potential effectiveness of the proposed framework for user interests driven Web personalization based on multiple social networks

[Go to top]

The state-of-the-art in personalized recommender systems for social networking (PDF)
by Xujuan Zhou, Yue Xu, Yuefeng Li, Audun Josang, and Clive Cox.
In Artificial Intelligence Review 37, 2012, pages 119-132. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With the explosion of Web 2.0 applications such as blogs, social and professional networks, and various other types of social media, the rich online information and various new sources of knowledge flood users and hence pose a great challenge in terms of information overload. It is critical to use intelligent agent software systems to assist users in finding the right information from an abundance of Web data. Recommender systems can help users deal with information overload problem efficiently by suggesting items (e.g., information and products) that match users' personal interests. The recommender technology has been successfully employed in many applications such as recommending films, music, books, etc. The purpose of this report is to give an overview of existing technologies for building personalized recommender systems in a social networking environment, and to propose a research direction for addressing user profiling and cold start problems by exploiting user-generated content newly available in Web 2.0

[Go to top]

Reproducible network experiments using container based emulation (PDF)
by N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown.
In Proc. CoNEXT, 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

The Privacy of the Analyst and the Power of the State
by Cynthia Dwork, Moni Naor, and Salil P. Vadhan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Personalization and privacy: a survey of privacy risks and remedies in personalization-based systems (PDF)
by Eran Toch, Yang Wang, and Lorrie Faith Cranor.
In User Modeling and User-Adapted Interaction 22, 2012, pages 203-220. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Personalization technologies offer powerful tools for enhancing the user experience in a wide variety of systems, but at the same time raise new privacy concerns. For example, systems that personalize advertisements according to the physical location of the user or according to the user's friends' search history, introduce new privacy risks that may discourage wide adoption of personalization technologies. This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-based personalization. We survey user attitudes towards privacy and personalization, as well as technologies that can help reduce privacy risks. We conclude with a discussion that frames risks and technical solutions in the intersection between personalization and privacy, as well as areas for further investigation. This framework can help designers and researchers contextualize privacy challenges and solutions when designing personalization systems

[Go to top]

Octopus: A Secure and Anonymous DHT Lookup (PDF)
by Qiyan Wang and Nikita Borisov.
In CoRR abs/1203.2668, 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

NTALG–TCP NAT traversal with application-level gateways (PDF)
by M. Wander, S. Holzapfel, A. Wacker, and T. Weis.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Consumer computers or home communication devices are usually connected to the Internet via a Network Address Translation (NAT) router. This imposes restrictions for networking applications that require inbound connections. Existing solutions for NAT traversal can remedy the restrictions, but there is still a fraction of home users who lack support for it, especially when it comes to TCP. We present a framework for traversing NAT routers by exploiting their built-in FTP and IRC application-level gateways (ALG) for arbitrary TCP-based applications. While this does not work in every scenario, it significantly improves the success chance without requiring any user interaction at all. To demonstrate the framework, we show a small test setup with laptop computers and home NAT routers

[Go to top]

ModelNet-TE: An emulation tool for the study of P2P and traffic engineering interaction dynamics (PDF)
by D. Rossi, P. Veglia, M. Sammarco, and F. Larroca.
In Peer-to-Peer Networking and Applications, 2012, pages 1-19. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Lower Bounds in Differential Privacy (PDF)
by Anindya De.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper is about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. A line of study initiated in [7] examines the amount of distortion needed to prevent privacy violations of various kinds. The results in the literature vary according to several parameters, including the size of the database, the size of the universe from which data elements are drawn, the amount of privacy desired, and for the purposes of the current work, the arity of the query. In this paper we sharpen and unify these bounds. Our foremost result combines the techniques of Hardt and Talwar [11] and McGregor et al. [13] to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries. (A query has low sensitivity if the data of a single individual has small effect on the answer.) Several structural results follow as immediate corollaries: We separate so-called counting queries from arbitrary low-sensitivity queries, proving the latter requires more noise, or distortion, than does the former; We separate (ε,0)-differential privacy from its well-studied relaxation (ε,δ)-differential privacy, even when δ = 2^(-o(n)) is negligible in the size n of the database, proving the latter requires less distortion than the former; We demonstrate that (ε,δ)-differential privacy is much weaker than (ε,0)-differential privacy in terms of mutual information of the transcript of the mechanism with the database, even when δ = 2^(-o(n)) is negligible in the size n of the database. We also simplify the lower bounds on noise for counting queries in [11] and also make them unconditional. 
Further, we use a characterization of (ε,δ)-differential privacy from [13] to obtain lower bounds on the distortion needed to ensure (ε,δ)-differential privacy for ε, δ > 0. We next revisit the LP decoding argument of [10] and combine it with a recent result of Rudelson [15] to improve on a result of Kasiviswanathan et al. [12] on noise lower bounds for privately releasing ℓ-way marginals
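The additive-noise mechanisms these lower bounds target can be illustrated by the classic Laplace mechanism: add Laplace noise with scale sensitivity/ε to a query answer. This is a standard textbook sketch in Python, not code from the paper.

```python
import math, random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random.Random(0)):
    """Classic additive-noise mechanism for ε-differential privacy:
    perturb the answer with Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverting the CDF of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_answer + noise
```

The distortion is the magnitude of this noise; the paper's results lower-bound how small it can be made for various query classes and (ε,δ) regimes.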

[Go to top]

How to Build a Better Testbed: Lessons from a Decade of Network Experiments on Emulab (PDF)
by Fabien Hermenier and Robert Ricci.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Differential Privacy with Imperfect Randomness (PDF)
by Yevgeniy Dodis, Adriana López-Alt, Ilya Mironov, and Salil Vadhan.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC'07) showed that if a source of randomness R is good enough to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an extractable source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific non-extractable sources of randomness, such as the γ-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias γ < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC'06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary low sensitivity functions that works even with randomness coming from a γ-Santha-Vazirani source, for any γ < 1. This provides a somewhat surprising separation between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional additive-noise mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (non-trivial) SV-robust mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism

[Go to top]

CRISP: Collusion-resistant Incentive-compatible Routing and Forwarding in Opportunistic Networks (PDF)
by Umair Sadiq, Mohan Kumar, and Matthew Wright.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

BLIP: Non-interactive Differentially-Private Similarity Computation on Bloom filters (PDF)
by Mohammad Alaggan, Sébastien Gambs, and Anne-Marie Kermarrec.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we consider the scenario in which the profile of a user is represented in a compact way, as a Bloom filter, and the main objective is to privately compute in a distributed manner the similarity between users by relying only on the Bloom filter representation. In particular, we aim at providing a high level of privacy with respect to the profile even if a potentially unbounded number of similarity computations take place, thus calling for a non-interactive mechanism. To achieve this, we propose a novel non-interactive differentially private mechanism called BLIP (for BLoom-and-flIP) for randomizing Bloom filters. This approach relies on a bit flipping mechanism and offers high privacy guarantees while maintaining a small communication cost. Another advantage of this non-interactive mechanism is that similarity computation can take place even when the user is offline, which is impossible to achieve with interactive mechanisms. Another of our contributions is the definition of a probabilistic inference attack, called the Profile Reconstruction attack, that can be used to reconstruct the profile of an individual from his Bloom filter representation. More specifically, we provide an analysis of the protection offered by BLIP against this profile reconstruction attack by deriving an upper and lower bound for the required value of the differential privacy parameter
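The bit-flipping step can be sketched as per-bit randomized response over the Bloom filter. This Python sketch uses flip probability 1/(1 + e^ε) as a simplification; BLIP's exact calibration also depends on the number of hash functions in the filter.

```python
import math, random

def blip_flip(bloom_bits, epsilon, rng=random.Random(7)):
    """Randomize a Bloom filter bit vector by independent bit flipping.

    Each bit is flipped with probability p = 1 / (1 + e^epsilon), so the
    published vector is noisy but similarity between two users' flipped
    filters can still be estimated without further interaction."""
    p = 1.0 / (1.0 + math.exp(epsilon))
    return [b ^ (1 if rng.random() < p else 0) for b in bloom_bits]
```

Because the noise is applied once, before publication, similarity computations can happen offline and arbitrarily often without further privacy loss, which is the non-interactive property the abstract emphasizes.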

[Go to top]

AutoNetkit: simplifying large scale, open-source network experimentation (PDF)
by Simon Knight, Askar Jaboldinov, Olaf Maennel, Iain Phillips, and Matthew Roughan.
In SIGCOMM Comput. Commun. Rev 42(4), 2012, pages 97-98. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

2011

PEREA: Practical TTP-free revocation of repeatedly misbehaving anonymous users (PDF)
by Man Ho Au, Patrick P. Tsang, and Apu Kapadia.
In ACM Transactions on Information and System Security (ACM TISSEC) 14, December 2011, pages 29:1-29:34. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several anonymous authentication schemes allow servers to revoke a misbehaving user's future accesses. Traditionally, these schemes have relied on powerful Trusted Third Parties (TTPs) capable of deanonymizing (or linking) users' connections. Such TTPs are undesirable because users' anonymity is not guaranteed, and users must trust them to judge 'misbehavior' fairly. Recent schemes such as Blacklistable Anonymous Credentials (BLAC) and Enhanced Privacy ID (EPID) support privacy-enhanced revocation: servers can revoke misbehaving users without a TTP's involvement, and without learning the revoked users' identities. In BLAC and EPID, however, the computation required for authentication at the server is linear in the size (L) of the revocation list, which is impractical as the size approaches thousands of entries. We propose PEREA, a new anonymous authentication scheme for which this bottleneck of computation is independent of the size of the revocation list. Instead, the time complexity of authentication is linear in the size of a revocation window K ≪ L, the number of subsequent authentications before which a user's misbehavior must be recognized if the user is to be revoked. We extend PEREA to support more complex revocation policies that take the severity of misbehaviors into account. Users can authenticate anonymously if their naughtiness, i.e., the sum of the severities of their blacklisted misbehaviors, is below a certain naughtiness threshold. We call our extension PEREA-Naughtiness. We prove the security of our constructions, and validate their efficiency as compared to BLAC both analytically and quantitatively

[Go to top]

Exposing Invisible Timing-based Traffic Watermarks with BACKLIT (PDF)
by Xiapu Luo, Peng Zhou, Junjie Zhang, Roberto Perdisci, Wenke Lee, and Rocky K. C. Chang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traffic watermarking is an important element in many network security and privacy applications, such as tracing botnet C&C communications and deanonymizing peer-to-peer VoIP calls. The state-of-the-art traffic watermarking schemes are usually based on packet timing information and they are notoriously difficult to detect. In this paper, we show for the first time that even the most sophisticated timing-based watermarking schemes (e.g., RAINBOW and SWIRL) are not invisible by proposing a new detection system called BACKLIT. BACKLIT is designed according to the observation that any practical timing-based traffic watermark will cause noticeable alterations in the intrinsic timing features typical of TCP flows. We propose five metrics that are sufficient for detecting four state-of-the-art traffic watermarks for bulk transfer and interactive traffic. BACKLIT can be easily deployed in stepping stones and anonymity networks (e.g., Tor), because it does not rely on strong assumptions and can be realized in an active or passive mode. We have conducted extensive experiments to evaluate BACKLIT's detection performance using the PlanetLab platform. The results show that BACKLIT can detect watermarked network flows with high accuracy and few false positives

[Go to top]

Exploring the Potential Benefits of Expanded Rate Limiting in Tor: Slow and Steady Wins the Race With Tortoise (PDF)
by W. Brad Moore, Chris Wacek, and Micah Sherr.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is a volunteer-operated network of application-layer relays that enables users to communicate privately and anonymously. Unfortunately, Tor often exhibits poor performance due to congestion caused by the unbalanced ratio of clients to available relays, as well as a disproportionately high consumption of network capacity by a small fraction of filesharing users. This paper argues the very counterintuitive notion that slowing down traffic on Tor will increase the bandwidth capacity of the network and consequently improve the experience of interactive web users. We introduce Tortoise, a system for rate limiting Tor at its ingress points. We demonstrate that Tortoise incurs little penalty for interactive web users, while significantly decreasing the throughput for filesharers. Our techniques provide incentives to filesharers to configure their Tor clients to also relay traffic, which in turn improves the network's overall performance. We present large-scale emulation results that indicate that interactive users will achieve a significant speedup if even a small fraction of clients opt to run relays
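The ingress rate-limiting idea can be illustrated with a standard token bucket. This is a sketch of the general mechanism, not Tortoise's actual implementation, and the rate and burst parameters are invented for the example:

```python
class TokenBucket:
    """Illustrative ingress rate limiter: short interactive bursts fit
    within the bucket, while sustained bulk transfers are throttled to
    the refill rate -- the asymmetry Tortoise exploits."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # refill rate (bytes/second)
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bps=50_000, burst_bytes=100_000)
print(bucket.allow(80_000, now=0.0))   # web-page burst fits -> True
print(bucket.allow(80_000, now=0.1))   # back-to-back bulk chunk -> False
```

The second request is refused because only 5,000 bytes of tokens accrue in 0.1 s, which is exactly the behavior that penalizes filesharers but leaves occasional interactive fetches untouched.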

[Go to top]

Uncovering social network sybils in the wild (PDF)
by Zhi Yang, Christo Wilson, Xiao Wang, Tingting Gao, Ben Y. Zhao, and Yafei Dai.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Sybil accounts are fake identities created to unfairly increase the power or resources of a single user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but have not been able to perform large scale measurements to detect them or measure their activities. In this paper, we describe our efforts to detect, characterize and understand Sybil account activity in the Renren online social network (OSN). We use ground truth provided by Renren Inc. to build measurement based Sybil account detectors, and deploy them on Renren to detect over 100,000 Sybil accounts. We study these Sybil accounts, as well as an additional 560,000 Sybil accounts caught by Renren, and analyze their link creation behavior. Most interestingly, we find that contrary to prior conjecture, Sybil accounts in OSNs do not form tight-knit communities. Instead, they integrate into the social graph just like normal users. Using link creation timestamps, we verify that the large majority of links between Sybil accounts are created accidentally, unbeknownst to the attacker. Overall, only a very small portion of Sybil accounts are connected to other Sybils with social links. Our study shows that existing Sybil defenses are unlikely to succeed in today's OSNs, and we must design new techniques to effectively detect and defend against Sybil attacks

[Go to top]

Website Fingerprinting in Onion Routing Based Anonymization Networks (PDF)
by Andriy Panchenko, Lukas Niessen, Andreas Zinnen, and Thomas Engel.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Low-latency anonymization networks such as Tor and JAP claim to hide the recipient and the content of communications from a local observer, i.e., an entity that can eavesdrop the traffic between the user and the first anonymization node. Especially users in totalitarian regimes strongly depend on such networks to freely communicate. For these people, anonymity is particularly important and an analysis of the anonymization methods against various attacks is necessary to ensure adequate protection. In this paper we show that anonymity in Tor and JAP is not as strong as expected so far and cannot resist website fingerprinting attacks under certain circumstances. We first define features for website fingerprinting solely based on volume, time, and direction of the traffic. As a result, the subsequent classification becomes much easier. We apply support vector machines with the introduced features. We are able to improve recognition results of existing works on a given state-of-the-art dataset in Tor from 3% to 55% and in JAP from 20% to 80%. The datasets assume a closed world with 775 websites only. In a next step, we transfer our findings to a more complex and realistic open-world scenario, i.e., recognition of several websites in a set of thousands of random unknown websites. To the best of our knowledge, this work is the first successful attack in the open-world scenario. We achieve a surprisingly high true positive rate of up to 73% for a false positive rate of 0.05%. Finally, we show preliminary results of a proof-of-concept implementation that applies camouflage as a countermeasure to hamper the fingerprinting attack. For JAP, the detection rate decreases from 80% to 4% and for Tor it drops from 55% to about 3%
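The volume/direction side of such fingerprint features can be sketched as below. These particular features and the packet encoding are illustrative; the paper feeds a richer feature set into a support vector machine, which is omitted here:

```python
def trace_features(packets):
    """Extract simple volume/direction features from a packet trace.
    Each packet is (size_bytes, direction), with direction +1 for
    client-to-server and -1 for server-to-client traffic."""
    total_in = sum(s for s, d in packets if d < 0)
    total_out = sum(s for s, d in packets if d > 0)
    n_in = sum(1 for _, d in packets if d < 0)
    return {
        "bytes_in": total_in,
        "bytes_out": total_out,
        "pkts_in": n_in,
        "pkts_out": len(packets) - n_in,
        # Pages with many large objects skew heavily inbound.
        "in_out_ratio": total_in / max(total_out, 1),
    }

# A toy trace: two small requests, two full-size response packets.
trace = [(500, +1), (1500, -1), (1500, -1), (200, +1)]
print(trace_features(trace)["bytes_in"])  # 3000
```

Feature vectors of this kind, one per page load, are what the classifier compares against its fingerprint database; direction and size survive encryption, which is why padding-free anonymizers leak them.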

[Go to top]

Trust-based Anonymous Communication: Adversary Models and Routing Algorithms (PDF)
by Aaron Johnson, Paul Syverson, Roger Dingledine, and Nick Mathewson.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a novel model of routing security that incorporates the ordinarily overlooked variations in trust that users have for different parts of the network. We focus on anonymous communication, and in particular onion routing, although we expect the approach to apply more broadly. This paper provides two main contributions. First, we present a novel model to consider the various security concerns for route selection in anonymity networks when users vary their trust over parts of the network. Second, to show the usefulness of our model, we present as an example a new algorithm to select paths in onion routing. We analyze its effectiveness against deanonymization and other information leaks, and particularly how it fares in our model versus existing algorithms, which do not consider trust. In contrast to those, we find that our trust-based routing strategy can protect anonymity against an adversary capable of attacking a significant fraction of the network

[Go to top]

Stealthy Traffic Analysis of Low-Latency Anonymous Communication Using Throughput Fingerprinting (PDF)
by Prateek Mittal, Ahmed Khurshid, Joshua Juen, Matthew Caesar, and Nikita Borisov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity systems such as Tor aim to enable users to communicate in a manner that is untraceable by adversaries that control a small number of machines. To provide efficient service to users, these anonymity systems make full use of forwarding capacity when sending traffic between intermediate relays. In this paper, we show that doing this leaks information about the set of Tor relays in a circuit (path). We present attacks that, with high confidence and based solely on throughput information, can (a) reduce the attacker's uncertainty about the bottleneck relay of any Tor circuit whose throughput can be observed, (b) exactly identify the guard relay(s) of a Tor user when circuit throughput can be observed over multiple connections, and (c) identify whether two concurrent TCP connections belong to the same Tor user, breaking unlinkability. Our attacks are stealthy, and cannot be readily detected by a user or by Tor relays. We validate our attacks using experiments over the live Tor network. We find that the attacker can substantially reduce the entropy of a bottleneck relay distribution of a Tor circuit whose throughput can be observed: the entropy gets reduced by a factor of 2 in the median case. Such information leaks from a single Tor circuit can be combined over multiple connections to exactly identify a user's guard relay(s). Finally, we are also able to link two connections from the same initiator with a crossover error rate of less than 1.5% in under 5 minutes. Our attacks are also more accurate and require fewer resources than previous attacks on Tor
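The linkability test at the heart of such attacks boils down to correlating throughput time series. The sketch below uses plain Pearson correlation on invented sample data; the paper's statistical machinery and thresholds are more involved:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series,
    e.g. per-second throughput samples of two observed flows."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two streams squeezed through the same bottleneck relay rise and
# fall together; an unrelated stream does not (values are made up).
shared_a = [100, 80, 120, 60, 110]
shared_b = [ 98, 82, 118, 63, 107]
other    = [ 50, 90,  40, 95,  45]
print(pearson(shared_a, shared_b) > 0.9)  # True
print(pearson(shared_a, other) < 0.0)     # True
```

An observer who can sample throughput at both ends thus gets a linkability signal without touching packet contents, which is what makes the attack hard to detect.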

[Go to top]

Practical Privacy-Preserving Multiparty Linear Programming Based on Problem Transformation (PDF)
by Jannik Dreier and Florian Kerschbaum.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Cryptographic solutions to privacy-preserving multiparty linear programming are slow. This makes them unsuitable for many economically important applications, such as supply chain optimization, whose size exceeds their practically feasible input range. In this paper we present a privacy-preserving transformation that allows secure outsourcing of the linear program computation in an efficient manner. We evaluate security by quantifying the leakage about the input after the transformation and present implementation results. Using this transformation, we can mostly replace the costly cryptographic operations and securely solve problems several orders of magnitude larger
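One transformation-based idea of this kind can be sketched concretely: substitute x = M y for a secret positive monomial matrix M (a scaled permutation), hand out the disguised program, and map the solution back. This is a minimal sketch of the substitution step only, not the paper's full scheme, and the toy numbers are invented:

```python
def disguise_lp(A, c, perm, scale):
    """Hide an LP  min c^T x  s.t.  A x <= b, x >= 0  by the secret
    substitution x = M y, where M is a positive monomial matrix.
    The outsourced program is  min (M^T c)^T y  s.t.  (A M) y <= b,
    y >= 0; feasibility and the objective value are preserved."""
    n = len(c)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        M[perm[j]][j] = scale[j]      # column j scaled, sent to row perm[j]
    A_p = [[sum(A[i][k] * M[k][j] for k in range(n)) for j in range(n)]
           for i in range(len(A))]
    c_p = [sum(M[k][j] * c[k] for k in range(n)) for j in range(n)]
    return M, A_p, c_p

def recover(M, y):
    """Map the solver's answer y back to the original variables x = M y."""
    n = len(y)
    return [sum(M[i][j] * y[j] for j in range(n)) for i in range(n)]

# Toy 2-variable problem with a made-up disguise.
A, c = [[1.0, 2.0]], [3.0, 1.0]
M, A_p, c_p = disguise_lp(A, c, perm=[1, 0], scale=[2.0, 0.5])
y = [1.0, 1.0]                 # any point the outsourced solver returns
x = recover(M, y)              # A x == A' y and c^T x == c'^T y
```

Because only M is secret, the expensive part (solving the LP) runs entirely on disguised data, which is where the orders-of-magnitude speedup over fully cryptographic protocols comes from.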

[Go to top]

FAUST: Efficient, TTP-Free Abuse Prevention by Anonymous Whitelisting (PDF)
by Peter Lofgren and Nicholas J. Hopper.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce Faust, a solution to the anonymous blacklisting problem: allow an anonymous user to prove that she is authorized to access an online service such that if the user misbehaves, she retains her anonymity but will be unable to authenticate in future sessions. Faust uses no trusted third parties and is one to two orders of magnitude more efficient than previous schemes without trusted third parties. The key idea behind Faust is to eliminate the explicit blacklist used in all previous approaches, and rely instead on an implicit whitelist, based on blinded authentication tokens
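The implicit-whitelist idea can be modeled without any cryptography: the server never keeps a blacklist, it simply withholds the next single-use token from a misbehaving user. This toy simulation shows only the accounting; real Faust blinds the tokens so the server cannot link a user's sessions:

```python
class WhitelistServer:
    """Toy model of abuse prevention by implicit whitelisting: each
    good session earns a fresh single-use token, so a misbehaving
    user silently loses the ability to authenticate again."""

    def __init__(self):
        self.valid = set()
        self._counter = 0

    def _issue(self):
        self._counter += 1
        token = f"tok{self._counter}"
        self.valid.add(token)
        return token

    def enroll(self):
        return self._issue()

    def authenticate(self, token, behaved_well):
        if token not in self.valid:
            return None              # not on the (implicit) whitelist
        self.valid.remove(token)     # tokens are single-use
        return self._issue() if behaved_well else None

server = WhitelistServer()
t = server.enroll()
t = server.authenticate(t, behaved_well=True)   # earns a new token
t = server.authenticate(t, behaved_well=False)  # misbehaved: none issued
print(t)  # None -> revoked without any blacklist lookup
```

Authentication cost is a set lookup regardless of how many users have ever been revoked, which is the efficiency contrast with blacklist-scanning schemes like BLAC.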

[Go to top]

Cirripede: Circumvention Infrastructure using Router Redirection with Plausible Deniability (PDF)
by Amir Houmansadr, Giang T. K. Nguyen, Matthew Caesar, and Nikita Borisov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many users face surveillance of their Internet communications and a significant fraction suffer from outright blocking of certain destinations. Anonymous communication systems allow users to conceal the destinations they communicate with, but do not hide the fact that the users are using them. The mere use of such systems may invite suspicion, or access to them may be blocked. We therefore propose Cirripede, a system that can be used for unobservable communication with Internet destinations. Cirripede is designed to be deployed by ISPs; it intercepts connections from clients to innocent-looking destinations and redirects them to the true destination requested by the client. The communication is encoded in a way that is indistinguishable from normal communications to anyone without the master secret key, while public-key cryptography is used to eliminate the need for any secret information that must be shared with Cirripede users. Cirripede is designed to work scalably with routers that handle large volumes of traffic while imposing minimal overhead on ISPs and not disrupting existing traffic. This allows Cirripede proxies to be strategically deployed at central locations, making access to Cirripede very difficult to block. We built a proof-of-concept implementation of Cirripede and performed a testbed evaluation of its performance properties

[Go to top]

BridgeSPA: Improving Tor Bridges with Single Packet Authorization (PDF)
by Rob Smits, Divam Jain, Sarah Pidcock, Ian Goldberg, and Urs Hengartner.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is a network designed for low-latency anonymous communications. Tor clients form circuits through relays that are listed in a public directory, and then relay their encrypted traffic through these circuits. This indirection makes it difficult for a local adversary to determine with whom a particular Tor user is communicating. In response, some local adversaries restrict access to Tor by blocking each of the publicly listed relays. To deal with such an adversary, Tor uses bridges, which are unlisted relays that can be used as alternative entry points into the Tor network. Unfortunately, issues with Tor's bridge implementation make it easy to discover large numbers of bridges. An adversary that hoards this information may use it to determine when each bridge is online over time. If a bridge operator also browses with Tor on the same machine, this information may be sufficient to deanonymize him. We present BridgeSPA as a method to mitigate this issue. A client using BridgeSPA relies on innocuous single packet authorization (SPA) to present a time-limited key to a bridge. Before this authorization takes place, the bridge will not reveal whether it is online. We have implemented BridgeSPA as a working proof-of-concept, which is available under an open-source licence

[Go to top]

X-Vine: Secure and Pseudonymous Routing Using Social Networks (PDF)
by Prateek Mittal, Matthew Caesar, and Nikita Borisov.
In Computing Research Repository (CoRR) abs/1109.0971, September 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables suffer from several security and privacy vulnerabilities, including the problem of Sybil attacks. Existing social network-based solutions to mitigate the Sybil attacks in DHT routing have a high state requirement and do not provide an adequate level of privacy. For instance, such techniques require a user to reveal their social network contacts. We design X-Vine, a protection mechanism for distributed hash tables that operates entirely by communicating over social network links. As with traditional peer-to-peer systems, X-Vine provides robustness, scalability, and a platform for innovation. The use of social network links for communication helps protect participant privacy and adds a new dimension of trust absent from previous designs. X-Vine is resilient to denial of service via Sybil attacks, and in fact is the first Sybil defense that requires only a logarithmic amount of state per node, making it suitable for large-scale and dynamic settings. X-Vine also helps protect the privacy of users' social network contacts and keeps their IP addresses hidden from those outside of their social circle, providing a basis for pseudonymous communication. We first evaluate our design with analysis and simulations, using several real world large-scale social networking topologies. We show that the constraints of X-Vine allow the insertion of only a logarithmic number of Sybil identities per attack edge; we show this mitigates the impact of malicious attacks while not affecting the performance of honest nodes. Moreover, our algorithms are efficient, maintain low stretch, and avoid hot spots in the network. We validate our design with a PlanetLab implementation and a Facebook plugin

[Go to top]

R5N : Randomized Recursive Routing for Restricted-Route Networks (PDF)
by Nathan S Evans and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes a new secure DHT routing algorithm for open, decentralized P2P networks operating in a restricted-route environment with malicious participants. We have implemented our routing algorithm and have evaluated its performance under various topologies and in the presence of malicious peers. For small-world topologies, our algorithm provides significantly better performance when compared to existing methods. In more densely connected topologies, our performance is better than or on par with other designs

[Go to top]

Performance Regression Monitoring with Gauger
by Bartlomiej Polot and Christian Grothoff.
In Linux Journal (209), September 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

High-speed high-security signatures (PDF)
by Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
In Journal of Cryptographic Engineering 2, September 2011, pages 77-89. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Telex: Anticensorship in the Network Infrastructure (PDF)
by Eric Wustrow, Scott Wolchok, Ian Goldberg, and J. Alex Halderman.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present Telex, a new approach to resisting state-level Internet censorship. Rather than attempting to win the cat-and-mouse game of finding open proxies, we leverage censors' unwillingness to completely block day-to-day Internet access. In effect, Telex converts innocuous, unblocked websites into proxies, without their explicit collaboration. We envision that friendly ISPs would deploy Telex stations on paths between censors' networks and popular, uncensored Internet destinations. Telex stations would monitor seemingly innocuous flows for a special tag and transparently divert them to a forbidden website or service instead. We propose a new cryptographic scheme based on elliptic curves for tagging TLS handshakes such that the tag is visible to a Telex station but not to a censor. In addition, we use our tagging scheme to build a protocol that allows clients to connect to Telex stations while resisting both passive and active attacks. We also present a proof-of-concept implementation that demonstrates the feasibility of our system

[Go to top]

PIR-Tor: Scalable Anonymous Communication Using Private Information Retrieval (PDF)
by Prateek Mittal, Femi Olumofin, Carmela Troncoso, Nikita Borisov, and Ian Goldberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing anonymous communication systems like Tor do not scale well as they require all users to maintain up-to-date information about all available Tor relays in the system. Current proposals for scaling anonymous communication advocate a peer-to-peer (P2P) approach. While the P2P paradigm scales to millions of nodes, it provides new opportunities to compromise anonymity. In this paper, we step away from the P2P paradigm and advocate a client-server approach to scalable anonymity. We propose PIR-Tor, an architecture for the Tor network in which users obtain information about only a few onion routers using private information retrieval techniques. Obtaining information about only a few onion routers is the key to the scalability of our approach, while the use of private information retrieval techniques helps preserve client anonymity. The security of our architecture depends on the security of PIR schemes which are well understood and relatively easy to analyze, as opposed to peer-to-peer designs that require analyzing extremely complex and dynamic systems. In particular, we demonstrate that reasonable parameters of our architecture provide equivalent security to that of the Tor network. Moreover, our experimental results show that the overhead of PIR-Tor is manageable even when the Tor network scales by two orders of magnitude
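The primitive underlying this design can be illustrated with the textbook two-server XOR-based PIR scheme: each server sees only a uniformly random query vector, so neither learns which record the client wants. PIR-Tor itself uses full-fledged PIR protocols; this is just the core idea, with a made-up four-record database standing in for relay descriptors:

```python
import secrets

def pir_query(n, index):
    """Build queries for two non-colluding servers: a random subset
    mask and the same mask with the desired bit flipped. Each query
    alone is uniformly random and reveals nothing about `index`."""
    mask = [secrets.randbelow(2) for _ in range(n)]
    q_other = mask[:]
    q_other[index] ^= 1          # the two queries differ only at index
    return mask, q_other

def pir_answer(db, query):
    """Server-side: XOR together the records selected by the query."""
    ans = 0
    for rec, bit in zip(db, query):
        if bit:
            ans ^= rec
    return ans

db = [0x11, 0x22, 0x33, 0x44]    # stand-in for relay descriptors
q1, q2 = pir_query(len(db), index=2)
record = pir_answer(db, q1) ^ pir_answer(db, q2)
print(hex(record))  # 0x33
```

XORing the two answers cancels every record except the one where the queries differ, so the client recovers exactly `db[2]` while each server's view stays independent of the target.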

[Go to top]

Methods for Secure Decentralized Routing in Open Networks (PDF)
by Nathan S Evans.
Ph.D. thesis, Technische Universität München, August 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The contribution of this thesis is the study and improvement of secure, decentralized, robust routing algorithms for open networks including ad-hoc networks and peer-to-peer (P2P) overlay networks. The main goals for our secure routing algorithm are openness, efficiency, scalability and resilience to various types of attacks. Common P2P routing algorithms trade-off decentralization for security; for instance by choosing whether or not to require a centralized authority to allow peers to join the network. Other algorithms trade scalability for security, for example employing random search or flooding to prevent certain types of attacks. Our design attempts to meet our security goals in an open system, while limiting the performance penalties incurred. The first step we took towards designing our routing algorithm was an analysis of the routing algorithm in Freenet. This algorithm is relevant because it achieves efficient (order O(log n)) routing in realistic network topologies in a fully decentralized open network. However, we demonstrate why their algorithm is not secure, as malicious participants are able to severely disrupt the operation of the network. The main difficulty with the Freenet routing algorithm is that for performance it relies on information received from untrusted peers. We also detail a range of proposed solutions, none of which we found to fully fix the problem. A related problem for efficient routing in sparsely connected networks is the difficulty in sufficiently populating routing tables. One way to improve connectivity in P2P overlay networks is by utilizing modern NAT traversal techniques. We employ a number of standard NAT traversal techniques in our approach, and also developed and experimented with a novel method for NAT traversal based on ICMP and UDP hole punching. Unlike other NAT traversal techniques ours does not require a trusted third party. 
Another technique we use in our implementation to help address the connectivity problem in sparse networks is the use of distance vector routing in a small local neighborhood. The distance vector variant used in our system employs onion routing to secure the resulting indirect connections. Materially to this design, we discovered a serious vulnerability in the Tor protocol which allowed us to use a DoS attack to reduce the anonymity of the users of this extant anonymizing P2P network. This vulnerability is based on allowing paths of unrestricted length for onion routes through the network. Analyzing Tor and implementing this attack gave us valuable knowledge which helped when designing the distance vector routing protocol for our system. Finally, we present the design of our new secure randomized routing algorithm that does not suffer from the various problems we discovered in previous designs. Goals for the algorithm include providing efficiency and robustness in the presence of malicious participants for an open, fully decentralized network without trusted authorities. We provide a mathematical analysis of the algorithm itself and have created and deployed an implementation of this algorithm in GNUnet. In this thesis we also provide a detailed overview of a distributed emulation framework capable of running a large number of nodes using our full code base as well as some of the challenges encountered in creating and using such a testing framework. We present extensive experimental results showing that our routing algorithm outperforms the dominant DHT design in target topologies, and performs comparably in other scenarios

[Go to top]

ExperimenTor: A Testbed for Safe and Realistic Tor Experimentation (PDF)
by Kevin Bauer, Micah Sherr, Damon McCoy, and Dirk Grunwald.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is one of the most widely-used privacy enhancing technologies for achieving online anonymity and resisting censorship. Simultaneously, Tor is also an evolving research network on which investigators perform experiments to improve the network's resilience to attacks and enhance its performance. Existing methods for studying Tor have included analytical modeling, simulations, small-scale network emulations, small-scale PlanetLab deployments, and measurement and analysis of the live Tor network. Despite the growing body of work concerning Tor, there is no widely accepted methodology for conducting Tor research in a manner that preserves realism while protecting live users' privacy. In an effort to propose a standard, rigorous experimental framework for conducting Tor research in a way that ensures safety and realism, we present the design of ExperimenTor, a large-scale Tor network emulation toolkit and testbed. We also report our early experiences with prototype testbeds currently deployed at four research institutions

[Go to top]

Decoy Routing: Toward Unblockable Internet Communication (PDF)
by Josh Karlin, Daniel Ellard, Alden W. Jackson, Christine E. Jones, Greg Lauer, David P. Mankins, and W. Timothy Strayer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present decoy routing, a mechanism capable of circumventing common network filtering strategies. Unlike other circumvention techniques, decoy routing does not require a client to connect to a specific IP address (which is easily blocked) in order to provide circumvention. We show that if it is possible for a client to connect to any unblocked host/service, then decoy routing could be used to connect them to a blocked destination without cooperation from the host. This is accomplished by placing the circumvention service in the network itself – where a single device could proxy traffic between a significant fraction of hosts – instead of at the edge

[Go to top]

DefenestraTor: Throwing out Windows in Tor (PDF)
by Mashael AlSabah, Kevin Bauer, Ian Goldberg, Dirk Grunwald, Damon McCoy, Stefan Savage, and Geoffrey M. Voelker.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is one of the most widely used privacy enhancing technologies for achieving online anonymity and resisting censorship. While conventional wisdom dictates that the level of anonymity offered by Tor increases as its user base grows, the most significant obstacle to Tor adoption continues to be its slow performance. We seek to enhance Tor's performance by offering techniques to control congestion and improve flow control, thereby reducing unnecessary delays. To reduce congestion, we first evaluate small fixed-size circuit windows and a dynamic circuit window that adaptively re-sizes in response to perceived congestion. While these solutions improve web page response times and require modification only to exit routers, they generally offer poor flow control and slower downloads relative to Tor's current design. To improve flow control while reducing congestion, we implement N23, an ATM-style per-link algorithm that allows Tor routers to explicitly cap their queue lengths and signal congestion via back-pressure. Our results show that N23 offers better congestion and flow control, resulting in improved web page response times and faster page loads compared to Tor's current design and other window-based approaches. We also argue that our proposals do not enable any new attacks on Tor users' privacy

[Go to top]

An Accurate System-Wide Anonymity Metric for Probabilistic Attacks (PDF)
by Rajiv Bagai, Huabo Lu, Rong Li, and Bin Tang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We give a critical analysis of the system-wide anonymity metric of Edman et al. [3], which is based on the permanent value of a doubly-stochastic matrix. By providing an intuitive understanding of the permanent of such a matrix, we show that a metric that looks no further than this composite value is at best a rough indicator of anonymity. We identify situations where its inaccuracy is acute, and reveal a better anonymity indicator. Also, by constructing an information-preserving embedding of a smaller class of attacks into the wider class for which this metric was proposed, we show that this metric fails to possess desirable generalization properties. Finally, we present a new anonymity metric that does not exhibit these shortcomings. Our new metric is accurate as well as general
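The quantity at issue is the permanent of the doubly-stochastic matrix of sender-receiver probabilities, which Edman et al. normalize against the uniform n×n matrix (whose permanent is n!/nⁿ). A brute-force permanent suffices for small examples; this sketch only computes the raw permanent, not the full normalized metric:

```python
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix by summing over all permutations
    (O(n!), fine for the small matrices used in such examples)."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += prod
    return total

# Uniform mixing (maximal anonymity) vs. perfect linkability.
uniform = [[0.5, 0.5], [0.5, 0.5]]    # permanent = 2!/2^2 = 0.5
identity = [[1.0, 0.0], [0.0, 1.0]]   # permanent = 1.0
print(permanent(uniform))
print(permanent(identity))
```

Since distinct doubly-stochastic matrices can share the same permanent, a metric that reads off only this single number cannot distinguish them, which is precisely the inaccuracy the paper analyzes.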

[Go to top]

Scalability & Paranoia in a Decentralized Social Network (PDF)
by Carlo v. Loesch, Gabor X Toth, and Mathias Baumann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There's a lot of buzz out there about "replacing" Facebook with a privacy-enhanced, decentralized, ideally open source something. In this talk we'll focus on how much privacy we should plan for (specifically about how we cannot entrust our privacy to modern virtual machine technology) and the often underestimated problem of getting such a monster network to function properly. These issues can be considered together or separately: Even if you're not as concerned about privacy as we are, the scalability problem still persists

[Go to top]

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Improving Security and Performance in Low Latency Anonymity Networks (PDF)
by Kevin Bauer.
Ph.D. thesis, University of Colorado, May 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Conventional wisdom dictates that the level of anonymity offered by low latency anonymity networks increases as the user base grows. However, the most significant obstacle to increased adoption of such systems is that their security and performance properties are perceived to be weak. In an effort to help foster adoption, this dissertation aims to better understand and improve security, anonymity, and performance in low latency anonymous communication systems. To better understand the security and performance properties of a popular low latency anonymity network, we characterize Tor, focusing on its application protocol distribution, geopolitical client and router distributions, and performance. For instance, we observe that peer-to-peer file sharing protocols use an unfair portion of the network's scarce bandwidth. To reduce the congestion produced by bulk downloaders in networks such as Tor, we design, implement, and analyze an anonymizing network tailored specifically for the BitTorrent peer-to-peer file sharing protocol. We next analyze Tor's security and anonymity properties and empirically show that Tor is vulnerable to practical end-to-end traffic correlation attacks launched by relatively weak adversaries that inflate their bandwidth claims to attract traffic and thereby compromise key positions on clients' paths. We also explore the security and performance trade-offs that revolve around path length design decisions and we show that shorter paths offer performance benefits and provide increased resilience to certain attacks. Finally, we discover a source of performance degradation in Tor that results from poor congestion and flow control. To improve Tor's performance and grow its user base, we offer a fresh approach to congestion and flow control inspired by techniques from IP and ATM networks

[Go to top]

Formalizing Anonymous Blacklisting Systems (PDF)
by Ryan Henry and Ian Goldberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communications networks, such as Tor, help to solve the real and important problem of enabling users to communicate privately over the Internet. However, in doing so, anonymous communications networks introduce an entirely new problem for the service providers (such as websites, IRC networks or mail servers) with which these users interact; in particular, since all anonymous users look alike, there is no way for the service providers to hold individual misbehaving anonymous users accountable for their actions. Recent research efforts have focused on using anonymous blacklisting systems (which are sometimes called anonymous revocation systems) to empower service providers with the ability to revoke access from abusive anonymous users. In contrast to revocable anonymity systems, which enable some trusted third party to deanonymize users, anonymous blacklisting systems provide users with a way to authenticate anonymously with a service provider, while enabling the service provider to revoke access from any users that misbehave, without revealing their identities. In this paper, we introduce the anonymous blacklisting problem and survey the literature on anonymous blacklisting systems, comparing and contrasting the architecture of various existing schemes, and discussing the tradeoffs inherent with each design. The literature on anonymous blacklisting systems lacks a unified set of definitions; each scheme operates under different trust assumptions and provides different security and privacy guarantees. Therefore, before we discuss the existing approaches in detail, we first propose a formal definition for anonymous blacklisting systems, and a set of security and privacy properties that these systems should possess.
We also outline a set of new performance requirements that anonymous blacklisting systems should satisfy to maximize their potential for real-world adoption, and give formal definitions for several optional features already supported by some schemes in the literature

[Go to top]

Schedule coordination through egalitarian recurrent multi-unit combinatorial auctions (PDF)
by Javier Murillo, Víctor Muñoz, Dídac Busquets, and Beatriz López.
In Applied Intelligence 34(1), April 2011, pages 47-63. (BibTeX entry) (Download bibtex record)
(direct link) (website)

When selfish industries are competing for limited shared resources, they need to coordinate their activities to handle possible conflicting situations. Moreover, this coordination should not affect the activities already planned by the industries, since this could have negative effects on their performance. Although agents may have buffers that allow them to delay the use of resources, these are of a finite capacity, and therefore cannot be used indiscriminately. Thus, we are faced with the problem of coordinating schedules that have already been generated by the agents. To address this task, we propose to use a recurrent auction mechanism to mediate between the agents. Through this auction mechanism, the agents can express their interest in using the resources, thus helping the scheduler to find the best distribution. We also introduce a priority mechanism to add fairness to the coordination process. The proposed coordination mechanism has been applied to a waste water treatment system scenario, where different industries need to discharge their waste. We have simulated the behavior of the system, and the results show that using our coordination mechanism the waste water treatment plant can successfully treat most of the discharges, while the production activity of the industries is almost not affected by it

[Go to top]

Remote Timing Attacks are Still Practical (PDF)
by Billy Bob Brumley and Nicola Tuveri.
In unknown, April 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

For over two decades, timing attacks have been an active area of research within applied cryptography. These attacks exploit cryptosystem or protocol implementations that do not run in constant time. When implementing an elliptic curve cryptosystem with a goal to provide side-channel resistance, the scalar multiplication routine is a critical component. In such instances, one attractive method often suggested in the literature is Montgomery's ladder that performs a fixed sequence of curve and field operations. This paper describes a timing attack vulnerability in OpenSSL's ladder implementation for curves over binary fields. We use this vulnerability to steal the private key of a TLS server where the server authenticates with ECDSA signatures. Using the timing of the exchanged messages, the messages themselves, and the signatures, we mount a lattice attack that recovers the private key. Finally, we describe and implement an effective countermeasure
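The appeal of Montgomery's ladder is its fixed per-bit operation sequence, which can be sketched in a few lines. The sketch below is illustrative only and uses modular exponentiation rather than elliptic-curve arithmetic to stay self-contained (the curve version swaps in point addition and doubling):

```python
# Montgomery ladder sketch (shown for modular exponentiation, not curve
# arithmetic): every iteration performs one multiply and one square in a
# fixed order, regardless of the value of the key bit processed.
def montgomery_ladder_pow(base, scalar, modulus):
    r0, r1 = 1, base % modulus          # invariant: r1 == r0 * base (mod m)
    for i in reversed(range(scalar.bit_length())):
        if (scalar >> i) & 1:
            r0 = (r0 * r1) % modulus    # "multiply" step
            r1 = (r1 * r1) % modulus    # "square" step
        else:
            r1 = (r0 * r1) % modulus
            r0 = (r0 * r0) % modulus
    return r0

# Caveat: the iteration count still depends on scalar.bit_length(), so a
# secret scalar of varying bit length leaks a length-dependent timing signal.
assert montgomery_ladder_pow(3, 13, 1000) == pow(3, 13, 1000) == 323
```

Note that even with a fixed per-bit sequence, the loop above runs once per bit of the scalar; timing that depends on the scalar's length is exactly the kind of leak the attack in this paper builds on.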

[Go to top]

Privacy-Implications of Performance-Based Peer Selection by Onion-Routers: A Real-World Case Study using I2P (PDF)
by Michael Herrmann and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

I2P is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This paper presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP services (Eepsite) in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This paper provides the necessary background on I2P, gives details on the attack — including experimental data from measurements against the actual I2P network — and discusses possible solutions

[Go to top]

Privacy-Implications of Performance-Based Peer Selection by Onion-Routers: A Real-World Case Study using I2P (PDF)
by Michael Herrmann.
M.S. thesis, Technische Universität München, March 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Invisible Internet Project (I2P) is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This thesis presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP (Eepsite) services in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This thesis provides the necessary background on I2P, gives details on the attack — including experimental data from measurements against the actual I2P network — and discusses possible solutions

[Go to top]

One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users (PDF)
by Stevens Le Blond, Pere Manils, Chaabane Abdelberi, Mohamed Ali Kaafar, Claude Castelluccia, Arnaud Legout, and Walid Dabbous.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability can be used in practice to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from "secure" browsers. In particular, we traced 9% of Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks tracing BitTorrent users on Tor. We run these attacks in the wild for 23 days and reveal 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited per country of origin of Tor users. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we then explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor

[Go to top]

SWIRL: A Scalable Watermark to Detect Correlated Network Flows (PDF)
by Amir Houmansadr and Nikita Borisov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Flow watermarks are active traffic analysis techniques that help establish a causal connection between two network flows by content-independent manipulations, e.g., altering packet timings. Watermarks provide a much more scalable approach for flow correlation than passive traffic analysis. Previous designs of scalable watermarks, however, were subject to multi-flow attacks. They also introduced delays too large to be used in most environments. We design SWIRL, a Scalable Watermark that is Invisible and Resilient to packet Losses. SWIRL is the first watermark that is practical to use for large-scale traffic analysis. SWIRL uses a flow-dependent approach to resist multi-flow attacks, marking each flow with a different pattern. SWIRL is robust to packet losses and network jitter, yet it introduces only small delays that are invisible to both benign users and determined adversaries. We analyze the performance of SWIRL both analytically and on the PlanetLab testbed, demonstrating very low error rates. We consider applications of SWIRL to stepping stone detection and linking anonymous communication. We also propose a novel application of watermarks to defend against congestion attacks on Tor

[Go to top]

A Security API for Distributed Social Networks (PDF)
by Michael Backes, Matteo Maffei, and Kim Pecina.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a cryptographic framework to achieve access control, privacy of social relations, secrecy of resources, and anonymity of users in social networks. We illustrate our technique on a core API for social networking, which includes methods for establishing social relations and for sharing resources. The cryptographic protocols implementing these methods use pseudonyms to hide user identities, signatures on these pseudonyms to establish social relations, and zero-knowledge proofs of knowledge of such signatures to demonstrate the existence of social relations without sacrificing user anonymity. As we do not put any constraints on the underlying social network, our framework is generally applicable and, in particular, constitutes an ideal plug-in for decentralized social networks. We analyzed the security of our protocols by developing formal definitions of the aforementioned security properties and by verifying them using ProVerif, an automated theorem prover for cryptographic protocols. Finally, we built a prototypical implementation and conducted an experimental evaluation to demonstrate the efficiency and the scalability of our framework

[Go to top]

Proximax: Fighting Censorship With an Adaptive System for Distribution of Open Proxies (PDF)
by Kirill Levchenko and Damon McCoy.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many people currently use proxies to circumvent government censorship that blocks access to content on the Internet. Unfortunately, the dissemination channels used to distribute proxy server locations are increasingly being monitored to discover and quickly block these proxies. This has given rise to a large number of ad hoc dissemination channels that leverage trust networks to reach legitimate users and at the same time prevent proxy server addresses from falling into the hands of censors. To address this problem in a more principled manner, we present Proximax, a robust system that continuously distributes pools of proxies to a large number of channels. The key research challenge in Proximax is to distribute the proxies among the different channels in a way that maximizes the usage of these proxies while minimizing the risk of having them blocked. This is challenging because of two conflicting goals: widely disseminating the location of the proxies to fully utilize their capacity and preventing (or at least delaying) their discovery by censors. We present a practical system that lays out a design and analytical model that balances these factors

[Go to top]

Malice versus AN.ON: Possible Risks of Missing Replay and Integrity Protection (PDF)
by Benedikt Westermann and Dogan Kesdogan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we investigate the impact of missing replay protection as well as missing integrity protection concerning a local attacker in AN.ON. AN.ON is a low latency anonymity network mostly used to anonymize web traffic. We demonstrate that both protection mechanisms are important by presenting two attacks that become feasible as soon as the mechanisms are missing. We mount both attacks on the AN.ON network which neither implements replay protection nor integrity protection yet

[Go to top]

BNymble: More anonymous blacklisting at almost no cost (PDF)
by Peter Lofgren and Nicholas J. Hopper.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous blacklisting schemes allow online service providers to prevent future anonymous access by abusive users while preserving the privacy of all anonymous users (both abusive and non-abusive). The first scheme proposed for this purpose was Nymble, an extremely efficient scheme based only on symmetric primitives; however, Nymble relies on trusted third parties who can collude to de-anonymize users of the scheme. Two recently proposed schemes, Nymbler and Jack, reduce the trust placed in these third parties at the expense of using less-efficient asymmetric crypto primitives. We present BNymble, a scheme which matches the anonymity guarantees of Nymbler and Jack while (nearly) maintaining the efficiency of the original Nymble. The key insight of BNymble is that we can achieve the anonymity goals of these more recent schemes by replacing only the infrequent User Registration protocol from Nymble with asymmetric primitives. We prove the security of BNymble, and report on its efficiency

[Go to top]

Secure collaborative supply chain planning and inverse optimization–The JELS model
by Richard Pibernik, Yingying Zhang, Florian Kerschbaum, and Axel Schröpfer.
In European Journal of Operational Research 208, January 2011, pages 75-85. (BibTeX entry) (Download bibtex record)
(direct link) (website)

It is a well-acknowledged fact that collaboration between different members of a supply chain yields a significant potential to increase overall supply chain performance. Sharing private information has been identified as prerequisite for collaboration and, at the same time, as one of its major obstacles. One potential avenue for overcoming this obstacle is Secure Multi-Party Computation (SMC). SMC is a cryptographic technique that enables the computation of any (well-defined) mathematical function by a number of parties without any party having to disclose its input to another party. In this paper, we show how SMC can be successfully employed to enable joint decision-making and benefit sharing in a simple supply chain setting. We develop secure protocols for implementing the well-known Joint Economic Lot Size (JELS) Model with benefit sharing in such a way that none of the parties involved has to disclose any private (cost and capacity) data. Thereupon, we show that although computation of the model's outputs can be performed securely, the approach still faces practical limitations. These limitations are caused by the potential of inverse optimization, i.e., a party can infer another party's private data from the output of a collaborative planning scheme even if the computation is performed in a secure fashion. We provide a detailed analysis of inverse optimization potentials and introduce the notion of stochastic security, a novel approach to assess the additional information a party may learn from joint computation and benefit sharing. Based on our definition of stochastic security we propose a stochastic benefit sharing rule, develop a secure protocol for this benefit sharing rule, and assess under which conditions stochastic benefit sharing can guarantee secure collaboration

[Go to top]

Considering Complex Search Techniques in DHTs under Churn
by Jamie Furness and Mario Kolberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditionally complex queries have been performed over unstructured P2P networks by means of flooding, which is inherently inefficient due to the large number of redundant messages generated. While Distributed Hash Tables (DHTs) can provide very efficient look-up operations, they traditionally do not provide any methods for complex queries. By exploiting the structure inherent in DHTs we can perform complex querying over structured P2P networks by means of efficiently broadcasting the search query. This allows every node in the network to process the query locally, and hence is as powerful and flexible as flooding in unstructured networks, but without the inefficiency of redundant messages. While there have been various approaches proposed for broadcasting search queries over DHTs, the focus has not been on validation under churn. Comparing blind search methods for DHTs through simulation we see that churn, in particular nodes leaving the network, has a large impact on query success rate. In this paper we present novel results comparing blind search over Chord and Pastry while under varying levels of churn. We further consider how different data replication strategies can be used to enhance the query success rate
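The redundancy-free broadcast idea can be sketched in a few lines: on a Chord-style ring, a node forwards the query to each finger together with a limit marking where the next finger's region begins, so the keyspace is partitioned and every node receives the query exactly once. The ring size and node identifiers below are arbitrary illustrations, not values from the paper:

```python
# Sketch of query broadcast over a Chord-style DHT: each node forwards to its
# fingers, giving each finger responsibility only for the arc up to the next
# finger, so no node ever receives the query twice.
RING = 64
NODES = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # illustrative node IDs

def successor(k):
    k %= RING
    return min((n for n in NODES if n >= k), default=NODES[0])

def fingers(n):
    # Chord finger i points at successor(n + 2^i); order them by ring distance
    fs = {successor(n + 2 ** i) for i in range(6)} - {n}
    return sorted(fs, key=lambda x: (x - n) % RING)

def in_arc(x, lo, hi):
    span = (hi - lo) % RING or RING              # zero span means the whole ring
    return 0 < (x - lo) % RING < span

def broadcast(node, limit, received):
    received.append(node)                        # process the query locally
    fwd = [f for f in fingers(node) if in_arc(f, node, limit)]
    for i, nxt in enumerate(fwd):
        # each finger covers the arc up to the next finger (or our own limit)
        broadcast(nxt, fwd[i + 1] if i + 1 < len(fwd) else limit, received)

received = []
broadcast(1, 1, received)                        # limit == start: whole ring
assert sorted(received) == NODES and len(received) == len(NODES)
```

Each node appears in `received` exactly once, which is the property that makes structured broadcast cheaper than flooding; under churn, a node that has left silently truncates its whole sub-arc, which is why the paper finds departures so damaging to query success rates.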

[Go to top]

A comprehensive study of Convergent and Commutative Replicated Data Types (PDF)
by Marc Shapiro, Nuno Preguica, Carlos Baquero, and Marek Zawirski.
In unknown(7506), January 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Eventual consistency aims to ensure that replicas of some mutable shared object converge without foreground synchronisation. Previous approaches to eventual consistency are ad-hoc and error-prone. We study a principled approach: to base the design of shared data types on some simple formal conditions that are sufficient to guarantee eventual consistency. We call these types Convergent or Commutative Replicated Data Types (CRDTs). This paper formalises asynchronous object replication, either state based or operation based, and provides a sufficient condition appropriate for each case. It describes several useful CRDTs, including container data types supporting both add and remove operations with clean semantics, and more complex types such as graphs, monotonic DAGs, and sequences. It discusses some properties needed to implement non-trivial CRDTs
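As an illustration (not code from the paper), the simplest state-based CRDT is a grow-only counter: each replica increments only its own slot, and merge takes an element-wise maximum. Because merge is commutative, associative, and idempotent, replicas converge whatever the order or repetition of state exchanges:

```python
# G-Counter: a minimal state-based (convergent) CRDT. Merge is an
# element-wise max, hence commutative, associative, and idempotent.
class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1        # a replica only bumps its own slot

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment(); b.increment()
a.merge(b); b.merge(a); b.merge(a)       # duplicate delivery is harmless
assert a.value() == b.value() == 3
```

The container types with clean add/remove semantics mentioned in the abstract extend this same idea with per-element metadata so that removals also commute with concurrent additions.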

[Go to top]

What's the difference?: efficient set reconciliation without prior context (PDF)
by David Eppstein, Michael T. Goodrich, Frank Uyeda, and George Varghese.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Social Market: Combining Explicit and Implicit Social Networks (PDF)
by Davide Frey, Arnaud Jégou, and Anne-Marie Kermarrec.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The pervasiveness of the Internet has led research and applications to focus more and more on their users. Online social networks such as Facebook provide users with the ability to maintain an unprecedented number of social connections. Recommendation systems exploit the opinions of other users to suggest movies or products based on our similarity with them. This shift from machines to users motivates the emergence of novel applications and research challenges. In this paper, we embrace the social aspects of the Web 2.0 by considering a novel problem. We build a distributed social market that combines interest-based social networks with explicit networks like Facebook. Our Social Market (SM) allows users to identify and build connections to other users that can provide interesting goods, or information. At the same time, it backs up these connections with trust, by associating them with paths of trusted users that connect new acquaintances through the explicit network. This convergence of implicit and explicit networks yields TAPS, a novel gossip protocol that can be applied in applications devoted to commercial transactions, or to add robustness to standard gossip applications like dissemination or recommendation systems

[Go to top]

Selling Privacy at Auction
by Arpita Ghosh and Aaron Roth.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

On the Relation Between Differential Privacy and Quantitative Information Flow (PDF)
by Mário S. Alvim and Miguel E. Andrés.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Differential privacy is a notion that has emerged in the community of statistical databases, as a response to the problem of protecting the privacy of the database's participants when performing statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer for a database x is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e. databases which differ from x for only one individual. Information flow is an area of Security concerned with the problem of controlling the leakage of confidential information in programs and protocols. Nowadays, one of the most established approaches to quantify and to reason about leakage is based on the Rényi min entropy version of information theory. In this paper, we analyze critically the notion of differential privacy in light of the conceptual framework provided by the Rényi min information theory. We show that there is a close relation between differential privacy and leakage, due to the graph symmetries induced by the adjacency relation. Furthermore, we consider the utility of the randomized answer, which measures its expected degree of accuracy. We focus on certain kinds of utility functions called binary, which have a close correspondence with the Rényi min mutual information. Again, it turns out that there can be a tight correspondence between differential privacy and utility, depending on the symmetries induced by the adjacency relation and by the query. Depending on these symmetries we can also build an optimal-utility randomization mechanism while preserving the required level of differential privacy. Our main contribution is a study of the kind of structures that can be induced by the adjacency relation and the query, and how to use them to derive bounds on the leakage and achieve the optimal utility

[Go to top]

Public-Key Encrypted Bloom Filters with Applications to Supply Chain Integrity
by Florian Kerschbaum.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Private Similarity Computation in Distributed Systems: From Cryptography to Differential Privacy (PDF)
by Mohammad Alaggan, Sébastien Gambs, and Anne-Marie Kermarrec.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we address the problem of computing the similarity between two users (according to their profiles) while preserving their privacy in a fully decentralized system and for the passive adversary model. First, we introduce a two-party protocol for privately computing a threshold version of the similarity and apply it to well-known similarity measures such as the scalar product and the cosine similarity. The output of this protocol is only one bit of information telling whether or not two users are similar beyond a predetermined threshold. Afterwards, we explore the computation of the exact and threshold similarity within the context of differential privacy. Differential privacy is a recent notion developed within the field of private data analysis guaranteeing that an adversary that observes the output of the differentially private mechanism, will only gain a negligible advantage (up to a privacy parameter) from the presence (or absence) of a particular item in the profile of a user. This provides a strong privacy guarantee that holds independently of the auxiliary knowledge that the adversary might have. More specifically, we design several differentially private variants of the exact and threshold protocols that rely on the addition of random noise tailored to the sensitivity of the considered similarity measure. We also analyze their complexity as well as their impact on the utility of the resulting similarity measure. Finally, we provide experimental results validating the effectiveness of the proposed approach on real datasets
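For intuition, the quantity being protected can be computed in the clear as below; the protocols in the paper replace this computation with a two-party cryptographic (or differentially private) version that releases only the final bit. The threshold value and profile vectors here are arbitrary illustrations:

```python
import math

def cosine_similarity(a, b):
    # Plain cosine similarity between two profile vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def threshold_similar(a, b, tau):
    # In the private protocol, this single bit is the only output revealed
    return cosine_similarity(a, b) >= tau

# Two users with mostly overlapping interest profiles:
assert threshold_similar([1, 1, 0], [1, 1, 1], tau=0.8) is True
```

Releasing one bit instead of the exact similarity score is the point of the threshold variant: it bounds what a curious peer can learn about another user's profile from repeated comparisons.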

[Go to top]

Multi-objective optimization based privacy preserving distributed data mining in Peer-to-Peer networks (PDF)
by Kamalika Das, Kanishka Bhaduri, and Hillol Kargupta.
In Peer-to-Peer Networking and Applications 4, 2011, pages 192-209. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper proposes a scalable, local privacy-preserving algorithm for distributed Peer-to-Peer (P2P) data aggregation useful for many advanced data mining/analysis tasks such as average/sum computation, decision tree induction, feature selection, and more. Unlike most multi-party privacy-preserving data mining algorithms, this approach works in an asynchronous manner through local interactions and it is highly scalable. It particularly deals with the distributed computation of the sum of a set of numbers stored at different peers in a P2P network in the context of a P2P web mining application. The proposed optimization-based privacy-preserving technique for computing the sum allows different peers to specify different privacy requirements without having to adhere to a global set of parameters for the chosen privacy model. Since distributed sum computation is a frequently used primitive, the proposed approach is likely to have significant impact on many data mining tasks such as multi-party privacy-preserving clustering, frequent itemset mining, and statistical aggregate computation
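The sum primitive the abstract builds on can be illustrated with classic additive secret sharing. This is not the paper's mechanism (which is an asynchronous, optimization-based variant with per-peer privacy settings); the sketch only shows why a global sum is computable without any peer disclosing its input:

```python
import random

Q = 2 ** 61 - 1                        # arithmetic modulo a large prime

def make_shares(value, n_peers):
    # Split a private value into n uniformly random shares summing to it mod Q
    shares = [random.randrange(Q) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def secure_sum(private_values):
    n = len(private_values)
    all_shares = [make_shares(v, n) for v in private_values]
    # peer i sums the i-th share of every peer and publishes only a subtotal;
    # each individual input stays hidden behind uniformly random shares
    subtotals = [sum(s[i] for s in all_shares) % Q for i in range(n)]
    return sum(subtotals) % Q

assert secure_sum([5, 11, 2, 40]) == 58
```

Any single share (or subtotal) is uniformly distributed and reveals nothing about an individual input, which is the property the paper's optimization-based variant preserves while letting peers tune their own privacy parameters.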

[Go to top]

Meeting subscriber-defined QoS constraints in publish/subscribe systems (PDF)
by Muhammad Adnan Tariq, Boris Koldehofe, Gerald G. Koch, Imran Khan, and Kurt Rothermel.
In Concurr. Comput. : Pract. Exper 23(17), 2011, pages 2140-2153. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

How Much Is Enough? Choosing ε for Differential Privacy (PDF)
by Jaewoo Lee and Chris Clifton.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Differential privacy is a recent notion, and while it is nice conceptually it has been difficult to apply in practice. The parameters of differential privacy have an intuitive theoretical interpretation, but the implications and impacts on the risk of disclosure in practice have not yet been studied, and choosing appropriate values for them is non-trivial. Although the privacy parameter ε in differential privacy is used to quantify the privacy risk posed by releasing statistics computed on sensitive data, ε is not an absolute measure of privacy but rather a relative measure. In effect, even for the same value of ε, the privacy guarantees enforced by differential privacy are different based on the domain of the attribute in question and the query supported. We consider the probability of identifying any particular individual as being in the database, and demonstrate the challenge of setting the proper value of ε given the goal of protecting individuals in the database with some fixed probability
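The relativity of ε described above can be made concrete with the standard Laplace mechanism, whose noise scale is sensitivity/ε: at the same ε, a count query (sensitivity 1) receives far less absolute noise than a sum over a wide-ranging attribute. The sensitivities and query answers below are illustrative only:

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    # Add Laplace(0, sensitivity/epsilon) noise, sampled by inverse transform
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    return true_answer - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Same epsilon, very different absolute error:
noisy_count = laplace_mechanism(412, sensitivity=1, epsilon=0.1)           # noise scale 10
noisy_sum = laplace_mechanism(2_000_000, sensitivity=10_000, epsilon=0.1)  # noise scale 100,000
```

At ε = 0.1 the count answer is typically within tens of the truth while the sum answer can be off by hundreds of thousands, so a value of ε that is protective for one query and attribute domain may be meaningless for another; this is the gap the paper addresses.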

[Go to top]

The Free Secure Network Systems Group: Secure Peer-to-Peer Networking and Beyond (PDF)
by Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper introduces the current research and future plans of the Free Secure Network Systems Group at the Technische Universität München. In particular, we provide some insight into the development process and architecture of the GNUnet P2P framework and the challenges we are currently working on

[Go to top]

Forensic investigation of the OneSwarm anonymous filesharing system (PDF)
by Swagatika Prusty, Brian Neil Levine, and Marc Liberatore.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

OneSwarm is a system for anonymous p2p file sharing in use by thousands of peers. It aims to provide Onion Routing-like privacy and BitTorrent-like performance. We demonstrate several flaws in OneSwarm's design and implementation through three different attacks available to forensic investigators. First, we prove that the current design is vulnerable to a novel timing attack that allows just two attackers attached to the same target to determine if it is the source of queried content. When attackers comprise 15% of OneSwarm peers, we expect over 90% of remaining peers will be attached to two attackers and therefore vulnerable. Thwarting the attack increases OneSwarm query response times, making them longer than the equivalent in Onion Routing. Second, we show that OneSwarm's vulnerability to traffic analysis by colluding attackers is much greater than was previously reported, and is much worse than Onion Routing. We show for this second attack that when investigators comprise 25% of peers, over 40% of the network can be investigated with 80% precision to find the sources of content. Our examination of the OneSwarm source code found differences with the technical paper that significantly reduce security. For the implementation in use by thousands of people, attackers that comprise 25% of the network can successfully use this second attack against 98% of remaining peers with 95% precision. Finally, we show that a novel application of a known TCP-based attack allows a single attacker to identify whether a neighbor is the source of data or a proxy for it. Users that turn off the default rate-limit setting are exposed. Each attack can be repeated as investigators leave and rejoin the network. All of our attacks are successful in a forensics context: Law enforcement can use them legally ahead of a warrant. Furthermore, private investigators, who have fewer restrictions on their behavior, can use them more easily in pursuit of evidence for such civil suits as copyright infringement

[Go to top]

Distributed Private Data Analysis: On Simultaneously Solving How and What (PDF)
by Amos Beimel, Kobbi Nissim, and Eran Omri.
In CoRR abs/1103.2626, 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We examine the combination of two directions in the field of privacy concerning computations over distributed private inputs–secure function evaluation (SFE) and differential privacy. While in both the goal is to privately evaluate some function of the individual inputs, the privacy requirements are significantly different. The general feasibility results for SFE suggest a natural paradigm for implementing differentially private analyses distributively: First choose what to compute, i.e., a differentially private analysis; Then decide how to compute it, i.e., construct an SFE protocol for this analysis. We initiate an examination of whether there are advantages to a paradigm where both decisions are made simultaneously. In particular, we investigate under which accuracy requirements it is beneficial to adapt this paradigm for computing a collection of functions including binary sum, gap threshold, and approximate median queries. Our results imply that when computing the binary sum of n distributed inputs then: * When we require that the error is o(n) and the number of rounds is constant, there is no benefit in the new paradigm. * When we allow an error of O(n), the new paradigm yields more efficient protocols when we consider protocols that compute symmetric functions. Our results also yield new separations between the local and global models of computations for private data analysis

[Go to top]

Collaborative Personalized Top-k Processing (PDF)
by Xiao Bai, Rachid Guerraoui, Anne-Marie Kermarrec, and Vincent Leroy.
In ACM Trans. Database Syst 36, 2011, pages 26:1-26:38. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This article presents P4Q, a fully decentralized gossip-based protocol to personalize query processing in social tagging systems. P4Q dynamically associates each user with social acquaintances sharing similar tagging behaviors. Queries are gossiped among such acquaintances, computed on-the-fly in a collaborative, yet partitioned manner, and results are iteratively refined and returned to the querier. Analytical and experimental evaluations convey the scalability of P4Q for top-k query processing, as well as its inherent ability to cope with users updating profiles and departing

[Go to top]

Beyond Simulation: Large-Scale Distributed Emulation of P2P Protocols (PDF)
by Nathan S Evans and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents details on the design and implementation of a scalable framework for evaluating peer-to-peer protocols. Unlike systems based on simulation, emulation-based systems enable the experimenter to obtain data that reflects directly on the concrete implementation in much greater detail. This paper argues that emulation is a better model for experiments with peer-to-peer protocols since it can provide scalability and high flexibility while eliminating the cost of moving from experimentation to deployment. We discuss our unique experience with large-scale emulation using the GNUnet peer-to-peer framework and provide experimental results to support these claims

[Go to top]

2010

Distributing social applications (PDF)
by Vincent Leroy.
Ph.D. thesis, IRISA, December 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Developing Peer-to-Peer Web Applications (PDF)
by Toni Ruottu.
Master's Thesis, University of Helsinki, September 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the virtual world grows more complex, finding a standard way for storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers and some of his own computers, but also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users, and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure names used for the content, and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data

[Go to top]

Application of Random Walks to Decentralized Recommender Systems (PDF)
by Anne-Marie Kermarrec, Vincent Leroy, Afshin Moin, and Christopher Thraves.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

SEPIA: privacy-preserving aggregation of multi-domain network events and statistics (PDF)
by Martin Burkhart, Mario Strasser, Dilip Many, and Xenofontas Dimitropoulos.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Secure multiparty computation (MPC) allows joint privacy-preserving computations on data of multiple parties. Although MPC has been studied substantially, building solutions that are practical in terms of computation and communication cost is still a major challenge. In this paper, we investigate the practical usefulness of MPC for multi-domain network security and monitoring. We first optimize MPC comparison operations for processing high volume data in near real-time. We then design privacy-preserving protocols for event correlation and aggregation of network traffic statistics, such as addition of volume metrics, computation of feature entropy, and distinct item count. Optimizing performance of parallel invocations, we implement our protocols along with a complete set of basic operations in a library called SEPIA. We evaluate the running time and bandwidth requirements of our protocols in realistic settings on a local cluster as well as on PlanetLab and show that they work in near real-time for up to 140 input providers and 9 computation nodes. Compared to implementations using existing general-purpose MPC frameworks, our protocols are significantly faster, requiring, for example, 3 minutes for a task that takes 2 days with general-purpose frameworks. This improvement paves the way for new applications of MPC in the area of networking. Finally, we run SEPIA's protocols on real traffic traces of 17 networks and show how they provide new possibilities for distributed troubleshooting and early anomaly detection
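The core MPC primitive behind this kind of privacy-preserving aggregation can be sketched as additive secret sharing: each input provider splits its value into random shares, one per computation node, and only the global sum is ever reconstructed. This is a simplified illustration under assumed parameters (the modulus, node count, and metric values are made up), not SEPIA's actual protocol code.

```python
import random

P = 2**61 - 1  # illustrative prime modulus, comfortably larger than any sum below

def make_shares(value: int, n_nodes: int) -> list[int]:
    """Split value into n_nodes additive shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n_nodes - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(shares_per_provider: list[list[int]]) -> int:
    """Each node sums the one share it received from every provider; combining
    the node totals reveals only the global sum, never an individual input."""
    n_nodes = len(shares_per_provider[0])
    node_totals = [sum(s[i] for s in shares_per_provider) % P
                   for i in range(n_nodes)]
    return sum(node_totals) % P

volumes = [120, 45, 300]  # e.g. per-domain traffic volume metrics
total = aggregate([make_shares(v, 3) for v in volumes])  # 465
```

Each node sees only uniformly random shares, which is why adding volume metrics across domains leaks nothing beyond the aggregate.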

[Go to top]

Pr2-P2PSIP: Privacy Preserving P2P Signaling for VoIP and IM (PDF)
by Ali Fessi, Nathan S Evans, Heiko Niedermayer, and Ralph Holz.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Differential Privacy Under Continual Observation
by Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Incentive-driven QoS in peer-to-peer overlays (PDF)
by Raul Leonardo Landa Gamiochipi.
Ph.D. thesis, University College London, May 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A well known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time and space deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be adjudicated completely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. 
We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation

[Go to top]

Efficient DHT attack mitigation through peers' ID distribution (PDF)
by Thibault Cholez, Isabelle Chrisment, and Olivier Festor.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a new solution to protect the widely deployed KAD DHT against localized attacks which can take control over DHT entries. We show through measurements that the IDs distribution of the best peers found after a lookup process follows a geometric distribution. We then use this result to detect DHT attacks by comparing real peers' ID distributions to the theoretical one thanks to the Kullback-Leibler divergence. When an attack is detected, we propose countermeasures that progressively remove suspicious peers from the list of possible contacts to provide a safe DHT access. Evaluations show that our method detects the most efficient attacks with a very small false-negative rate, while countermeasures successfully filter almost all malicious peers involved in an attack. Moreover, our solution completely fits the current design of the KAD network and introduces no network overhead
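The detection idea described above can be sketched in a few lines: compare the observed distribution of the best peers returned by a lookup against the expected geometric distribution using the Kullback-Leibler divergence, and flag a lookup whose divergence exceeds a threshold. This is an illustrative reconstruction, not the authors' code; the geometric ratio, threshold, and sample counts are made-up parameters.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) in bits, for two distributions over the same support."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def geometric_pmf(ratio, k):
    """Expected (normalized) geometric distribution over the k best peers."""
    raw = [(1 - ratio) ** i * ratio for i in range(k)]
    total = sum(raw)
    return [x / total for x in raw]

def looks_like_attack(observed_counts, ratio=0.5, threshold=0.7):
    """Flag a lookup whose peer-ID distribution diverges too far from theory."""
    total = sum(observed_counts)
    observed = [c / total for c in observed_counts]
    expected = geometric_pmf(ratio, len(observed_counts))
    return kl_divergence(observed, expected) > threshold

# An honest lookup roughly follows the geometric shape...
honest = [52, 24, 13, 7, 4]
# ...while a localized attack concentrates IDs unnaturally close to the target.
attacked = [5, 5, 5, 5, 80]
```

The countermeasure in the paper then removes the most suspicious peers from the contact list rather than rejecting the lookup outright.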

[Go to top]

Hierarchical codes: A flexible trade-off for erasure codes in peer-to-peer storage systems (PDF)
by Alessandro Duminuco and E W Biersack.
In Peer-to-Peer Networking and Applications 3, March 2010, pages 52-66. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Redundancy is the basic technique to provide reliability in storage systems consisting of multiple components. A redundancy scheme defines how the redundant data are produced and maintained. The simplest redundancy scheme is replication, which however suffers from storage inefficiency. Another approach is erasure coding, which provides the same level of reliability as replication using a significantly smaller amount of storage. When redundant data are lost, they need to be replaced. While replacing replicated data consists of a simple copy, it becomes a complex operation with erasure codes: new data are produced by performing a coding over some other available data. The amount of data to be read and coded is d times larger than the amount of data produced, where d, called the repair degree, is larger than 1 and depends on the structure of the code. This implies that coding has a larger computational and I/O cost, which, for distributed storage systems, translates into increased network traffic. Participants of Peer-to-Peer systems often have ample storage and CPU power, but their network bandwidth may be limited. For these reasons existing coding techniques are not suitable for P2P storage. This work explores the design space between replication and the existing erasure codes. We propose and evaluate a new class of erasure codes, called Hierarchical Codes, which allow the network traffic due to maintenance to be reduced without losing the benefits given by traditional erasure codes
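The repair-degree trade-off can be made concrete with a small back-of-the-envelope calculation (illustrative numbers, not from the paper): a (k, n) erasure code stores n/k times the object size, but rebuilding one lost fragment requires fetching k surviving fragments, i.e. d = k, while replication has d = 1.

```python
def storage_mb(object_mb: float, k: int, n: int) -> float:
    """Total storage for a (k, n) erasure code: n fragments of size object/k."""
    return n * (object_mb / k)

def repair_traffic_mb(object_mb: float, k: int) -> float:
    """Data fetched to rebuild one lost fragment: k fragments of size object/k."""
    return k * (object_mb / k)

obj = 64.0  # MB, an illustrative object size

# 3-way replication is effectively a (1, 3) code: d = 1, but 3x storage.
rep_storage = storage_mb(obj, 1, 3)      # 192.0 MB stored
rep_repair = repair_traffic_mb(obj, 1)   # 64.0 MB moved to restore a 64 MB replica

# An (8, 12) erasure code halves the storage...
ec_storage = storage_mb(obj, 8, 12)      # 96.0 MB stored
# ...but repairing one 8 MB fragment still reads k = 8 fragments = 64 MB:
ec_repair = repair_traffic_mb(obj, 8)    # 64.0 MB moved to restore an 8 MB fragment
```

So per byte of data actually restored, the erasure code here moves 8 times more traffic than replication, which is the bandwidth cost Hierarchical Codes aim to reduce.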

[Go to top]

User-perceived Performance of the NICE Application Layer Multicast Protocol in Large and Highly Dynamic Groups (PDF)
by Christian Hübsch, Christoph P. Mayer, and Oliver Waldhorst.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The presentation of a landmark paper by Chu et al. at SIGMETRICS 2000 introduced application layer multicast (ALM) as a completely new area of network research. Many researchers have since proposed ALM protocols, and have shown that these protocols only put a small burden on the network in terms of link-stress and -stretch. However, since the network is typically not a bottleneck, user acceptance remains the limiting factor for the deployment of ALM. In this paper we present an in-depth study of the user-perceived performance of the NICE ALM protocol. We use the OverSim simulation framework to evaluate the delay experienced by a user and the bandwidth consumption on the user's access link in large multicast groups and under aggressive churn models. Our major results are (1) latencies grow moderately with increasing number of nodes as clusters get optimized, (2) join delays get optimized over time, and (3) despite being a tree-dissemination protocol NICE handles churn surprisingly well when adjusting heartbeat intervals accordingly. We conclude that NICE comes up to the user's expectations even for large groups and under high churn. This work was partially funded as part of the Spontaneous Virtual Networks (SpoVNet) project by the Landesstiftung Baden-Württemberg within the BW-FIT program and as part of the Young Investigator Group Controlling Heterogeneous and Dynamic Mobile Grid and Peer-to-Peer Systems (CoMoGriP) by the Concept for the Future of Karlsruhe Institute of Technology (KIT) within the framework of the German Excellence Initiative

[Go to top]

Poisoning the Kad network (PDF)
by Thomas Locher, David Mysicka, Stefan Schmid, and Roger Wattenhofer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the demise of the Overnet network, the Kad network has become not only the most popular but also the only widely used peer-to-peer system based on a distributed hash table. It is likely that its user base will continue to grow in numbers over the next few years as, unlike the eDonkey network, it does not depend on central servers, which increases scalability and reliability. Moreover, the Kad network is more efficient than unstructured systems such as Gnutella. However, we show that today's Kad network can be attacked in several ways by carrying out several (well-known) attacks on the Kad network. The presented attacks could be used either to hamper the correct functioning of the network itself, to censor contents, or to harm other entities in the Internet not participating in the Kad network such as ordinary web servers. While there are simple heuristics to reduce the impact of some of the attacks, we believe that the presented attacks cannot be thwarted easily in any fully decentralized peer-to-peer system without some kind of a centralized certification and verification authority

[Go to top]

How Much Anonymity does Network Latency Leak? (PDF)
by Nicholas J. Hopper, Eugene Y. Vasserman, and Eric Chan-Tin.
In ACM Transactions on Information and System Security, January 2010, pages 82-91. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Low-latency anonymity systems such as Tor, AN.ON, Crowds, and Anonymizer.com aim to provide anonymous connections that are both untraceable by "local" adversaries who control only a few machines, and have low enough delay to support anonymous use of network services like web browsing and remote login. One consequence of these goals is that these services leak some information about the network latency between the sender and one or more nodes in the system. This paper reports on three experiments that partially measure the extent to which such leakage can compromise anonymity. First, using a public dataset of pairwise round-trip times (RTTs) between 2000 Internet hosts, we estimate that on average, knowing the network location of host A and the RTT to host B leaks 3.64 bits of information about the network location of B. Second, we describe an attack that allows a pair of colluding web sites to predict, based on local timing information and with no additional resources, whether two connections from the same Tor exit node are using the same circuit with 17% equal error rate. Finally, we describe an attack that allows a malicious website, with access to a network coordinate system and one corrupted Tor router, to recover roughly 6.8 bits of network location per hour
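One common way to read a figure like "3.64 bits leaked" is as a shrinking of the sender's anonymity set. The following sketch is an interpretation of that framing, not the paper's own computation; the population size matches the paper's 2000-host dataset, everything else is illustrative.

```python
import math

def remaining_anonymity_bits(population: int, leaked_bits: float) -> float:
    """Bits of anonymity left after an observation leaks `leaked_bits`."""
    return math.log2(population) - leaked_bits

def effective_anonymity_set(population: int, leaked_bits: float) -> float:
    """Equivalent number of indistinguishable candidates after the leak."""
    return population / (2 ** leaked_bits)

# With 2000 hosts and a 3.64-bit RTT leak, roughly 160 hosts remain plausible:
candidates = effective_anonymity_set(2000, 3.64)
```

Under this reading, repeated observations (as in the paper's 6.8 bits per hour attack) compound quickly, since each leaked bit halves the candidate set.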

[Go to top]

Building Incentives into Tor (PDF)
by Tsuen-Wan "Johnny" Ngan, Roger Dingledine, and Dan S. Wallach.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed anonymous communication networks like Tor depend on volunteers to donate their resources. However, the efforts of Tor volunteers have not grown as fast as the demands on the Tor network. We explore techniques to incentivize Tor users to relay Tor traffic too; if users contribute resources to the Tor overlay, they should receive faster service in return. In our design, the central Tor directory authorities measure performance and publish a list of Tor relays that should be given higher priority when establishing circuits. Simulations of our proposed design show that conforming users receive significant improvements in performance, in some cases experiencing twice the network throughput of selfish users who do not relay traffic for the Tor network

[Go to top]

Using Legacy Applications in Future Heterogeneous Networks with ariba
by Christian Hübsch, Christoph P. Mayer, Sebastian Mies, Roland Bless, Oliver Waldhorst, and Martina Zitterbart.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Unleashing Tor, BitTorrent & Co.: How to Relieve TCP Deficiencies in Overlays
by Daniel Marks, Florian Tschorsch, and Bjoern Scheuermann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Scalable Application-Layer Multicast Simulations with OverSim
by Stephan Krause and Christian Hübsch.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Application-Layer Multicast has become a promising class of protocols since IP Multicast has not found wide area deployment in the Internet. Developing such protocols requires in-depth analysis of their properties even with large numbers of participants—a characteristic which is at best hard to achieve in real network experiments. Several well-known simulation frameworks have been developed and used in recent years, but none has proved to fit the requirements for analyzing large-scale application-layer networks. In this paper we propose the OverSim framework as a promising simulation environment for scalable Application-Layer Multicast research. We show that OverSim is able to manage even overlays with several thousand participants in a short time while consuming comparably little memory. We compare the framework's runtime properties with the two exemplary Application-Layer Multicast protocols Scribe and NICE. The results show that both simulation time and memory consumption grow linearly with the number of nodes in highly feasible dimensions

[Go to top]

On Runtime Adaptation of Application-Layer Multicast Protocol Parameters
by Christian Hübsch, Christoph P. Mayer, and Oliver Waldhorst.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Reconnecting the internet with ariba: self-organizing provisioning of end-to-end connectivity in heterogeneous networks (PDF)
by Christian Hübsch, Christoph P. Mayer, Sebastian Mies, Roland Bless, Oliver Waldhorst, and Martina Zitterbart.
In SIGCOMM Comput. Commun. Rev 40(1), 2010, pages 131-132. (BibTeX entry) (Download bibtex record)
(direct link) (website)

End-to-End connectivity in today's Internet can no longer be taken for granted. Middleboxes, mobility, and protocol heterogeneity complicate application development and often result in application-specific solutions. In our demo we present ariba: an overlay-based approach to handle such network challenges and to provide consistent homogeneous network primitives in order to ease application and service development

[Go to top]

Providing basic security mechanisms in broker-less publish/subscribe systems (PDF)
by Muhammad Adnan Tariq, Boris Koldehofe, Ala Altaweel, and Kurt Rothermel.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The provisioning of basic security mechanisms such as authentication and confidentiality is highly challenging in a content-based publish/subscribe system. Authentication of publishers and subscribers is difficult to achieve due to the loose coupling of publishers and subscribers. Similarly, confidentiality of events and subscriptions conflicts with content-based routing. In particular, content-based approaches in broker-less environments do not address confidentiality at all. This paper presents a novel approach to provide confidentiality and authentication in a broker-less content-based publish/subscribe system. The authentication of publishers and subscribers as well as confidentiality of events is ensured by adapting pairing-based cryptography mechanisms to the needs of a publish/subscribe system. Furthermore, an algorithm to cluster subscribers according to their subscriptions preserves a weak notion of subscription confidentiality. Our approach provides fine-grained key management, and the cost for encryption, decryption and routing is in the order of subscribed attributes. Moreover, the simulation results verify that supporting security is affordable with respect to the cost for overlay construction and event dissemination latencies, thus preserving scalability of the system

[Go to top]

Private Record Matching Using Differential Privacy (PDF)
by Ali Inan, Murat Kantarcioglu, Gabriel Ghinita, and Elisa Bertino.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts

[Go to top]

Privacy-preserving similarity-based text retrieval (PDF)
by Hweehwa Pang, Jialie Shen, and Ramayya Krishnan.
In ACM Trans. Internet Technol 10(1), 2010, pages 1-39. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Users of online services are increasingly wary that their activities could disclose confidential information on their business or personal activities. It would be desirable for an online document service to perform text retrieval for users, while protecting the privacy of their activities. In this article, we introduce a privacy-preserving, similarity-based text retrieval scheme that (a) prevents the server from accurately reconstructing the term composition of queries and documents, and (b) anonymizes the search results from unauthorized observers. At the same time, our scheme preserves the relevance-ranking of the search server, and enables accounting of the number of documents that each user opens. The effectiveness of the scheme is verified empirically with two real text corpora

[Go to top]

Privacy-preserving P2P data sharing with OneSwarm (PDF)
by Tomas Isdal, Michael Piatek, Arvind Krishnamurthy, and Thomas Anderson.
In SIGCOMM Comput. Commun. Rev 40(4), 2010, pages 111-122. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A Novel Testbed for P2P Networks (PDF)
by Pekka H. J. Perälä, Jori P. Paananen, Milton Mukhopadhyay, and Jukka-Pekka Laulajainen.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Managing Distributed Applications Using Gush (PDF)
by Jeannie R. Albrecht and Danny Yuxing Huang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Malugo: A peer-to-peer storage system (PDF)
by Yu-Wei Chan, Tsung-Hsuan Ho, Po-Chi Shih, and Yeh-Ching Chung.
In unknown, 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of routing locality in peer-to-peer storage systems where peers store and exchange data among themselves. With the global information, peers will take the data locality into consideration when they implement their replication mechanisms to keep a number of file replicas all over the systems. In this paper, we mainly propose a peer-to-peer storage system, Malugo. Algorithms for the implementation of the peers' locating and file operation processes are also presented. Simulation results show that the proposed system successfully constructs an efficient and stable peer-to-peer storage environment with considerations of data and routing locality among peers

[Go to top]

How to Build Complex, Large-Scale Emulated Networks (PDF)
by Hung X. Nguyen, Matthew Roughan, Simon Knight, Nick Falkner, Olaf Maennel, and Randy Bush.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

How Accurately Can One's Interests Be Inferred from Friends? (PDF)
by Zhen Wen and Ching-Yung Lin.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Search and recommendation systems must effectively model user interests in order to provide personalized results. The proliferation of social software makes social network an increasingly important source for user interest modeling, because of the social influence and correlation among friends. However, there are large variations in people's contribution of social content. Therefore, it is impractical to accurately model interests for all users. As a result, applications need to decide whether to utilize a user interest model based on its accuracy. To address this challenge, we present a study on the accuracy of user interests inferred from three types of social content: social bookmarking, file sharing, and electronic communication, in an organizational social network within a large-scale enterprise. First, we demonstrate that combining different types of social content to infer user interests outperforms methods that use only one type of social content. Second, we present a technique to predict the inference accuracy based on easily observed network characteristics, including user activeness, network in-degree, out-degree, and betweenness centrality

[Go to top]

The Gossple Anonymous Social Network (PDF)
by Marin Bertier, Davide Frey, Rachid Guerraoui, Anne-Marie Kermarrec, and Vincent Leroy.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While social networks provide news from old buddies, you can learn a lot more from people you do not know, but with whom you share many interests. We show in this paper how to build a network of anonymous social acquaintances using a gossip protocol we call Gossple, and how to leverage such a network to enhance navigation within Web 2.0 collaborative applications, à la LastFM and Delicious. Gossple nodes (users) periodically gossip digests of their interest profiles and compute their distances (in terms of interest) with respect to other nodes. This is achieved with little bandwidth and storage, fast convergence, and without revealing which profile is associated with which user. We evaluate Gossple on real traces from various Web 2.0 applications with hundreds of PlanetLab hosts and thousands of simulated nodes

[Go to top]

Event processing for large-scale distributed games
by Gerald G. Koch, Muhammad Adnan Tariq, Boris Koldehofe, and Kurt Rothermel.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Novel peer-to-peer-based multiplayer online games are instantiated in an ad-hoc manner without the support of dedicated infrastructure and maintain their state in a distributed manner. Although their employed communication paradigms provide efficient access to sections of distributed state, such communication fails if the participants need to access large subsets of the application state in order to detect high-level situations. We propose a demonstration that shows how multiplayer online games can benefit from using publish/subscribe communication and complex event processing alongside their traditional communication paradigm

[Go to top]

Drac: An Architecture for Anonymous Low-Volume Communications (PDF)
by George Danezis, Claudia Diaz, Carmela Troncoso, and Ben Laurie.
Book chapter. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy (PDF)
by Cynthia Dwork and Moni Naor.
In Journal of Privacy and Confidentiality 2, 2010, pages 93-107. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 1977 Tore Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a natural formalization of Dalenius' goal cannot be achieved if the database is useful. The key obstacle is the side information that may be available to an adversary. Our results hold under very general conditions regarding the database, the notion of privacy violation, and the notion of utility.

Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs motivated the notion of differential privacy [15, 16], a strong ad omnia privacy which, intuitively, captures the increased risk to one's privacy incurred by participating in a database
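
The definition the abstract motivates is usually realized with the Laplace mechanism: a query of sensitivity 1 stays ε-differentially private if Laplace noise of scale 1/ε is added to its true answer. A minimal sketch (the dataset, predicate, and ε value are illustrative, not from the paper):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    individual changes the count by at most 1), so Laplace noise
    with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)
ages = [34, 29, 41, 57, 62, 38]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The guarantee is about participation, not the released number: whether or not any one individual is in `records`, the output distribution changes by at most a factor of e^ε.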

[Go to top]

Cryptographic Extraction and Key Derivation: The HKDF Scheme (PDF)
by Hugo Krawczyk.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario. Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)
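
The extract-then-expand structure described in the abstract is compact enough to sketch directly; the standardized form (RFC 5869) instantiates both steps with HMAC. A minimal sketch with HMAC-SHA256 (salt, secret, and `info` strings are illustrative):

```python
import hashlib
import hmac

HASH_LEN = 32  # SHA-256 output size in bytes

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract step: concentrate the entropy of the input keying
    material (IKM) into a fixed-length pseudorandom key (PRK)."""
    if not salt:
        salt = b"\x00" * HASH_LEN
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand step: stretch the PRK into `length` bytes of output
    keying material, bound to the application context `info`."""
    okm, block = b"", b""
    for counter in range(1, -(-length // HASH_LEN) + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

prk = hkdf_extract(b"some-salt", b"shared-secret")
key = hkdf_expand(prk, b"app: session key", 42)
```

Separating the two phases is the point of the design: extraction handles imperfect source keying material, while expansion binds derived keys to their usage context.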

[Go to top]

Cordies: expressive event correlation in distributed systems
by Gerald G. Koch, Boris Koldehofe, and Kurt Rothermel.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Complex Event Processing (CEP) is the method of choice for the observation of system states and situations by means of events. A number of systems have been introduced that provide CEP in selected environments. Some are restricted to centralised systems, or to systems with synchronous communication, or to a limited space of event relations that are defined in advance. Many modern systems, though, are inherently distributed and asynchronous, and require a more powerful CEP. We present Cordies, a distributed system for the detection of correlated events that is designed for the operation in large-scale, heterogeneous networks and adapts dynamically to changing network conditions. With its expressive language to describe event relations, it is suitable for environments where neither the event space nor the situations of interest are predefined but are constantly adapted. In addition, Cordies supports Quality-of-Service (QoS) for communication in distributed event correlation detection

[Go to top]

BnB-ADOPT: an asynchronous branch-and-bound DCOP algorithm (PDF)
by William Yeoh, Ariel Felner, and Sven Koenig.
In Journal of Artificial Intelligence Research 38, 2010, pages 85-133. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed constraint optimization (DCOP) problems are a popular way of formulating and solving agent-coordination problems. It is often desirable to solve DCOP problems optimally with memory-bounded and asynchronous algorithms. We introduce Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP algorithm that uses the message passing and communication framework of ADOPT, a well known memory-bounded asynchronous DCOP algorithm, but changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search. Our experimental results show that BnB-ADOPT is up to one order of magnitude faster than ADOPT on a variety of large DCOP problems and faster than NCBB, a memory-bounded synchronous DCOP algorithm, on most of these DCOP problems

[Go to top]

Autonomous NAT Traversal (PDF)
by Andreas Müller, Nathan S Evans, Christian Grothoff, and Samy Kamkar.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for Autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice
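
The paper's actual message exchange is more involved, but its basic building block, hand-crafting an ICMP message with a valid Internet checksum, can be sketched as follows (the identifier and payload are illustrative, not the values the implementations use):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard ones'-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_icmp_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP type 8 (echo request), code 0, with a correct checksum."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_icmp_echo_request(0x1234, 1, b"nat-traversal-probe")
# Actually sending requires a raw socket (root privileges):
#   s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
#   s.sendto(pkt, (peer_ip, 0))
```

A correctly checksummed packet re-sums to zero, which is the usual receiver-side validity check.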

[Go to top]

The Ariba Framework for Application Development using Service Overlays
by Christian Hübsch, Christoph P. Mayer, and Oliver Waldhorst.
In Praxis der Informationsverarbeitung und Kommunikation 33, 2010, pages 7-11. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Developing new network services in the Internet is complex and costly. This high entrance barrier has prevented innovation in the network itself and left the Internet stuck as a mainly browser-based client/server system. End-system-based decentralized services are cheaper, but in terms of structure and protocols they are several orders of magnitude more complex than centralized systems. To foster the development of such decentralized network services, we present the ariba framework. We show how ariba can facilitate the development of end-system-based decentralized services through self-organizing service overlays, flexibly deployed purely on end-systems without the need for costly infrastructure

[Go to top]

Adapting Blackhat Approaches to Increase the Resilience of Whitehat Application Scenarios (PDF)
by Bartlomiej Polot.
Master's thesis, Technische Universität München, 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

2009

Solving very large distributed constraint satisfaction problems (PDF)
by Peter Harvey.
PhD thesis, University of Wollongong, New South Wales, Australia, December 2009. (BibTeX entry) (Download bibtex record)
(direct link)

This thesis investigates issues with existing approaches to distributed constraint satisfaction, and proposes a solution in the form of a new algorithm. These issues are most evident when solving large distributed constraint satisfaction problems, hence the title of the thesis. We will first survey existing algorithms for centralised constraint satisfaction, and describe how they have been modified to handle distributed constraint satisfaction. The method by which each algorithm achieves completeness will be investigated and analysed by application of a new theorem. We will then present a new algorithm, Support-Based Distributed Search, developed explicitly for distributed constraint satisfaction rather than being derived from centralised algorithms. This algorithm is inspired by the inherent structure of human arguments and similar mechanisms we observe in real-world negotiations. A number of modifications to this new algorithm are considered, and comparisons are made with existing algorithms, effectively demonstrating its place within the field. Empirical analysis is then conducted, and comparisons are made to state-of-the-art algorithms most able to handle large distributed constraint satisfaction problems. Finally, it is argued that any future development in distributed constraint satisfaction will necessitate changes in the algorithms used to solve small 'embedded' constraint satisfaction problems. The impact on embedded constraint satisfaction problems is considered, with a brief presentation of an improved algorithm for hypertree decomposition

[Go to top]

Monte-Carlo Search Techniques in the Modern Board Game Thurn and Taxis (PDF)
by Frederik Christiaan Schadd.
Master Thesis, Maastricht University, December 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Modern board games present a new and challenging field when researching search techniques in the field of Artificial Intelligence. These games differ from classic board games, such as chess, in that they can be non-deterministic, have imperfect information or more than two players. While tree-search approaches, such as alpha-beta pruning, have been quite successful in playing classic board games, by, for instance, defeating the then reigning world champion Garry Kasparov in chess, these techniques are not as effective when applied to modern board games. This thesis investigates the effectiveness of Monte-Carlo Tree Search when applied to a modern board game, for which the board game Thurn and Taxis was used. This is a non-deterministic modern board game with imperfect information that can be played with more than two players, and is hence suitable for research. First, the state-space and game-tree complexities of this game are computed, from which the conclusion can be drawn that the two-player version of the game has a complexity similar to the game Shogi. Several techniques are investigated in order to improve the sampling process, for instance by adding domain knowledge. Given the results of the experiments, one can conclude that Monte-Carlo Tree Search gives a slight performance increase over standard Monte-Carlo search. In addition, the most effective improvements appeared to be the application of pseudo-random simulations and limiting simulation lengths, while other techniques have been shown to be less effective or even ineffective. Overall, when applying the best performing techniques, an AI with advanced playing strength has been created, such that further research is likely to push this performance to a strength of expert level
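
The "standard Monte-Carlo search" baseline the thesis compares against can be sketched on a toy game. Below is flat Monte-Carlo move selection for single-pile Nim (the game and playout count are illustrative; MCTS additionally grows a search tree and biases playouts using per-node statistics):

```python
import random

def legal_moves(n):
    """Single-pile Nim: a move removes 1, 2 or 3 stones."""
    return [m for m in (1, 2, 3) if m <= n]

def random_playout(n, to_move):
    """Play uniformly random moves to the end; the player taking the
    last stone wins. Returns the winning player (0 or 1)."""
    while n > 0:
        n -= random.choice(legal_moves(n))
        to_move = 1 - to_move
    return 1 - to_move  # the player who just moved took the last stone

def monte_carlo_move(n, to_move=0, playouts=5000):
    """Flat Monte-Carlo search: estimate each legal move's win rate
    with random playouts and pick the empirically best move."""
    best_move, best_rate = None, -1.0
    for m in legal_moves(n):
        wins = sum(random_playout(n - m, 1 - to_move) == to_move
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = m, wins / playouts
    return best_move

random.seed(3)
move = monte_carlo_move(10)  # optimal play would leave a pile of 8
```

Flat sampling already prefers moves with better average outcomes; the thesis's point is that tree growth, informed playouts and simulation-length limits squeeze out additional strength.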

[Go to top]

Attribute-Based Encryption Supporting Direct/Indirect Revocation Modes
by Nuttapong Attrapadung and Hideki Imai.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Attribute-based encryption (ABE) enables an access control mechanism over encrypted data by specifying access policies among private keys and ciphertexts. In this paper, we focus on ABE that supports revocation. Currently, there are two available revocable ABE schemes in the literature. Their revocation mechanisms, however, differ in the sense that they can be considered as direct and indirect methods. Direct revocation enforces revocation directly by the sender, who specifies the revocation list while encrypting. Indirect revocation enforces revocation by the key authority, who releases key update material periodically in such a way that only non-revoked users can update their keys (hence, revoked users' keys are implicitly rendered useless). An advantage of the indirect method over the direct one is that it does not require senders to know the revocation list. In contrast, an advantage of the direct method over the other is that it does not involve a key update phase in which all non-revoked users interact with the key authority. In this paper, we present the first Hybrid Revocable ABE scheme that allows senders to select on-the-fly, when encrypting, whether to use the direct or the indirect revocation mode; therefore, it combines the best advantages of both methods

[Go to top]

XPay: Practical anonymous payments for Tor routing and other networked services (PDF)
by Yao Chen, Radu Sion, and Bogdan Carbunar.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We design and analyze the first practical anonymous payment mechanisms for network services. We start by reporting on our experience with the implementation of a routing micropayment solution for Tor. We then propose micropayment protocols of increasingly complex requirements for networked services, such as P2P or cloud-hosted services. The solutions are efficient, with bandwidth and latency overheads of under 4% and 0.9 ms respectively (in ORPay for Tor), provide full anonymity (both for payers and payees), and support thousands of transactions per second

[Go to top]

Scalable onion routing with Torsk (PDF)
by Jon McLachlan, Andrew Tran, Nicholas J. Hopper, and Yongdae Kim.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce Torsk, a structured peer-to-peer low-latency anonymity protocol. Torsk is designed as an interoperable replacement for the relay selection and directory service of the popular Tor anonymity network that decreases the bandwidth cost of relay selection and maintenance from quadratic to quasilinear while introducing no new attacks on the anonymity provided by Tor, and no additional delay to connections made via Tor. The resulting bandwidth savings make a modest-sized Torsk network significantly cheaper to operate, and allow low-bandwidth clients to join the network. Unlike previous proposals for P2P anonymity schemes, Torsk does not require all users to relay traffic for others. Torsk utilizes a combination of two P2P lookup mechanisms with complementary strengths in order to avoid attacks on the confidentiality and integrity of lookups. We show by analysis that previously known attacks on P2P anonymity schemes do not apply to Torsk, and report on experiments conducted with a 336-node wide-area deployment of Torsk, demonstrating its efficiency and feasibility

[Go to top]

On the risks of serving whenever you surf: Vulnerabilities in Tor's blocking resistance design (PDF)
by Jon McLachlan and Nicholas J. Hopper.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In Tor, a bridge is a client node that volunteers to help censored users access Tor by serving as an unlisted, first-hop relay. Since bridging is voluntary, the success of this circumvention mechanism depends critically on the willingness of clients to act as bridges. We identify three key architectural shortcomings of the bridge design: (1) bridges are easy to find; (2) a bridge always accepts connections when its operator is using Tor; and (3) traffic to and from clients connected to a bridge interferes with traffic to and from the bridge operator. These shortcomings lead to an attack that can expose the IP address of bridge operators visiting certain web sites over Tor. We also discuss mitigation mechanisms

[Go to top]

Hashing it out in public: Common failure modes of DHT-based anonymity schemes (PDF)
by Andrew Tran, Nicholas J. Hopper, and Yongdae Kim.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We examine peer-to-peer anonymous communication systems that use Distributed Hash Table algorithms for relay selection. We show that common design flaws in these schemes lead to highly effective attacks against the anonymity provided by the schemes. These attacks stem from attacks on DHT routing, and are not mitigated by the well-known DHT security mechanisms due to a fundamental mismatch between the security requirements of DHT routing's put/get functionality and anonymous routing's relay selection functionality. Our attacks essentially allow an adversary that controls only a small fraction of the relays to function as a global active adversary. We apply these attacks in more detail to two schemes: Salsa and Cashmere. In the case of Salsa, we show that an attacker that controls 10% of the relays in a network of size 10,000 can compromise more than 80% of all completed circuits; and in the case of Cashmere, we show that an attacker that controls 20% of the relays in a network of size 64,000 can compromise 42% of the circuits
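
The amplification the authors describe, where biased lookups let a small adversary act like a much larger one, is easy to illustrate with the standard first-and-last-relay compromise model. The fractions below are illustrative; the paper's figures come from its concrete attacks, not from this toy simulation:

```python
import random

def circuit_compromised(f, path_len=3):
    """End-to-end correlation model: a circuit is compromised when
    both its first and last relays are malicious."""
    relays = [random.random() < f for _ in range(path_len)]
    return relays[0] and relays[-1]

def compromise_rate(f, trials=100_000):
    """Monte-Carlo estimate of the fraction of compromised circuits."""
    return sum(circuit_compromised(f) for _ in range(trials)) / trials

random.seed(1)
honest_selection = compromise_rate(0.10)  # unbiased selection: about f^2 = 1%
biased_lookups = compromise_rate(0.50)    # captured lookups act like a larger f
```

With unbiased relay selection, a 10% adversary compromises roughly 1% of circuits; if lookup attacks let it steer relay selection toward its own nodes, the effective fraction, and hence the compromise rate, jumps by orders of magnitude.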

[Go to top]

Performance Evaluation of On-Demand Multipath Distance Vector Routing Protocol under Different Traffic Models (PDF)
by B. Malarkodi, P. Rakesh, and B. Venkataramani.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Traffic models are the heart of any performance evaluation of telecommunication networks. Understanding the nature of traffic in high-speed, high-bandwidth communication systems is essential for effective operation and performance evaluation of the networks. Many routing protocols reported in the literature for mobile ad hoc networks (MANETs) have been primarily designed and analyzed under the assumption of CBR traffic models, which are unable to capture the statistical characteristics of the actual traffic. It is necessary to evaluate the performance properties of MANETs in the context of more realistic traffic models. In an effort towards this end, this paper evaluates the performance of the ad hoc on-demand multipath distance vector (AOMDV) routing protocol in the presence of Poisson and bursty self-similar traffic and compares them with that of CBR traffic. Different metrics are considered in analyzing the performance of the routing protocol, including packet delivery ratio, throughput and end-to-end delay. Our simulation results indicate that the packet delivery fraction and throughput in AOMDV are increased in the presence of self-similar traffic compared to other traffic. Moreover, it is observed that the end-to-end delay in the presence of self-similar traffic is lower than that of CBR and higher than that of Poisson traffic

[Go to top]

PeerSim: A Scalable P2P Simulator (PDF)
by Alberto Montresor, Márk Jelasity, Gian Paolo Jesi, and Spyros Voulgaris.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The key features of peer-to-peer (P2P) systems are scalability and dynamism. The evaluation of a P2P protocol in realistic environments is very expensive and difficult to reproduce, so simulation is crucial in P2P research. PeerSim is an extremely scalable simulation environment that supports dynamic scenarios such as churn and other failure models. Protocols need to be specifically implemented for the PeerSim Java API, but with a reasonable effort they can be evolved into a real implementation. Testing in specified parameter-spaces is supported as well. PeerSim started out as a tool for our own research

[Go to top]

Nymble: Blocking Misbehaving Users in Anonymizing Networks (PDF)
by Patrick P. Tsang, Apu Kapadia, Cory Cornelius, and Sean Smith.
In IEEE Transactions on Dependable and Secure Computing (TDSC), September 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular websites. Website administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to honest and dishonest users alike. To address this problem, we present Nymble, a system in which servers can blacklist misbehaving users without compromising their anonymity. Our system is thus agnostic to different servers' definitions of misbehavior: servers can block users for whatever reason, and the privacy of blacklisted users is maintained

[Go to top]

Sybilproof Transitive Trust Protocols (PDF)
by Paul Resnick and Rahul Sami.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study protocols to enable one user (the principal) to make potentially profitable but risky interactions with another user (the agent), in the absence of direct trust between the two parties. In such situations, it is possible to enable the interaction indirectly through a chain of credit or "trust" links. We introduce a model that provides insight into many disparate applications, including open currency systems, network trust aggregation systems, and manipulation-resistant recommender systems. Each party maintains a trust account for each other party. When a principal's trust balance for an agent is high enough to cover potential losses from a bad interaction, direct trust is sufficient to enable the interaction. Allowing indirect trust opens up more interaction opportunities, but also expands the strategy space of an attacker seeking to exploit the community for its own ends. We show that with indirect trust exchange protocols, some friction is unavoidable: any protocol that satisfies a natural strategic safety property that we call sum-sybilproofness can sometimes lead to a reduction in expected overall trust balances even on interactions that are profitable in expectation. Thus, for long-term growth of trust accounts, which are assets enabling risky but valuable interactions, it may be necessary to limit the use of indirect trust. We present the hedged-transitive protocol and show that it achieves the optimal rate of expected growth in trust accounts, among all protocols satisfying the sum-sybilproofness condition

[Go to top]

Deleting files in the Celeste peer-to-peer storage system (PDF)
by Gal Badishi, Germano Caronni, Idit Keidar, Raphael Rom, and Glenn Scott.
In Journal of Parallel and Distributed Computing 69, July 2009, pages 613-622. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Celeste is a robust peer-to-peer object store built on top of a distributed hash table (DHT). Celeste is a working system, developed by Sun Microsystems Laboratories. During the development of Celeste, we faced the challenge of complete object deletion, and moreover, of deleting ''files'' composed of several different objects. This important problem is not solved by merely deleting meta-data, as there are scenarios in which all file contents must be deleted, e.g., due to a court order. Complete file deletion in a realistic peer-to-peer storage system has not been previously dealt with due to the intricacy of the problem–the system may experience high churn rates, nodes may crash or have intermittent connectivity, and the overlay network may become partitioned at times. We present an algorithm that eventually deletes all file contents, data and meta-data, in the aforementioned complex scenarios. The algorithm is fully functional and has been successfully integrated into Celeste

[Go to top]

A Collusion-Resistant Distributed Scalar Product Protocol with Application to Privacy-Preserving Computation of Trust (PDF)
by C.A. Melchor, B. Ait-Salem, and P. Gaborit.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private scalar product protocols have proved to be interesting in various applications such as data mining, data integration, trust computing, etc. In 2007, Yao et al. proposed a distributed scalar product protocol with application to privacy-preserving computation of trust [1]. This protocol is split into two phases: a homomorphic encryption computation and a private multi-party summation protocol. The summation protocol has two drawbacks: first, it generates a non-negligible communication overhead; and second, it introduces a security flaw. The contribution of the present paper is two-fold. We first prove that the protocol of [1] is not secure in the semi-honest model by showing that it is not resistant to collusion attacks, and we give an example of a collusion attack with only four participants. Second, we propose to use a superposed sending round as an alternative to the multi-party summation protocol, which results in better security properties and in a reduction of the communication costs. In particular, regarding security, we show that the previous scheme was vulnerable to collusions of three users, whereas in our proposal we can choose any t ∈ [1..n–1] and define a protocol resisting collusions of up to t users

[Go to top]

A Practical Study of Regenerating Codes for Peer-to-Peer Backup Systems (PDF)
by Alessandro Duminuco and E W Biersack.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In distributed storage systems, erasure codes represent an attractive solution to add redundancy to stored data while limiting the storage overhead. They are able to provide the same reliability as replication requiring much less storage space. Erasure coding breaks the data into pieces that are encoded and then stored on different nodes. However, when storage nodes permanently abandon the system, new redundant pieces must be created. For erasure codes, generating a new piece requires the transmission of k pieces over the network, resulting in a k times higher reconstruction traffic as compared to replication. Dimakis proposed a new class of codes, called Regenerating Codes, which are able to provide both the storage efficiency of erasure codes and the communication efficiency of replication. However, Dimakis gave only a theoretical description of the codes without discussing implementation issues or computational costs. We have done a real implementation of Random Linear Regenerating Codes that allows us to measure their computational cost, which can be significant if the parameters are not chosen properly. However, we also find that there exist parameter values that result in a significant reduction of the communication overhead at the expense of a small increase in storage cost and computation, which makes these codes very attractive for distributed storage systems
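
The bandwidth trade-off the abstract describes can be made concrete. For an object of size M cut into k pieces, a conventional erasure code must download the equivalent of the whole object to rebuild one lost piece, while at the minimum-storage regenerating (MSR) point each of d helper nodes contributes only M/(k(d−k+1)). A sketch with illustrative parameter values:

```python
def repair_traffic_erasure(M: float, k: int) -> float:
    """Classic (n, k) erasure code: rebuilding one lost piece means
    downloading k pieces of size M/k, i.e. the whole object M."""
    return k * (M / k)

def repair_traffic_msr(M: float, k: int, d: int) -> float:
    """Minimum-storage regenerating (MSR) point: download
    beta = M / (k * (d - k + 1)) from each of d helper nodes."""
    return d * M / (k * (d - k + 1))

M = 1000.0  # object size in MB
print(repair_traffic_erasure(M, k=8))    # 1000.0 MB
print(repair_traffic_msr(M, k=8, d=14))  # 250.0 MB
```

The 4x reduction here is exactly the kind of communication saving the paper weighs against the extra computation of random linear regenerating codes.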

[Go to top]

Evaluation of Sybil Attacks Protection Schemes in KAD (PDF)
by Thibault Cholez, Isabelle Chrisment, and Olivier Festor.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we assess the protection mechanisms built into recent clients to fight against the Sybil attack in KAD, a widely deployed Distributed Hash Table. We study three main mechanisms: a protection against flooding through packet tracking, an IP address limitation and a verification of identities. We evaluate their efficiency by designing and adapting an attack for several KAD clients with different levels of protection. Our results show that the new security rules mitigate the Sybil attacks previously launched. However, we prove that it is still possible to control a small part of the network despite the newly inserted defenses, with a distributed eclipse attack and limited resources

[Go to top]

Bootstrapping Peer-to-Peer Systems Using IRC
by Mirko Knoll, Matthias Helling, Arno Wacker, Sebastian Holzapfel, and Torben Weis.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Research in the area of peer-to-peer systems is mainly focused on structuring the overlay network. Little attention is paid to the process of setting up and joining a peer-to-peer overlay network, i.e. the bootstrapping of peer-to-peer networks. The major challenge is to get hold of one peer that is already in the overlay. Otherwise, the first peer must be able to detect that the overlay is currently empty. Successful P2P applications either provide a centralized server for this task (Skype) or they simply put the burden on the user (eMule). We propose an automatic solution which does not require any user intervention and does not exhibit a single point of failure. Such decentralized bootstrapping protocols are especially important for open non-commercial peer-to-peer systems which cannot provide a server infrastructure for bootstrapping. The algorithm we are proposing builds on the Internet Relay Chat (IRC), a highly available, open, and distributed network of chat servers. Our algorithm is designed to put only a very minimal load on the IRC servers. In measurements we show that our bootstrapping protocol scales very well, handles flash crowds, and puts only a constant load on the IRC system, regardless of the peer-to-peer overlay size

[Go to top]

Long term study of peer behavior in the KAD DHT (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
In IEEE/ACM Transactions on Networking 17, May 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in the literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling a representative subset of KAD every five minutes for six months and obtained information about geographical distribution of peers, session times, daily usage, and peer lifetime. We have found that session times are Weibull distributed and we show how this information can be exploited to make the publishing mechanism much more efficient. Peers are identified by the so-called KAD ID, which up to now was assumed to be persistent. However, we observed that a fraction of peers changes their KAD ID as frequently as once a session. This change of KAD IDs makes it difficult to characterize end-user behavior. For this reason we have been crawling the entire KAD network once a day for more than a year to track end-users with static IP addresses, which allows us to estimate end-user lifetime and the fraction of end-users changing their KAD ID
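
Weibull-distributed session times with shape below 1 are heavy-tailed: many very short sessions coexist with a few very long ones. Python's standard library can sample such a distribution directly; the shape and scale values below are placeholders for illustration, not the parameters fitted to the measured KAD traces:

```python
import random

# Shape < 1 gives the heavy-tailed behaviour typical of peer session
# times; these shape/scale values are illustrative placeholders.
SHAPE, SCALE = 0.59, 170.0  # scale in minutes

random.seed(7)
sessions = [random.weibullvariate(SCALE, SHAPE) for _ in range(10_000)]
mean_session = sum(sessions) / len(sessions)
# For any Weibull shape, P(X < scale) = 1 - 1/e (about 63%).
short_fraction = sum(s < SCALE for s in sessions) / len(sessions)
```

This is the kind of model the paper exploits: knowing the session-time distribution lets the publishing mechanism match republish intervals to the expected remaining lifetime of the peers holding the data.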

[Go to top]

Traffic Engineering vs. Content Distribution: A Game Theoretic Perspective (PDF)
by Dominic DiPalantino and Ramesh Johari.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper we explore the interaction between content distribution and traffic engineering. Because a traffic engineer may be unaware of the structure of content distribution systems or overlay networks, his management of the network may not fully anticipate how traffic changes as a result of his actions. Content distribution systems that assign servers at the application level can respond very rapidly to changes in the routing of the network. Consequently, the traffic engineer's decisions may almost never be applied to the intended traffic. We use a game-theoretic framework in which infinitesimal users of a network select the source of content, and the traffic engineer decides how the traffic will route through the network. We formulate a game and prove the existence of equilibria. Additionally, we present a setting in which equilibria are socially optimal, essentially unique, and stable. Conditions under which efficiency loss may be bounded are presented, and the results are extended to the cases of general overlay networks and multiple autonomous systems

[Go to top]

A Sybilproof Indirect Reciprocity Mechanism for Peer-to-Peer Networks (PDF)
by Raul Leonardo Landa Gamiochipi, David Griffin, Richard G. Clegg, Eleni Mykoniati, and Miguel Rio.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Although direct reciprocity (Tit-for-Tat) contribution systems have been successful in reducing free-loading in peer-to-peer overlays, it has been shown that, unless the contribution network is dense, they tend to be slow (or may even fail) to converge [1]. On the other hand, current indirect reciprocity mechanisms based on reputation systems tend to be susceptible to sybil attacks, peer slander and whitewashing. In this paper we present PledgeRoute, an accounting mechanism for peer contributions that is based on social capital. This mechanism allows peers to contribute resources to one set of peers and use this contribution to obtain services from a different set of peers, at a different time. PledgeRoute is completely decentralised, can be implemented in both structured and unstructured peer-to-peer systems, and it is resistant to the three kinds of attacks mentioned above. To achieve this, we model contribution transitivity as a routing problem in the contribution network of the peer-to-peer overlay, and we present arguments for the routing behaviour and the sybilproofness of our contribution transfer procedures on this basis. Additionally, we present mechanisms for the seeding of the contribution network, and a combination of incentive mechanisms and reciprocation policies that motivate peers to adhere to the protocol and maximise their service contributions to the overlay

[Go to top]

Queuing Network Models for Multi-Channel P2P Live Streaming Systems (PDF)
by Di Wu, Yong Liu, and Keith W. Ross.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In recent years there have been several large-scale deployments of P2P live video systems. Existing and future P2P live video systems will offer a large number of channels, with users switching frequently among the channels. In this paper, we develop infinite-server queueing network models to analytically study the performance of multi-channel P2P streaming systems. Our models capture essential aspects of multi-channel video systems, including peer channel switching, peer churn, peer bandwidth heterogeneity, and Zipf-like channel popularity. We apply the queueing network models to two P2P streaming designs: the isolated channel design (ISO) and the View-Upload Decoupling (VUD) design. For both of these designs, we develop efficient algorithms to calculate critical performance measures, develop an asymptotic theory to provide closed-form results when the number of peers approaches infinity, and derive near-optimal provisioning rules for assigning peers to groups in VUD. We use the analytical results to compare VUD with ISO. We show that VUD design generally performs significantly better, particularly for systems with heterogeneous channel popularities and streaming rates

[Go to top]

On Mechanism Design without Payments for Throughput Maximization (PDF)
by Thomas Moscibroda and Stefan Schmid.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

It is well-known that the overall efficiency of a distributed system can suffer if the participating entities seek to maximize their individual performance. Consequently, mechanisms have been designed that force the participants to behave more cooperatively. Most of these game-theoretic solutions rely on payments between participants. Unfortunately, such payments are often cumbersome to implement in practice, especially in dynamic networks and where transaction costs are high. In this paper, we investigate the potential of mechanisms which work without payments. We consider the problem of throughput maximization in multi-channel environments and shed light onto the throughput increase that can be achieved with and without payments. We introduce and analyze two different concepts: the worst-case leverage, where we assume that players end up in the worst rational strategy profile, and the average-case leverage, where players select a random non-dominated strategy. Our theoretical insights are complemented by simulations

[Go to top]

Brahms: Byzantine Resilient Random Membership Sampling (PDF)
by Edward Bortnikov, Maxim Gurevich, Idit Keidar, Gabriel Kliot, and Alexander Shraer.
In Computer Networks Journal (COMNET), Special Issue on Gossiping in Distributed Systems, April 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A taxonomy for and analysis of anonymous communications networks (PDF)
by Douglas Kelly.
phd, Air Force Institute of Technology, March 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Any entity operating in cyberspace is susceptible to debilitating attacks. With cyber attacks intended to gather intelligence and disrupt communications rapidly replacing the threat of conventional and nuclear attacks, a new age of warfare is at hand. In 2003, the United States acknowledged that the speed and anonymity of cyber attacks makes distinguishing among the actions of terrorists, criminals, and nation states difficult. Even President Obama's Cybersecurity Chief-elect feels challenged by the increasing sophistication of cyber attacks. Indeed, the rising quantity and ubiquity of new surveillance technologies in cyberspace enables instant, undetectable, and unsolicited information collection about entities. Hence, anonymity and privacy are becoming increasingly important issues. Anonymization enables entities to protect their data and systems from a diverse set of cyber attacks and preserve privacy. This research provides a systematic analysis of anonymity degradation, preservation and elimination in cyberspace to enhance the security of information assets. This includes discovery/obfuscation of identities and actions of/from potential adversaries. First, novel taxonomies are developed for classifying and comparing the wide variety of well-established and state-of-the-art anonymous networking protocols. These expand the classical definition of anonymity and are the first known to capture the peer-to-peer and mobile ad hoc anonymous protocol family relationships. Second, a unique synthesis of state-of-the-art anonymity metrics is provided. This significantly aids an entity's ability to reliably measure changing anonymity levels, thereby increasing their ability to defend against cyber attacks. Finally, a novel epistemic-based model is created to characterize how an adversary reasons with knowledge to degrade anonymity

[Go to top]

Peer Profiling and Selection in the I2P Anonymous Network (PDF)
by Lars Schimmer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Traffic Morphing: An efficient defense against statistical traffic analysis (PDF)
by Charles Wright, Scott Coull, and Fabian Monrose.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Recent work has shown that properties of network traffic that remain observable after encryption, namely packet sizes and timing, can reveal surprising information about the traffic's contents (e.g., the language of a VoIP call [29], passwords in secure shell logins [20], or even web browsing habits [21, 14]). While there are some legitimate uses for encrypted traffic analysis, these techniques also raise important questions about the privacy of encrypted communications. A common tactic for mitigating such threats is to pad packets to uniform sizes or to send packets at fixed timing intervals; however, this approach is often inefficient. In this paper, we propose a novel method for thwarting statistical traffic analysis algorithms by optimally morphing one class of traffic to look like another class. Through the use of convex optimization techniques, we show how to optimally modify packets in real-time to reduce the accuracy of a variety of traffic classifiers while incurring much less overhead than padding. Our evaluation of this technique against two published traffic classifiers for VoIP [29] and web traffic [14] shows that morphing works well on a wide range of network data, in some cases simultaneously providing better privacy and lower overhead than naive defenses
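As a toy illustration of the idea (not the paper's convex-optimization construction), the sketch below morphs a packet by sampling an output size from a hypothetical target distribution restricted to feasible, padding-only (larger-or-equal) sizes; the size classes and probabilities are made up:

```python
import random

# Illustrative packet-size classes (bytes) and a target distribution to imitate.
SIZES = [64, 128, 256, 512, 1024]
target = {64: 0.1, 128: 0.2, 256: 0.3, 512: 0.2, 1024: 0.2}

def morph(size):
    """Pick an output size >= the input size, sampled from the target
    distribution restricted to feasible (padding-only) sizes."""
    feasible = [s for s in SIZES if s >= size]
    weights = [target[s] for s in feasible]
    total = sum(weights)
    if total == 0:                       # nothing feasible: plain max-padding
        return max(SIZES)
    r = random.random() * total
    for s, w in zip(feasible, weights):
        r -= w
        if r <= 0:
            return s
    return feasible[-1]
```

Unlike padding every packet to the maximum size, this only pads as far as the target distribution requires, which is where the overhead savings in the paper come from.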

[Go to top]

A performance evaluation and examination of open-source erasure coding libraries for storage (PDF)
by James S. Plank, Jianqiang Luo, Catherine D. Schuman, Lihao Xu, and Zooko Wilcox-O'Hearn.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Over the past five years, large-scale storage installations have required fault-protection beyond RAID-5, leading to a flurry of research on and development of erasure codes for multiple disk failures. Numerous open-source implementations of various coding techniques are available to the general public. In this paper, we perform a head-to-head comparison of these implementations in encoding and decoding scenarios. Our goals are to compare codes and implementations, to discern whether theory matches practice, and to demonstrate how parameter selection, especially as it concerns memory, has a significant impact on a code's performance. Additional benefits are to give storage system designers an idea of what to expect in terms of coding performance when designing their storage systems, and to identify the places where further erasure coding research can have the most impact
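For intuition, the single-failure parity scheme (the RAID-5 case that the surveyed multi-failure codes generalize) can be sketched with plain XOR; the function names are illustrative:

```python
def xor_blocks(a, b):
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """Compute the XOR parity block over k data blocks."""
    acc = blocks[0]
    for b in blocks[1:]:
        acc = xor_blocks(acc, b)
    return acc

def recover(surviving, parity_block):
    """Rebuild the one missing data block from the survivors plus parity."""
    acc = parity_block
    for b in surviving:
        acc = xor_blocks(acc, b)
    return acc
```

Tolerating more than one simultaneous failure requires the Reed-Solomon-style codes that the paper benchmarks, but the encode/decode structure (and its memory-access behavior) is the same shape as this XOR loop.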

[Go to top]

Using link-layer broadcast to improve scalable source routing (PDF)
by Pengfei Di and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a network layer routing protocol that provides services that are similar to those of structured peer-to-peer overlays. In this paper, we describe several improvements to the SSR protocol. They aim at providing nodes with more up-to-date routing information: 1. The use of link-layer broadcast enables all neighbors of a node to contribute to the forwarding process. 2. A light-weight and fast selection mechanism avoids packet duplication and optimizes the source route iteratively. 3. Nodes implicitly learn the network's topology from overheard broadcast messages. We present simulation results which show the performance gain of the proposed improvements: 1. The delivery ratio in settings with high mobility increases. 2. The required per-node state can be reduced as compared with the original SSR protocol. 3. The route stretch decreases. — These improvements are achieved without increasing the routing overhead

[Go to top]

The Wisdom of Crowds: Attacks and Optimal Constructions (PDF)
by George Danezis, Claudia Diaz, Emilia Käsper, and Carmela Troncoso.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a traffic analysis of the ADU anonymity scheme presented at ESORICS 2008, and the related RADU scheme. We show that optimal attacks are able to de-anonymize messages more effectively than believed before. Our analysis applies to single messages as well as long term observations using multiple messages. The search for a better scheme is bound to fail, since we prove that the original Crowds anonymity system provides the best security for any given mean messaging latency. Finally we present D-Crowds, a scheme that supports any path length distribution, while leaking the least possible information, and quantify the optimal attacks against it

[Go to top]

Wireless Sensor Networks: A Survey
by Vidyasagar Potdar, Atif Sharif, and Elizabeth Chang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless Sensor Networks (WSN), an element of pervasive computing, are presently being used on a large scale to monitor real-time environmental status. However these sensors operate under extreme energy constraints and are designed with a particular application in mind. Designing a new wireless sensor node is an extremely challenging task and involves assessing a number of different parameters required by the target application, which includes range, antenna type, target technology, components, memory, storage, power, lifetime, security, computational capability, communication technology, size, programming interface and applications. This paper analyses commercially available wireless sensor nodes (and research prototypes) based on these parameters and outlines research directions in this area

[Go to top]

Website fingerprinting: attacking popular privacy enhancing technologies with the multinomial naive-bayes classifier (PDF)
by Dominik Herrmann, Rolf Wendolsky, and Hannes Federrath.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Privacy enhancing technologies like OpenSSL, OpenVPN or Tor establish an encrypted tunnel that enables users to hide content and addresses of requested websites from external observers. This protection is endangered by local traffic analysis attacks that allow an external, passive attacker between the PET system and the user to uncover the identity of the requested sites. However, existing proposals for such attacks are not practicable yet. We present a novel method that applies common text mining techniques to the normalised frequency distribution of observable IP packet sizes. Our classifier correctly identifies up to 97% of requests on a sample of 775 sites and over 300,000 real-world traffic dumps recorded over a two-month period. It outperforms previously known methods like Jaccard's classifier and Naïve Bayes that neglect packet frequencies altogether or rely on absolute frequency values, respectively. Our method is system-agnostic: it can be used against any PET without alteration. Closed-world results indicate that many popular single-hop and even multi-hop systems like Tor and JonDonym are vulnerable against this general fingerprinting attack. Furthermore, we discuss important real-world issues, namely false alarms and the influence of the browser cache on accuracy
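A minimal sketch of the classifier family used here — multinomial naive Bayes over packet-size frequency distributions, with add-one smoothing — on entirely made-up traces and site names:

```python
import math
from collections import Counter

def train(traces_by_site):
    """traces_by_site: {site: [observed packet sizes]}.
    Returns per-site log-probabilities with add-one smoothing."""
    vocab = {s for trace in traces_by_site.values() for s in trace}
    models = {}
    for site, trace in traces_by_site.items():
        counts = Counter(trace)
        total = len(trace) + len(vocab)
        models[site] = {s: math.log((counts[s] + 1) / total) for s in vocab}
    return models

def classify(models, trace):
    """Return the site whose model gives the trace the highest likelihood."""
    def score(site):
        floor = min(models[site].values())          # fallback for unseen sizes
        return sum(models[site].get(s, floor) for s in trace)
    return max(models, key=score)
```

The real attack normalises frequencies over full traffic dumps; this sketch only shows why packet-size histograms alone can separate sites.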

[Go to top]

Tuning Vivaldi: Achieving Increased Accuracy and Stability (PDF)
by Benedikt Elser, Andreas Förschler, and Thomas Fuhrmann.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network Coordinates are a basic building block for most peer-to-peer applications nowadays. They optimize the peer selection process by allowing the nodes to preferably attach to peers to whom they then experience a low round trip time. Although there has been substantial research effort on this topic over the last years, the optimization of the various network coordinate algorithms has not been pursued systematically yet. Analyzing the well-known Vivaldi algorithm and its proposed optimizations with several sets of extensive Internet traffic traces, we found that in the face of current Internet data most of the parameters that have been recommended in the original papers are an order of magnitude too high. Based on this insight, we recommend modified parameters that improve the algorithms' performance significantly
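For reference, the core Vivaldi spring-relaxation step looks as follows (a simplified sketch with a fixed timestep δ; the adaptive timestep and error weighting are exactly the kind of parameters the authors re-tune):

```python
import math

def vivaldi_update(xi, xj, rtt, delta=0.25):
    """One Vivaldi step: move coordinate xi along the 'spring' between
    xi and xj so their distance relaxes toward the measured RTT."""
    diff = [a - b for a, b in zip(xi, xj)]
    dist = math.sqrt(sum(d * d for d in diff))
    if dist == 0:                   # coincident nodes: pick an arbitrary direction
        diff, dist = [1.0] + [0.0] * (len(xi) - 1), 1.0
    err = rtt - dist                # > 0: coordinates too close, push apart
    return [a + delta * err * (d / dist) for a, d in zip(xi, diff)]
```

Running this alternately on two nodes makes their coordinate distance converge to the measured RTT; the tuning question in the paper is how aggressively δ (and related constants) should be set on real Internet traces.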

[Go to top]

Towards End-to-End Connectivity for Overlays across Heterogeneous Networks
by Sebastian Mies, Oliver Waldhorst, and Hans Wippel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The incremental adoption of IPv6, middle boxes (e.g., NATs, Firewalls) as well as completely new network types and protocols paint a picture of a future Internet that consists of extremely heterogeneous edge networks (e.g. IPv4, IPv6, industrial Ethernet, sensor networks) that are not supposed or able to communicate directly. This increasing heterogeneity imposes severe challenges for overlay networks, which are considered as a potential migration strategy towards the future Internet since they can add new functionality and services in a distributed and self-organizing manner. Unfortunately, overlays are based on end-to-end connectivity and, thus, their deployment is hindered by network heterogeneity. In this paper, we take steps towards a solution to enable overlay connections in such heterogeneous networks, building upon a model of heterogeneous networks that comprises several connectivity domains with direct connectivity, interconnected by relays. As a major contribution, we present a distributed protocol that detects the boundaries of connectivity domains as well as relays using a gossiping approach. Furthermore, the protocol manages unique identifiers of connectivity domains and efficiently handles domain splitting and merging due to underlay changes. Simulation studies indicate that the algorithm can handle splitting and merging of connectivity domains in reasonable time and is scalable with respect to control overhead

[Go to top]

SpoVNet Security Task Force Report (PDF)
by Ralph Holz, Christoph P. Mayer, Sebastian Mies, Heiko Niedermayer, and Muhammad Adnan Tariq.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

SPINE : Adaptive Publish/Subscribe for Wireless Mesh Networks (PDF)
by Jorge Alfonso Briones-Garcia, Boris Koldehofe, and Kurt Rothermel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Application deployment on Wireless Mesh Networks (WMNs) is a challenging issue. First it requires communication abstractions that allow for interoperation with Internet applications and second the offered solution should be sensitive to the available resources in the underlying network. Loosely coupled communication abstractions, like publish/subscribe, promote interoperability, but unfortunately are typically implemented at the application layer without considering the available resources at the underlay imposing a significant degradation of application performance in the setting of Wireless Mesh Networks. In this paper we present SPINE, a content-based publish/subscribe system, which considers the particular challenges of deploying application-level services in Wireless Mesh Networks. SPINE is designed to reduce the overhead which stems from both publications and reconfigurations, to cope with the inherent capacity limitations on communication links as well as with mobility of the wireless mesh-clients. We demonstrate the effectiveness of SPINE by comparison with traditional approaches in implementing content-based publish/subscribe

[Go to top]

Sphinx: A Compact and Provably Secure Mix Format (PDF)
by George Danezis and Ian Goldberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Sphinx is a cryptographic message format used to relay anonymized messages within a mix network. It is more compact than any comparable scheme, and supports a full set of security features: indistinguishable replies, hiding the path length and relay position, as well as providing unlinkability for each leg of the message's journey over the network. We prove the full cryptographic security of Sphinx in the random oracle model, and we describe how it can be used as an efficient drop-in replacement in deployed remailer systems

[Go to top]

A Software and Hardware IPTV Architecture for Scalable DVB Distribution (PDF)
by unknown.
In International Journal of Digital Multimedia Broadcasting 2009, 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders

[Go to top]

ShadowWalker: Peer-to-peer Anonymous Communication Using Redundant Structured Topologies (PDF)
by Prateek Mittal and Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer approaches to anonymous communication promise to eliminate the scalability concerns and central vulnerability points of current networks such as Tor. However, the P2P setting introduces many new opportunities for attack, and previous designs do not provide an adequate level of anonymity. We propose ShadowWalker: a new low-latency P2P anonymous communication system, based on a random walk over a redundant structured topology. We base our design on shadows that redundantly check and certify neighbor information; these certifications enable nodes to perform random walks over the structured topology while avoiding route capture and other attacks. We analytically calculate the anonymity provided by ShadowWalker and show that it performs well for moderate levels of attackers, and is much better than the state of the art. We also design an extension that improves forwarding performance at a slight anonymity cost, while at the same time protecting against selective DoS attacks. We show that our system has manageable overhead and can handle moderate churn, making it an attractive new design for P2P anonymous communication

[Go to top]

Self-organized Data Redundancy Management for Peer-to-Peer Storage Systems (PDF)
by Yaser Houri, Manfred Jobmann, and Thomas Fuhrmann.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In peer-to-peer storage systems, peers can freely join and leave the system at any time. Ensuring high data availability in such an environment is a challenging task. In this paper we analyze the costs of achieving data availability in fully decentralized peer-to-peer systems. We mainly address the problem of churn and what effect maintaining availability has on network bandwidth. We discuss two different redundancy techniques – replication and erasure coding – and consider their monitoring and repairing costs analytically. We calculate the bandwidth costs using basic cost equations and two different Markov reward models: one for a centralized monitoring system and the other for distributed monitoring. We show a comparison of the numerical results accordingly. Based on these results, we determine the best redundancy and maintenance strategy that corresponds to the peers' failure probability
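The trade-off being analyzed can already be seen in the basic availability equations for the two redundancy techniques (p is the probability a given peer is online; the numbers in the usage note are illustrative, not the paper's):

```python
from math import comb

def replication_avail(p, k):
    """Object is available if at least one of k replicas is online."""
    return 1 - (1 - p) ** k

def erasure_avail(p, n, m):
    """(n, m) erasure code: any m of n fragments reconstruct the object,
    so availability is P(at least m fragments online)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m, n + 1))
```

For example, with p = 0.9, three replicas (3× storage) give availability 0.999, while a (6, 3) code (only 2× storage) gives roughly 0.9987 — comparable availability at lower storage cost, which is why the repair and monitoring bandwidth costs studied in the paper become the deciding factor.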

[Go to top]

Security and Privacy Challenges in the Internet of Things (PDF)
by Christoph P. Mayer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The future Internet of Things as an intelligent collaboration of miniaturized sensors poses new challenges to security and end-user privacy. The ITU has identified that the protection of data and privacy of users is one of the key challenges in the Internet of Things [Int05]: lack of confidence about privacy will result in decreased adoption among users and therefore is one of the driving factors in the success of the Internet of Things. This paper gives an overview, categorization, and analysis of security and privacy challenges in the Internet of Things

[Go to top]

Scalable landmark flooding: a scalable routing protocol for WSNs (PDF)
by Pengfei Di and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless sensor networks (WSNs) are about to become a popular and inexpensive tool for all kinds of applications. More advanced applications also need end-to-end routing, which goes beyond the simple data dissemination and collection mechanisms of early WSNs. The special properties of WSNs – scarce memory, CPU, and energy resources – make this a challenge. The Dynamic Address Routing protocol (DART) could be a good candidate for WSN routing, if it were not so prone to link outages. In this paper, we propose Scalable Landmark Flooding (SLF), a new routing protocol for large WSNs. It combines ideas from landmark routing, flooding, and dynamic address routing. SLF is robust against link and node outages, requires only little routing state, and generates low maintenance traffic overhead

[Go to top]

Robust Random Number Generation for Peer-to-Peer Systems (PDF)
by Baruch Awerbuch and Christian Scheideler.
In Theor. Comput. Sci 410, 2009, pages 453-466. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of designing an efficient and robust distributed random number generator for peer-to-peer systems that is easy to implement and works even if all communication channels are public. A robust random number generator is crucial for avoiding adversarial join-leave attacks on peer-to-peer overlay networks. We show that our new generator together with a light-weight rule recently proposed in [B. Awerbuch, C. Scheideler, Towards a scalable and robust DHT, in: Proc. of the 18th ACM Symp. on Parallel Algorithms and Architectures, SPAA, 2006. See also http://www14.in.tum.de/personen/scheideler] for keeping peers well distributed can keep various structured overlay networks in a robust state even under a constant fraction of adversarial peers
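The classic building block underneath such distributed generators is commit-then-reveal XOR mixing, sketched below with a hash commitment; note this simple form ignores the adversarial scheduling and join-leave issues that the paper's construction is actually designed to withstand:

```python
import hashlib
import secrets

def commit(value: bytes):
    """Commit to a random contribution; (value, nonce) is revealed later."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(digest, value, nonce):
    """Check that a revealed (value, nonce) pair matches the commitment."""
    return hashlib.sha256(nonce + value).digest() == digest

def combine(values):
    """XOR all revealed contributions: the result is uniform as long as
    at least one contributor chose its value honestly at random."""
    out = bytes(len(values[0]))
    for v in values:
        out = bytes(a ^ b for a, b in zip(out, v))
    return out
```

Each peer commits before anyone reveals, so no peer can choose its contribution as a function of the others' — the property the robust generator must preserve even on public channels.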

[Go to top]

Providing Probabilistic Latency Bounds for Dynamic Publish/Subscribe Systems (PDF)
by Muhammad Adnan Tariq, Boris Koldehofe, Gerald G. Koch, and Kurt Rothermel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In the context of large decentralized many-to-many communication systems it is impractical to provide realistic and hard bounds for certain QoS metrics including latency bounds. Nevertheless, many applications can yield better performance if such bounds hold with a given probability. In this paper we show how probabilistic latency bounds can be applied in the context of publish/subscribe. We present an algorithm for maintaining individual probabilistic latency bounds in a highly dynamic environment for a large number of subscribers. The algorithm consists of an adaptive dissemination algorithm as well as a cluster partitioning scheme. Together they ensure i) adaptation to the individual latency requirements of subscribers under dynamically changing system properties, and ii) scalability by determining appropriate clusters according to available publishers in the system

[Go to top]

Privacy Integrated Queries: An Extensible Platform for Privacy-preserving Data Analysis (PDF)
by Frank D. McSherry.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We report on the design and implementation of the Privacy Integrated Queries (PINQ) platform for privacy-preserving data analysis. PINQ provides analysts with a programming interface to unscrubbed data through a SQL-like language. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's unconditional structural guarantees require no trust placed in the expertise or diligence of the analysts, substantially broadening the scope for design and deployment of privacy-preserving data analysis, especially by non-experts
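PINQ itself is a C#/LINQ platform; the differential-privacy primitive it layers over aggregate queries is the Laplace mechanism, which can be sketched as follows (Python stand-in, not PINQ's API):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = random.random() - 0.5
    while abs(u) >= 0.5:               # avoid log(0) at the boundary
        u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def noisy_count(records, predicate, epsilon):
    """Count query with sensitivity 1: adding Laplace(1/epsilon) noise
    makes the released count epsilon-differentially private."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

PINQ's contribution is tracking how such per-query epsilons compose across an entire analysis session, so the guarantee holds for any program the analyst writes, not just a single count.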

[Go to top]

A Practical Congestion Attack on Tor Using Long Paths (PDF)
by Nathan S Evans, Roger Dingledine, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 2005, Murdoch and Danezis demonstrated the first practical congestion attack against a deployed anonymity network. They could identify which relays were on a target Tor user's path by building paths one at a time through every Tor relay and introducing congestion. However, the original attack was performed on only 13 Tor relays on the nascent and lightly loaded Tor network. We show that the attack from their paper is no longer practical on today's 1500-relay heavily loaded Tor network. The attack doesn't scale because a) the attacker needs a tremendous amount of bandwidth to measure enough relays during the attack window, and b) there are too many false positives now that many other users are adding congestion at the same time as the attacks. We then strengthen the original congestion attack by combining it with a novel bandwidth amplification attack based on a flaw in the Tor design that lets us build long circuits that loop back on themselves. We show that this new combination attack is practical and effective by demonstrating a working attack on today's deployed Tor network. By coming up with a model to better understand Tor's routing behavior under congestion, we further provide a statistical analysis characterizing how effective our attack is in each case

[Go to top]

Optimization of distributed services with UNISONO (PDF)
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed services are a special case of P2P networks where nodes have several distinctive tasks. Based on previous work, we show how UNISONO provides a way to optimize these services to increase performance, efficiency and user experience. UNISONO is a generic framework for host-based distributed network measurements. In this talk, we present UNISONO as an enabler for self-organizing Service Delivery Platforms. We give a short overview of the UNISONO concept and show how distributed services benefit from its usage

[Go to top]

An Optimally Fair Coin Toss (PDF)
by Tal Moran, Moni Naor, and Gil Segev.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC '86] showed that for any two-party r-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by Ω(1/r). However, the best previously known protocol only guarantees O(1/√r) bias, and the question of whether Cleve's bound is tight has remained open for more than twenty years. In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve's lower bound is tight: we construct an r-round protocol with bias O(1/r)

[Go to top]

Multi Party Distributed Private Matching, Set Disjointness and Cardinality of Set Intersection with Information Theoretic Security (PDF)
by G. Narayanan, T. Aishwarya, Anugrah Agrawal, Arpita Patra, Ashish Choudhary, and C Rangan.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we focus on the specific problems of Private Matching, Set Disjointness and Cardinality of Set Intersection in information theoretic settings. Specifically, we give perfectly secure protocols for the above problems in n party settings, tolerating a computationally unbounded semi-honest adversary, who can passively corrupt at most t < n/2 parties. To the best of our knowledge, these are the first such information theoretically secure protocols in a multi-party setting for all the three problems. Previous solutions for Distributed Private Matching and Cardinality of Set Intersection were cryptographically secure and the previous Set Disjointness solution, though information theoretically secure, is in a two party setting. We also propose a new model for Distributed Private matching which is relevant in a multi-party setting

[Go to top]

Membership-concealing overlay networks (PDF)
by Eugene Y. Vasserman, Rob Jansen, James Tyra, Nicholas J. Hopper, and Yongdae Kim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Maintaining reference graphs of globally accessible objects in fully decentralized distributed systems
by Bjoern Saballus and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the advent of electronic computing, the processors' clock speed has risen tremendously. Now that energy efficiency requirements have stopped that trend, the number of processing cores per machine started to rise. In the near future, these cores will become more specialized, and their inter-connections will form complex networks, both on-chip and beyond. This trend opens new fields of applications for high performance computing: Heterogeneous architectures offer different functionalities and thus support a wider range of applications. The increased compute power of these systems allows more complex simulations and numerical computations. Falling costs enable even small companies to invest in multi-core systems and clusters. However, the growing complexity might impede this growth. Imagine a cluster of thousands of interconnected heterogeneous processor cores. A software developer will need a deep knowledge about the underlying infrastructure as well as the data and communication dependencies in her application to partition it optimally across the available cores. Moreover, a predetermined partitioning scheme cannot reflect failing processors or additionally provided resources. In our poster, we introduce J-Cell, a project that aims at simplifying high performance distributed computing. J-Cell offers a single system image, which allows applications to run transparently on heterogeneous multi-core machines. It distributes code, objects and threads onto the compute resources which may be added or removed at run-time. This dynamic property leads to an ad-hoc network of processors and cores. In this network, a fully decentralized object localization and retrieval algorithm guarantees the access to distributed shared objects

[Go to top]

Heterogeneous gossip (PDF)
by Davide Frey, Rachid Guerraoui, Anne-Marie Kermarrec, Boris Koldehofe, Martin Mogensen, Maxime Monod, and Vivien Quéma.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination. This paper presents HEAP, HEterogeneity-Aware gossip Protocol, where nodes dynamically adapt their contribution to the gossip dissemination according to their bandwidth capabilities. Using a continuous, itself gossip-based, approximation of relative bandwidth capabilities, HEAP dynamically leverages the most capable nodes by increasing their fanout, while decreasing by the same proportion that of less capable nodes. HEAP preserves the simple and proactive (churn adaptation) nature of gossip, while significantly improving its effectiveness. We extensively evaluate HEAP in the context of a video streaming application on a testbed of 270 PlanetLab nodes. Our results show that HEAP significantly improves the quality of the streaming over standard homogeneous gossip protocols, especially when the stream rate is close to the average available bandwidth
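The core adaptation HEAP describes, scaling each node's fanout in proportion to its bandwidth relative to a gossip-approximated average, can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation; the node names and base fanout are assumptions:

```python
def adapt_fanouts(bandwidths, base_fanout):
    """Scale each node's gossip fanout by its bandwidth relative to the
    network average, as HEAP's abstract describes. Illustrative sketch:
    HEAP itself approximates the average continuously via gossip."""
    avg = sum(bandwidths.values()) / len(bandwidths)
    return {node: max(1, round(base_fanout * bw / avg))
            for node, bw in bandwidths.items()}

# A node with twice the average bandwidth forwards to twice as many peers,
# while the total dissemination load across the network stays constant.
fanouts = adapt_fanouts({"a": 10.0, "b": 20.0, "c": 30.0}, base_fanout=6)
# → {'a': 3, 'b': 6, 'c': 9}
```

Note that the sum of fanouts is unchanged (18 = 3 nodes × base fanout 6), which is the load-preserving property the protocol relies on.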

[Go to top]

Financial Cryptography and Data Security (PDF)
by Peter Bogetoft, Dan Lund Christensen, Ivan Damgárd, Martin Geisler, Thomas Jakobsen, Mikkel Krøigaard, Janus Dam Nielsen, Jesper Buus Nielsen, Kurt Nielsen, Jakob Pagter, Michael Schwartzbach, and Tomas Toft.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This book constitutes the thoroughly refereed post-conference proceedings of the 14th International Conference on Financial Cryptography and Data Security, FC 2010, held in Tenerife, Canary Islands, Spain in January 2010. The 19 revised full papers and 15 revised short papers presented together with 1 panel report and 7 poster papers were carefully reviewed and selected from 130 submissions. The papers cover all aspects of securing transactions and systems and feature current research focusing on both fundamental and applied real-world deployments on all aspects surrounding commerce security

[Go to top]

Evaluation of Current P2P-SIP Proposals with Respect to the Igor/SSR API
by Markus Bucher.
Diplomarbeit, Technische Universität München, 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Enhancing Application-Layer Multicast Solutions by Wireless Underlay Support (PDF)
by Christian Hübsch and Oliver Waldhorst.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Application Layer Multicast (ALM) is an attractive solution to overcome the deployment problems of IP-Multicast. We show how to cope with the challenges of incorporating wireless devices into ALM protocols. As a first approach we extend the NICE protocol, significantly increasing its performance in scenarios with many devices connected through wireless LAN

[Go to top]

Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders (PDF)
by Frank McSherry and Ilya Mironov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty–i.e., noise–to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides
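The noise calibration the abstract refers to is, in its simplest form, the Laplace mechanism: noise scaled to the per-record sensitivity of the released statistic. A minimal sketch of that standard mechanism (not the paper's adapted recommender code; the example values are assumptions):

```python
import random

def dp_release(true_value, sensitivity, epsilon, rng=random):
    """Release a statistic with Laplace noise of scale sensitivity/epsilon.
    The difference of two independent Exp(1) draws is a standard Laplace
    sample, which avoids any edge cases of inverse-CDF sampling."""
    scale = sensitivity / epsilon
    noise = scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return true_value + noise

# E.g. a per-movie rating count, where one user changes the count by at most 1:
noisy_count = dp_release(true_value=1000, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy and larger noise; the paper's contribution is showing that aggregate learning phases of the Netflix algorithms tolerate this noise with little accuracy loss.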

[Go to top]

De-anonymizing Social Networks (PDF)
by Arvind Narayanan and Vitaly Shmatikov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small

[Go to top]

Cryptographically secure Bloom-filters
by Ryo Nojima and Youki Kadobayashi.
In Transactions on Data Privacy 2(2), 2009, pages 131-139. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

CLIO/UNISONO: practical distributed and overlay- wide network measurement
by Ralph Holz and Dirk Haage.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Building on previous work, we present an early version of our CLIO/UNISONO framework for distributed network measurements. CLIO/UNISONO is a generic measurement framework specifically aimed at overlays that need measurements for optimization purposes. In this talk, we briefly introduce the most important concepts and then focus on some more advanced mechanisms like measurements across connectivity domains and remote orders

[Go to top]

Challenges in Personalizing and Decentralizing the Web: An Overview of GOSSPLE
by Anne-Marie Kermarrec.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Bloom filters and overlays for routing in pocket switched networks (PDF)
by Christoph P. Mayer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Pocket Switched Networks (PSN) [3] have become a promising approach for providing communication between scarcely connected human-carried devices. Such devices, e.g. mobile phones or sensor nodes, are exposed to human mobility and can thus leverage inter-human contacts for store-and-forward routing. Efficiently routing in such delay tolerant networks is complex due to incomplete knowledge about the network and its high dynamics. In this work we want to develop an extension of Bloom filters for resource-efficient routing in pocket switched networks. Furthermore, we argue that PSNs may become densely populated in special situations. We want to exploit such situations to perform collaborative calculation of forwarding decisions. In this paper we present a simple scheme for distributed decision calculation using overlays and a DHT-based distributed variant of Bloom filters
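The underlying data structure is simple to state: a Bloom filter hashes each item to k bit positions, and membership tests can yield false positives but never false negatives. A minimal single-node sketch (the paper's DHT-distributed variant builds on the same operations; the sizes chosen here are assumptions):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed bit positions per item, stored in
    a single integer used as a bit vector."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k independent positions by salting the hash with an index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

Two filters built with the same parameters merge via bitwise OR of their bit vectors, which is what makes distributed variants attractive for aggregating forwarding state.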

[Go to top]

The bayesian traffic analysis of mix networks (PDF)
by Carmela Troncoso and George Danezis.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This work casts the traffic analysis of anonymity systems, and in particular mix networks, in the context of Bayesian inference. A generative probabilistic model of mix network architectures is presented that incorporates a number of attack techniques in the traffic analysis literature. We use the model to build a Markov Chain Monte Carlo inference engine that calculates the probabilities of who is talking to whom given an observation of network traces. We provide a thorough evaluation of its correctness and performance, and confirm that mix networks with realistic parameters are secure. This approach enables us to apply established information theoretic anonymity metrics on complex mix networks, and extract information from anonymised traffic traces optimally

[Go to top]

AS-awareness in Tor path selection (PDF)
by Matthew Edman and Paul Syverson.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is an anonymous communications network with thousands of router nodes worldwide. An intuition reflected in much of the literature on anonymous communications is that, as an anonymity network grows, it becomes more secure against a given observer because the observer will see less of the network. In particular, as the Tor network grows from volunteers operating relays all over the world, it becomes less and less likely for a single autonomous system (AS) to be able to observe both ends of an anonymous connection. Yet, as the network continues to grow significantly, no analysis has been done to determine if this intuition is correct. Further, modifications to Tor's path selection algorithm to help clients avoid an AS-level observer have not been proposed and analyzed. Five years ago a previous study examined the AS-level threat against client and destination addresses chosen a priori to be likely or interesting to examine. Using an AS-level path inference algorithm with improved accuracy, more extensive Internet routing data, and, most importantly, a model of typical Tor client AS-level sources and destinations based on data gathered from the live network, we demonstrate that the threat of a single AS observing both ends of an anonymous Tor connection is greater than previously thought. We look at the growth of the Tor network over the past five years and show that its explosive growth has had only a small impact on the network's robustness against an AS-level attacker. Finally, we propose and evaluate the effectiveness of some simple, AS-aware path selection algorithms that avoid the computational overhead imposed by full AS-level path inference algorithms. Our results indicate that a novel heuristic we propose is more effective against an AS-level observer than other commonly proposed heuristics for improving location diversity in path selection

[Go to top]

2008

A Practical Approach to Network Size Estimation for Structured Overlays (PDF)
by Tallat M. Shafaat, Ali Ghodsi, and Seif Haridi.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Structured overlay networks have recently received much attention due to their self-* properties under dynamic and decentralized settings. The number of nodes in an overlay fluctuates all the time due to churn. Since knowledge of the size of the overlay is a core requirement for many systems, estimating the size in a decentralized manner is a challenge taken up by recent research activities. Gossip-based Aggregation has been shown to give accurate estimates for the network size, but previous work done is highly sensitive to node failures. In this paper, we present a gossip-based aggregation-style network size estimation algorithm. We discuss shortcomings of existing aggregation-based size estimation algorithms, and give a solution that is highly robust to node failures and is adaptive to network delays. We examine our solution in various scenarios to demonstrate its effectiveness
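The aggregation idea the paper builds on is easy to simulate: one node starts with value 1, all others with 0, and repeated pairwise averaging conserves the total mass of 1.0, so every value converges to 1/N and each node can report 1/value as its size estimate. An illustrative sketch (synchronous sweeps; the sweep count and seed are assumptions, and this plain version has none of the paper's failure-robustness machinery):

```python
import random

def estimate_size(n_nodes, sweeps=50, seed=1):
    """Gossip-style averaging: pairwise exchanges conserve the sum of all
    values (1.0), so every value converges to 1/n_nodes."""
    rng = random.Random(seed)
    values = [1.0] + [0.0] * (n_nodes - 1)
    for _ in range(sweeps):
        for a in range(n_nodes):
            b = rng.randrange(n_nodes)
            values[a] = values[b] = (values[a] + values[b]) / 2
    # Each node's local estimate of the network size:
    return [1.0 / v for v in values]
```

The fragility the paper addresses is visible here: if the node holding the initial 1.0 fails early, the conserved mass changes and every estimate is skewed, which motivates their failure-robust redesign.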

[Go to top]

EGOIST: Overlay Routing using Selfish Neighbor Selection (PDF)
by Georgios Smaragdakis, Vassilis Lekakis, Nikolaos Laoutaris, Azer Bestavros, John W. Byers, and Mema Roussopoulos.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

A foundational issue underlying many overlay network applications ranging from routing to peer-to-peer file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretic models in the design of Egoist – a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications

[Go to top]

Rationality and Traffic Attraction: Incentives for Honest Path Announcements in BGP (PDF)
by Sharon Goldberg, Shai Halevi, Aaron D. Jaggard, Vijay Ramachandran, and Rebecca N. Wright.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study situations in which autonomous systems (ASes) may have incentives to send BGP announcements differing from the AS-level paths that packets traverse in the data plane. Prior work on this issue assumed that ASes seek only to obtain the best possible outgoing path for their traffic. In reality, other factors can influence a rational AS's behavior. Here we consider a more natural model, in which an AS is also interested in attracting incoming traffic (e.g., because other ASes pay it to carry their traffic). We ask what combinations of BGP enhancements and restrictions on routing policies can ensure that ASes have no incentive to lie about their data-plane paths. We find that protocols like S-BGP alone are insufficient, but that S-BGP does suffice if coupled with additional (quite unrealistic) restrictions on routing policies. Our game-theoretic analysis illustrates the high cost of ensuring that the ASes honestly announce data-plane paths in their BGP path announcements

[Go to top]

PEREA: Towards Practical TTP-Free Revocation in Anonymous Authentication (PDF)
by Patrick P. Tsang, Man Ho Au, Apu Kapadia, and Sean Smith.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Information Leaks in Structured Peer-to-peer Anonymous Communication Systems (PDF)
by Prateek Mittal and Nikita Borisov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We analyze information leaks in the lookup mechanisms of structured peer-to-peer anonymous communication systems and how these leaks can be used to compromise anonymity. We show that the techniques that are used to combat active attacks on the lookup mechanism dramatically increase information leaks and increase the efficacy of passive attacks. Thus there is a trade-off between robustness to active and passive attacks. We study this trade-off in two P2P anonymous systems, Salsa and AP3. In both cases, we find that, by combining both passive and active attacks, anonymity can be compromised much more effectively than previously thought, rendering these systems insecure for most proposed uses. Our results hold even if security parameters are changed or other improvements to the systems are considered. Our study therefore motivates the search for new approaches to P2P anonymous communication

[Go to top]

Identity-based encryption with efficient revocation (PDF)
by Alexandra Boldyreva, Vipul Goyal, and Virendra Kumar.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). Senders using an IBE scheme do not need to look up the public keys and the corresponding certificates of the receivers; the identities (e.g. emails or IP addresses) of the latter are sufficient to encrypt. Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However, in the setting of IBE, there has been little work on studying revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well – as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure

[Go to top]

A game-theoretic analysis of the implications of overlay network traffic on ISP peering (PDF)
by Jessie Hui Wang, Dah Ming Chiu, and John C. S. Lui.
In Computer Networks 52, October 2008, pages 2961-2974. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Inter-ISP traffic flow determines the settlement between ISPs and affects the perceived performance of ISP services. In today's Internet, the inter-ISP traffic flow patterns are controlled not only by ISPs' policy-based routing configuration and traffic engineering, but also by application layer routing. The goal of this paper is to study the economic implications of this shift in Internet traffic control assuming rational ISPs and subscribers. For this purpose, we build a general traffic model that predicts traffic patterns based on subscriber distribution and abstract traffic controls such as caching functions and performance sensitivity functions. We also build a game-theoretic model of subscribers picking ISPs, and ISPs making provisioning and peering decisions. In particular, we apply this to a local market where two ISPs compete for market share of subscribers under two traffic patterns: "Web" and "P2P overlay", that typifies the transition the current Internet is going through. Our methodology can be used to quantitatively demonstrate that (1) while economy of scale is the predominant property of the competitive ISP market, P2P traffic may introduce unfair distribution of peering benefit (i.e. free-riding); (2) the large ISP can restore more fairness by reducing its private capacity (bandwidth throttling), which has the drawback of hurting business growth; and (3) ISPs can reduce the level of peering (e.g. by reducing peering bandwidth) to restore more fairness, but this has the side-effect of also reducing the ISPs' collective bargaining power towards subscribers

[Go to top]

FairplayMP: a system for secure multi-party computation (PDF)
by Assaf Ben-David, Noam Nisan, and Benny Pinkas.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present FairplayMP (for "Fairplay Multi-Party"), a system for secure multi-party computation. Secure computation is one of the great achievements of modern cryptography, enabling a set of untrusting parties to compute any function of their private inputs while revealing nothing but the result of the function. In a sense, FairplayMP lets the parties run a joint computation that emulates a trusted party which receives the inputs from the parties, computes the function, and privately informs the parties of their outputs. FairplayMP operates by receiving a high-level language description of a function and a configuration file describing the participating parties. The system compiles the function into a description as a Boolean circuit, and performs a distributed evaluation of the circuit while revealing nothing else. FairplayMP supplements the Fairplay system [16], which supported secure computation between two parties. The underlying protocol of FairplayMP is the Beaver-Micali-Rogaway (BMR) protocol which runs in a constant number of communication rounds (eight rounds in our implementation). We modified the BMR protocol in a novel way and considerably improved its performance by using the Ben-Or-Goldwasser-Wigderson (BGW) protocol for the purpose of constructing gate tables. We chose to use this protocol since we believe that the number of communication rounds is a major factor in the overall performance of the protocol. We conducted different experiments which measure the effect of different parameters on the performance of the system and demonstrate its scalability. (We can now tell, for example, that running a second-price auction between four bidders, using five computation players, takes about 8 seconds.)

[Go to top]

Entropy Bounds for Traffic Confirmation (PDF)
by Luke O'Connor.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Detecting BitTorrent Blocking (PDF)
by Marcel Dischinger, Alan Mislove, Andreas Haeberlen, and P. Krishna Gummadi.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently, it has been reported that certain access ISPs are surreptitiously blocking their customers from uploading data using the popular BitTorrent file-sharing protocol. The reports have sparked an intense and wide-ranging policy debate on network neutrality and ISP traffic management practices. However, to date, end users lack access to measurement tools that can detect whether their access ISPs are blocking their BitTorrent traffic. And since ISPs do not voluntarily disclose their traffic management policies, no one knows how widely BitTorrent traffic blocking is deployed in the current Internet. In this paper, we address this problem by designing an easy-to-use tool to detect BitTorrent blocking and by presenting results from a widely used public deployment of the tool

[Go to top]

Dependent Link Padding Algorithms for Low Latency Anonymity Systems (PDF)
by Wei Wang, Mehul Motani, and Vikram Srinivasan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Low latency anonymity systems are susceptible to traffic analysis attacks. In this paper, we propose a dependent link padding scheme to protect anonymity systems from traffic analysis attacks while providing a strict delay bound. The covering traffic generated by our scheme uses the minimum sending rate to provide full anonymity for a given set of flows. The relationship between user anonymity and the minimum covering traffic rate is then studied via analysis and simulation. When user flows are Poisson processes with the same sending rate, the minimum covering traffic rate to provide full anonymity to m users is O(log m). For Pareto traffic, we show that the rate of the covering traffic converges to a constant when the number of flows goes to infinity. Finally, we use real Internet trace files to study the behavior of our algorithm when user flows have different rates

[Go to top]

Improving Tor using a TCP-over-DTLS Tunnel (PDF)
by Joel Reardon.
Master's thesis, University of Waterloo, September 2008. (BibTeX entry) (Download bibtex record)
(direct link)

The Tor network gives anonymity to Internet users by relaying their traffic around the world through a variety of routers. This incurs latency, and this thesis first explores where this latency occurs. Experiments discount the latency induced by routing traffic and computational latency to determine there is a substantial component that is caused by delay in the communication path. We determine that congestion control is causing the delay. Tor multiplexes multiple streams of data over a single TCP connection. This is not a wise use of TCP, and as such results in the unfair application of congestion control. We illustrate an example of this occurrence on a Tor node on the live network and also illustrate how packet dropping and reordering cause interference between the multiplexed streams. Our solution is to use a TCP-over-DTLS (Datagram Transport Layer Security) transport between routers, and give each stream of data its own TCP connection. We give our design for our proposal, and details about its implementation. Finally, we perform experiments on our implemented version to illustrate that our proposal has in fact resolved the multiplexing issues discovered in our system performance analysis. The future work section gives a number of steps towards optimizing and improving our work, along with some tangential ideas that were discovered during research. Additionally, the open-source software projects latency proxy and libspe, which were designed for our purposes but programmed for universal applicability, are discussed

[Go to top]

Compromising Anonymity Using Packet Spinning (PDF)
by Vasilis Pappas, Elias Athanasopoulos, Sotiris Ioannidis, and Evangelos P. Markatos.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a novel attack targeting anonymizing systems. The attack involves placing a malicious relay node inside an anonymizing system and keeping legitimate nodes "busy." We achieve this by creating circular circuits and injecting fraudulent packets, crafted in a way that will make them spin an arbitrary number of times inside our artificial loops. At the same time we inject a small number of malicious nodes that we control into the anonymizing system. By keeping a significant part of the anonymizing system busy spinning useless packets, we increase the probability of having our nodes selected in the creation of legitimate circuits, since we have more free capacity to route requests than the legitimate nodes. This technique may lead to the compromise of the anonymity of people using the system. To evaluate our novel attack, we used a real-world anonymizing system, TOR. We show that an anonymizing system that is composed of a series of relay nodes which perform cryptographic operations is vulnerable to our packet spinning attack. Our evaluation focuses on determining the cost we can introduce to the legitimate nodes by injecting the fraudulent packets, and the time required for a malicious client to create n-length TOR circuits. Furthermore we prove that routers that are involved in packet spinning do not have the capacity to process requests for the creation of new circuits and thus users are forced to select our malicious nodes for routing their data streams

[Go to top]

BitBlender: Light-Weight Anonymity for BitTorrent (PDF)
by Kevin Bauer, Damon McCoy, Dirk Grunwald, and Douglas Sicker.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present BitBlender, an efficient protocol that provides an anonymity layer for BitTorrent traffic. BitBlender works by creating an ad-hoc multi-hop network consisting of special peers called "relay peers" that proxy requests and replies on behalf of other peers. To understand the effect of introducing relay peers into the BitTorrent system architecture, we provide an analysis of the expected path lengths as the ratio of relay peers to normal peers varies. A prototype is implemented and experiments are conducted on PlanetLab to quantify the performance overhead associated with the protocol. We also propose protocol extensions to add confidentiality and access control mechanisms, countermeasures against traffic analysis attacks, and selective caching policies that simultaneously increase both anonymity and performance. We finally discuss the potential legal obstacles to deploying an anonymous file sharing protocol. This work is among the first to propose a privacy enhancing system that is designed specifically for a particular class of peer-to-peer traffic

[Go to top]

Why Share in Peer-to-Peer Networks? (PDF)
by Lian Jian and Jeffrey K. MacKie-Mason.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Prior theory and empirical work emphasize the enormous free-riding problem facing peer-to-peer (P2P) sharing networks. Nonetheless, many P2P networks thrive. We explore two possible explanations that do not rely on altruism or explicit mechanisms imposed on the network: direct and indirect private incentives for the provision of public goods. The direct incentive is a traffic redistribution effect that advantages the sharing peer. We find this incentive is likely insufficient to motivate equilibrium content sharing in large networks. We then approach P2P networks as a graph-theoretic problem and present sufficient conditions for sharing and free-riding to co-exist due to indirect incentives we call generalized reciprocity

[Go to top]

Towards Comparable Network Simulations (PDF)
by Pengfei Di, Yaser Houri, Kendy Kutzner, and Thomas Fuhrmann.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Simulations have been a valuable and much used tool in networking research for decades. New protocols are evaluated by simulations. Often, competing designs are judged by their respective performance in simulations. Despite this great importance the state-of-the-art in network simulations is nevertheless still low. A recent survey showed that most publications in a top conference did not even give enough details to repeat the simulations. In this paper we go beyond repeatability and ask: Are different simulations comparable? We study various implementations of the IEEE 802.11 media access layer in ns-2 and OMNeT++ and report some dramatic differences. These findings indicate that two protocols cannot be compared meaningfully unless they are compared in the very same simulation environment. We claim that this problem limits the value of the respective publications because readers are forced to re-implement the work that is described in the paper rather than building on its results. Facing the additional problem that not all authors will agree on one simulator, we address ways of making different simulators comparable

[Go to top]

P4P: Provider Portal for Applications (PDF)
by Haiyong Xie, Y. Richard Yang, Arvind Krishnamurthy, Yanbin Grace Liu, and Abraham Silberschatz.
In SIGCOMM Computer Communication Review 38, August 2008, pages 351-362. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications

[Go to top]

Efficient network aware search in collaborative tagging sites
by Sihem Amer-Yahia, Michael Benedikt, Laks V. S. Lakshmanan, and Julia Stoyanovich.
In PVLDB'08 1(1), August 2008. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Decentralized Learning in Markov Games (PDF)
by Peter Vrancx, Katja Verbeeck, and Ann Nowé.
In IEEE Transactions on Systems, Man, and Cybernetics, Part B 38, August 2008, pages 976-981. (BibTeX entry) (Download bibtex record)
(direct link)

Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games, a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.

[Go to top]

Bootstrapping of Peer-to-Peer Networks (PDF)
by Chris GauthierDickey and Christian Grothoff.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present the first heuristic for fully distributed bootstrapping of peer-to-peer networks. Our heuristic generates a stream of promising IP addresses to be probed as entry points. This stream is generated from statistical profiles of the IP ranges of start-of-authorities (SOAs) in the domain name system (DNS). We present experimental results demonstrating that this approach makes it efficient and practical to bootstrap Gnutella-sized peer-to-peer networks, without the need for centralized services or the public exposure of end-users' private IP addresses.
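The core of the heuristic is sampling candidate addresses from a weighted profile of address ranges. A minimal sketch of that idea, where the prefix weights and `/24`-style sampling are illustrative assumptions rather than the paper's actual profile format:

```python
import random


def candidate_stream(prefix_weights, rng=random.Random(42)):
    """Yield candidate IPv4 addresses to probe as entry points,
    sampling /24 prefixes in proportion to their weight in a
    (hypothetical) SOA-derived statistical profile."""
    prefixes = list(prefix_weights)
    weights = [prefix_weights[p] for p in prefixes]
    while True:
        # Heavily weighted prefixes are probed more often.
        p = rng.choices(prefixes, weights=weights)[0]
        yield f"{p}.{rng.randrange(1, 255)}"
```

A bootstrapping client would consume this stream, probing each address until a live peer answers; no central rendezvous server is involved.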

[Go to top]

BitTorrent is an Auction: Analyzing and Improving BitTorrent's Incentives (PDF)
by Dave Levin, Katrina LaCurts, Neil Spring, and Bobby Bhattacharjee.
In SIGCOMM Computer Communication Review 38, August 2008, pages 243-254. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Incentives play a crucial role in BitTorrent, motivating users to upload to others to achieve fast download times for all peers. Though long believed to be robust to strategic manipulation, recent work has empirically shown that BitTorrent does not provide its users incentive to follow the protocol. We propose an auction-based model to study and improve upon BitTorrent's incentives. The insight behind our model is that BitTorrent uses, not tit-for-tat as widely believed, but an auction to decide which peers to serve. Our model not only captures known, performance-improving strategies, it shapes our thinking toward new, effective strategies. For example, our analysis demonstrates, counter-intuitively, that BitTorrent peers have incentive to intelligently under-report what pieces of the file they have to their neighbors. We implement and evaluate a modification to BitTorrent in which peers reward one another with proportional shares of bandwidth. Within our game-theoretic model, we prove that a proportional-share client is strategy-proof. With experiments on PlanetLab, a local cluster, and live downloads, we show that a proportional-share unchoker yields faster downloads against BitTorrent and BitTyrant clients, and that under-reporting pieces yields prolonged neighbor interest.
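The proportional-share rule the paper evaluates can be sketched in a few lines. This is a toy illustration, not the authors' unchoker implementation; function and variable names are invented here:

```python
def proportional_share(capacity, contributions):
    """Split our upload capacity among neighbors in proportion to the
    bandwidth each neighbor recently contributed to us."""
    total = sum(contributions.values())
    if total == 0:
        # No contributions yet: split evenly (optimistic bootstrapping).
        n = len(contributions)
        return {peer: capacity / n for peer in contributions}
    return {peer: capacity * c / total for peer, c in contributions.items()}
```

Because each peer's reward grows linearly with its contribution, a strategic peer gains nothing by splitting or misrepresenting its upload, which is the intuition behind the strategy-proofness result.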

[Go to top]

Auction, but don't block (PDF)
by Xiaowei Yang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper argues that ISPs' recent actions to block certain applications (e.g. BitTorrent) and attempts to differentiate traffic could be a signal of bandwidth scarcity. Bandwidth-intensive applications such as VoD could have driven the traffic demand to the capacity limit of their networks. This paper proposes to let ISPs auction their bandwidth, instead of blocking or degrading applications. A user places a bid in a packet header based on how much he values the communication. When congestion occurs, ISPs allocate bandwidth to those users that value their packets the most, and charge them the Vickrey auction price. We outline a design that addresses the technical challenges to support this auction and analyze its feasibility. Our analysis suggests that the design has reasonable overhead and could be feasible with modern hardware.
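The pricing rule can be illustrated with a uniform-price multi-unit Vickrey sketch: the highest bidders win the available bandwidth units, and each winner pays the highest losing bid. This is a simplified stand-in (assuming unit demand per user), not the paper's full packet-level design:

```python
def vickrey_allocate(bids, slots):
    """Allocate `slots` identical bandwidth units to the highest bidders;
    every winner pays the highest losing bid (0 if nobody loses)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [user for user, _ in ranked[:slots]]
    clearing_price = ranked[slots][1] if len(ranked) > slots else 0
    return winners, clearing_price
```

Since the price a winner pays does not depend on his own bid, bidding one's true valuation is a dominant strategy, which is what makes the Vickrey rule attractive for congestion pricing.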

[Go to top]

Shining Light in Dark Places: Understanding the Tor Network (PDF)
by Damon McCoy, Kevin Bauer, Dirk Grunwald, Tadayoshi Kohno, and Douglas Sicker.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

To date, there has yet to be a study that characterizes the usage of a real deployed anonymity service. We present observations and analysis obtained by participating in the Tor network. Our primary goals are to better understand Tor as it is deployed and, through this understanding, propose improvements. In particular, we are interested in answering the following questions: (1) How is Tor being used? (2) How is Tor being mis-used? (3) Who is using Tor? To sample the results, we show that web traffic makes up the majority of the connections and bandwidth, but non-interactive protocols consume a disproportionately large amount of bandwidth when compared to interactive protocols. We provide a survey of how Tor is being misused, both by clients and by Tor router operators. In particular, we develop a method for detecting exit router logging (in certain cases). Finally, we present evidence that Tor is used throughout the world, but router participation is limited to only a few countries.

[Go to top]

Reputation Systems for Anonymous Networks (PDF)
by Elli Androulaki, Seung Geol Choi, Steven M. Bellovin, and Tal Malkin.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a reputation scheme for a pseudonymous peer-to-peer (P2P) system in an anonymous network. Misbehavior is one of the biggest problems in pseudonymous P2P systems, where there is little incentive for proper behavior. In our scheme, using ecash for reputation points, the reputation of each user is closely related to his real identity rather than to his current pseudonym. Thus, our scheme allows an honest user to switch to a new pseudonym while keeping his good reputation, while hindering a malicious user from erasing his trail of evil deeds with a new pseudonym.

[Go to top]

Performance Measurements and Statistics of Tor Hidden Services (PDF)
by Karsten Loesing, Werner Sandmann, Christian Wilms, and Guido Wirtz.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor (The Onion Routing) provides a secure mechanism for offering TCP-based services while concealing the hidden server's IP address. In general, the acceptance of a service strongly relies on its QoS properties. For potential Tor users, provided the anonymity is secured, probably the most important QoS parameter is the time until they finally get a response from such a hidden service. Internally, overall response times are composed of several steps invisible to the user. We provide comprehensive measurements of all relevant latencies and a detailed statistical analysis with special focus on the overall response times. Thereby, we gain valuable insights that enable us to make certain statistical assertions and to suggest improvements in the hidden service protocol and its implementation.

[Go to top]

Perfect Matching Statistical Disclosure Attacks (PDF)
by Carmela Troncoso, Benedikt Gierlichs, Bart Preneel, and Ingrid Verbauwhede.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traffic analysis is the best known approach to uncover relationships amongst users of anonymous communication systems, such as mix networks. Surprisingly, all previously published techniques require very specific user behavior to break the anonymity provided by mixes. At the same time, it is also well known that none of the considered user models reflects realistic behavior, which casts some doubt on previous work with respect to real-life scenarios. We first present a user behavior model that, to the best of our knowledge, is the least restrictive scheme considered so far. Second, we develop the Perfect Matching Disclosure Attack, an efficient attack based on graph theory that operates without any assumption on user behavior. The attack is highly effective when de-anonymizing mixing rounds because it considers all users in a round at once, rather than single users iteratively. Furthermore, the extracted sender-receiver relationships can be used to enhance user profile estimations. We extensively study the effectiveness and efficiency of our attack and previous work when de-anonymizing users communicating through a threshold mix. Empirical results show the advantage of our proposal. We also show how the attack can be refined and adapted to different scenarios including pool mixes, and how precision can be traded in for speed, which might be desirable in certain cases.
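The graph-theoretic step is a maximum-likelihood perfect matching between the senders and receivers of one mix round. A toy brute-force version over log-probabilities, usable only for tiny rounds (the attack at scale would use a polynomial-time assignment algorithm such as the Hungarian method; names here are illustrative):

```python
from itertools import permutations


def best_matching(log_probs):
    """Most likely perfect matching for one mix round.

    log_probs[i][j] = log P(sender i sent to receiver j).
    Returns the permutation `perm` maximizing the joint likelihood,
    i.e. receiver perm[i] is matched to sender i."""
    n = len(log_probs)
    return max(
        permutations(range(n)),
        key=lambda perm: sum(log_probs[i][perm[i]] for i in range(n)),
    )
```

Considering all users of a round jointly, rather than each sender in isolation, is exactly what distinguishes this attack from earlier statistical disclosure attacks.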

[Go to top]

PAR: Payment for Anonymous Routing (PDF)
by Elli Androulaki, Mariana Raykova, Shreyas Srivatsan, Angelos Stavrou, and Steven M. Bellovin.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Despite the growth of the Internet and the increasing concern for privacy of online communications, current deployments of anonymization networks depend on a very small set of nodes that volunteer their bandwidth. We believe that the main reason is not disbelief in their ability to protect anonymity, but rather the practical limitations in bandwidth and latency that stem from limited participation. This limited participation, in turn, is due to a lack of incentives to participate. We propose providing economic incentives, which historically have worked very well. In this paper, we demonstrate a payment scheme that can be used to compensate nodes which provide anonymity in Tor, an existing onion-routing anonymizing network. We show that current anonymous payment schemes are not suitable and introduce a hybrid payment system based on a combination of the Peppercoin micropayment system and a new type of one-use electronic cash. Our system claims to maintain users' anonymity, although the payment techniques mentioned previously, when adopted individually, provably fail.

[Go to top]

Metrics for Security and Performance in Low-Latency Anonymity Networks (PDF)
by Steven J. Murdoch and Robert N. M. Watson.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

An Improved Clock-skew Measurement Technique for Revealing Hidden Services (PDF)
by Sebastian Zander and Steven J. Murdoch.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Tor anonymisation network allows services, such as web servers, to be operated under a pseudonym. In previous work, Murdoch described a novel attack to reveal such hidden services by correlating clock skew changes with times of increased load, and hence temperature. Clock skew measurement suffers from two main sources of noise: network jitter and timestamp quantisation error. Depending on the target's clock frequency, the quantisation noise can be orders of magnitude larger than the noise caused by typical network jitter. Quantisation noise limits the previous attacks to situations where a high frequency clock is available. It has been hypothesised that by synchronising measurements to the clock ticks, quantisation noise can be reduced. We show how such synchronisation can be achieved and maintained, despite network jitter. Our experiments show that synchronised sampling significantly reduces the quantisation error and the remaining noise only depends on the network jitter (but not clock frequency). Our improved skew estimates are up to two orders of magnitude more accurate for low-resolution timestamps and up to one order of magnitude more accurate for high-resolution timestamps, when compared to previous random sampling techniques. The improved accuracy not only allows previous attacks to be executed faster and with less network traffic but also opens the door to previously infeasible attacks on low-resolution clocks, including measuring skew of an HTTP server over the anonymous channel.
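At its core, clock-skew measurement fits a line to the observed clock offset as a function of local time; the slope is the relative skew. A minimal least-squares sketch of that estimator (this omits the paper's actual contribution, the tick-synchronised sampling that suppresses quantisation noise):

```python
def estimate_skew(times, offsets):
    """Least-squares slope of observed remote-clock offset vs. local time.

    The slope is the relative clock skew, e.g. a slope of 2e-6 means the
    remote clock gains 2 microseconds per second (2 ppm)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_o = sum(offsets) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(times, offsets))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den
```

In the attack, changes in this slope over time reveal the temperature-induced load pattern of the hidden server.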

[Go to top]

On the Impact of Social Network Profiling on Anonymity (PDF)
by Claudia Diaz, Carmela Troncoso, and Andrei Serjantov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies anonymity in a setting where individuals who communicate with each other over an anonymous channel are also members of a social network. In this setting the social network graph is known to the attacker. We propose a Bayesian method to combine multiple available sources of information and obtain an overall measure of anonymity. We study the effects of network size and find that in this case anonymity degrades when the network grows. We also consider adversaries with incomplete or erroneous information; characterize their knowledge of the social network by its quantity, quality and depth; and discuss the implications of these properties for anonymity.

[Go to top]

How to Bypass Two Anonymity Revocation Systems (PDF)
by George Danezis and Len Sassaman.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, there have been several proposals for anonymous communication systems that provide intentional weaknesses to allow anonymity to be circumvented in special cases. These anonymity revocation schemes attempt to retain the properties of strong anonymity systems while granting a special class of people the ability to selectively break through their protections. We evaluate the two dominant classes of anonymity revocation systems, and identify fundamental flaws in their architecture, leading to a failure to ensure proper anonymity revocation, as well as introducing additional weaknesses for users not targeted for anonymity revocation.

[Go to top]

Bridging and Fingerprinting: Epistemic Attacks on Route Selection (PDF)
by George Danezis and Paul Syverson.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Users building routes through an anonymization network must discover the nodes comprising the network. Yet, it is potentially costly, or even infeasible, for everyone to know the entire network. We introduce a novel attack, the route bridging attack, which makes use of what route creators do not know of the network. We also present new discussion and results concerning route fingerprinting attacks, which make use of what route creators do know of the network. We prove analytic bounds for both route fingerprinting and route bridging and describe the impact of these attacks on published anonymity-network designs. We also discuss implications for network scaling and client-server vs. peer-to-peer systems.

[Go to top]

Breaking and Provably Fixing Minx (PDF)
by Eric Shimshock, Matt Staats, and Nicholas J. Hopper.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 2004, Danezis and Laurie proposed Minx, an encryption protocol and packet format for relay-based anonymity schemes, such as mix networks and onion routing, with simplicity as a primary design goal. Danezis and Laurie argued informally about the security properties of Minx but left open the problem of proving its security. In this paper, we show that there cannot be such a proof by showing that an active global adversary can decrypt Minx messages in polynomial time. To mitigate this attack, we also prove secure a very simple modification of the Minx protocol.

[Go to top]

Quantification of Anonymity for Mobile Ad Hoc Networks (PDF)
by Marie Elisabeth Gaup Moe.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a probabilistic system model for anonymous ad hoc routing protocols that takes into account the a priori knowledge of the adversary, and illustrate how information-theoretic entropy can be used to quantify the anonymity offered by a routing protocol as the adversary captures an increasing number of nodes in the network. The proposed measurement scheme is applied to the ANODR and ARM routing protocols.
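The entropy-based anonymity metric referenced here is standard: given the adversary's probability distribution over candidate nodes, Shannon entropy measures the remaining uncertainty, and normalizing by the maximum possible entropy gives a degree of anonymity between 0 and 1. A small sketch of the textbook formulas (not the paper's specific system model):

```python
import math


def anonymity_entropy(probs):
    """Shannon entropy (in bits) of the attacker's distribution over
    candidate nodes; higher means more uncertainty, i.e. better anonymity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def degree_of_anonymity(probs):
    """Entropy normalized by its maximum log2(N): 1 = perfect anonymity
    (uniform distribution), 0 = fully identified."""
    return anonymity_entropy(probs) / math.log2(len(probs))
```

As the adversary captures nodes, the distribution sharpens, the entropy drops, and the protocol's anonymity degrades toward zero.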

[Go to top]

Optimal mechanism design and money burning (PDF)
by Jason D. Hartline and Tim Roughgarden.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary "transfers" (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality: routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users "burn money", and such "payments" can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of "money-burning mechanisms" to maximize the residual surplus, the total value of the chosen outcome minus the payments required. Our primary contributions are the following.
* We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design.
* For general single-parameter agent settings, we characterize the Bayesian optimal money-burning mechanism.
* For multi-unit auctions, we design a near-optimal prior-free money-burning mechanism: for every valuation profile, its expected residual surplus is within a constant factor of our benchmark, the residual surplus of the best Bayesian optimal mechanism for this profile.
* For multi-unit auctions, we quantify the benefit of general transfers over money-burning: optimal money-burning mechanisms always obtain a logarithmic fraction of the full social surplus, and this bound is tight.

[Go to top]

Experimental Analysis of Super-Seeding in BitTorrent (PDF)
by Zhijia Chen, Yang Chen, Chuang Lin, Vaibhav Nivargi, and Pei Cao.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

With the popularity of BitTorrent, improving its performance has been an active research area. Super-seeding, a special upload policy for initial seeds, improves the efficiency in producing multiple seeds and reduces the uploading cost of the initial seeders. However, the overall benefit of super-seeding remains a question. In this paper, we conduct an experimental study of the super-seeding scheme of BitTornado. We attempt to answer the following questions: whether and how much super-seeding saves uploading cost, whether the download time of all peers is decreased by super-seeding, and in which scenarios super-seeding performs worse. With varying seed bandwidth and peer behavior, we analyze the overall download time and upload cost of the super-seeding scheme during random period tests over 250 widely distributed PlanetLab nodes. The results show that the benefits of super-seeding depend highly on the upload bandwidth of the initial seeds and the behavior of individual peers. Our work not only provides a reference for the potential adoption of super-seeding in BitTorrent, but also offers insights into balancing Quality of Experience (QoE) and cost savings for large-scale BitTorrent-like P2P commercial applications.

[Go to top]

Evaluating the performance of DCOP algorithms in a real world, dynamic problem (PDF)
by Robert Junges and Ana L. C. Bazzan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Complete algorithms have been proposed to solve problems modelled as distributed constraint optimization (DCOP). However, there are only few attempts to address real world scenarios using this formalism, mainly because of the complexity associated with those algorithms. In the present work we compare three complete algorithms for DCOP, aiming at studying how they perform in complex and dynamic scenarios of increasing sizes. In order to assess their performance we measure not only standard quantities such as number of cycles to arrive at a solution, size and quantity of exchanged messages, but also computing time and quality of the solution, which is related to the particular domain we use. This study can shed light on how the algorithms perform when applied to problems other than those reported in the literature (graph coloring, meeting scheduling, and distributed sensor networks).

[Go to top]

Anytime local search for distributed constraint optimization (PDF)
by Roie Zivan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most previous studies of Distributed Constraint Optimization Problems (DisCOPs) search considered only complete search algorithms, which are practical only for relatively small problems. Distributed local search algorithms can be used for solving DisCOPs. However, because of the differences between the global evaluation of a system's state and the private evaluation of states by agents, agents are unaware of the global best state which is explored by the algorithm. Previous attempts to use local search algorithms for solving DisCOPs reported the state held by the system at the termination of the algorithm, which was not necessarily the best state explored. A general framework for implementing distributed local search algorithms for DisCOPs is proposed. The proposed framework makes use of a BFS-tree in order to accumulate the costs of the system's state in its different steps and to propagate the detection of a new best step when it is found. The resulting framework enhances local search algorithms for DisCOPs with the anytime property. The proposed framework does not require additional network load. Agents are required to hold a small (linear) additional space (besides the requirements of the algorithm in use). The proposed framework preserves privacy at a higher level than complete DisCOP algorithms which make use of a pseudo-tree (ADOPT, DPOP).

[Go to top]

Towards Empirical Aspects of Secure Scalar Product (PDF)
by I-Cheng Wang, Chih-Hao Shen, Tsan-sheng Hsu, Churn-Chung Liao, Da-Wei Wang, and J. Zhan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Privacy is critically important, and there is a fair amount of research about it. However, few empirical studies about the cost of privacy have been conducted. In the area of secure multiparty computation, the scalar product has long been reckoned as one of the most promising building blocks in place of the classic logic gates. The reason is that the scalar product is not only complete, i.e., as expressive as logic gates, but also much more efficient than logic gates. As a result, we set out to study the computation and communication resources needed for some of the most well-known and frequently cited secure scalar-product protocols, including the composite-residuosity, the invertible-matrix, the polynomial-sharing, and the commodity-based approaches. Besides implementation remarks on these approaches, we analyze and compare their execution time, computation time, and random number consumption, which are the resources of greatest concern for secure protocols. Moreover, Fairplay, the benchmark approach implementing Yao's famous circuit evaluation protocol, is included in our experiments in order to demonstrate the potential for the scalar product to replace logic gates.

[Go to top]

Swarming on Optimized Graphs for n-way Broadcast (PDF)
by Georgios Smaragdakis, Nikolaos Laoutaris, Pietro Michiardi, Azer Bestavros, John W. Byers, and Mema Roussopoulos.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and as a consequence deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first one strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.

[Go to top]

Stable Peers: Existence, Importance, and Application in Peer-to-Peer Live Video Streaming (PDF)
by Feng Wang, Jiangchuan Liu, and Yongqiang Xiong.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

This paper presents a systematic in-depth study on the existence, importance, and application of stable nodes in peer-to-peer live video streaming. Using traces from a real large-scale system as well as analytical models, we show that, while the number of stable nodes is small throughout a whole session, their longer lifespans make them constitute a significant portion in a per-snapshot view of a peer-to-peer overlay. As a result, they have substantially affected the performance of the overall system. Inspired by this, we propose a tiered overlay design, with stable nodes being organized into a tier-1 backbone for serving tier-2 nodes. It offers a highly cost-effective and deployable alternative to proxy-assisted designs. We develop a comprehensive set of algorithms for stable node identification and organization. Specifically, we present a novel structure, Labeled Tree, for the tier-1 overlay, which, leveraging stable peers, simultaneously achieves low overhead and high transmission reliability. Our tiered framework flexibly accommodates diverse existing overlay structures in the second tier. Our extensive simulation results demonstrate that the customized optimization using selected stable nodes boosts the streaming quality and also effectively reduces the control overhead. This is further validated through prototype experiments over the PlanetLab network.

[Go to top]

Privacy guarantees through distributed constraint satisfaction (PDF)
by Boi Faltings, Thomas Leaute, and Adrian Petcu.
In unknown(12), April 2008. (BibTeX entry) (Download bibtex record)
(direct link)

In Distributed Constraint Satisfaction Problems, agents often desire to find a solution while revealing as little as possible about their variables and constraints. So far, most algorithms for DisCSP do not guarantee privacy of this information. This paper describes some simple obfuscation techniques that can be used with DisCSP algorithms such as DPOP, and provide sensible privacy guarantees based on the distributed solving process without sacrificing its efficiency.

[Go to top]

Improving User and ISP Experience through ISP-aided P2P Locality (PDF)
by Vinay Aggarwal, Obi Akonjang, and Anja Feldmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Despite recent improvements, P2P systems are still plagued by fundamental issues such as overlay/underlay topological and routing mismatch, which affects their performance and causes traffic strains on the ISPs. In this work, we aim to improve overall system performance for ISPs as well as P2P systems by means of traffic localization through improved collaboration between ISPs and P2P systems. More specifically, we study the effects of different ISP/P2P topologies as well as a broad range of influential user behavior characteristics, namely content availability, churn, and query patterns, on end-user and ISP experience. We show that ISP-aided P2P locality benefits both P2P users and ISPs, measured in terms of improved content download times, increased network locality of query responses and desired content, and overall reduction in P2P traffic.

[Go to top]

A Concept of an Anonymous Direct P2P Distribution Overlay System (PDF)
by Igor Margasinski and Michal Pioro.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The paper introduces a peer-to-peer system called P2PRIV (peer-to-peer direct and anonymous distribution overlay). Basic novel features of P2PRIV are: (i) a peer-to-peer parallel content exchange architecture, and (ii) separation of the anonymization process from the transport function. These features allow a considerable saving of service time while preserving a high degree of anonymity. In the paper we evaluate anonymity measures of P2PRIV (using a normalized entropy measurement model) as well as its traffic measures (including service time and network dynamics), and compare the anonymity and traffic performance of P2PRIV with a well-known system called CROWDS.

[Go to top]

A Tune-up for Tor: Improving Security and Performance in the Tor Network (PDF)
by Robin Snader and Nikita Borisov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Tor anonymous communication network uses self-reported bandwidth values to select routers for building tunnels. Since tunnels are allocated in proportion to this bandwidth, this allows a malicious router operator to attract tunnels for compromise. Since the metric used is insensitive to relative load, it does not adequately respond to changing conditions and hence produces unreliable performance, driving many users away. We propose an opportunistic bandwidth measurement algorithm to replace self-reported values and address both of these problems. We also propose a mechanism to let users tune Tor performance to achieve higher performance or higher anonymity. Our mechanism effectively blends the traffic from users of different preferences, making partitioning attacks difficult. We implemented the opportunistic measurement and tunable performance extensions and examined their performance both analytically and in the real Tor network. Our results show that users can get dramatic increases in either performance or anonymity with little to no sacrifice in the other metric, or a more modest improvement in both. Our mechanisms are also invulnerable to the previously published low-resource attacks on Tor.
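The tunable trade-off can be sketched as bandwidth-weighted router selection with an exponent controlling the bias: an exponent of 0 gives uniform (anonymity-favoring) selection, while larger exponents increasingly favor fast routers. This is a simplified illustration of the idea, with invented names, not the exact selection function from the paper:

```python
import random


def pick_router(bandwidths, s, rng=random.Random(0)):
    """Select a router with probability proportional to bandwidth**s.

    s tunes the performance/anonymity trade-off: s=0 is uniform selection
    (maximum anonymity), large s strongly favors high-bandwidth routers
    (maximum performance)."""
    routers = list(bandwidths)
    weights = [bandwidths[r] ** s for r in routers]
    return rng.choices(routers, weights=weights)[0]
```

Because every user draws from the same weighted distribution (only with a different exponent), traffic from performance-seeking and anonymity-seeking users stays blended on the same routers.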

[Go to top]

TRIBLER: a Social-based Peer-to-Peer System (PDF)
by Johan Pouwelse, Pawel Garbacki, Jun Wang, Arno Bakker, Jie Yang, Alexandru Iosup, Dick H. J. Epema, Marcel J. T. Reinders, Maarten van Steen, and Henk J. Sips.
In Concurrency and Computation: Practice & Experience 20, February 2008, pages 127-138. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most current peer-to-peer (P2P) file-sharing systems treat their users as anonymous, unrelated entities, and completely disregard any social relationships between them. However, social phenomena such as friendship and the existence of communities of users with similar tastes or interests may well be exploited in such systems in order to increase their usability and performance. In this paper we present a novel social-based P2P file-sharing paradigm that exploits social phenomena by maintaining social networks and using these in content discovery, content recommendation, and downloading. Based on this paradigm's main concepts such as taste buddies and friends, we have designed and implemented the TRIBLER P2P file-sharing system as a set of extensions to BitTorrent. We present and discuss the design of TRIBLER, and we show evidence that TRIBLER enables fast content discovery and recommendation at a low additional overhead, and a significant improvement in download performance. Copyright 2007 John Wiley & Sons, Ltd

[Go to top]

Insight into redundancy schemes in DHTs (PDF)
by Guihai Chen, Tongqing Qiu, and Fan Wu.
In Journal of Supercomputing 43, February 2008, pages 183-198. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In order to provide high data availability in peer-to-peer (P2P) DHTs, proper data redundancy schemes are required. This paper compares two popular schemes: replication and erasure coding. Unlike previous comparisons, we take user download behavior into account. Furthermore, we propose a hybrid redundancy scheme, which shares user-downloaded files for subsequent accesses and utilizes erasure coding to adjust file availability. Comparison experiments of the three schemes show that replication saves more bandwidth than erasure coding, although it requires more storage space, when average node availability is higher than 47%; moreover, our hybrid scheme saves more maintenance bandwidth with an acceptable redundancy factor
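
The availability trade-off between replication and erasure coding can be illustrated with the standard binomial availability model (a hedged sketch only; the paper's own analysis additionally models user download behavior, which this omits):

```python
from math import comb

def replication_avail(p: float, r: float) -> float:
    """Probability that at least one of r full replicas is online,
    given per-node availability p."""
    return 1.0 - (1.0 - p) ** r

def erasure_avail(p: float, n: int, k: int) -> float:
    """Probability that an (n, k) erasure-coded file is recoverable:
    at least k of its n fragments must be online."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))

# Same storage overhead (factor 2), very different availability at p = 0.5:
print(replication_avail(0.5, 2))    # 2 replicas
print(erasure_avail(0.5, 16, 8))    # 16 fragments, any 8 recover the file
```

With equal redundancy factors, erasure coding concentrates the probability mass around n·p fragments, which is why it wins on availability while replication can win on maintenance bandwidth, as the abstract notes.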

[Go to top]

The Decentralized File System Igor-FS as an Application for Overlay-Networks (PDF)
by unknown.
Doctoral, Universität Fridericiana (TH), February 2008. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Working in distributed systems is part of the information society. More and more people and organizations work with growing data volumes. Often, part of the problem is to access large files in a shared way. Until now, there have been two commonly used approaches to allow this kind of access: either the files are transferred via FTP, e-mail or a similar medium before the access happens, or a centralized server provides file services. The first alternative has the disadvantage that the entire file has to be transferred before the first access can succeed. If only small parts of the file have changed compared to a previous version, the entire file has to be transferred anyway. The centralized approach has disadvantages regarding scalability and reliability. In both approaches, authorization and authentication can be difficult in case users are separated by untrusted network segments

[Go to top]

A Survey of Anonymous Communication Channels (PDF)
by George Danezis and Claudia Diaz.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an overview of the field of anonymous communications, from its establishment by David Chaum in 1981 to today. Key systems are presented categorized according to their underlying principles: semi-trusted relays, mix systems, remailers, onion routing, and systems to provide robust mixing. We include extended discussions of the threat models and usage models that different schemes provide, and the trade-offs between the security properties offered and the communication characteristics different systems support

[Go to top]

Don't Clog the Queue: Circuit Clogging and Mitigation in P2P anonymity schemes (PDF)
by Jon McLachlan and Nicholas J. Hopper.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

At Oakland 2005, Murdoch and Danezis described an attack on the Tor anonymity service that recovers the nodes in a Tor circuit, but not the client. We observe that in a peer-to-peer anonymity scheme, the client is part of the circuit and thus the technique can be of greater significance in this setting. We experimentally validate this conclusion by showing that "circuit clogging" can identify client nodes using the MorphMix peer-to-peer anonymity protocol. We also propose and empirically validate the use of the Stochastic Fair Queueing discipline on outgoing connections as an efficient and low-cost mitigation technique

[Go to top]

AmbiComp: A platform for distributed execution of Java programs on embedded systems by offering a single system image (PDF)
by Johannes Eickhold, Thomas Fuhrmann, Bjoern Saballus, Sven Schlender, and Thomas Suchy.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Ambient Intelligence pursues the vision that small networked computers will jointly perform tasks that create the illusion of an intelligent environment. One of the most pressing challenges in this context is the question how one could easily develop software for such highly complex, but resource-scarce systems. In this paper we present a snapshot of our ongoing work towards facilitating software development for Ambient Intelligence systems. In particular, we present the AmbiComp [1] platform. It consists of small, modular hardware, a flexible firmware including a Java Virtual Machine, and an Eclipse-based integrated development environment

[Go to top]

What Can We Learn Privately? (PDF)
by Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith.
In CoRR abs/0803.0924, 2008. (BibTeX entry) (Download bibtex record)
(direct link)

Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms

[Go to top]

Unerkannt. Anonymisierende Peer-to-Peer-Netze im Überblick
by Nils Durner, Nathan S Evans, and Christian Grothoff.
In iX magazin für professionelle informationstechnik, 2008. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

The Underlay Abstraction in the Spontaneous Virtual Networks (SpoVNet) Architecture (PDF)
by Roland Bless, Christian Hübsch, Sebastian Mies, and Oliver Waldhorst.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Next generation networks will combine many heterogeneous access technologies to provide services to a large number of highly mobile users while meeting their demands for quality of service, robustness, and security. Obviously, this is not a trivial task and many protocols fulfilling some combination of these requirements have been proposed. However, none of the current proposals meets all requirements, and the deployment of new applications and services is hindered by a patchwork of protocols. This paper presents Spontaneous Virtual Networks (SpoVNet), an architecture that fosters the creation of new applications and services for next generation networks by providing an underlay abstraction layer. This layer applies an overlay-based approach to cope with mobility, multi-homing, and heterogeneity. For coping with network mobility, it uses a SpoVNet-specific addressing scheme, splitting node identifiers from network locators and providing persistent connections by transparently switching locators. To deal with multi-homing, it transparently chooses the most appropriate pair of network locators for each connection. To cope with network and protocol heterogeneity, it uses dedicated overlay nodes, e.g., for relaying between IPv4 and IPv6 hosts

[Go to top]

Trust-Rated Authentication for Domain-Structured Distributed Systems (PDF)
by Ralph Holz, Heiko Niedermayer, Peter Hauck, and Georg Carle.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an authentication scheme and new protocol for domain-based scenarios with inter-domain authentication. Our protocol is primarily intended for domain-structured Peer-to-Peer systems but is applicable for any domain scenario where clients from different domains wish to authenticate to each other. To this end, we make use of Trusted Third Parties in the form of Domain Authentication Servers in each domain. These act on behalf of their clients, resulting in a four-party protocol. If there is a secure channel between the Domain Authentication Servers, our protocol can provide secure authentication. To address the case where domains do not have a secure channel between them, we extend our scheme with the concept of trust-rating. Domain Authentication Servers signal security-relevant information to their clients (pre-existing secure channel or not, trust, ...). The clients evaluate this information to decide if it fits the security requirements of their application

[Go to top]

Tahoe: the least-authority filesystem (PDF)
by Zooko Wilcox-O'Hearn and Brian Warner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source

[Go to top]

The Spontaneous Virtual Networks Architecture for Supporting Future Internet Services and Applications
by Roland Bless, Oliver Waldhorst, and Christoph P. Mayer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Shortest-path routing in randomized DHT-based Peer-to-Peer systems
by Chih-Chiang Wang and Khaled Harfoush.
In Comput. Netw 52(18), 2008, pages 3307-3317. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Randomized DHT-based Peer-to-Peer (P2P) systems grant nodes certain flexibility in selecting their overlay neighbors, leading to irregular overlay structures but to better overall performance in terms of path latency, static resilience and local convergence. However, routing in the presence of overlay irregularity is challenging. In this paper, we propose a novel routing protocol, RASTER, that approximates shortest overlay routes between nodes in randomized DHTs. Unlike previously proposed routing protocols, RASTER encodes and aggregates routing information. Its simple bitmap-encoding scheme together with the proposed RASTER routing algorithm enable a performance edge over current overlay routing protocols. RASTER provides a forwarding overhead of merely a small constant number of bitwise operations, a routing performance close to optimal, and a better resilience to churn. RASTER also provides nodes with the flexibility to adjust the size of the maintained routing information based on their storage/processing capabilities. The cost of storing and exchanging encoded routing information is manageable and grows logarithmically with the number of nodes in the system

[Go to top]

Robust De-anonymization of Large Sparse Datasets (PDF)
by Arvind Narayanan and Vitaly Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a new class of statistical deanonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information

[Go to top]

Providing KBR Service for Multiple Applications (PDF)
by Pengfei Di, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Key based routing (KBR) enables peer-to-peer applications to create and use distributed services. KBR is more flexible than distributed hash tables (DHT). However, the broader the application area, the more important become performance issues for a KBR service. In this paper, we present a novel approach to provide a generic KBR service. Its key idea is to use a predictable address assignment scheme. This scheme allows peers to calculate the overlay address of the node that is responsible for a given key and application ID. A public DHT service such as OpenDHT can then resolve this overlay address to the transport address of the respective peer. We compare our solution to alternative proposals such as ReDiR and Diminished Chord. We conclude that our solution has a better worst case complexity for some important KBR operations and the required state. In particular, unlike ReDiR, our solution can guarantee a low latency for KBR route operations

[Go to top]

Progressive Strategies for Monte-Carlo Tree Search (PDF)
by Guillaume M. J-B. Chaslot, Mark H. M. Winands, H. Jaap van den Herik, Jos W. H. M. Uiterwijk, and Bruno Bouzy.
In New Mathematics and Natural Computation 4, 2008, pages 343-357. (BibTeX entry) (Download bibtex record)
(direct link)

Monte-Carlo Tree Search (MCTS) is a new best-first search guided by the results of Monte-Carlo simulations. In this article, we introduce two progressive strategies for MCTS, called progressive bias and progressive unpruning. They enable the use of relatively time-expensive heuristic knowledge without speed reduction. Progressive bias directs the search according to heuristic knowledge. Progressive unpruning first reduces the branching factor, and then increases it gradually again. Experiments assess that the two progressive strategies significantly improve the level of our Go program Mango. Moreover, we see that the combination of both strategies performs even better on larger board sizes
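
The progressive-bias idea can be sketched as a modified UCT-style selection score in which a heuristic term decays as a node accumulates visits; the exploration constant and the 1/(visits+1) decay schedule below are illustrative stand-ins, not the paper's tuned values:

```python
import math

def uct_progressive_bias(child_value: float, child_visits: int,
                         parent_visits: int, heuristic: float,
                         c: float = 1.4) -> float:
    """Selection score = exploitation + exploration + progressive bias.
    The heuristic's influence fades as child_visits grows, so expensive
    domain knowledge guides early search without distorting converged
    Monte-Carlo estimates."""
    if child_visits == 0:
        return float("inf")  # always expand unvisited children first
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    bias = heuristic / (child_visits + 1)
    return child_value + exploration + bias
```

At each selection step the child maximizing this score is descended into; progressive unpruning would additionally restrict which children are eligible, widening the set as visits accumulate.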

[Go to top]

Privacy-Preserving Data Mining: Models and Algorithms
by Charu C. Aggarwal and Philip S. Yu.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

ODSBR: An on-demand secure Byzantine resilient routing protocol for wireless ad hoc networks (PDF)
by Baruch Awerbuch, Reza Curtmola, David Holmer, Cristina Nita-Rotaru, and Herbert Rubens.
In ACM Trans. Inf. Syst. Secur 10(4), 2008, pages 1-35. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Ad hoc networks offer increased coverage by using multihop communication. This architecture makes services more vulnerable to internal attacks coming from compromised nodes that behave arbitrarily to disrupt the network, also referred to as Byzantine attacks. In this work, we examine the impact of several Byzantine attacks performed by individual or colluding attackers. We propose ODSBR, the first on-demand routing protocol for ad hoc wireless networks that provides resilience to Byzantine attacks caused by individual or colluding nodes. The protocol uses an adaptive probing technique that detects a malicious link after log n faults have occurred, where n is the length of the path. Problematic links are avoided by using a route discovery mechanism that relies on a new metric that captures adversarial behavior. Our protocol never partitions the network and bounds the amount of damage caused by attackers. We demonstrate through simulations ODSBR's effectiveness in mitigating Byzantine attacks. Our analysis of the impact of these attacks versus the adversary's effort gives insights into their relative strengths, their interaction, and their importance when designing multihop wireless routing protocols

[Go to top]

Netkit: easy emulation of complex networks on inexpensive hardware (PDF)
by Maurizio Pizzonia and Massimo Rimondini.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Linyphi: creating IPv6 mesh networks with SSR
by Pengfei Di, Johannes Eickhold, and Thomas Fuhrmann.
In Concurr. Comput. : Pract. Exper 20(6), 2008, pages 675-691. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a self-organizing routing protocol which is especially suited for networks that do not have a well-crafted structure, e.g. ad hoc and mesh networks. SSR works on a flat identifier space. As a consequence, it can easily support host mobility without requiring any location directory or other centralized service. SSR is based on a virtual ring structure, which is used in a chord-like manner to obtain source routes to previously unknown destinations. It has been shown that SSR requires very little per node state and produces very little control messages. In particular, SSR has been found to outperform other ad hoc routing protocols such as ad hoc on-demand distance vector routing, optimized link-state routing, or beacon vector routing. In this paper we present Linyphi, an implementation of SSR for wireless access routers. Linyphi combines IPv6 and SSR so that unmodified IPv6 hosts have transparent connectivity to both the Linyphi mesh network and the IPv4-v6 Internet. We give a basic outline of the implementation and demonstrate its suitability in real-world mesh network scenarios. Furthermore, we illustrate the use of Linyphi for distributed applications such as the Linyphone peer-to-peer VoIP application. Copyright 2008 John Wiley & Sons, Ltd

[Go to top]

Linear-Time Computation of Similarity Measures for Sequential Data (PDF)
by Konrad Rieck and Pavel Laskov.
In J. Mach. Learn. Res 9, 2008, pages 23-48. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Efficient and expressive comparison of sequences is an essential procedure for learning with sequential data. In this article we propose a generic framework for computation of similarity measures for sequences, covering various kernel, distance and non-metric similarity functions. The basis for comparison is embedding of sequences using a formal language, such as a set of natural words, k-grams or all contiguous subsequences. As realizations of the framework we provide linear-time algorithms of different complexity and capabilities using sorted arrays, tries and suffix trees as underlying data structures. Experiments on data sets from bioinformatics, text processing and computer security illustrate the efficiency of the proposed algorithms—enabling peak performances of up to 106 pairwise comparisons per second. The utility of distances and non-metric similarity measures for sequences as alternatives to string kernels is demonstrated in applications of text categorization, network intrusion detection and transcription site recognition in DNA
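
As a hedged illustration of the embedding idea, the sketch below compares two strings by cosine similarity over k-gram counts; the paper's framework covers many more similarity functions and achieves linear time with sorted arrays, tries and suffix trees rather than the hash-map counting used here:

```python
import math
from collections import Counter

def kgrams(s: str, k: int = 3) -> Counter:
    """Embed a string as a bag of its contiguous k-grams."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def cosine_sim(a: str, b: str, k: int = 3) -> float:
    """Cosine similarity of the two k-gram count vectors; one of the
    kernel-type similarity measures covered by the framework."""
    va, vb = kgrams(a, k), kgrams(b, k)
    dot = sum(va[g] * vb[g] for g in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Distances and non-metric similarity coefficients fit the same embedding: only the final combination of the matched k-gram counts changes.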

[Go to top]

Lightweight emulation to study peer-to-peer systems (PDF)
by Lucas Nussbaum and Olivier Richard.
In Concurrency and Computation: Practice and Experience 20(6), 2008, pages 735-749. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Large-scale Virtualization in the Emulab Network Testbed (PDF)
by Mike Hibler, Robert Ricci, Leigh Stoller, Jonathon Duerig, Shashi Guruprasad, Tim Stack, Kirk Webb, and Jay Lepreau.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

IgorFs: A Distributed P2P File System (PDF)
by Bernhard Amann, Benedikt Elser, Yaser Houri, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

IgorFs is a distributed, decentralized peer-to-peer (P2P) file system that is completely transparent to the user. It is built on top of the Igor peer-to-peer overlay network, which is similar to Chord, but provides additional features like service orientation or proximity neighbor and route selection. IgorFs offers an efficient means to publish data files that are subject to frequent but minor modifications. In our demonstration we show two use cases for IgorFs: the first example is (static) software-distribution and the second example is (dynamic) file distribution

[Go to top]

Higher Confidence in Event Correlation Using Uncertainty Restrictions (PDF)
by Gerald G. Koch, Boris Koldehofe, and Kurt Rothermel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed cooperative systems that use event notification for communication can benefit from event correlation within the notification network. In the presence of uncertain data, however, correlation results easily become unreliable. The handling of uncertainty is therefore an important challenge for event correlation in distributed event notification systems. In this paper, we present a generic correlation model that is aware of uncertainty. We propose uncertainty constraints that event correlation can take into account and show how they can lead to higher confidence in the correlation result. We demonstrate that the application of this model allows us to obtain a qualitative description of event correlation

[Go to top]

Hash cash–a denial of service counter-measure (PDF)
by Adam Back.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Hashcash was originally proposed as a mechanism to throttle systematic abuse of un-metered internet resources such as email, and anonymous remailers in May 1997. Five years on, this paper captures in one place the various applications, improvements suggested and related subsequent publications, and describes initial experience from experiments using hashcash. The hashcash CPU cost-function computes a token which can be used as a proof-of-work. Interactive and non-interactive variants of cost-functions can be constructed which can be used in situations where the server can issue a challenge (connection oriented interactive protocol), and where it can not (where the communication is store-and-forward, or packet oriented) respectively
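
The cost-function idea can be sketched in a few lines: search for a token whose hash has a required number of leading zero bits. This is a simplified illustration using SHA-256 and a plain `resource:counter` token; real hashcash stamps use SHA-1 and a structured, dated stamp format:

```python
import hashlib

def mint(resource: str, bits: int = 16) -> str:
    """Brute-force a counter until the SHA-256 digest of the token has
    `bits` leading zero bits; expected work is about 2**bits hashes."""
    counter = 0
    while True:
        token = f"{resource}:{counter}"
        value = int.from_bytes(hashlib.sha256(token.encode()).digest(), "big")
        if value >> (256 - bits) == 0:
            return token
        counter += 1

def verify(token: str, bits: int = 16) -> bool:
    """Checking costs a single hash, which is what makes this usable
    as an asymmetric proof-of-work."""
    value = int.from_bytes(hashlib.sha256(token.encode()).digest(), "big")
    return value >> (256 - bits) == 0

token = mint("mail@example.com", bits=12)
print(verify(token, bits=12))  # True
```

Raising `bits` by one doubles the expected minting cost while leaving verification cost unchanged, which is the throttling knob the paper describes.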

[Go to top]

Global Accessible Objects (GAOs) in the Ambicomp Distributed Java Virtual Machine (PDF)
by Bjoern Saballus, Johannes Eickhold, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As networked embedded sensors and actuators become more and more widespread, software developers encounter the difficulty to create applications that run distributed on these nodes: Typically, these nodes are heterogeneous, resource-limited, and there is no centralized control. The Ambicomp project tackles this problem. Its goal is to provide a distributed Java Virtual Machine (VM) that runs on the bare sensor node hardware. This VM creates a single system illusion across several nodes. Objects and threads can migrate freely between these nodes. In this paper, we address the problem of globally accessible objects. We describe how scalable source routing, a DHT-inspired routing protocol, can be used to allow access to objects regardless of their respective physical location and without any centralized component

[Go to top]

On the False-positive Rate of Bloom Filters (PDF)
by Prosenjit Bose, Hua Guo, Evangelos Kranakis, Anil Maheshwari, Pat Morin, Jason Morrison, Michiel Smid, and Yihui Tang.
In Inf. Process. Lett 108, 2008, pages 210-213. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Bloom filters are a randomized data structure for membership queries dating back to 1970. Bloom filters sometimes give erroneous answers to queries, called false positives. Bloom analyzed the probability of such erroneous answers, called the false-positive rate, and Bloom's analysis has appeared in many publications throughout the years. We show that Bloom's analysis is incorrect and give a correct analysis
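
Bloom's widely cited approximation is easy to compare against simulation. The sketch below derives bit positions from a SHA-256 digest and uses illustrative filter parameters; as the paper shows, the classical formula is in fact only a lower bound on the true false-positive rate:

```python
import math
import hashlib

def classic_fpr(m: int, n: int, k: int) -> float:
    """Bloom's classical approximation (1 - e^{-kn/m})^k for a filter of
    m bits, n inserted keys and k hash functions."""
    return (1.0 - math.exp(-k * n / m)) ** k

def positions(key: str, m: int, k: int):
    """Derive k bit positions (k <= 8) from one SHA-256 digest."""
    digest = hashlib.sha256(key.encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % m
            for i in range(k)]

def empirical_fpr(m: int, n: int, k: int, trials: int = 2000) -> float:
    """Insert n keys, then query keys that were never inserted."""
    bits = [False] * m
    for i in range(n):
        for p in positions(f"member-{i}", m, k):
            bits[p] = True
    false_pos = sum(all(bits[p] for p in positions(f"other-{j}", m, k))
                    for j in range(trials))
    return false_pos / trials
```

For small filters the measured rate sits slightly above `classic_fpr`, which is the discrepancy the corrected analysis accounts for.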

[Go to top]

Estimating The Size Of Peer-To-Peer Networks Using Lambert's W Function (PDF)
by Javier Bustos-Jiménez, Nicolás Bersano, Satu Elisa Schaeffer, José Miguel Piquer, Alexandru Iosup, and Augusto Ciuffoletti.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this work, we address the problem of locally estimating the size of a Peer-to-Peer (P2P) network using local information. We present a novel approach for estimating the size of a peer-to-peer (P2P) network, fitting the sum of new neighbors discovered at each iteration of a breadth-first search (BFS) with a logarithmic function, and then using Lambert's W function to solve a root of a·ln(n) + b − n = 0, where n is the network size. With rather little computation, we reach an estimation error of at most 10 percent, only allowing the BFS to iterate to the third level
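
The root-finding step can be illustrated numerically. The sketch below solves a·ln(n) + b − n = 0 for the larger root with Newton's method, which is numerically equivalent to evaluating the closed form n = −a·W₋₁(−e^(−b/a)/a) that the paper obtains via Lambert's W; the coefficients a and b here are illustrative stand-ins for the values fitted from the BFS:

```python
import math

def network_size(a: float, b: float, n0: float = 1e6,
                 tol: float = 1e-9) -> float:
    """Newton iteration on f(n) = a*ln(n) + b - n, started well above
    the root so it converges to the larger (network-size) solution."""
    n = n0
    for _ in range(100):
        f = a * math.log(n) + b - n
        fp = a / n - 1.0           # f'(n)
        step = f / fp
        n -= step
        if abs(step) < tol:
            break
    return n

# e.g. fitted a = 100, b = 50 gives a size estimate of roughly 700 nodes
print(round(network_size(100.0, 50.0)))
```

Using `scipy.special.lambertw` with the −1 branch would give the same root directly; the Newton form just keeps the sketch dependency-free.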

[Go to top]

Efficient routing in intermittently connected mobile networks: the single-copy case (PDF)
by Thrasyvoulos Spyropoulos, Konstantinos Psounis, and Cauligi S. Raghavendra.
In IEEE/ACM Trans. Netw 16(1), 2008, pages 63-76. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks (VANETs), etc. In this context, conventional routing schemes would fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. With this in mind, we look into a number of "single-copy" routing schemes that use only one copy per message, and hence significantly reduce the resource requirements of flooding-based algorithms. We perform a detailed exploration of the single-copy routing space in order to identify efficient single-copy solutions that (i) can be employed when low resource usage is critical, and (ii) can help improve the design of general routing schemes that use multiple copies. We also propose a theoretical framework that we use to analyze the performance of all single-copy schemes presented, and to derive upper and lower bounds on the delay of any scheme

[Go to top]

Efficient regular expression evaluation: theory to practice
by Michela Becchi and Patrick Crowley.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Consistency Management for Peer-to-Peer-based Massively Multiuser Virtual Environments (PDF)
by Gregor Schiele, Richard Süselbeck, Arno Wacker, Tonio Triebel, and Christian Becker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Characterizing unstructured overlay topologies in modern P2P file-sharing systems (PDF)
by Daniel Stutzbach, Reza Rejaie, and Subhabrata Sen.
In IEEE/ACM Trans. Netw 16(2), 2008, pages 267-280. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, peer-to-peer (P2P) file-sharing systems have evolved to accommodate growing numbers of participating peers. In particular, new features have changed the properties of the unstructured overlay topologies formed by these peers. Little is known about the characteristics of these topologies and their dynamics in modern file-sharing applications, despite their importance. This paper presents a detailed characterization of P2P overlay topologies and their dynamics, focusing on the modern Gnutella network. We present Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and show how inaccuracy in snapshots can lead to erroneous conclusions–such as a power-law degree distribution. Leveraging recent overlay snapshots captured with Cruiser, we characterize the graph-related properties of individual overlay snapshots and overlay dynamics across slices of back-to-back snapshots. Our results reveal that while the Gnutella network has dramatically grown and changed in many ways, it still exhibits the clustering and short path lengths of a small world network. Furthermore, its overlay topology is highly resilient to random peer departure and even systematic attacks. More interestingly, overlay dynamics lead to an "onion-like" biased connectivity among peers where each peer is more likely connected to peers with higher uptime. Therefore, long-lived peers form a stable core that ensures reachability among peers despite overlay dynamics

[Go to top]

BFT protocols under fire (PDF)
by Atul Singh, Tathagata Das, Petros Maniatis, Peter Druschel, and Timothy Roscoe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Much recent work on Byzantine state machine replication focuses on protocols with improved performance under benign conditions (LANs, homogeneous replicas, limited crash faults), with relatively little evaluation under typical, practical conditions (WAN delays, packet loss, transient disconnection, shared resources). This makes it difficult for system designers to choose the appropriate protocol for a real target deployment. Moreover, most protocol implementations differ in their choice of runtime environment, crypto library, and transport, hindering direct protocol comparisons even under similar conditions. We present a simulation environment for such protocols that combines a declarative networking system with a robust network simulator. Protocols can be rapidly implemented from pseudocode in the high-level declarative language of the former, while network conditions and (measured) costs of communication packages and crypto primitives can be plugged into the latter. We show that the resulting simulator faithfully predicts the performance of native protocol implementations, both as published and as measured in our local network. We use the simulator to compare representative protocols under identical conditions and rapidly explore the effects of changes in the costs of crypto operations, workloads, network conditions and faults. For example, we show that Zyzzyva outperforms protocols like PBFT and Q/U under most but not all conditions, indicating that one-size-fits-all protocols may be hard if not impossible to design in practice

[Go to top]

Approximate Matching for Peer-to-Peer Overlays with Cubit
by Bernard Wong, Aleksandrs Slivkins, and Emin Gün Sirer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

Keyword search is a critical component in most content retrieval systems. Despite the emergence of completely decentralized and efficient peer-to-peer techniques for content distribution, there have not been similarly efficient, accurate, and decentralized mechanisms for content discovery based on approximate search keys. In this paper, we present a scalable and efficient peer-to-peer system called Cubit with a new search primitive that can efficiently find the k data items with keys most similar to a given search key. The system works by creating a keyword metric space that encompasses both the nodes and the objects in the system, where the distance between two points is a measure of the similarity between the strings that the points represent. It provides a loosely-structured overlay that can efficiently navigate this space. We evaluate Cubit through both a real deployment as a search plugin for a popular BitTorrent client and a large-scale simulation and show that it provides an efficient, accurate and robust method to handle imprecise string search in file sharing applications.

[Go to top]

Analyzing Unreal Tournament 2004 Network Traffic Characteristics
by Christian Hübsch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With increasing availability of high-speed access links in the private sector, online real-time gaming has become a major and still growing segment in terms of market and network impact today. One of the most popular games is Unreal Tournament 2004, a fast-paced action game that still ranks within the top 10 of the most-played multiplayer Internet games, according to GameSpy [1]. Besides high demands in terms of graphical computation, games like Unreal also impose hard requirements regarding network packet delay and jitter, since even small deterioration in these conditions influences gameplay recognizably. To make matters worse, such games generate very specific network traffic with strong requirements in terms of data delivery. In this paper, we analyze the network traffic characteristics of Unreal Tournament 2004. The experiments include different aspects like variation of map sizes, player count, player behavior as well as hardware and game-specific configuration. We show how different operating systems influence the network behavior of the game. Our work gives a promising picture of how this specific real-time game behaves in terms of network impact and may be used as a basis, e.g., for the development of specialized traffic generators.

[Go to top]

2007

Identity-based broadcast encryption with constant size ciphertexts and private keys (PDF)
by Cécile Delerablée.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes the first identity-based broadcast encryption scheme (IBBE) with constant size ciphertexts and private keys. In our scheme, the public key is of size linear in the maximal size m of the set of receivers, which is smaller than the number of possible users (identities) in the system. Compared with a recent broadcast encryption system introduced by Boneh, Gentry and Waters (BGW), our system has comparable properties, but with a better efficiency: the public key is shorter than in BGW. Moreover, the total number of possible users in the system does not have to be fixed in the setup

[Go to top]

Covert channel vulnerabilities in anonymity systems (PDF)
by Steven J. Murdoch.
Ph.D. thesis, University of Cambridge, December 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users' privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way. One application for anonymity systems is to prevent collusion in competitions. I show how covert channels may be exploited to violate these protections and construct defences against such attacks, drawing from previous covert channel research and collusion-resistant voting systems. In the military context, for which multilevel secure systems were designed, covert channels are increasingly eliminated by physical separation of interconnected single-role computers. Prior work on the remaining network covert channels has been solely based on protocol specifications. I examine some protocol implementations and show how the use of several covert channels can be detected and how channels can be modified to resist detection. I show how side channels (unintended information leakage) in anonymity networks may reveal the behaviour of users. While drawing on previous research on traffic analysis and covert channels, I avoid the traditional assumption of an omnipotent adversary. Rather, these attacks are feasible for an attacker with limited access to the network. The effectiveness of these techniques is demonstrated by experiments on a deployed anonymity network, Tor. Finally, I introduce novel covert and side channels which exploit thermal effects. Changes in temperature can be remotely induced through CPU load and measured by their effects on crystal clock skew. Experiments show this to be an effective attack against Tor. 
This side channel may also be usable for geolocation and, as a covert channel, can cross supposedly infallible air-gap security boundaries. This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems

[Go to top]

Securing peer-to-peer media streaming systems from selfish and malicious behavior (PDF)
by William Conner and Klara Nahrstedt.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a flexible framework for throttling attackers in peer-to-peer media streaming systems. In such systems, selfish nodes (e.g., free riders) and malicious nodes (e.g., DoS attackers) can overwhelm the system by issuing too many requests in a short interval of time. Since peer-to-peer systems are decentralized, it is difficult for individual peers to limit the aggregate download bandwidth consumed by other remote peers. This could potentially allow selfish and malicious peers to exhaust the system's available upload bandwidth. In this paper, we propose a framework to provide a solution to this problem by utilizing a subset of trusted peers (called kantoku nodes) that collectively monitor the bandwidth usage of untrusted peers in the system and throttle attackers. This framework has been evaluated through simulation thus far. Experiments with a full implementation on a network testbed are part of our future work

[Go to top]

Secure asynchronous change notifications for a distributed file system (PDF)
by Bernhard Amann.
Ph.D. thesis, Technische Universität München, November 2007. (BibTeX entry) (Download bibtex record)
(direct link)

Distributed file systems have been a topic of interest for a long time and there are many file systems that are distributed in one way or another. However, most distributed file systems are only reasonably usable within a local network of computers, and some main tasks are still delegated to a very small number of servers. Today, with the advent of Peer-to-Peer technology, distributed file systems that work on top of Peer-to-Peer systems can be built. These systems can be built with few or no centralised components and are usable on a global scale. The System Architecture Group at the University of Karlsruhe in Germany has developed such a file system, which is built on top of a structured overlay network and uses Distributed Hash Tables to store and access the information. One problem with this approach is that each file system can only be accessed with the help of an identifier, which changes whenever the file system is modified. All clients have to be notified of the new identifier in a secure, fast and reliable way. Usually the strategy to solve this type of problem is an encrypted multicast. This thesis presents and analyses several strategies of using multicast distributions to solve this problem and then unveils our final solution based on the Subset Difference method proposed by Naor et al.

[Go to top]

Probabilistic and Information-Theoretic Approaches to Anonymity (PDF)
by Konstantinos Chatzikokolakis.
Ph.D. thesis, Laboratoire d'Informatique (LIX), École Polytechnique, Paris, October 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the number of Internet activities increases, there is a growing amount of personal information about the users that is transferred using public electronic means, making it feasible to collect a huge amount of information about a person. As a consequence, the need for mechanisms to protect such information is compelling. In this thesis, we study security protocols with an emphasis on the property of anonymity and we propose methods to express and verify this property. Anonymity protocols often use randomization to introduce noise, thus limiting the inference power of a malicious observer. We consider a probabilistic framework in which a protocol is described by its set of anonymous information, observable information and the conditional probability of observing the latter given the former. In this framework we express two anonymity properties, namely strong anonymity and probable innocence. Then we aim at quantitative definitions of anonymity. We view protocols as noisy channels in the information-theoretic sense and we express their degree of anonymity as the converse of channel capacity. We apply this definition to two known anonymity protocols. We develop a monotonicity principle for the capacity, and use it to show a number of results for binary channels in the context of algebraic information theory. We then study the probability of error for the attacker in the context of Bayesian inference, showing that it is a piecewise linear function and using this fact to improve known bounds from the literature. Finally we study a problem that arises when we combine probabilities with nondeterminism, where the scheduler is too powerful even for trivially secure protocols. We propose a process calculus which allows us to express restrictions on the scheduler, and we use it in the analysis of an anonymity protocol and a contract-signing protocol.
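The capacity-based view above can be made concrete: model the protocol as a channel from anonymous events to observables, compute its capacity, and normalize. A rough illustration (Blahut-Arimoto iteration; not the thesis's code, and the normalization 1 - C/log2(n) is one common convention, assumed here):

```python
import math

def blahut_arimoto(P, iters=200):
    """Capacity in bits of a discrete channel P[a][o] = p(o|a), via Blahut-Arimoto."""
    n, m = len(P), len(P[0])
    p = [1.0 / n] * n  # input distribution, refined each iteration
    for _ in range(iters):
        out = [sum(p[a] * P[a][o] for a in range(n)) for o in range(m)]
        # q[a] is proportional to p[a] * exp(KL(P[a] || output distribution))
        q = []
        for a in range(n):
            s = sum(P[a][o] * math.log(P[a][o] / out[o])
                    for o in range(m) if P[a][o] > 0)
            q.append(p[a] * math.exp(s))
        z = sum(q)
        p = [x / z for x in q]
    out = [sum(p[a] * P[a][o] for a in range(n)) for o in range(m)]
    return sum(p[a] * P[a][o] * math.log2(P[a][o] / out[o])
               for a in range(n) for o in range(m) if P[a][o] > 0)

def anonymity_degree(P):
    """1 - C/log2(n): 1 means a useless channel (perfect anonymity), 0 means none."""
    return 1.0 - blahut_arimoto(P) / math.log2(len(P))
```

A perfectly leaky channel (identity matrix) yields degree 0, while a channel whose rows are identical yields degree 1: the observer learns nothing.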

[Go to top]

Low-Resource Routing Attacks Against Tor (PDF)
by Kevin Bauer, Damon McCoy, Dirk Grunwald, Tadayoshi Kohno, and Douglas Sicker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor has become one of the most popular overlay networks for anonymizing TCP traffic. Its popularity is due in part to its perceived strong anonymity properties and its relatively low latency service. Low latency is achieved through Tor’s ability to balance the traffic load by optimizing Tor router selection to probabilistically favor routers with highbandwidth capabilities. We investigate how Tor’s routing optimizations impact its ability to provide strong anonymity. Through experiments conducted on PlanetLab, we show the extent to which routing performance optimizations have left the system vulnerable to end-to-end traffic analysis attacks from non-global adversaries with minimal resources. Further, we demonstrate that entry guards, added to mitigate path disruption attacks, are themselves vulnerable to attack. Finally, we explore solutions to improve Tor’s current routing algorithms and propose alternative routing strategies that prevent some of the routing attacks used in our experiments
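The routing optimization the attack exploits is bandwidth-weighted router selection. A minimal sketch (illustrative only; real Tor additionally applies bandwidth caps, flags, and guard constraints):

```python
import random

def pick_router(routers):
    """Bandwidth-weighted random choice over (name, advertised_bandwidth) pairs."""
    total = sum(bw for _, bw in routers)
    r = random.uniform(0, total)
    for name, bw in routers:
        r -= bw
        if r <= 0:
            return name
    return routers[-1][0]  # guard against floating-point rounding
```

Because selection probability tracks advertised bandwidth, a low-resource adversary that falsely advertises high bandwidth inflates its chance of being chosen for circuits, which is the starting point of the attack described above.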

[Go to top]

How robust are gossip-based communication protocols? (PDF)
by Lorenzo Alvisi, Jeroen Doumen, Rachid Guerraoui, Boris Koldehofe, Harry Li, Robbert Van Renesse, and Gilles Tredan.
In Operating Systems Review 41(5), October 2007, pages 14-18. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gossip-based communication protocols are often touted as being robust. Not surprisingly, such a claim relies on assumptions under which gossip protocols are supposed to operate. In this paper, we discuss and in some cases expose some of these assumptions and discuss how sensitive the robustness of gossip is to these assumptions. This analysis gives rise to a collection of new research challenges

[Go to top]

A global view of KAD (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey2000, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling KAD continuously for about six months and obtained information about the total number of peers online and their geographical distribution. Peers are identified by the so-called KAD ID, which was up to now assumed to remain the same across sessions. However, we observed that this is not the case: there is a large number of peers, in particular in China, that change their KAD ID, sometimes as frequently as after each session. This change of KAD IDs makes it difficult to characterize end-user availability or membership turnover.

[Go to top]

Does additional information always reduce anonymity? (PDF)
by Claudia Diaz, Carmela Troncoso, and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We discuss information-theoretic anonymity metrics that use entropy over the distribution of all possible recipients to quantify anonymity. We identify a common misconception: the entropy of the distribution describing the potential receivers does not always decrease given more information. We show the relation of these a-posteriori distributions with the Shannon conditional entropy, which is an average over all possible observations.
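The point above, that a specific observation can increase the entropy of the posterior even though the conditional entropy (the average over all observations) never exceeds the prior entropy, is easy to check numerically. A small sketch with illustrative distributions (not taken from the paper):

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def conditional_entropy(prior, channel):
    """H(X|O) for recipient prior p(x) and observation model channel[x][o] = p(o|x):
    the posterior entropy averaged over all observations."""
    n_obs = len(channel[0])
    h = 0.0
    for o in range(n_obs):
        po = sum(prior[x] * channel[x][o] for x in range(len(prior)))
        if po == 0:
            continue
        posterior = [prior[x] * channel[x][o] / po for x in range(len(prior))]
        h += po * entropy(posterior)
    return h
```

With prior (0.9, 0.05, 0.05), an observation that rules out the most likely recipient leaves posterior (0, 0.5, 0.5), whose entropy exceeds the prior's, while H(X|O) still stays below H(X).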

[Go to top]

Denial of Service or Denial of Security? How Attacks on Reliability can Compromise Anonymity (PDF)
by Nikita Borisov, George Danezis, Prateek Mittal, and Parisa Tabriz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the effect attackers who disrupt anonymous communications have on the security of traditional high- and low-latency anonymous communication systems, as well as on the Hydra-Onion and Cashmere systems that aim to offer reliable mixing, and Salsa, a peer-to-peer anonymous communication network. We show that denial of service (DoS) lowers anonymity as messages need to get retransmitted to be delivered, presenting more opportunities for attack. We uncover a fundamental limit on the security of mix networks, showing that they cannot tolerate a majority of nodes being malicious. Cashmere, Hydra-Onion, and Salsa security is also badly affected by DoS attackers. Our results are backed by probabilistic modeling and extensive simulations and are of direct applicability to deployed anonymity systems
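The retransmission argument above can be quantified with a toy model (an assumption of this sketch, not the paper's model): if each delivery attempt independently traverses a compromised path with probability c, every forced retry strictly increases exposure.

```python
def p_compromised(c, attempts):
    """Probability that at least one of `attempts` independent transmissions
    traverses a compromised path, each with per-attempt probability c."""
    return 1.0 - (1.0 - c) ** attempts
```

A DoS attacker who disrupts honest paths forces retries, pushing this probability toward 1, which is the sense in which denial of service becomes denial of security.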

[Go to top]

Blacklistable Anonymous Credentials: Blocking Misbehaving Users without TTPs (PDF)
by Patrick P. Tsang, Man Ho Au, Apu Kapadia, and Sean Smith.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several credential systems have been proposed in which users can authenticate to services anonymously. Since anonymity can give users the license to misbehave, some variants allow the selective deanonymization (or linking) of misbehaving users upon a complaint to a trusted third party (TTP). The ability of the TTP to revoke a user's privacy at any time, however, is too strong a punishment for misbehavior. To limit the scope of deanonymization, systems such as "e-cash" have been proposed in which users are deanonymized under only certain types of well-defined misbehavior such as "double spending." While useful in some applications, it is not possible to generalize such techniques to more subjective definitions of misbehavior. We present the first anonymous credential system in which services can "blacklist" misbehaving users without contacting a TTP. Since blacklisted users remain anonymous, misbehaviors can be judged subjectively without users fearing arbitrary deanonymization by a TTP

[Go to top]

Attribute-based encryption with non-monotonic access structures (PDF)
by Rafail Ostrovsky, Amit Sahai, and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes

[Go to top]

Anonymous Networking amidst Eavesdroppers (PDF)
by Parvathinathan Venkitasubramaniam, Ting He, and Lang Tong.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The problem of security against packet timing based traffic analysis in wireless networks is considered in this work. An analytical measure of "anonymity" of routes in eavesdropped networks is proposed using the information-theoretic equivocation. For a physical layer with orthogonal transmitter directed signaling, scheduling and relaying techniques are designed to maximize achievable network performance for any desired level of anonymity. The network performance is measured by the total rate of packets delivered from the sources to destinations under strict latency and medium access constraints. In particular, analytical results are presented for two scenarios: For a single relay that forwards packets from m users, relaying strategies are provided that minimize the packet drops when the source nodes and the relay generate independent transmission schedules. A relay using such an independent scheduling strategy is undetectable by an eavesdropper and is referred to as a covert relay. Achievable rate regions are characterized under strict and average delay constraints on the traffic, when schedules are independent Poisson processes. For a multihop network with an arbitrary anonymity requirement, the problem of maximizing the sum-rate of flows (network throughput) is considered. A randomized selection strategy to choose covert relays as a function of the routes is designed for this purpose. Using the analytical results for a single covert relay, the strategy is optimized to obtain the maximum achievable throughput as a function of the desired level of anonymity. In particular, the throughput-anonymity relation for the proposed strategy is shown to be equivalent to an information-theoretic rate-distortion function

[Go to top]

Analyzing Peer Behavior in KAD (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
In unknown(RR-07-205), October 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey2000, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling KAD continuously for about six months and obtained information about geographical distribution of peers, session times, peer availability, and peer lifetime. We also evaluated to what extent information about past peer uptime can be used to predict the remaining uptime of the peer. Peers are identified by the so-called KAD ID, which was up to now assumed to remain the same across sessions. However, we observed that this is not the case: there is a large number of peers, in particular in China, that change their KAD ID, sometimes as frequently as after each session. This change of KAD IDs makes it difficult to characterize end-user availability or membership turnover. By tracking end-users with static IP addresses, we could measure the rate of change of KAD ID per end-user.

[Go to top]

Securing Internet Coordinate Embedding Systems (PDF)
by Mohamed Ali Kaafar, Laurent Mathy, Chadi Barakat, Kave Salamatian, Thierry Turletti, and Walid Dabbous.
In SIGCOMM Computer Communication Review 37, August 2007, pages 61-72. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper addresses the issue of the security of Internet Coordinate Systems, by proposing a general method for malicious behavior detection during coordinate computations. We first show that the dynamics of a node, in a coordinate system without abnormal or malicious behavior, can be modeled by a Linear State Space model and tracked by a Kalman filter. Then we show that the obtained model can be generalized in the sense that the parameters of a filter calibrated at a node can be used effectively to model and predict the dynamic behavior at another node, as long as the two nodes are not too far apart in the network. This leads to the proposal of a Surveyor infrastructure: Surveyor nodes are trusted, honest nodes that use each other exclusively to position themselves in the coordinate space, and are therefore immune to malicious behavior in the system. During their own coordinate embedding, other nodes can then use the filter parameters of a nearby Surveyor as a representation of normal, clean system behavior to detect and filter out abnormal or malicious activity. A combination of simulations and PlanetLab experiments are used to demonstrate the validity, generality, and effectiveness of the proposed approach for two representative coordinate embedding systems, namely Vivaldi and NPS.
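The detection idea is that honest coordinate dynamics fit a linear state-space model, so the Kalman filter's innovation (measurement minus prediction) stays small under normal behavior and spikes under malicious perturbation. A one-dimensional sketch (the paper tracks a full linear state-space model of embedding errors; this scalar random-walk version and its parameters are only illustrative):

```python
class Kalman1D:
    """Minimal scalar Kalman filter; large innovations flag anomalous measurements."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        # x: state estimate, p: estimate variance,
        # q: process noise, r: measurement noise
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, z):
        self.p += self.q                   # predict (random-walk state model)
        k = self.p / (self.p + self.r)     # Kalman gain
        innovation = z - self.x            # measurement residual
        self.x += k * innovation           # update state
        self.p *= (1.0 - k)                # update variance
        return innovation
```

An innovation far beyond its expected spread (roughly 3 * sqrt(p + r)) would be filtered out as abnormal or malicious activity, in the spirit of the Surveyor approach.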

[Go to top]

Hidden-Action in Network Routing (PDF)
by Michal Feldman, John Chuang, Ion Stoica, and S Shenker.
In IEEE Journal on Selected Areas in Communications 25, August 2007, pages 1161-1172. (BibTeX entry) (Download bibtex record)
(direct link)

In communication networks, such as the Internet or mobile ad-hoc networks, the actions taken by intermediate nodes or links are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediaries may choose to forward messages at a low priority or simply not forward messages at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts in both the direct (the endpoints contract with each individual router directly) and the recursive (each router contracts with the next downstream router) cases. We further show that, depending on the network topology, per-hop or per-path monitoring may not necessarily improve the utility of the principal or the social welfare of the system

[Go to top]

Bubblestorm: resilient, probabilistic, and exhaustive peer-to-peer search (PDF)
by Wesley W. Terpstra, Jussi Kangasharju, Christof Leng, and Alejandro P. Buchmann.
In SIGCOMM Computer Communication Review 37, August 2007, pages 49-60. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer systems promise inexpensive scalability, adaptability, and robustness. Thus, they are an attractive platform for file sharing, distributed wikis, and search engines. These applications often store weakly structured data, requiring sophisticated search algorithms. To simplify the search problem, most scalable algorithms introduce structure to the network. However, churn or violent disruption may break this structure, compromising search guarantees. This paper proposes a simple probabilistic search system, BubbleStorm, built on random multigraphs. Our primary contribution is a flexible and reliable strategy for performing exhaustive search. BubbleStorm also exploits the heterogeneous bandwidth of peers. However, we sacrifice some of this bandwidth for high parallelism and low latency. The provided search guarantees are tunable, with success probability adjustable well into the realm of reliable systems. For validation, we simulate a network with one million low-end peers and show BubbleStorm handles up to 90% simultaneous peer departure and 50% simultaneous crash.
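The tunable success probability follows a birthday-paradox argument: if a query is replicated to q peers and each datum to d peers out of n, chosen roughly at random, the two "bubbles" intersect with probability about 1 - exp(-qd/n). A sketch of that bound (the formula is the standard birthday approximation; parameter names are illustrative, not BubbleStorm's API):

```python
import math
import random

def match_probability(q, d, n):
    """Approximate probability that a query bubble of size q intersects a
    data bubble of size d among n peers: 1 - exp(-q*d/n)."""
    return 1.0 - math.exp(-q * d / n)
```

Growing the bubbles pushes the success probability as close to 1 as desired, which is how the guarantees are "adjustable well into the realm of reliable systems".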

[Go to top]

Usability of anonymous web browsing: an examination of Tor interfaces and deployability (PDF)
by Jeremy Clark, Paul C. van Oorschot, and Carlisle Adams.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is a popular privacy tool designed to help achieve online anonymity by anonymising web traffic. Employing cognitive walkthrough as the primary method, this paper evaluates four competing methods of deploying Tor clients, and a number of software tools designed to be used in conjunction with Tor: Vidalia, Privoxy, Torbutton, and FoxyProxy. It also considers the standalone anonymous browser TorPark. Our results show that none of the deployment options are fully satisfactory from a usability perspective, but we offer suggestions on how to incorporate the best aspects of each tool. As a framework for our usability evaluation, we also provide a set of guidelines for Tor usability compiled and adapted from existing work on usable security and human-computer interaction

[Go to top]

An Amortized Tit-For-Tat Protocol for Exchanging Bandwidth instead of Content in P2P Networks (PDF)
by Pawel Garbacki, Dick H. J. Epema, and Maarten van Steen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Incentives for resource sharing are crucial for the proper operation of P2P networks. The principle of the incentive mechanisms in current content sharing P2P networks such as BitTorrent is to have peers exchange content of mutual interest. As a consequence, a peer can actively participate in the system only if it shares content that is of immediate interest to other peers. In this paper we propose to lift this restriction by using bandwidth rather than content as the resource upon which incentives are based. Bandwidth, in contrast to content, is independent of peer interests and so can be exchanged between any two peers. We present the design of a protocol called amortized tit-for-tat (ATFT) based on the bandwidth-exchange concept. This protocol defines mechanisms for bandwidth exchange corresponding to those in BitTorrent for content exchange, in particular for finding bandwidth borrowers that amortize the bandwidth borrowed in the past with their currently idle bandwidth. In addition to the formally proven incentives for bandwidth contributions, ATFT provides natural solutions to the problems of peer bootstrapping, seeding incentive, peer link asymmetry, and anonymity, which have previously been addressed with much more complex designs. Experiments with a real-world dataset confirm that ATFT is efficient in enforcing bandwidth contributions and results in download performance better than provided by incentive mechanisms based on content exchange.

[Go to top]

Two-Sided Statistical Disclosure Attack (PDF)
by George Danezis, Claudia Diaz, and Carmela Troncoso.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a new traffic analysis attack, the Two-sided Statistical Disclosure Attack, which tries to uncover the receivers of messages sent through an anonymizing network supporting anonymous replies. We provide an abstract model of an anonymity system with users that reply to messages. Based on this model, we propose a linear approximation describing the likely receivers of sent messages. Using simulations, we evaluate the new attack given different traffic characteristics and we show that it is superior to previous attacks when replies are routed in the system.

[Go to top]

Traffic Analysis Attacks on a Continuously-Observable Steganographic File System (PDF)
by Carmela Troncoso, Claudia Diaz, Orr Dunkelman, and Bart Preneel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A continuously-observable steganographic file system allows users to remotely store files on a raw storage device; the security goal is to offer plausible deniability even when the raw storage device is continuously monitored by an attacker. Zhou, Pang and Tan have proposed such a system in [7] with a claim of provable security against traffic analysis. In this paper, we disprove their claims by presenting traffic analysis attacks on the file update algorithm of Zhou et al. Our attacks are highly effective in detecting file updates and revealing the existence and location of files. For multi-block files, we show that two updates are sufficient to discover the file. One-block files accessed a sufficient number of times can also be revealed. Our results suggest that simple randomization techniques are not sufficient to protect steganographic file systems from traffic analysis attacks.

[Go to top]

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries (PDF)
by Steven J. Murdoch and Piotr Zieliński.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Existing low-latency anonymity networks are vulnerable to traffic analysis, so location diversity of nodes is essential to defend against attacks. Previous work has shown that simply ensuring geographical diversity of nodes does not reduce, and in some cases exacerbates, the risk of traffic analysis by ISPs. Ensuring high autonomous-system (AS) diversity can address this weakness. However, ISPs commonly connect to many other ISPs in a single location, known as an Internet eXchange (IX). This paper shows that IXes are a single point where traffic analysis can be performed. We examine to what extent this is true, through a case study of Tor nodes in the UK. Also, some IXes sample packets flowing through them for performance analysis reasons, and this data could be exploited to de-anonymize traffic. We then develop and evaluate Bayesian traffic analysis techniques capable of processing this sampled data.

[Go to top]

Proximity Neighbor Selection and Proximity Route Selection for the Overlay-Network IGOR (PDF)
by Yves Philippe Kising.
Diploma thesis (Diplomarbeit), Technische Universität München, June 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Unfortunately, only a few of the known "Distributed Hash Table"-based overlay networks take proximity in terms of latency into account. Query routing can therefore suffer high latency when very distant hops are used; a single hop may cross from one continent to another and back, even though the target node is located close to the requesting node. Such cases increase query latency to a great extent and are responsible for performance bottlenecks in query routing. There exist two main strategies to reduce latency in the query routing process: Proximity Neighbor Selection (PNS) and Proximity Route Selection (PRS). As a new PNS proposal for the IGOR overlay network, Merivaldi is developed. Merivaldi combines two basic ideas: the first is the Meridian framework and its Closest-Node-Discovery without synthetic coordinates; the second is Vivaldi, a distributed algorithm for predicting Internet latency between arbitrary Internet hosts. Merivaldi is quite similar to Meridian but, unlike Meridian, uses no direct round-trip time measurements to obtain latency characteristics between hosts. Instead, it obtains the latency characteristics of nodes from the latency predictions derived from their Vivaldi coordinates. A Merivaldi node forms exponentially growing latency rings, i.e., the rings correspond to latency distances from the Merivaldi node itself. Node references are inserted into these rings according to their latency characteristics; these references are obtained through a special protocol. A Merivaldi node finds latency-closest nodes by periodically querying its ring members for closer nodes. If a ring member finds a closer node, the query is forwarded to that node until no closer one can be found; the closest node on this path then reports itself to the querying Merivaldi node. Exemplary analysis shows that Merivaldi places only a modest burden on the network: it uses at most O(log N) CND hops to recognize a closest node, where N is the number of nodes, and empirical tests confirm this analysis. The per-node overhead is modest, and it is shown that Merivaldi's Vivaldi component works with high quality with the PING message type used.
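Since Merivaldi replaces Meridian's direct RTT probes with Vivaldi latency predictions, the heart of Vivaldi can be sketched as spring relaxation on synthetic coordinates. Below is a minimal illustrative sketch with a toy 2-D network and parameters of my own choosing, not the thesis implementation:

```python
import math, random

def vivaldi_update(xi, xj, rtt, delta=0.25):
    """One Vivaldi step: treat the pair as a spring whose rest length is the RTT."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.hypot(*dx) or 1e-9
    unit = [d / dist for d in dx]
    error = rtt - dist                 # positive -> push apart, negative -> pull in
    return [a + delta * error * u for a, u in zip(xi, unit)]

random.seed(0)
# Toy network whose true latencies are exactly 2-D Euclidean distances.
truth  = {n: (random.uniform(0, 100), random.uniform(0, 100)) for n in range(8)}
coords = {n: [random.uniform(0, 100), random.uniform(0, 100)] for n in range(8)}
rtt = lambda a, b: math.dist(truth[a], truth[b])
err = lambda: max(abs(math.dist(coords[a], coords[b]) - rtt(a, b))
                  for a in range(8) for b in range(8) if a < b)

before = err()
for _ in range(2000):
    a, b = random.sample(range(8), 2)
    coords[a] = vivaldi_update(coords[a], coords[b], rtt(a, b))
print(f"max prediction error: {before:.1f} -> {err():.1f}")
```

After relaxation, the predicted latency between any two nodes (the coordinate distance) approximates the true RTT, which is exactly what Merivaldi's ring construction consumes.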

[Go to top]

Improving Efficiency and Simplicity of Tor circuit establishment and hidden services (PDF)
by Lasse Øverlier and Paul Syverson.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we demonstrate how to reduce the overhead and delay of circuit establishment in the Tor anonymizing network by using predistributed Diffie-Hellman values. We eliminate the use of RSA encryption and decryption from circuit setup, and we reduce the number of DH exponentiations vs. the current Tor circuit setup protocol while maintaining immediate forward secrecy. We also describe savings that can be obtained by precomputing, during idle cycles, values that can be determined before the protocol starts. We introduce the distinction of eventual vs. immediate forward secrecy and present protocols that illustrate the distinction. These protocols are even more efficient in communication and computation than the one we primarily propose, but they provide only eventual forward secrecy. We describe how to reduce the overhead and the complexity of hidden server connections by using our DH-values to implement valet nodes and eliminate the need for rendezvous points as they exist today. We also discuss the security of the new elements and an analysis of efficiency improvements.
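The precomputation idea can be illustrated with plain Diffie-Hellman: a party's secret exponent and public value do not depend on the peer, so both can be generated during idle cycles before any circuit is requested. A toy sketch with a deliberately small group (real deployments use far larger, standardized DH groups):

```python
import secrets

# Toy group: a Mersenne prime modulus; illustrative only, far too small for real use.
P, G = 2**61 - 1, 3

def precompute():
    """Done during idle cycles, before the protocol starts."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Both sides precompute their (secret, public) pair ahead of time...
x, gx = precompute()
y, gy = precompute()
# ...so the handshake itself costs just one modular exponentiation per side.
shared_a = pow(gy, x, P)
shared_b = pow(gx, y, P)
print(shared_a == shared_b)   # True: (g^y)^x == (g^x)^y mod P
```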

[Go to top]

Estimating churn in structured P2P networks (PDF)
by Andreas Binzenhöfer and Kenji Leibnitz.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In structured peer-to-peer (P2P) networks participating peers can join or leave the system at arbitrary times, a process which is known as churn. Many recent studies revealed that churn is one of the main problems faced by any Distributed Hash Table (DHT). In this paper we discuss different possibilities of how to estimate the current churn rate in the system. In particular, we show how to obtain a robust estimate which is independent of the implementation details of the DHT. We also investigate the trade-offs between accuracy, overhead, and responsiveness to changes
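As rough intuition for churn estimation (a naive maximum-likelihood sketch assuming exponentially distributed session times, not the DHT-independent estimator of the paper): a peer that records its neighbors' session lengths can estimate the churn rate as the reciprocal of the mean observed lifetime.

```python
import random

def estimate_churn(observed_lifetimes):
    """Estimate churn as the reciprocal of the mean observed session time."""
    mean = sum(observed_lifetimes) / len(observed_lifetimes)
    return 1.0 / mean

random.seed(42)
true_rate = 0.02   # departures per peer per second (made-up value)
# Exponential session lengths, as observed by a peer monitoring its neighbors.
sessions = [random.expovariate(true_rate) for _ in range(5000)]
print(f"estimated churn rate: {estimate_churn(sessions):.4f}")
```

The accuracy/overhead trade-off the abstract mentions shows up directly here: fewer observed sessions means a noisier estimate but a quicker response to changes.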

[Go to top]

PRIME: Peer-to-Peer Receiver-drIven MEsh-based Streaming (PDF)
by Nazanin Magharei and Reza Rejaie.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The success of file swarming mechanisms such as BitTorrent has motivated a new approach for scalable streaming of live content that we call mesh-based Peer-to-Peer (P2P) streaming. In this approach, participating end-systems (or peers) form a randomly connected mesh and incorporate swarming content delivery to stream live content. Despite the growing popularity of this approach, neither the fundamental design tradeoffs nor the basic performance bottlenecks in mesh-based P2P streaming are well understood. In this paper, we follow a performance-driven approach to design PRIME, a scalable mesh-based P2P streaming mechanism for live content. The main design goal of PRIME is to minimize two performance bottlenecks, namely bandwidth bottleneck and content bottleneck. We show that the global pattern of delivery for each segment of live content should consist of a diffusion phase which is followed by a swarming phase. This leads to effective utilization of available resources to accommodate scalability and also minimizes content bottleneck. Using packet level simulations, we carefully examine the impact of overlay connectivity, packet scheduling scheme at individual peers and source behavior on the overall performance of the system. Our results reveal fundamental design tradeoffs of mesh-based P2P streaming for live content

[Go to top]

Network coding for distributed storage systems (PDF)
by Alexandros G. Dimakis, Brighten Godfrey, Yunnan Wu, Martin J. Wainwright, and Kannan Ramchandran.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff
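The redundancy comparison in the abstract is simple arithmetic: tolerating f node failures costs a storage factor of f + 1 with replication, but only n/k with an (n, k) MDS erasure code. A sketch with illustrative parameters:

```python
def replication_overhead(failures_tolerated):
    """Replication: tolerating f failures requires f + 1 full copies."""
    return failures_tolerated + 1

def mds_overhead(n, k):
    """An (n, k) MDS erasure code tolerates n - k failures at n/k overhead."""
    return n / k

# Tolerating 4 lost nodes:
print(replication_overhead(4))   # 5
print(mds_overhead(14, 10))      # 1.4
```

The catch the paper addresses is repair: with a plain (14, 10) code, a replacement node naively downloads 10 fragments (the whole object) to regenerate a single fragment, and regenerating codes reduce that repair bandwidth.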

[Go to top]

Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches (PDF)
by Nazanin Magharei and Reza Rejaie.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Existing approaches to P2P streaming can be divided into two general classes: (i) tree-based approaches use push-based content delivery over multiple tree-shaped overlays, and (ii) mesh-based approaches use swarming content delivery over a randomly connected mesh. Previous studies have often focused on a particular P2P streaming mechanism and no comparison between these two classes has been conducted. In this paper, we compare and contrast the performance of representative protocols from each class using simulations. We identify the similarities and differences between these two approaches. Furthermore, we separately examine the behavior of content delivery and overlay construction mechanisms for both approaches in static and dynamic scenarios. Our results indicate that the mesh-based approach consistently exhibits a superior performance over the tree-based approach. We also show that the main factors contributing to the inferior performance of the tree-based approach are (i) the static mapping of content to a particular tree, and (ii) the placement of each peer as an internal node in one tree and as a leaf in all other trees.

[Go to top]

MARCH: A Distributed Incentive Scheme for Peer-to-Peer Networks (PDF)
by Zhan Zhang, Shigang Chen, and MyungKeun Yoon.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

As peer-to-peer networks grow larger and include more diverse users, the lack of incentive to encourage cooperative behavior becomes one of the key problems. This challenge cannot be fully met by traditional incentive schemes, which suffer from various attacks based on false reports. Especially, due to the lack of central authorities in typical P2P systems, it is difficult to detect colluding groups. Members in the same colluding group can cooperate to manipulate their history information, and the damaging power increases dramatically with the group size. In this paper, we propose a new distributed incentive scheme, in which the benefit that a node can obtain from the system is proportional to its contribution to the system, and a colluding group cannot gain advantage by cooperation regardless of its size. Consequently, the damaging power of colluding groups is strictly limited. The proposed scheme includes three major components: a distributed authority infrastructure, a key sharing protocol, and a contract verification protocol

[Go to top]

Improving the Robustness of Private Information Retrieval (PDF)
by Ian Goldberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers.
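For intuition, the classic two-server XOR-based PIR scheme (a textbook construction, not Goldberg's Byzantine-robust protocol) already achieves information-theoretic privacy against either single server: each query alone is a uniformly random subset of indices, so neither server learns which block is wanted.

```python
import secrets

def pir_query(db_size, index):
    """Two-server XOR PIR: each query alone is a uniformly random index subset."""
    q1 = {i for i in range(db_size) if secrets.randbelow(2)}
    q2 = q1 ^ {index}          # symmetric difference flips only the target index
    return q1, q2

def pir_answer(db, query):
    """A server XORs together the requested blocks (here, single-byte blocks)."""
    acc = 0
    for i in query:
        acc ^= db[i]
    return acc

db = [7, 13, 42, 5, 99, 1, 0, 250]
q1, q2 = pir_query(len(db), 2)
recovered = pir_answer(db, q1) ^ pir_answer(db, q2)
print(recovered)  # 42
```

XORing the two answers cancels every index that appears in both queries, leaving exactly `db[2]`; robust PIR protocols additionally survive servers that answer wrongly or not at all.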

[Go to top]

Implications of Selfish Neighbor Selection in Overlay Networks (PDF)
by Nikolaos Laoutaris, Georgios Smaragdakis, Azer Bestavros, and John W. Byers.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A Combinatorial Approach to Measuring Anonymity (PDF)
by Matthew Edman, Fikret Sivrikaya, and Bülent Yener.
In Intelligence and Security Informatics, 2007 IEEE, May 2007, pages 356-363. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we define a new metric for quantifying the degree of anonymity collectively afforded to users of an anonymous communication system. We show how our metric, based on the permanent of a matrix, can be useful in evaluating the amount of information needed by an observer to reveal the communication pattern as a whole. We also show how our model can be extended to include probabilistic information learned by an attacker about possible sender-recipient relationships. Our work is intended to serve as a complementary tool to existing information-theoretic metrics, which typically consider the anonymity of the system from the perspective of a single user or message
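A brute-force sketch of the metric's ingredients (exponential-time, fine only for tiny n; the normalization by log n! follows the metric described in the abstract, while the matrix encodes which sender-recipient pairs the attacker still considers possible):

```python
from itertools import permutations
import math

def permanent(m):
    """Permanent: like the determinant, but summed without alternating signs."""
    n = len(m)
    return sum(math.prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# 0/1 matrix A: A[i][j] = 1 if sender i could plausibly match recipient j.
full_anonymity = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # attacker learned nothing
fully_exposed  = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # mapping uniquely determined

degree = lambda a: math.log(permanent(a)) / math.log(math.factorial(len(a)))
print(degree(full_anonymity))   # 1.0
print(degree(fully_exposed))    # 0.0
```

The permanent counts the perfect matchings still consistent with the attacker's knowledge, so the normalized value moves from 1 (all n! pairings possible) down to 0 (a single pairing left).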

[Go to top]

Multipath routing algorithms for congestion minimization (PDF)
by Ron Banner and Ariel Orda.
In IEEE/ACM Trans. Netw 15, April 2007, pages 413-424. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Unlike traditional routing schemes that route all traffic along a single path, multipath routing strategies split the traffic among several paths in order to ease congestion. It has been widely recognized that multipath routing can be fundamentally more efficient than the traditional approach of routing along single paths. Yet, in contrast to the single-path routing approach, most studies in the context of multipath routing focused on heuristic methods. We demonstrate the significant advantage of optimal (or near optimal) solutions. Hence, we investigate multipath routing adopting a rigorous (theoretical) approach. We formalize problems that incorporate two major requirements of multipath routing. Then, we establish the intractability of these problems in terms of computational complexity. Finally, we establish efficient solutions with proven performance guarantees
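A minimal illustration of the congestion-minimization idea on parallel paths (a toy special case of my own, not the paper's general network formulation): splitting a demand in proportion to path capacities equalizes, and thereby minimizes, the maximum link utilization.

```python
def balanced_split(demand, capacities):
    """Split a demand across parallel paths so all paths see equal utilization."""
    total = sum(capacities)
    return [demand * c / total for c in capacities]

capacities = [30, 60]
flows = balanced_split(45, capacities)
print(flows)                                   # [15.0, 30.0]
util = [f / c for f, c in zip(flows, capacities)]
print(util)                                    # [0.5, 0.5]
```

Routing all 45 units over the larger path alone would load it to 0.75; the proportional split caps every path at 0.5, which is the optimum for this two-path instance.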

[Go to top]

Information Slicing: Anonymity Using Unreliable Overlays (PDF)
by Sachin Katti, Jeffery Cohen, and Dina Katabi.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper proposes a new approach to anonymous communication called information slicing. Typically, anonymizers use onion routing, where a message is encrypted in layers with the public keys of the nodes along the path. Instead, our approach scrambles the message, divides it into pieces, and sends the pieces along disjoint paths. We show that information slicing addresses message confidentiality as well as source and destination anonymity. Surprisingly, it does not need any public key cryptography. Further, our approach naturally addresses the problem of node failures. These characteristics make it a good fit for use over dynamic peer-to-peer overlays. We evaluate the anonymity of information slicing via analysis and simulations. Our prototype implementation on PlanetLab shows that it achieves higher throughput than onion routing and effectively copes with node churn.
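The flavor of sending pieces along disjoint paths can be sketched with an n-of-n XOR split, in which any proper subset of pieces is statistically independent of the message (the paper's actual scrambling and splitting construction differs in detail):

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def slice_message(msg, n):
    """Split msg into n XOR shares; fewer than all n reveal nothing about msg."""
    shares = [secrets.token_bytes(len(msg)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, msg))   # last share completes the XOR
    return shares

def reassemble(shares):
    return reduce(xor_bytes, shares)

pieces = slice_message(b"meet at noon", 4)   # sent along 4 disjoint paths
print(reassemble(pieces))                    # b'meet at noon'
```

Because each of the first n-1 shares is uniformly random, a node (or eavesdropper) seeing any subset of fewer than all four pieces learns nothing, with no public-key cryptography involved.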

[Go to top]

Empirical Study on the Evolution of PlanetLab (PDF)
by Li Tang, Yin Chen, Fei Li, Hui Zhang, and Jun Li.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

PlanetLab is a globally distributed overlay platform that has been increasingly used by researchers to deploy and assess planetary-scale network services. This paper analyzes some particular advantages of PlanetLab, and then investigates its evolution process, geographical node-distribution, and network topological features. The revealed results are helpful for researchers to 1) understand the history of PlanetLab and some of its important properties quantitatively; 2) recognize the dynamics of the PlanetLab environment and design experiments accordingly; 3) select stable nodes that have a high probability of running continuously for a long time; and 4) evaluate experimental results objectively and in depth.

[Go to top]

Do incentives build robustness in BitTorrent? (PDF)
by Michael Piatek, Tomas Isdal, Thomas Anderson, Arvind Krishnamurthy, and Arun Venkataramani.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental problem with many peer-to-peer systems is the tendency for users to "free ride": to consume resources without contributing to the system. The popular file distribution tool BitTorrent was explicitly designed to address this problem, using a tit-for-tat reciprocity strategy to provide positive incentives for nodes to contribute resources to the swarm. While BitTorrent has been extremely successful, we show that its incentive mechanism is not robust to strategic clients. Through performance modeling parameterized by real world traces, we demonstrate that all peers contribute resources that do not directly improve their performance. We use these results to drive the design and implementation of BitTyrant, a strategic BitTorrent client that provides a median 70% performance gain for a 1 Mbit client on live Internet swarms. We further show that when applied universally, strategic clients can hurt average per-swarm performance compared to today's BitTorrent client implementations.
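The tit-for-tat mechanism that BitTyrant exploits can be sketched as rate-ordered unchoking: reciprocate with the peers currently uploading fastest to us (peer names, rates and the slot count below are made up for illustration). A strategic client then only needs to upload just enough to stay inside the top slots.

```python
def choose_unchoked(upload_rates_from_peers, slots=4):
    """Tit-for-tat sketch: reciprocate with the fastest current uploaders."""
    ranked = sorted(upload_rates_from_peers,
                    key=upload_rates_from_peers.get, reverse=True)
    return set(ranked[:slots])

# Observed upload rates (KB/s) from each neighbor to us.
peers = {"a": 120, "b": 15, "c": 300, "d": 80, "e": 0, "f": 95}
print(sorted(choose_unchoked(peers)))   # ['a', 'c', 'd', 'f']
```

In this toy model, uploading at 81 KB/s would displace peer "d" just as effectively as uploading at 300 KB/s, which is the intuition behind contributing no more than necessary.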

[Go to top]

ParaNets: A Parallel Network Architecture for Challenged Networks (PDF)
by Khaled A. Harras, Mike P. Wittie, Kevin C. Almeroth, and Elizabeth M. Belding.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Networks characterized by challenges, such as intermittent connectivity, network heterogeneity, and large delays, are called "challenged networks". We propose a novel network architecture for challenged networks dubbed Parallel Networks, or ParaNets. The vision behind ParaNets is to have challenged network protocols operate over multiple heterogeneous networks, simultaneously available, through one or more devices. We present the ParaNets architecture and discuss its short-term challenges and long-term implications. We also argue, based on current research trends and the ParaNets architecture, for the evolution of the conventional protocol stack to a more flexible cross-layered protocol tree. To demonstrate the potential impact of ParaNets, we use Delay Tolerant Mobile Networks (DTMNs) as a representative challenged network over which we evaluate ParaNets. Our ultimate goal in this paper is to open the way for further work in challenged networks using ParaNets as the underlying architecture.

[Go to top]

Mapping an Arbitrary Message to an Elliptic Curve when Defined over GF (2n) (PDF)
by Brian King.
In International Journal of Network Security 8, March 2007, pages 169-176. (BibTeX entry) (Download bibtex record)
(direct link)

The use of elliptic curve cryptography (ECC) as a public-key cryptosystem for encryption requires mapping the message to some point in the prime subgroup of the elliptic curve, which is typically done by systematically modifying the message in a deterministic manner. The applications typically used with ECC, namely key exchange, digital signatures and hybrid encryption systems (ECIES), all avoid this problem. In this paper we provide a deterministic method that guarantees that the map of a message to an elliptic curve point can be made without any modification. This paper provides a solution to the open problem posed in [7] concerning the creation of a deterministic method to map an arbitrary message to an elliptic curve.
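For contrast, the conventional approach the paper improves on is probabilistic try-and-increment: append a counter to the message and retry until the resulting x-coordinate lies on the curve. A sketch over a toy prime-field curve (curve parameters and counter width invented for illustration; the paper's setting is curves over GF(2^n)):

```python
# Toy curve y^2 = x^3 + 3 over GF(p); far too small for real cryptography.
p = 10007

def on_curve_x(x):
    """Is x^3 + 3 a quadratic residue mod p (Euler's criterion), i.e. a valid x?"""
    rhs = (x**3 + 3) % p
    return pow(rhs, (p - 1) // 2, p) in (0, 1)

def map_message(m, k=8):
    """Try-and-increment: append a k-bit counter until x lands on the curve."""
    for pad in range(2**k):
        x = (m * 2**k + pad) % p
        if on_curve_x(x):
            return x
    raise ValueError("no embedding found")   # probability about 2**-k per message

print(map_message(42))
```

Each candidate x is a point's x-coordinate with probability roughly 1/2, so failures are astronomically unlikely for k = 8, but the method is still probabilistic and modifies the message, which is exactly what the paper's deterministic mapping avoids.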

[Go to top]

Cooperative Data Backup for Mobile Devices (PDF)
by Ludovic Courtès.
Ph.D. thesis, March 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile devices such as laptops, PDAs and cell phones are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. However, few mechanisms are available to reduce the risk of losing the data stored on these devices. In this dissertation, we try to address this concern by designing a cooperative backup service for mobile devices. The service leverages encounters and spontaneous interactions among participating devices, such that each device stores data on behalf of other devices. We first provide an analytical evaluation of the dependability gains of the proposed service. Distributed storage mechanisms are explored and evaluated. Security concerns arising from the cooperation among mutually suspicious principals are identified, and core mechanisms are proposed to allow them to be addressed. Finally, we present our prototype implementation of the cooperative backup service.

[Go to top]

Secure Group Communication in Ad-Hoc Networks using Tree Parity Machines (PDF)
by Bjoern Saballus, Sebastian Wallner, and Markus Volkmer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental building block of secure group communication is the establishment of a common group key. This can be divided into key agreement and key distribution. Common group key agreement protocols are based on the Diffie-Hellman (DH) key exchange and extend it to groups. Group key distribution protocols are centralized approaches which make use of one or more special key servers. In contrast to these approaches, we present a protocol which makes use of the Tree Parity Machine key exchange between multiple parties. It does not need a centralized server and therefore is especially suitable for ad-hoc networks of any kind

[Go to top]

How to Shuffle in Public (PDF)
by Ben Adida and Douglas Wikström.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We show how to obfuscate a secret shuffle of ciphertexts: shuffling becomes a public operation. Given a trusted party that samples and obfuscates a shuffle before any ciphertexts are received, this reduces the problem of constructing a mix-net to verifiable joint decryption. We construct public-key obfuscations of a decryption shuffle based on the Boneh-Goh-Nissim (BGN) cryptosystem and a re-encryption shuffle based on the Paillier cryptosystem. Both allow efficient distributed verifiable decryption. Finally, we give a distributed protocol for sampling and obfuscating each of the above shuffles and show how it can be used in a trivial way to construct a universally composable mix-net. Our constructions are practical when the number of senders N is small, yet large enough to handle a number of practical cases, e.g. N = 350 in the BGN case and N = 2000 in the Paillier case

[Go to top]

The Byzantine Postman Problem: A Trivial Attack Against PIR-based Nym Servers (PDF)
by Len Sassaman and Bart Preneel.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Over the last several decades, there have been numerous proposals for systems which can preserve the anonymity of the recipient of some data. Some have involved trusted third-parties or trusted hardware; others have been constructed on top of link-layer anonymity systems or mix-nets. In this paper, we evaluate a pseudonymous message system which takes the different approach of using Private Information Retrieval (PIR) as its basis. We expose a flaw in the system as presented: it fails to identify Byzantine servers. We provide suggestions on correcting the flaw, while observing the security and performance trade-offs our suggestions require

[Go to top]

PC-DPOP: a new partial centralization algorithm for distributed optimization (PDF)
by Adrian Petcu, Boi Faltings, and Roger Mailler.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Fully decentralized algorithms for distributed constraint optimization often require excessive amounts of communication when applied to complex problems. The OptAPO algorithm of [Mailler and Lesser, 2004] uses a strategy of partial centralization to mitigate this problem. We introduce PC-DPOP, a new partial centralization technique, based on the DPOP algorithm of [Petcu and Faltings, 2005]. PC-DPOP provides better control over what parts of the problem are centralized and allows this centralization to be optimal with respect to the chosen communication structure. Unlike OptAPO, PC-DPOP allows for a priori, exact predictions about privacy loss, communication, memory and computational requirements on all nodes and links in the network. Upper bounds on communication and memory requirements can be specified. We also report strong efficiency gains over OptAPO in experiments on three problem domains.

[Go to top]

Local Production, Local Consumption: Peer-to-Peer Architecture for a Dependable and Sustainable Social Infrastructure (PDF)
by Kenji Saito, Eiichi Morino, Yoshihiko Suko, Takaaki Suzuki, and Jun Murai.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) is a system of overlay networks such that participants can potentially take symmetrical roles. This translates into a design based on the philosophy of Local Production, Local Consumption (LPLC), originally an agricultural concept to promote sustainable local economy. This philosophy helps enhance the survivability of a society by providing a dependable economic infrastructure and promoting the power of individuals. This paper attempts to put existing works on P2P designs into the perspective of the five-layer architecture model to realize LPLC, and proposes future research directions toward the integration of P2P studies for the actualization of a dependable and sustainable social infrastructure.

[Go to top]

Vielleicht anonym? Die Enttarnung von StealthNet-Nutzern ("Perhaps anonymous? The unmasking of StealthNet users")
by Nils Durner, Nathan S Evans, and Christian Grothoff.
In c't magazin für computer technik, 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Valgrind: a framework for heavyweight dynamic binary instrumentation (PDF)
by Nicholas Nethercote and Julian Seward.
In SIGPLAN Not 42(6), 2007, pages 89-100. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values: a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO.
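The shadow-value idea can be sketched in a few lines: every stored value is accompanied by a shadow describing it, here a single "has this cell been initialized?" bit in the style of a Memcheck-like tool (a conceptual sketch of the technique, not Valgrind's actual implementation, which shadows registers and memory at the bit level):

```python
class ShadowMemory:
    """Each memory cell carries a shadow bit: has it ever been initialized?"""
    def __init__(self):
        self.mem = {}
        self.shadow = {}

    def store(self, addr, value):
        self.mem[addr] = value
        self.shadow[addr] = True          # a write defines the cell

    def load(self, addr):
        if not self.shadow.get(addr, False):
            print(f"warning: read of uninitialized address {hex(addr)}")
        return self.mem.get(addr, 0)

m = ShadowMemory()
m.store(0x10, 7)
print(m.load(0x10))   # 7, no complaint
m.load(0x20)          # triggers the Memcheck-style warning
```

Propagating such shadows through every operation is what makes shadow-value tools heavyweight, and it is the capability the paper argues other DBI frameworks support poorly.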

[Go to top]

Using Linearization for Global Consistency in SSR (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Novel routing algorithms such as scalable source routing (SSR) and virtual ring routing (VRR) need to set up and maintain a virtual ring structure among all the nodes in the network. The iterative successor pointer rewiring protocol (ISPRP) is one way to bootstrap such a network. Like its VRR analogue, ISPRP requires one of the nodes to flood the network to guarantee consistency. Recent results on self-stabilizing algorithms now suggest a new approach to bootstrap the virtual rings of SSR and VRR. This so-called linearization method does not require any flooding at all. Moreover, it has been shown that linearization with shortcut neighbors has, on average, only polylogarithmic convergence time.
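A toy simulation of the linearization principle (a simplified sketch with invented parameters, not the SSR protocol itself): each node keeps only its closest known neighbor on each side of the identifier space and delegates the rest toward them, which monotonically shortens links as the sorted ring emerges.

```python
import random

def linearize_step(links, node):
    """Keep only the closest neighbor per side; delegate the rest toward them."""
    left = [n for n in links[node] if n < node]
    right = [n for n in links[node] if n > node]
    keep = set()
    if left:
        closest = max(left)
        keep.add(closest)
        links[closest] |= {n for n in left if n != closest}   # hand off farther-left links
    if right:
        closest = min(right)
        keep.add(closest)
        links[closest] |= {n for n in right if n != closest}
    links[node] = keep

def total_length(links):
    return sum(abs(a - b) for a in links for b in links[a])

random.seed(3)
links = {n: set(random.sample([m for m in range(10) if m != n], 3))
         for n in range(10)}
before = total_length(links)
for _ in range(25):
    for n in range(10):
        linearize_step(links, n)
print(total_length(links) < before)   # total link length shrinks toward the ring
```

Delegation never drops a node from the graph, it only replaces an edge with a strictly shorter one, which is the potential argument behind self-stabilizing convergence without any flooding.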

[Go to top]

An Unconditionally Secure Protocol for Multi-Party Set Intersection (PDF)
by Ronghua Li and Chuankun Wu.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing protocols for private set intersection are based on homomorphic public-key encryption and the technique of representing sets as polynomials in the cryptographic model. Based on the ideas of these protocols and the two-dimensional verifiable secret sharing scheme, we propose a protocol for private set intersection in the information-theoretic model. By representing the sets as polynomials, the set intersection problem is converted into the task of computing the common roots of the polynomials. By sharing the coefficients of the polynomials among parties, the common roots can be computed using the shares. As long as more than 2n/3 parties are semi-honest, our protocol correctly computes the intersection of n sets, and reveals no other information than what is implied by the intersection and the secret sets controlled by the active adversary. This is the first specific protocol for private set intersection in the information-theoretic model as far as we know.
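The representation trick can be shown in the clear (the actual protocol secret-shares the polynomial coefficients among the parties; this sketch omits all of the cryptography): a set maps to the polynomial whose roots are exactly its elements, and common roots of the parties' polynomials give the intersection.

```python
import math

def as_poly_eval(s):
    """Represent a set by p(x) = prod over elements e of (x - e); roots == elements."""
    return lambda x: math.prod(x - e for e in s)

A = {3, 7, 11, 19}
B = {2, 7, 19, 23}
pB = as_poly_eval(B)

# e is in both sets exactly when it is a common root, i.e. pB vanishes on it.
intersection = {e for e in A if pB(e) == 0}
print(sorted(intersection))   # [7, 19]
```

Evaluating shared polynomials on shared inputs is what the secret-sharing machinery enables, so no party ever sees another party's set in the clear.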

[Go to top]

Towards Fair Event Dissemination (PDF)
by Sebastien Baehni, Rachid Guerraoui, Boris Koldehofe, and Maxime Monod.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Event dissemination in large scale dynamic systems is typically claimed to be best achieved using decentralized peer-to-peer architectures. The rationale is to have every participant in the system act both as a client (information consumer) and as a server (information dissemination enabler), thus precluding specific brokers which would prevent scalability and fault-tolerance. We argue that, for such decentralized architectures to be really meaningful, participants should serve the system as much as they benefit from it. That is, the system should be fair in the sense that the extent to which a participant acts as a server should depend on the extent to which it has the opportunity to act as a client. This is particularly crucial in selective information dissemination schemes where clients are not all interested in the same information. In this position paper, we discuss what a notion of fairness could look like, explain why current architectures are not fair, and raise several challenges towards achieving fairness.

[Go to top]

Towards application-aware anonymous routing (PDF)
by Micah Sherr, Boon Thau Loo, and Matt Blaze.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper investigates the problem of designing anonymity networks that meet application-specific performance and security constraints. We argue that existing anonymity networks take a narrow view of performance by considering only the strength of the offered anonymity. However, real-world applications impose a myriad of communication requirements, including end-to-end bandwidth and latency, trustworthiness of intermediary routers, and network jitter. We pose a grand challenge for anonymity: the development of a network architecture that enables applications to customize routes that trade off between anonymity and performance. Towards this challenge, we present the Application-Aware Anonymity (A3) routing service. We envision that A3 will serve as a powerful and flexible anonymous communications layer that will spur the future development of anonymity services

[Go to top]

Towards a Distributed Java VM in Sensor Networks using Scalable Source Routing (PDF)
by Bjoern Saballus, Johannes Eickhold, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

One of the major drawbacks of small embedded systems such as sensor nodes is the need to program in a low level programming language like C or assembler. The resulting code is often unportable, system specific and demands deep knowledge of the hardware details. This paper motivates the use of Java as an alternative programming language. We focus on the tiny AmbiComp Virtual Machine (ACVM) which we currently develop as the main part of a more general Java based development platform for interconnected sensor nodes. This VM is designed to run on different small embedded devices in a distributed network. It uses the novel scalable source routing (SSR) algorithm to distribute and share data and workload. SSR provides key based routing which enables distributed hash table (DHT) structures as a substrate for the VM to disseminate and access remote code and objects. This approach allows all VMs in the network to collaborate. The result looks like one large, distributed VM which supports a subset of the Java language. The ACVM substitutes functionality of an operating system which is missing on the target platform. As this development is work in progress, we outline the ideas behind this approach to provide first insights into the upcoming problems

[Go to top]

t-Closeness: Privacy Beyond k-Anonymity and ℓ-Diversity
by Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Subliminal Channels in the Private Information Retrieval Protocols (PDF)
by Meredith L. Patterson and Len Sassaman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Information-theoretic private information retrieval (PIR) protocols, such as those described by Chor et al. [5], provide a mechanism by which users can retrieve information from a database distributed across multiple servers in such a way that neither the servers nor an outside observer can determine the contents of the data being retrieved. More recent PIR protocols also provide protection against Byzantine servers, such that a user can detect when one or more servers have attempted to tamper with the data he has requested. In some cases (as in the protocols presented by Beimel and Stahl [1]), the user can still recover his data and protect the contents of his query if the number of Byzantine servers is below a certain threshold; this property is referred to as Byzantine-recovery. However, tampering with a user's data is not the only goal a Byzantine server might have. We present a scenario in which an arbitrarily sized coalition of Byzantine servers transforms the userbase of a PIR network into a signaling framework with varying levels of detectability by means of a subliminal channel [11]. We describe several such subliminal channel techniques, illustrate several use-cases for this subliminal channel, and demonstrate its applicability to a wide variety of PIR protocols

[Go to top]

SpoVNet: An Architecture for Supporting Future Internet Applications (PDF)
by Sebastian Mies.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This talk presents an approach for providing Spontaneous Virtual Networks (SpoVNets) that enable flexible, adaptive, and spontaneous provisioning of application-oriented and network-oriented services on top of heterogeneous networks. SpoVNets supply new and uniform communication abstractions for future Internet applications so applications can make use of advanced services not supported by today's Internet. We expect that many functions, which are currently provided by SpoVNet on the application layer will become an integral part of future networks. Thus, SpoVNet will transparently use advanced services from the underlying network infrastructure as they become available (e.g., QoS-support in access networks or multicast in certain ISPs), enabling a seamless transition from current to future generation networks without modifying the applications

[Go to top]

Space-Efficient Private Search (PDF)
by George Danezis and Claudia Diaz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private keyword search is a technique that allows for searching and retrieving documents matching certain keywords without revealing the search criteria. We improve the space efficiency of the Ostrovsky et al. Private Search [9] scheme, by describing methods that require considerably shorter buffers for returning the results of the search. Our basic decoding scheme, recursive extraction, requires buffers of length less than twice the number of returned results and is still simple and highly efficient. Our extended decoding schemes rely on solving systems of simultaneous equations, and in special cases can uncover documents in buffers that are close to 95% full. Finally we note the similarity between our decoding techniques and the ones used to decode rateless codes, and show how such codes can be extracted from encrypted documents
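The flavour of recursive extraction can be conveyed with a simplified peeling decoder (our own toy construction, not the paper's scheme; the parameters and hash are illustrative): cells holding exactly one result are read off directly, and subtracting a recovered result may expose further singletons.

```python
# Toy peeling decoder: each document is added into K pseudo-random
# buffer cells; singleton cells are recovered and "peeled" away,
# which may turn other cells into singletons in turn.
import hashlib

K = 3        # cells each document is added into (assumed)
BUF = 32     # buffer length (assumed)

def cells(doc):
    return [int(hashlib.sha256(f"{doc}:{i}".encode()).hexdigest(), 16) % BUF
            for i in range(K)]

def encode(docs):
    count, acc = [0] * BUF, [0] * BUF
    for d in docs:
        for c in cells(d):
            count[c] += 1
            acc[c] += d          # colliding documents accumulate
    return count, acc

def decode(count, acc):
    recovered, progress = set(), True
    while progress:
        progress = False
        for c in range(BUF):
            if count[c] == 1:
                d = acc[c]              # a singleton cell: read it off
                recovered.add(d)
                for c2 in cells(d):     # ...and peel it everywhere
                    count[c2] -= 1
                    acc[c2] -= d
                progress = True
    return recovered
```

Decoding succeeds with high probability whenever the buffer is not too full, which is exactly the regime the paper's short buffers target.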

[Go to top]

Skype4Games (PDF)
by Tonio Triebel, Benjamin Guthier, and Wolfgang Effelsberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose to take advantage of the distributed multi-user Skype system for the implementation of an interactive online game. Skype combines efficient multi-peer support with the ability to get around firewalls and network address translation; in addition, speech is available to all game participants for free. We discuss the network requirements of interactive multi-player games, in particular concerning end-to-end delay and distributed state maintenance. We then introduce the multi-user support available in Skype and conclude that it should suffice for a game implementation. We explain how our multi-player game based on the Irrlicht graphics engine was implemented over Skype, and we present very promising results of an early performance evaluation

[Go to top]

S/Kademlia: A practicable approach towards secure key-based routing (PDF)
by Ingmar Baumgart and Sebastian Mies.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Security is a common problem in completely decentralized peer-to-peer systems. Although several suggestions exist on how to create a secure key-based routing protocol, a practicable approach is still unattended. In this paper we introduce a secure key-based routing protocol based on Kademlia that has a high resilience against common attacks by using parallel lookups over multiple disjoint paths, limiting free nodeId generation with crypto puzzles and introducing a reliable sibling broadcast. The latter is needed to store data in a safe replicated way. We evaluate the security of our proposed extensions to the Kademlia protocol analytically and simulate the effects of multiple disjoint paths on lookup success under the influence of adversarial nodes
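The crypto-puzzle that limits free nodeId generation can be sketched as a hashcash-style search (a toy along the lines of the paper's static puzzle; the difficulty value and dummy key format are ours):

```python
# Sketch of a nodeId-generation puzzle: an identity is valid only if
# hashing its nodeId again yields a required number of leading zero
# bits, so mass-generating identities is costly while checking is cheap.
import hashlib

DIFFICULTY = 8  # required leading zero bits (illustrative)

def node_id(pubkey: bytes) -> bytes:
    return hashlib.sha256(pubkey).digest()

def leading_zero_bits(data: bytes) -> int:
    bits = 0
    for byte in data:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def valid_id(pubkey: bytes) -> bool:
    # cheap to verify, expensive to achieve
    return leading_zero_bits(hashlib.sha256(node_id(pubkey)).digest()) >= DIFFICULTY

def generate(seed: int) -> bytes:
    # brute-force over dummy keys stands in for repeated key generation
    while not valid_id(seed.to_bytes(8, "big")):
        seed += 1
    return seed.to_bytes(8, "big")
```

Raising DIFFICULTY makes Sybil-style identity generation proportionally more expensive while verification remains a single extra hash.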

[Go to top]

Security Rationale for a Cooperative Backup Service for Mobile Devices (PDF)
by Ludovic Courtès, Marc-Olivier Killijian, and David Powell.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile devices (e.g., laptops, PDAs, cell phones) are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. This paper discusses security considerations that arise in the design of a cooperative backup service for mobile devices. Participating devices leverage encounters with other devices to temporarily replicate critical data. Anyone is free to participate in the cooperative service, without requiring any prior trust relationship with other participants. In this paper, we identify security threats relevant in this context as well as possible solutions and discuss how they map to low-level security requirements related to identity and trust establishment. We propose self-organized, policy-neutral mechanisms that allow the secure designation and identification of participating devices. We show that they can serve as a building block for a wide range of cooperation policies that address most of the security threats we are concerned with. We conclude on future directions

[Go to top]

Routing in the Dark: Pitch Black (PDF)
by Nathan S Evans, Chris GauthierDickey, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In many networks, such as mobile ad-hoc networks and friend-to-friend overlay networks, direct communication between nodes is limited to specific neighbors. Often these networks have a small-world topology; while short paths exist between any pair of nodes in small-world networks, it is non-trivial to determine such paths with a distributed algorithm. Recently, Clarke and Sandberg proposed the first decentralized routing algorithm that achieves efficient routing in such small-world networks. This paper is the first independent security analysis of Clarke and Sandberg's routing algorithm. We show that a relatively weak participating adversary can render the overlay ineffective without being detected, resulting in significant data loss due to the resulting load imbalance. We have measured the impact of the attack in a testbed of 800 nodes using minor modifications to Clarke and Sandberg's implementation of their routing algorithm in Freenet. Our experiments show that the attack is highly effective, allowing a small number of malicious nodes to cause rapid loss of data on the entire network. We also discuss various proposed countermeasures designed to detect, thwart or limit the attack. While we were unable to find effective countermeasures, we hope that the presented analysis will be a first step towards the design of secure distributed routing algorithms for restricted-route topologies

[Go to top]

Purely functional system configuration management (PDF)
by Eelco Dolstra and Armijn Hemel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

System configuration management is difficult because systems evolve in an undisciplined way: packages are upgraded, configuration files are edited, and so on. The management of existing operating systems is strongly imperative in nature, since software packages and configuration data (e.g., /bin and /etc in Unix) can be seen as imperative data structures: they are updated in-place by system administration actions. In this paper we present an alternative approach to system configuration management: a purely functional method, analogous to languages like Haskell. In this approach, the static parts of a configuration – software packages, configuration files, control scripts – are built from pure functions, i.e., the results depend solely on the specified inputs of the function and are immutable. As a result, realising a system configuration becomes deterministic and reproducible. Upgrading to a new configuration is mostly atomic and doesn't overwrite anything of the old configuration, thus enabling rollbacks. We have implemented the purely functional model in a small but realistic Linux-based operating system distribution called NixOS
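The purely functional model can be caricatured in a few lines: a build output's location is determined by a hash of everything that went into it, so equal inputs reuse the same immutable path and an upgrade never mutates the old configuration. The store layout and helper names below are our invention, not Nix's actual implementation:

```python
# Sketch: content-addressed, immutable build store. Same inputs give
# the same path (no rebuild); different inputs give a new path, and the
# old one survives untouched, which is what makes rollbacks trivial.
import hashlib

store = {}  # path -> build result; entries are never overwritten

def build(name, inputs, builder):
    key = hashlib.sha256(repr((name, sorted(inputs.items()))).encode()).hexdigest()[:12]
    path = f"/store/{key}-{name}"
    if path not in store:            # deterministic: same inputs, same path
        store[path] = builder(inputs)
    return path

old = build("webserver", {"config": "port=80"}, lambda i: f"built with {i}")
new = build("webserver", {"config": "port=8080"}, lambda i: f"built with {i}")
# 'old' still exists unchanged; rolling back is just pointing at it again
```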

[Go to top]

Probability of Error in Information-Hiding Protocols (PDF)
by Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Prakash Panangaden.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Randomized protocols for hiding private information can fruitfully be regarded as noisy channels in the information-theoretic sense, and the inference of the concealed information can be regarded as a hypothesis-testing problem. We consider the Bayesian approach to the problem, and investigate the probability of error associated to the inference when the MAP (Maximum Aposteriori Probability) decision rule is adopted. Our main result is a constructive characterization of a convex base of the probability of error, which allows us to compute its maximum value (over all possible input distributions), and to identify upper bounds for it in terms of simple functions. As a side result, we are able to improve substantially the Hellman-Raviv and the Santhi-Vardy bounds expressed in terms of conditional entropy. We then discuss an application of our methodology to the Crowds protocol, and in particular we show how to compute the bounds on the probability that an adversary breaks anonymity
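The quantity being bounded is concrete: for a channel matrix p(y|x) and prior π, the MAP rule errs with probability P_e = 1 - Σ_y max_x π(x)p(y|x). A direct numeric sketch (the example channel is ours):

```python
# Bayes probability of error of the MAP decision rule for a discrete
# channel, computed directly from the definition.
def map_error(prior, channel):
    """channel[x][y] = p(y | x); prior[x] = pi(x)."""
    n_outputs = len(channel[0])
    return 1.0 - sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
                     for y in range(n_outputs))

# Binary symmetric channel with crossover 0.1 and uniform prior: the
# MAP rule trusts the observation, so P_e equals the crossover probability.
pe = map_error([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
```

With a sufficiently skewed prior the MAP rule ignores the observation entirely and the error drops to the smaller prior mass, which is the kind of input-distribution dependence the paper's convex characterization captures.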

[Go to top]

Private Searching on Streaming Data (PDF)
by Rafail Ostrovsky and William E. Skeith.
In J. Cryptol 20(4), 2007, pages 397-430. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper we consider the problem of private searching on streaming data, where we can efficiently implement searching for documents that satisfy a secret criteria (such as the presence or absence of a hidden combination of hidden keywords) under various cryptographic assumptions. Our results can be viewed in a variety of ways: as a generalization of the notion of private information retrieval (to more general queries and to a streaming environment); as positive results on privacy-preserving datamining; and as a delegation of hidden program computation to other machines

[Go to top]

Privacy-enhanced searches using encrypted Bloom filters
by Steven M. Bellovin and William R. Cheswick.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Privacy protection in personalized search (PDF)
by Xuehua Shen, Bin Tan, and ChengXiang Zhai.
In SIGIR Forum 41(1), 2007, pages 4-17. (BibTeX entry) (Download bibtex record)
(direct link)

Personalized search is a promising way to improve the accuracy of web search, and has been attracting much attention recently. However, effective personalized search requires collecting and aggregating user information, which often raise serious concerns of privacy infringement for many users. Indeed, these concerns have become one of the main barriers for deploying personalized search applications, and how to do privacy-preserving personalization is a great challenge. In this paper, we systematically examine the issue of privacy preservation in personalized search. We distinguish and define four levels of privacy protection, and analyze various software architectures for personalized search. We show that client-side personalization has advantages over the existing server-side personalized search services in preserving privacy, and envision possible future strategies to fully protect user privacy

[Go to top]

The Price of Privacy and the Limits of LP Decoding
by Cynthia Dwork, Frank D. McSherry, and Kunal Talwar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Practical and Secure Solutions for Integer Comparison (PDF)
by Juan Garay, Berry Schoenmakers, and José Villegas.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Yao's classical millionaires' problem is about securely determining whether x > y, given two input values x,y, which are held as private inputs by two parties, respectively. The output x > y becomes known to both parties. In this paper, we consider a variant of Yao's problem in which the inputs x,y as well as the output bit x > y are encrypted. Referring to the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001, we develop solutions for integer comparison, which take as input two lists of encrypted bits representing x and y, respectively, and produce an encrypted bit indicating whether x > y as output. Secure integer comparison is an important building block for applications such as secure auctions. In this paper, our focus is on the two-party case, although most of our results extend to the multi-party case. We propose new logarithmic-round and constant-round protocols for this setting, which achieve simultaneously very low communication and computational complexities. We analyze the protocols in detail and show that our solutions compare favorably to other known solutions

[Go to top]

Performance of Scalable Source Routing in Hybrid MANETs (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a novel routing approach for large unstructured networks such as mobile ad hoc networks, mesh networks, or sensor-actuator networks. It is especially suited for organically growing networks of many resource-limited mobile devices supported by a few fixed-wired nodes. SSR is a full-fledged network layer routing protocol that directly provides the semantics of a structured peer-to-peer network. Hence, it can serve as an efficient basis for fully decentralized applications on mobile devices. SSR combines source routing in the physical network with Chord-like routing in the virtual ring formed by the address space. Message forwarding greedily decreases the distance in the virtual ring while preferring physically short paths. Thereby, scalability is achieved without imposing artificial hierarchies or assigning location-dependent addresses
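The forwarding rule combines two metrics: greedy progress in the virtual ring, realised over physically short source routes. A toy of the ring part only (the address size and topology are made up; the physical-path preference is omitted):

```python
# Greedy forwarding on a Chord-like virtual ring: each node hands the
# message to the physical neighbor that most decreases the directed
# ring distance to the destination address.
RING = 2 ** 16  # virtual address space (illustrative)

def ring_dist(a, b):
    return (b - a) % RING          # directed distance on the virtual ring

def next_hop(node, neighbors, dest):
    best = min(neighbors, key=lambda n: ring_dist(n, dest))
    # forward only if it makes strict progress toward dest on the ring
    return best if ring_dist(best, dest) < ring_dist(node, dest) else None

# physical neighbor lists of a small example network
topology = {10: [400, 60000], 400: [10, 20000], 20000: [400, 49000],
            49000: [20000, 50000], 50000: [49000], 60000: [10]}
node, dest, route = 10, 50000, []
while node != dest:
    node = next_hop(node, topology[node], dest)
    route.append(node)
# route == [400, 20000, 49000, 50000]
```

Because progress is measured in the address space rather than in hops, no location-dependent addresses or hierarchies are needed, which is the scalability argument the abstract makes.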

[Go to top]

A New Efficient Privacy-preserving Scalar Product Protocol (PDF)
by Artak Amirbekyan and Vladimir Estivill-Castro.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently, privacy issues have become important in data analysis, especially when data is horizontally partitioned over several parties. In data mining, the data is typically represented as attribute-vectors and, for many applications, the scalar (dot) product is one of the fundamental operations that is repeatedly used. In privacy-preserving data mining, data is distributed across several parties. The efficiency of secure scalar products is important, not only because they can cause overhead in communication cost, but dot product operations also serve as one of the basic building blocks for many other secure protocols. Although several solutions exist in the relevant literature for this problem, the need for more efficient and more practical solutions still remains. In this paper, we present a very efficient and very practical secure scalar product protocol. We compare it to the most common scalar product protocols. We not only show that our protocol is much more efficient than the existing ones, we also provide experimental results by using a real life dataset
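For orientation, the classical commodity-server construction of Du and Atallah, one of the standard baselines such protocols are measured against, fits in a few lines: a semi-trusted dealer hands out correlated randomness so that the parties only ever exchange masked vectors. This is a known baseline, not the paper's new protocol:

```python
# Du-Atallah style secure scalar product with a randomness dealer.
# Alice holds x, Bob holds y; in the semi-honest model Alice learns
# x.y and neither party sees the other's vector in the clear.
import random

def scalar_product(x, y, mod=2**31):
    n = len(x)
    # dealer: random vectors Ra, Rb and additive shares ra + rb = Ra.Rb
    Ra = [random.randrange(mod) for _ in range(n)]
    Rb = [random.randrange(mod) for _ in range(n)]
    ra = random.randrange(mod)
    rb = (sum(a * b for a, b in zip(Ra, Rb)) - ra) % mod
    xm = [(xi + ai) % mod for xi, ai in zip(x, Ra)]     # Alice -> Bob
    ym = [(yi + bi) % mod for yi, bi in zip(y, Rb)]     # Bob -> Alice
    v = (sum(a * b for a, b in zip(xm, y)) + rb) % mod  # Bob -> Alice
    # (x+Ra).y + rb - Ra.(y+Rb) + ra = x.y : all masks cancel
    return (v - sum(a * b for a, b in zip(Ra, ym)) + ra) % mod
```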

[Go to top]

Multiparty Computation for Interval, Equality, and Comparison Without Bit-Decomposition Protocol (PDF)
by Takashi Nishide and Kazuo Ohta.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Damgård et al. [11] showed a novel technique to convert a polynomial sharing of secret a into the sharings of the bits of a in constant rounds, which is called the bit-decomposition protocol. The bit-decomposition protocol is a very powerful tool because it enables bit-oriented operations even if shared secrets are given as elements in the field. However, the bit-decomposition protocol is relatively expensive. In this paper, we present a simplified bit-decomposition protocol by analyzing the original protocol. Moreover, we construct more efficient protocols for a comparison, interval test and equality test of shared secrets without relying on the bit-decomposition protocol though it seems essential to such bit-oriented operations. The key idea is that we do computation on secret a with c and r where c = a + r, c is a revealed value, and r is a random bitwise-shared secret. The outputs of these protocols are also shared without being revealed. The realized protocols as well as the original protocol are constant-round and run with less communication rounds and less data communication than those of [11]. For example, the round complexities are reduced by a factor of approximately 3 to 10
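The masking trick c = a + r is simple to demonstrate with plain additive sharing (a toy with three parties; the bitwise structure of r, which the real protocols exploit for comparisons, is omitted here):

```python
# Sketch of computing on a shared secret a via a public mask c = a + r:
# the parties open only c, which is uniformly random and leaks nothing,
# and then continue working with c and the (still hidden) shares of r.
import random

P = 2_147_483_647  # prime modulus (illustrative)

def share(secret, n=3):
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reveal(parts):
    return sum(parts) % P

a_shares = share(42)               # the secret a, never revealed
r = random.randrange(P)
r_shares = share(r)                # random mask, also never revealed
# each party adds its two shares locally; only c = a + r is opened
c = reveal([(sa + sr) % P for sa, sr in zip(a_shares, r_shares)])
# computations on a can proceed via the public c and the shares of r:
assert (c - r) % P == 42
```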

[Go to top]

Keyless Jam Resistance (PDF)
by Leemon C. Baird, William L. Bahn, Michael D. Collins, Martin C. Carlisle, and Sean C. Butler.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditionally, wireless communication has been made resistant to jamming by the use of a secret key that is shared by the sender and receiver. There are no known methods for achieving jam resistance without that shared key. Unfortunately, wireless communication is now reaching a scale and a level of importance where such secret-key systems are becoming impractical. For example, the civilian side of the Global Positioning System (GPS) cannot use a shared secret, since that secret would have to be given to all 6.5 billion potential users, and so would no longer be secret. So civilian GPS cannot currently be protected from jamming. But the FAA has stated that the civilian airline industry will transition to using GPS for all navigational aids, even during landings. A terrorist with a simple jamming system could wreak havoc at a major airport. No existing system can solve this problem, and the problem itself has not even been widely discussed. The problem of keyless jam resistance is important. There is a great need for a system that can broadcast messages without any prior secret shared between the sender and receiver. We propose the first system for keyless jam resistance: the BBC algorithm. We describe the encoding, decoding, and broadcast algorithms. We then analyze it for expected resistance to jamming and error rates. We show that BBC can achieve the same level of jam resistance as traditional spread spectrum systems, at just under half the bit rate, and with no shared secret. Furthermore, a hybrid system can achieve the same average bit rate as traditional systems
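The BBC idea itself is compact: every prefix of the message sets an "indelible mark" at a hash location, and the decoder searches bit by bit, extending only prefixes whose mark is present. A jammer can add marks but cannot erase them, so genuine messages always survive. A small sketch (the slot count and hash choice are ours):

```python
# Sketch of BBC-style concurrent-code broadcast over indelible marks.
import hashlib

SLOTS = 2 ** 16  # number of mark locations (illustrative)

def mark(prefix: str) -> int:
    return int(hashlib.sha256(prefix.encode()).hexdigest(), 16) % SLOTS

def encode(message: str, marks: set):
    for i in range(1, len(message) + 1):
        marks.add(mark(message[:i]))      # one indelible mark per prefix

def decode(marks: set, length: int):
    found, stack = [], [""]
    while stack:
        prefix = stack.pop()
        if len(prefix) == length:
            found.append(prefix)
            continue
        for bit in "01":                   # extend only marked prefixes
            if mark(prefix + bit) in marks:
                stack.append(prefix + bit)
    return found

marks = set()
encode("1011", marks)
marks.update({1, 2, 3})                    # a jammer can only ADD marks...
assert "1011" in decode(marks, 4)          # ...so the message still decodes
```

Heavy jamming can only introduce additional spurious decodings, never suppress the real message, which is the asymmetry the analysis in the paper quantifies.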

[Go to top]

The Iterated Prisoner's Dilemma: 20 Years On
by Graham Kendall, Xin Yao, and Siang Yew Ching.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

In 1984, Robert Axelrod published a book, relating the story of two competitions which he ran, where invited academics entered strategies for "The Iterated Prisoners' Dilemma". The book, almost 20 years on, is still widely read and cited by academics and the general public. As a celebration of that landmark work, we have recreated those competitions for its 20th anniversary, by again inviting academics to submit prisoners' dilemma strategies. The first of these new competitions was run in July 2004, and the second in April 2005. "The Iterated Prisoners' Dilemma: 20 Years On" essentially provides an update of Axelrod's book. Specifically, it presents the prisoners' dilemma, its history and variants; highlights Axelrod's original work and its impact; discusses results of the new competitions; and showcases selected papers that reflect the latest research in the area

[Go to top]

On improving the efficiency of truthful routing in MANETs with selfish nodes
by Yongwei Wang and Mukesh Singhal.
In Pervasive Mob. Comput 3(5), 2007, pages 537-559. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In Mobile Ad Hoc Networks (MANETs), nodes depend upon each other for routing and forwarding packets. However, nodes belonging to independent authorities in MANETs may behave selfishly and may not forward packets to save battery and other resources. To stimulate cooperation, nodes are rewarded for their forwarding service. Since nodes spend different cost to forward packets, it is desirable to reimburse nodes according to their cost so that nodes get incentive while the least total payment is charged to the sender. However, to maximize their utility, nodes may lie about their cost. This poses the requirement of truthful protocols, which maximize the utility of nodes only when they declare their true cost. Anderegg and Eidenbenz recently proposed a truthful routing protocol, named ad hoc-VCG. This protocol incurs the route discovery overhead of O(n^3), where n is the number of nodes in the network. This routing overhead is likely to become prohibitively large as the network size grows. Moreover, it leads to low network performance due to congestion and interference. We present a low-overhead truthful routing protocol for route discovery in MANETs with selfish nodes by applying mechanism design. The protocol, named LOTTO (Low Overhead Truthful rouTing prOtocol), finds a least cost path for data forwarding with a lower routing overhead of O(n^2). We conduct an extensive simulation study to evaluate the performance of our protocol and compare it with ad hoc-VCG. Simulation results show that our protocol provides a much higher packet delivery ratio, generates much lower overhead and has much lower end-to-end delay
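The VCG payment rule that makes truth-telling optimal is easy to state in code: a node on the cheapest path is paid its declared cost plus the marginal harm its absence would cause. A sketch over a node-weighted graph (the graph, costs, and helper names are examples of ours):

```python
# VCG payments for least-cost routing: pay each intermediate node its
# declared cost plus (best path without it) - (best path with it).
import heapq

def best_path(adj, cost, src, dst, excluded=frozenset()):
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                      # reconstruct the cheapest path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if v in excluded:
                continue
            nd = d + (cost[v] if v != dst else 0)  # pay intermediate nodes
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), None

def vcg_payments(adj, cost, src, dst):
    total, path = best_path(adj, cost, src, dst)
    payments = {}
    for node in path[1:-1]:
        without, _ = best_path(adj, cost, src, dst, excluded={node})
        payments[node] = cost[node] + (without - total)
    return path, payments

adj = {'S': ['A', 'B'], 'A': ['D'], 'B': ['D'], 'D': []}
cost = {'S': 0, 'A': 2, 'B': 5, 'D': 0}
path, pay = vcg_payments(adj, cost, 'S', 'D')
# path == ['S', 'A', 'D'], pay == {'A': 5}
```

Overbidding cannot help node A here: its payment is pinned by the alternative path through B, which is the incentive structure ad hoc-VCG and LOTTO both rely on.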

[Go to top]

An Identity-Free and On-Demand Routing Scheme against Anonymity Threats in Mobile Ad Hoc Networks (PDF)
by Jiejun Kong, Xiaoyan Hong, and Mario Gerla.
In IEEE Transactions on Mobile Computing 6(8), 2007, pages 888-902. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Introducing node mobility into the network also introduces new anonymity threats. This important change of the concept of anonymity has recently attracted attention in mobile wireless security research. This paper presents identity-free routing and on-demand routing as two design principles of anonymous routing in mobile ad hoc networks. We devise ANODR (ANonymous On-Demand Routing) as the needed anonymous routing scheme that is compliant with the design principles. Our security analysis and simulation study verify the effectiveness and efficiency of ANODR

[Go to top]

Gossiping in Distributed Systems (PDF)
by Anne-Marie Kermarrec and Maarten van Steen.
In SIGOPS Oper. Syst. Rev 41, 2007, pages 2-7. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gossip-based algorithms were first introduced for reliably disseminating data in large-scale distributed systems. However, their simplicity, robustness, and flexibility make them attractive for more than just pure data dissemination alone. In particular, gossiping has been applied to data aggregation, overlay maintenance, and resource allocation. Gossiping applications more or less fit the same framework, with often subtle differences in algorithmic details determining divergent emergent behavior. This divergence is often difficult to understand, as formal models have yet to be developed that can capture the full design space of gossiping solutions. In this paper, we present a brief introduction to the field of gossiping in distributed systems, by providing a simple framework and using that framework to describe solutions for various application domains

[Go to top]

Gossip-based Peer Sampling (PDF)
by Márk Jelasity, Spyros Voulgaris, Rachid Guerraoui, Anne-Marie Kermarrec, and Maarten van Steen.
In ACM Trans. Comput. Syst 25, 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this service provides every node with peers to gossip with. We promote this service to the level of a first-class abstraction of a large-scale distributed system, similar to a name service being a first-class abstraction of a local-area system. We present a generic framework to implement a peer-sampling service in a decentralized manner by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. Our framework generalizes existing approaches and makes it easy to discover new ones. We use this framework to empirically explore and compare several implementations of the peer sampling service. Through extensive simulation experiments we show that—although all protocols provide a good quality uniform random stream of peers to each node locally—traditional theoretical assumptions about the randomness of the unstructured overlays as a whole do not hold in any of the instances. We also show that different design decisions result in severe differences from the point of view of two crucial aspects: load balancing and fault tolerance. Our simulations are validated by means of a wide-area implementation
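The peer-sampling service itself reduces to a periodic exchange of partial views between two nodes. A minimal push-pull shuffle (the view size and selection policy are simplified choices of ours, one point in the design space the paper explores):

```python
# One push-pull view exchange: each node sends half its view and merges
# what it receives, keeping the view size constant. Repeated pairwise
# shuffles give every node a continuously refreshed sample of peers.
import random

VIEW = 4  # partial view size (illustrative)

def shuffle(view_a, view_b, id_a, id_b):
    sent_a = random.sample(view_a, min(VIEW // 2, len(view_a)))
    sent_b = random.sample(view_b, min(VIEW // 2, len(view_b)))

    def merge(view, incoming, self_id):
        fresh = [p for p in incoming if p != self_id and p not in view]
        return (fresh + view)[:VIEW]     # drop oldest entries past VIEW

    return merge(view_a, sent_b, id_a), merge(view_b, sent_a, id_b)

va, vb = shuffle([1, 2, 3, 4], [5, 6, 7, 8], id_a=0, id_b=9)
# both views stay at 4 entries, now mixing descriptors from both sides
```

As the paper stresses, details of exactly this kind (what is sent, what is kept, what is dropped) determine the emergent randomness, load balancing, and fault tolerance of the overlay.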

[Go to top]

GAS: Overloading a File Sharing Network as an Anonymizing System (PDF)
by Elias Athanasopoulos, Mema Roussopoulos, Kostas G. Anagnostakis, and Evangelos P. Markatos.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is considered as a valuable property as far as everyday transactions in the Internet are concerned. Users care about their privacy and they seek new ways to keep as much of their personal information as possible secret from third parties. Anonymizing systems exist nowadays that provide users with the technology, which is able to hide their origin when they use applications such as the World Wide Web or Instant Messaging. However, all these systems are vulnerable to a number of attacks and some of them may collapse under a low strength adversary. In this paper we explore anonymity from a different perspective. Instead of building a new anonymizing system, we try to overload an existing file sharing system, Gnutella, and use it for a different purpose. We develop a technique that transforms Gnutella as an Anonymizing System (GAS) for a single download from the World Wide Web

[Go to top]

A Game Theoretic Model of a Protocol for Data Possession Verification (PDF)
by Nouha Oualha, Pietro Michiardi, and Yves Roudier.
In A World of Wireless, Mobile and Multimedia Networks, International Symposium on, 2007, pages 1-6. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper discusses how to model a protocol for the verification of data possession intended to secure a peer-to-peer storage application. The verification protocol is a primitive for storage assessment, and indirectly motivates nodes to behave cooperatively within the application. The capability of the protocol to enforce cooperation between a data holder and a data owner is proved theoretically by modeling the verification protocol as a Bayesian game, and demonstrating that the solution of the game is an equilibrium where both parties are cooperative
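The verification primitive being modeled is typically a challenge-response check that the holder can only answer by still possessing the data. A minimal hash-based sketch (the paper's contribution is the game-theoretic analysis of such a primitive, not this construction):

```python
# Data-possession check: the owner precomputes (nonce, answer) pairs
# before handing the data to the holder; later, answering a fresh nonce
# requires the holder to hash the full data it claims to store.
import hashlib
import os

def precompute_challenges(data: bytes, n: int = 5):
    out = []
    for _ in range(n):
        nonce = os.urandom(16)
        out.append((nonce, hashlib.sha256(nonce + data).hexdigest()))
    return out

def respond(nonce: bytes, stored: bytes) -> str:
    return hashlib.sha256(nonce + stored).hexdigest()

data = b"backup block"
challenges = precompute_challenges(data)
nonce, expected = challenges[0]
assert respond(nonce, data) == expected          # honest holder passes
assert respond(nonce, b"tampered") != expected   # cheating is detected
```

Each precomputed pair is usable once, and the owner never needs to keep the data itself, which is what makes the primitive suitable for the storage game analyzed in the paper.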

[Go to top]

End-to-end routing for dual-radio sensor networks (PDF)
by Thanos Stathopoulos, John Heidemann, Martin Lukac, Deborah Estrin, and William J. Kaiser.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Dual-radio, dual-processor nodes are an emerging class of Wireless Sensor Network devices that provide both low-energy operation as well as substantially increased computational performance and communication bandwidth for applications. In such systems, the secondary radio and processor operates with sufficiently low power that it may remain always vigilant, while the main processor and primary, high-bandwidth radio remain off until triggered by the application. By exploiting the high energy efficiency of the main processor and primary radio along with proper usage, net operating energy benefits are enabled for applications. The secondary radio provides a constantly available multi-hop network, while paths in the primary network exist only when required. This paper describes a topology control mechanism for establishing an end-to-end path in a network of dual-radio nodes using the secondary radios as a control channel to selectively wake up nodes along the required end-to-end path. Using numerical models as well as testbed experimentation, we show that our proposed mechanism provides significant energy savings of more than 60% compared to alternative approaches, and that it incurs only moderately greater application latency

[Go to top]

Enabling Adaptive Video Streaming in P2P Systems (PDF)
by Dan Jurca, Jacob Chakareski, Jean-Paul Wagner, and Pascal Frossard.
In IEEE Communications Magazine 45, 2007, pages 108-114. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) systems are becoming increasingly popular due to their ability to deliver large amounts of data at a reduced deployment cost. In addition to fostering the development of novel media applications, P2P systems also represent an interesting alternative paradigm for media streaming applications that can benefit from the inherent self organization and resource scalability available in such environments. This article presents an overview of application and network layer mechanisms that enable successful streaming frameworks in peer-to-peer systems. We describe media delivery architectures that can be deployed over P2P networks to address the specific requirements of streaming applications. In particular, we show how video-streaming applications can benefit from the diversity offered by P2P systems and implement distributed-streaming and scheduling solutions with multi-path packet transmission

[Go to top]

ℓ-diversity: Privacy beyond k-anonymity
by Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam.
In ACM Transactions on Knowledge Discovery from Data (TKDD) 1(1), 2007. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Efficient selectivity and backup operators in Monte-Carlo tree search (PDF)
by Rémi Coulom.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Monte-Carlo evaluation consists of estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides a fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9×9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament
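The key idea, a backup operator that shifts from averaging toward min-max as simulations accumulate, can be sketched as follows. The function name and the mixing constant `k` are inventions for illustration, not Coulom's actual formulation:

```python
def mixed_backup(values, visits, k=32):
    """Back up a node value that moves from the visit-weighted mean of
    its children toward the best child as the simulation count grows."""
    n = sum(visits)
    mean = sum(v * c for v, c in zip(values, visits)) / n
    best = max(values)
    w = n / (n + k)            # ~0 early (averaging), -> 1 late (min-max)
    return (1 - w) * mean + w * best

# Early in the search the estimate stays near the average of the children...
early = mixed_backup([0.2, 0.8], [1, 1])
# ...and with many simulations it approaches the best child's value.
late = mixed_backup([0.2, 0.8], [500, 500])
```

This gives per-simulation control of how aggressively the tree commits to the currently best-looking move.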

[Go to top]

Dynamic Multipath Onion Routing in Anonymous Peer-To-Peer Overlay Networks
by Olaf Landsiedel, Alexis Pimenidis, and Klaus Wehrle.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Although recent years provided many protocols for anonymous routing in overlay networks, they commonly rely on the same communication paradigm: Onion Routing. In Onion Routing a static tunnel through an overlay network is built via layered encryption. All traffic exchanged by its end points is relayed through this tunnel. In contrast, this paper introduces dynamic multipath Onion Routing to extend the static Onion Routing paradigm. This approach allows each packet exchanged between two end points to travel along a different path. To provide anonymity the first half of this path is selected by the sender and the second half by the receiver of the packet. The results are manifold: First, dynamic multipath Onion Routing increases the resilience against threats, especially pattern and timing based analysis attacks. Second, the dynamic paths reduce the impact of misbehaving and overloaded relays. Finally, inspired by Internet routing, the forwarding nodes do not need to maintain any state about ongoing flows and so reduce the complexity of the router. In this paper, we describe the design of our dynamic Multipath Onion RoutEr (MORE) for peer-to-peer overlay networks, and evaluate its performance. Furthermore, we integrate address virtualization to abstract from Internet addresses and provide transparent support for IP applications. Thus, no application-level gateways, proxies or modifications of applications are required to sanitize protocols from network level information. Acting as an IP-datagram service, our scheme provides a substrate for anonymous communication to a wide range of applications using TCP and UDP

[Go to top]

Design principles for low latency anonymous network systems secure against timing attacks (PDF)
by Rungrat Wiangsripanawan, Willy Susilo, and Rei Safavi-Naini.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Low latency anonymous network systems, such as Tor, were considered secure against timing attacks when the threat model does not include a global adversary. In this threat model the adversary can only see part of the links in the system. In a recent paper entitled "Low-cost traffic analysis of Tor", it was shown that a variant of the timing attack that does not require a global adversary can be applied to Tor. More importantly, the authors claimed that their attack would work on any low latency anonymous network system. The implication of the attack is that all low latency anonymous networks will be vulnerable to this attack even if there is no global adversary. In this paper, we investigate this claim against other low latency anonymous networks, including Tarzan and Morphmix. Our results show that in contrast to the claim of the aforementioned paper, the attack may not be applicable in all cases. Based on our analysis, we draw design principles for secure low latency anonymous network systems (also secure against the above attack)

[Go to top]

Dependability Evaluation of Cooperative Backup Strategies for Mobile Devices (PDF)
by Ludovic Courtès, Ossama Hamouda, Mohamed Kaaniche, Marc-Olivier Killijian, and David Powell.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile devices (e.g., laptops, PDAs, cell phones) are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. This paper discusses the dependability evaluation of a cooperative backup service for mobile devices. Participating devices leverage encounters with other devices to temporarily replicate critical data. Permanent backups are created when the participating devices are able to access the fixed infrastructure. Several data replication and scattering strategies are presented, including the use of erasure codes. A methodology to model and evaluate them using Petri nets and Markov chains is described. We demonstrate that our cooperative backup service decreases the probability of data loss by a factor up to the ad hoc to Internet connectivity ratio

[Go to top]

Countering Statistical Disclosure with Receiver-Bound Cover Traffic (PDF)
by Nayantara Mallesh and Matthew Wright.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communications provides an important privacy service by keeping passive eavesdroppers from linking communicating parties. However, using long-term statistical analysis of traffic sent to and from such a system, it is possible to link senders with their receivers. Cover traffic is an effective, but somewhat limited, counter strategy against this attack. Earlier work in this area proposes that privacy-sensitive users generate and send cover traffic to the system. However, users are not online all the time and cannot be expected to send consistent levels of cover traffic, drastically reducing the impact of cover traffic. We propose that the mix generate cover traffic that mimics the sending patterns of users in the system. This receiver-bound cover helps to make up for users that aren't there, confusing the attacker. We show through simulation how this makes it difficult for an attacker to discern cover from real traffic and perform attacks based on statistical analysis. Our results show that receiver-bound cover substantially increases the time required for these attacks to succeed. When our approach is used in combination with user-generated cover traffic, the attack takes a very long time to succeed
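The mix-generated cover described above can be sketched in a few lines. This is a simplified illustration under my own assumptions (per-round batches, a fixed dummy rate, destinations mimicked from the empirical distribution seen so far), not the paper's actual algorithm:

```python
import random
from collections import Counter

def add_receiver_bound_cover(batches, rate, seed=0):
    """Sketch: after each mix round, send `rate` extra dummy messages
    whose destinations are drawn from the destination distribution
    observed so far, so cover traffic mimics real user behavior."""
    rng = random.Random(seed)
    seen = Counter()
    padded = []
    for batch in batches:
        seen.update(batch)
        pool = list(seen.elements())   # one entry per observed message
        dummies = [rng.choice(pool) for _ in range(rate)]
        padded.append(list(batch) + dummies)
    return padded
```

Because the dummies follow the observed destination profile, an observer counting messages per receiver sees inflated, harder-to-correlate totals even while some senders are offline.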

[Go to top]

A cooperative SIP infrastructure for highly reliable telecommunication services
by Ali Fessi, Heiko Niedermayer, Holger Kinkelin, and Georg Carle.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

On compact routing for the internet (PDF)
by Dmitri Krioukov, Kevin Fall, and Arthur Brady.
In SIGCOMM Comput. Commun. Rev 37(3), 2007, pages 41-52. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Internet's routing system is facing stresses due to its poor fundamental scaling properties. Compact routing is a research field that studies fundamental limits of routing scalability and designs algorithms that try to meet these limits. In particular, compact routing research shows that shortest-path routing, forming a core of traditional routing algorithms, cannot guarantee routing table (RT) sizes that on all network topologies grow slower than linearly as functions of the network size. However, there are plenty of compact routing schemes that relax the shortest-path requirement and allow for improved, sublinear RT size scaling that is mathematically provable for all static network topologies. In particular, there exist compact routing schemes designed for grids, trees, and Internet-like topologies that offer RT sizes that scale logarithmically with the network size. In this paper, we demonstrate that in view of recent results in compact routing research, such logarithmic scaling on Internet-like topologies is fundamentally impossible in the presence of topology dynamics or topology-independent (flat) addressing. We use analytic arguments to show that the number of routing control messages per topology change cannot scale better than linearly on Internet-like topologies. We also employ simulations to confirm that logarithmic RT size scaling gets broken by topology-independent addressing, a cornerstone of popular locator-identifier split proposals aiming at improving routing scaling in the presence of network topology dynamics or host mobility. These pessimistic findings lead us to the conclusion that a fundamental re-examination of assumptions behind routing models and abstractions is needed in order to find a routing architecture that would be able to scale "indefinitely"

[Go to top]

Closed-Circuit Unobservable Voice Over IP (PDF)
by Carlos Aguilar Melchor, Yves Deswarte, and Julien Iguchi-Cartigny.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Among all the security issues in Voice over IP (VoIP) communications, one of the most difficult to achieve is traffic analysis resistance. Indeed, classical approaches provide a reasonable degree of security but induce large round-trip times that are incompatible with VoIP. In this paper, we describe some of the privacy and security issues derived from traffic analysis in VoIP. We also give an overview of how to provide low-latency VoIP communication with strong resistance to traffic analysis. Finally, we present a server which can provide such resistance to hundreds of users even if the server is compromised

[Go to top]

CISS: An efficient object clustering framework for DHT-based peer-to-peer applications
by Jinwon Lee, Hyonik Lee, Seungwoo Kang, Su Myeon Kim, and Junehwa Song.
In Comput. Netw 51(4), 2007, pages 1072-1094. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Cheat-proof event ordering for large-scale distributed multiplayer games
by Chris GauthierDickey.
PhD thesis, University of Oregon, 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Real-time, interactive, multi-user (RIM) applications are networked applications that allow users to collaborate and interact with each other over the Internet for work, education and training, or entertainment purposes. Multiplayer games, distance learning applications, collaborative whiteboards, immersive educational and training simulations, and distributed interactive simulations are examples of these applications. Of these RIM applications, multiplayer games are an important class for research due to their widespread deployment and popularity on the Internet. Research with multiplayer games will have a direct impact on all RIM applications. While large-scale multiplayer games have typically used a client/server architecture for network communication, we propose using a peer-to-peer architecture to solve the scalability problems inherent in centralized systems. Past research and actual deployments of peer-to-peer networks show that they can scale to millions of users. However, these prior peer-to-peer networks do not meet the low latency and interactive requirements that multi-player games need. Indeed, the fundamental problem of maintaining consistency between all nodes in the face of failures, delays, and malicious attacks has to be solved to make peer-to-peer networks a viable solution. We propose solving the consistency problem through secure and scalable event ordering. While traditional event ordering requires all-to-all message passing and at least two rounds of communication, we argue that multiplayer games lend themselves naturally to a hierarchical decomposition of their state space so that we can reduce the communication cost of event ordering. We also argue that by using cryptography, a discrete view of time, and majority voting, we can totally order events in a real-time setting. By applying these two concepts, we can scale multiplayer games to millions of players.
We develop our solution in two parts: a cheat-proof and real-time event ordering protocol and a scalable, hierarchical structure that organizes peers in a tree according to their scope of interest in the game. Our work represents the first, complete solution to this problem and we show through both proofs and simulations that our protocols allow the creation of large-scale, peer-to-peer games that are resistant to cheating while maintaining real-time responsiveness in the system

[Go to top]

CFR: a peer-to-peer collaborative file repository system (PDF)
by Meng-Ru Lin, Ssu-Hsuan Lu, Tsung-Hsuan Ho, Peter Lin, and Yeh-Ching Chung.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Due to the high availability of the Internet, many large cross-organization collaboration projects, such as SourceForge, grid systems etc., have emerged. One of the fundamental requirements of these collaboration efforts is a storage system to store and exchange data. This storage system must be highly scalable and can efficiently aggregate the storage resources contributed by the participating organizations to deliver good performance for users. In this paper, we propose a storage system, Collaborative File Repository (CFR), for large scale collaboration projects. CFR uses peer-to-peer techniques to achieve scalability, efficiency, and ease of management. In CFR, storage nodes contributed by the participating organizations are partitioned according to geographical regions. Files stored in CFR are automatically replicated to all regions. Furthermore, popular files are duplicated to other storage nodes of the same region. By doing so, data transfers between users and storage nodes are confined within their regions and transfer efficiency is enhanced. Experiments show that our replication can achieve high efficiency with a small number of duplicates

[Go to top]

B.A.T.M.A.N Status Report (PDF)
by Axel Neumann, Corinna Elektra Aichele, and Marek Lindner.
Book. (BibTeX entry) (Download bibtex record)
(direct link)

This report documents the current status of the development and implementation of the B.A.T.M.A.N (better approach to mobile ad-hoc networking) routing protocol. B.A.T.M.A.N uses a simple and robust algorithm for establishing multi-hop routes in mobile ad-hoc networks. It ensures highly adaptive and loop-free routing while causing only low processing and traffic cost

[Go to top]

Application of DHT-Inspired Routing for Object Tracking (PDF)
by Pengfei Di, Yaser Houri, Qing Wei, Jörg Widmer, and Thomas Fuhrmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A major problem in tracking objects in sensor networks is trading off update traffic and timeliness of the data that is available to a monitoring site. Typically, either all objects regularly update some central registry with their location information, or the monitoring instance floods the network with a request when it needs information for a particular object. More sophisticated approaches use a P2P-like distributed storage structure on top of geographic routing. The applicability of the latter is limited to certain topologies, and having separate storage and routing algorithms reduces efficiency. In this paper, we present a different solution which is based on the scalable source routing (SSR) protocol. SSR is a network layer routing protocol that has been inspired by distributed hash tables (DHT). It provides key-based routing in large networks of resource-limited devices such as sensor networks. We argue that this approach is more suitable for object tracking in sensor networks because it evenly spreads the updates over the whole network without being limited to a particular network topology. We support our argument with extensive simulations

[Go to top]

2006

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Distributed k-ary System: Algorithms for Distributed Hash Tables (PDF)
by Ali Ghodsi.
Doctoral, KTH/Royal Institute of Technology, December 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This dissertation presents algorithms for data structures called distributed hash tables (DHT) or structured overlay networks, which are used to build scalable self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results in the presence of nodes joining and leaving. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm, which unlike previous algorithms avoids redundant messages, reaching all nodes in O(log n) time, while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operation, which enables a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending maximum O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies. However, costly iterative lookups have previously been used to do parallel lookups. 
Moreover, joins or leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described
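Symmetric replication as described above admits a very compact sketch: the f replicas of an identifier sit at equally spaced points of the identifier space, so every member of a replica set computes the same set (which is what enables parallel recursive lookups). Function and parameter names here are my own:

```python
def replica_ids(key, N, f):
    """Symmetric replication sketch: place the f replicas of identifier
    `key` at equally spaced points of the size-N identifier space
    (f must divide N)."""
    assert N % f == 0
    step = N // f
    return sorted((key + i * step) % N for i in range(f))
```

For example, with N = 16 and f = 4, the identifiers 3, 7, 11 and 15 all map to the same replica set, so a lookup for any of them can be issued to all four in parallel.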

[Go to top]

Understanding churn in peer-to-peer networks (PDF)
by Daniel Stutzbach and Reza Rejaie.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file sharing/distribution systems, and exploring new aspects of churn

[Go to top]

A Survey of Solutions to the Sybil Attack (PDF)
by Brian Neil Levine, Clay Shields, and N. Boris Margolin.
In unknown(2006-052), October 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many security mechanisms are based on specific assumptions of identity and are vulnerable to attacks when these assumptions are violated. For example, impersonation is the well-known consequence when authenticating credentials are stolen by a third party. Another attack on identity occurs when credentials for one identity are purposely shared by multiple individuals, for example to avoid paying twice for a service. In this paper, we survey the impact of the Sybil attack, an attack against identity in which an individual entity masquerades as multiple simultaneous identities. The Sybil attack is a fundamental problem in many systems, and it has so far resisted a universally applicable solution

[Go to top]

Salsa: A Structured Approach to Large-Scale Anonymity (PDF)
by Arjun Nambiar.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Highly distributed anonymous communications systems have the promise to reduce the effectiveness of certain attacks and improve scalability over more centralized approaches. Existing approaches, however, face security and scalability issues. Requiring nodes to have full knowledge of the other nodes in the system, as in Tor and Tarzan, limits scalability and can lead to intersection attacks in peer-to-peer configurations. MorphMix avoids this requirement for complete system knowledge, but users must rely on untrusted peers to select the path. This can lead to the attacker controlling the entire path more often than is acceptable. To overcome these problems, we propose Salsa, a structured approach to organizing highly distributed anonymous communications systems for scalability and security. Salsa is designed to select nodes to be used in anonymous circuits randomly from the full set of nodes, even though each node has knowledge of only a subset of the network. It uses a distributed hash table based on hashes of the nodes' IP addresses to organize the system. With a virtual tree structure, limited knowledge of other nodes is enough to route node lookups throughout the system. We use redundancy and bounds checking when performing lookups to prevent malicious nodes from returning false information without detection. We show that our scheme prevents attackers from biasing path selection, while incurring moderate overheads, as long as the fraction of malicious nodes is less than 20%. Additionally, the system prevents attackers from obtaining a snapshot of the entire system until the number of attackers grows too large (e.g. 15% for 10000 peers and 256 groups). The number of groups can be used as a tunable parameter in the system, depending on the number of peers, that can be used to balance performance and security
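The redundancy idea, issuing the same lookup over several independent paths and accepting only a majority answer, can be sketched as follows. This is a simplified illustration with invented names, not Salsa's actual lookup protocol (which additionally bounds-checks results against the DHT geometry):

```python
from collections import Counter

def redundant_lookup(paths, key):
    """Run the same lookup over several independent paths and accept
    only a strict-majority answer, so a minority of malicious nodes
    cannot substitute a false result without detection."""
    answers = [lookup(key) for lookup in paths]
    value, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        raise ValueError("no majority: possible malicious interference")
    return value
```

With three paths, a single lying node is simply outvoted; if no strict majority emerges, the lookup is flagged rather than silently accepted.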

[Go to top]

Measuring Relationship Anonymity in Mix Networks (PDF)
by Vitaly Shmatikov and Ming-Hsiu Wang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many applications of mix networks such as anonymous Web browsing require relationship anonymity: it should be hard for the attacker to determine who is communicating with whom. Conventional methods for measuring anonymity, however, focus on sender anonymity instead. Sender anonymity guarantees that it is difficult for the attacker to determine the origin of any given message exiting the mix network, but this may not be sufficient to ensure relationship anonymity. Even if the attacker cannot identify the origin of messages arriving to some destination, relationship anonymity will fail if he can determine with high probability that at least one of the messages originated from a particular sender, without necessarily being able to recognize this message among others. We give a formal definition and a calculation methodology for relationship anonymity. Our techniques are similar to those used for sender anonymity, but, unlike sender anonymity, relationship anonymity is sensitive to the distribution of message destinations. In particular, Zipfian distributions with skew values characteristic of Web browsing provide especially poor relationship anonymity. Our methodology takes route selection algorithms into account, and incorporates information-theoretic metrics such as entropy and min-entropy. We illustrate our methodology by calculating relationship anonymity in several simulated mix networks
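The effect of destination skew on the entropy metrics mentioned above is easy to demonstrate numerically. This is an illustrative computation of standard Shannon and min-entropy over a Zipfian versus a uniform destination distribution, not the paper's full methodology (which also models route selection):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a destination distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def min_entropy(dist):
    """Min-entropy (bits): determined by the single most likely destination."""
    return -math.log2(max(dist))

def zipf(n, s=1.0):
    """Zipfian distribution over n destinations with skew s."""
    w = [1 / k ** s for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

uniform = [1 / 64] * 64
web_like = zipf(64)   # skew of the kind associated with Web browsing
```

Comparing `entropy(uniform)` and `min_entropy(uniform)` (both 6 bits for 64 destinations) with the Zipfian case shows how a skewed destination profile leaves markedly less anonymity, especially under min-entropy.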

[Go to top]

Inferring the Source of Encrypted HTTP Connections (PDF)
by Marc Liberatore and Brian Neil Levine.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We examine the effectiveness of two traffic analysis techniques for identifying encrypted HTTP streams. The techniques are based upon classification algorithms, identifying encrypted traffic on the basis of similarities to features in a library of known profiles. We show that these profiles need not be collected immediately before the encrypted stream; these methods can be used to identify traffic observed both well before and well after the library is created. We give evidence that these techniques will exhibit the scalability necessary to be effective on the Internet. We examine several methods of actively countering the techniques, and we find that such countermeasures are effective, but at a significant increase in the size of the traffic stream. Our claims are substantiated by experiments and simulation on over 400,000 traffic streams we collected from 2,000 distinct web sites during a two month period
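The profile-matching idea can be illustrated with a toy classifier. The bin edges, the cosine-similarity measure, and all names below are my own simplifications; the paper's actual classifiers are more sophisticated:

```python
import math

def size_profile(packet_sizes, bins=(0, 128, 256, 512, 1024, 1500)):
    """Histogram of observed packet sizes, normalized to a unit vector."""
    counts = [0] * len(bins)
    for s in packet_sizes:
        i = max(j for j, b in enumerate(bins) if s >= b)
        counts[i] += 1
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def best_match(library, observed):
    """Return the site whose stored profile has the highest cosine
    similarity to the observed (encrypted) stream's size profile."""
    obs = size_profile(observed)
    def sim(site):
        prof = size_profile(library[site])
        return sum(a * b for a, b in zip(prof, obs))
    return max(library, key=sim)
```

Because packet sizes survive encryption, a stream dominated by small packets still matches the small-packet profile in the library even though the payloads are opaque.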

[Go to top]

Hot or Not: Revealing Hidden Services by their Clock Skew (PDF)
by Steven J. Murdoch.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Location-hidden services, as offered by anonymity systems such as Tor, allow servers to be operated under a pseudonym. As Tor is an overlay network, servers hosting hidden services are accessible both directly and over the anonymous channel. Traffic patterns through one channel have observable effects on the other, thus allowing a service's pseudonymous identity and IP address to be linked. One proposed solution to this vulnerability is for Tor nodes to provide fixed quality of service to each connection, regardless of other traffic, thus reducing capacity but resisting such interference attacks. However, even if each connection does not influence the others, total throughput would still affect the load on the CPU, and thus its heat output. Unfortunately for anonymity, the effect of temperature on clock skew can be remotely detected through observing timestamps. This attack works because existing abstract models of anonymity-network nodes do not take into account the inevitable imperfections of the hardware they run on. Furthermore, we suggest the same technique could be exploited as a classical covert channel and can even provide geolocation
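The core measurement, estimating a remote clock's skew from observed timestamps, amounts to fitting a line to the remote-minus-local time offsets. A minimal least-squares sketch (names and synthetic data are my own; the paper works with real TCP timestamps):

```python
def clock_skew_ppm(local_times, remote_stamps):
    """Least-squares slope (parts per million) of the remote-minus-local
    time offset against local time; a warming CPU changes this drift rate,
    which is what the attack observes."""
    n = len(local_times)
    offsets = [r - l for l, r in zip(local_times, remote_stamps)]
    mx = sum(local_times) / n
    my = sum(offsets) / n
    num = sum((x - mx) * (y - my) for x, y in zip(local_times, offsets))
    den = sum((x - mx) ** 2 for x in local_times)
    return 1e6 * num / den

# A remote clock running 50 ppm fast shows up directly in the slope:
local = list(range(0, 3600, 60))
remote = [t * (1 + 50e-6) + 0.25 for t in local]
```

The constant offset (here 0.25 s) cancels out; only the drift rate matters, which is why the technique tolerates unknown network delay as long as it is roughly stable.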

[Go to top]

Cryptree: A Folder Tree Structure for Cryptographic File Systems (PDF)
by Dominik Grolimund, Luzius Meisser, Stefan Schmid, and Roger Wattenhofer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present Cryptree, a cryptographic tree structure which facilitates access control in file systems operating on untrusted storage. Cryptree leverages the file system's folder hierarchy to achieve efficient and intuitive, yet simple, access control. The highlights are its ability to recursively grant access to a folder and all its subfolders in constant time, the dynamic inheritance of access rights which inherently prevents scattering of access rights, and the possibility to grant someone access to a file or folder without revealing the identities of other accessors. To reason about and to visualize Cryptree, we introduce the notion of cryptographic links. We describe the Cryptrees we have used to enforce read and write access in our own file system. Finally, we measure the performance of the Cryptree and compare it to other approaches
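The constant-time recursive grant property can be illustrated with a toy key-derivation scheme. Note this hash-chain sketch is only an analogy of my own: Cryptree's actual construction stores encrypted links between keys (which, unlike pure hash derivation, supports revocation and dynamic inheritance):

```python
import hashlib

def child_key(parent_key: bytes, child_name: str) -> bytes:
    """Follow a (simplified) cryptographic link: the parent folder key
    plus the child's name yields the child's key, so handing out a single
    folder key grants the entire subtree in constant time."""
    return hashlib.sha256(parent_key + child_name.encode()).digest()

# Toy hierarchy: granting `docs` implicitly grants everything beneath it.
root = hashlib.sha256(b"demo-root-secret").digest()
docs = child_key(root, "docs")
report = child_key(docs, "report.txt")
```

Anyone holding `docs` can derive `report` without further interaction, while holders of `report` alone cannot walk back up to `docs` or `root`.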

[Go to top]

Combating Hidden Action in Unstructured Peer-to-Peer Systems (PDF)
by Qi Zhao, Jianzhong Zhang, and Jingdong Xu.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

In unstructured peer-to-peer systems, cooperation by the intermediate peers is essential for the success of queries. However, intermediate peers may choose to forward packets at a low priority or not forward the packets at all, which is referred to as peers' hidden action. Hidden action may lead to a significant decrease in search efficiency. In contrast to building a global system with reputations or economics, we proposed MSSF, an improved search method, to help queries route around the peers with hidden action. MSSF does not need to check other peers' behavior. It automatically adapts to change query routes according to the previous query results. Simulation results show that MSSF is more robust than Gnutella flooding when peers with hidden action increase

[Go to top]

Attribute-based encryption for fine-grained access control of encrypted data (PDF)
by Vipul Goyal, Omkant Pandey, Amit Sahai, and Brent Waters.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumes Hierarchical Identity-Based Encryption (HIBE)

[Go to top]

Access Control in Peer-to-Peer Storage Systems
by Erol Koç.
Master's Thesis, Eidgenössische Technische Hochschule Zürich (ETH), October 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Timing Analysis in Low-Latency Mix Networks: Attacks and Defenses (PDF)
by Vitaly Shmatikov and Ming-Hsiu Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mix networks are a popular mechanism for anonymous Internet communications. By routing IP traffic through an overlay chain of mixes, they aim to hide the relationship between its origin and destination. Using a realistic model of interactive Internet traffic, we study the problem of defending low-latency mix networks against attacks based on correlating inter-packet intervals on two or more links of the mix chain. We investigate several attack models, including an active attack which involves adversarial modification of packet flows in order to fingerprint them, and analyze the tradeoffs between the amount of cover traffic, extra latency, and anonymity properties of the mix network. We demonstrate that previously proposed defenses are either ineffective, or impose a prohibitively large latency and/or bandwidth overhead on communicating applications. We propose a new defense based on adaptive padding

[Go to top]

SybilGuard: defending against sybil attacks via social networks (PDF)
by Haifeng Yu, Michael Kaminsky, Phillip B. Gibbons, and Abraham Flaxman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer and other decentralized, distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack, a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system, the malicious user is able to "out vote" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks. Our protocol is based on the "social network" among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately small "cut" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create. We show the effectiveness of SybilGuard both analytically and experimentally.
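The "disproportionately small cut" observation can be made concrete on a toy graph: the attacker can mint arbitrarily many sybil identities, but every sybil-to-honest path must cross the few human-established attack edges. The graph below is a made-up example, not data from the paper.

```python
# Toy illustration of SybilGuard's key structural assumption: sybil
# identities are cheap, but edges into the honest region (attack edges)
# require human-established trust and are therefore scarce.
honest = {"a", "b", "c", "d"}
sybil = {f"s{i}" for i in range(100)}                # cheap to create
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"),
         ("d", "s0")}                                # one costly attack edge
edges |= {(f"s{i}", f"s{i+1}") for i in range(99)}   # sybil-only chain

# The cut between the sybil and honest regions stays tiny no matter
# how many sybil nodes exist.
attack_edges = [e for e in edges if (e[0] in honest) != (e[1] in honest)]
print(len(sybil), len(attack_edges))                 # 100 1
```

SybilGuard's random-route intersection protocol exploits exactly this: routes from honest nodes rarely traverse the small cut, bounding how many sybil identities get accepted.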

[Go to top]

Scalable Routing in Sensor Actuator Networks with Churn
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Routing in wireless networks is inherently difficult since their network topologies are typically unstructured and unstable. Therefore, many routing protocols for ad-hoc networks and sensor networks revert to flooding to acquire routes to previously unknown destinations. However, such an approach does not scale to large networks, especially when nodes need to communicate with many different destinations. This paper advocates a novel approach, the scalable source routing (SSR) protocol. It combines overlay-like routing in a virtual network structure with source routing in the physical network structure. As a consequence, SSR can efficiently provide the routing semantics of a structured routing overlay, making it an efficient basis for the scalable implementation of fully decentralized applications. In T. Fuhrmann (2005) it has been demonstrated that SSR can almost entirely avoid flooding, thus leading to a memory- and message-efficient routing mechanism for large unstructured networks. This paper extends SSR to unstable networks, i.e., networks with churn where nodes frequently join and leave, the latter potentially ungracefully.

[Go to top]

Breaking Four Mix-related Schemes Based on Universal Re-encryption (PDF)
by George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Universal Re-encryption allows El-Gamal ciphertexts to be re-encrypted without knowledge of their corresponding public keys. This has made it an enticing building block for anonymous communications protocols. In this work we analyze four schemes related to mix networks that make use of Universal Re-encryption and find serious weaknesses in all of them. Universal Re-encryption of signatures is open to existential forgery; two-mix schemes can be fully compromised by a passive adversary observing a single message close to the sender; the fourth scheme, the rWonGoo anonymous channel, turns out to be less secure than the original Crowds scheme, on which it is based. Our attacks make extensive use of unintended services provided by the network nodes acting as decryption and re-routing oracles. Finally, our attacks against rWonGoo demonstrate that anonymous channels are not automatically composable: using two of them in a careless manner makes the system more vulnerable to attack
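The Universal Re-encryption primitive that these attacks target can be sketched over a toy group: an ElGamal ciphertext is paired with an encryption of the identity element under the same key, and anyone can re-randomize both parts without knowing the public key. Parameters here are deliberately tiny and insecure; this shows the mechanism only, not the analyzed schemes or the attacks.

```python
# Minimal sketch of Universal Re-encryption (toy parameters, NOT secure).
import random

random.seed(1)
p = 2**13 - 1        # 8191, a small Mersenne prime; illustrative modulus
g = 17

x = random.randrange(2, p - 1)          # secret key
h = pow(g, x, p)                        # public key

def encrypt(m):
    k0, k1 = random.randrange(1, p - 1), random.randrange(1, p - 1)
    # Ordinary ElGamal pair, plus an encryption of 1 under the same key.
    return (pow(g, k0, p), m * pow(h, k0, p) % p,
            pow(g, k1, p), pow(h, k1, p))

def reencrypt(ct):
    a0, b0, a1, b1 = ct
    r0, r1 = random.randrange(1, p - 1), random.randrange(1, p - 1)
    # Re-randomize using only the "unit" component: no public key needed.
    return (a0 * pow(a1, r0, p) % p, b0 * pow(b1, r0, p) % p,
            pow(a1, r1, p), pow(b1, r1, p))

def decrypt(ct):
    a0, b0, _, _ = ct
    return b0 * pow(a0, p - 1 - x, p) % p   # b0 / a0^x via Fermat

ct = encrypt(42)
ct2 = reencrypt(reencrypt(ct))
print(decrypt(ct2))   # 42, recovered after two key-less re-encryptions
```

It is exactly this "anyone can transform a ciphertext" property that the paper turns against the four schemes, using network nodes as decryption and re-routing oracles.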

[Go to top]

2Fast: Collaborative Downloads in P2P Networks (PDF)
by Pawel Garbacki, Alexandru Iosup, Dick H. J. Epema, and Maarten van Steen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P systems that rely on the voluntary contribution of bandwidth by the individual peers may suffer from free riding. To address this problem, mechanisms enforcing fairness in bandwidth sharing have been designed, usually by limiting the download bandwidth to the available upload bandwidth. As in real environments the latter is much smaller than the former, these mechanisms severely affect the download performance of most peers. In this paper we propose a system called 2Fast, which solves this problem while preserving the fairness of bandwidth sharing. In 2Fast, we form groups of peers that collaborate in downloading a file on behalf of a single group member, which can thus use its full download bandwidth. A peer in our system can use its currently idle bandwidth to help other peers in their ongoing downloads, and get in return help during its own downloads. We assess the performance of 2Fast analytically and experimentally, the latter in both real and simulated environments. We find that in realistic bandwidth limit settings, 2Fast improves the download speed by up to a factor of 3.5 in comparison to state-of-the-art P2P download protocols

[Go to top]

Minimizing churn in distributed systems (PDF)
by Brighten Godfrey, S Shenker, and Ion Stoica.
In SIGCOMM Computer Communication Review 36, August 2006, pages 147-158. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A pervasive requirement of distributed systems is to deal with churn: change in the set of participating nodes due to joins, graceful leaves, and failures. A high churn rate can increase costs or decrease service quality. This paper studies how to reduce churn by selecting which subset of a set of available nodes to use. First, we provide a comparison of the performance of a range of different node selection strategies in five real-world traces. Among our findings is that the simple strategy of picking a uniform-random replacement whenever a node fails performs surprisingly well. We explain its performance through analysis in a stochastic model. Second, we show that a class of strategies, which we call "Preference List" strategies, arise commonly as a result of optimizing for a metric other than churn, and produce high churn relative to more randomized strategies under realistic node failure patterns. Using this insight, we demonstrate and explain differences in performance for designs that incorporate varying degrees of randomization. We give examples from a variety of protocols, including anycast, overlay multicast, and distributed hash tables. In many cases, simply adding some randomization can go a long way towards reducing churn.

[Go to top]

Peer counting and sampling in overlay networks: random walk methods (PDF)
by Laurent Massoulié, Erwan Le Merrer, Anne-Marie Kermarrec, and Ayalvadi Ganesh.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this article we address the problem of counting the number of peers in a peer-to-peer system, and more generally of aggregating statistics of individual peers over the whole system. This functionality is useful in many applications, but hard to achieve when each node has only a limited, local knowledge of the whole system. We propose two generic techniques to solve this problem. The Random Tour method is based on the return time of a continuous time random walk to the node originating the query. The Sample and Collide method is based on counting the number of random samples gathered until a target number of redundant samples are obtained. It is inspired by the "birthday paradox" technique of [6], upon which it improves by achieving a target variance with fewer samples. The latter method relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling algorithm is of independent interest. It can be used, for instance, for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. We analyse the complexity and accuracy of the two methods. We illustrate in particular how expansion properties of the overlay affect their performance
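The birthday-paradox idea behind Sample and Collide can be sketched in a few lines: draw uniform peer samples until a target number of repeats is seen, then invert the expected collision count to estimate the population size. This assumes a perfect uniform sampler (the paper obtains one via a continuous-time random walk) and uses a simplified estimator; names and parameters are illustrative.

```python
# Hedged sketch of birthday-paradox size estimation in the spirit of
# Sample and Collide, assuming an ideal uniform peer sampler.
import random

def estimate_size(sample_peer, target_collisions=50):
    seen, collisions, draws = set(), 0, 0
    while collisions < target_collisions:
        peer = sample_peer()
        draws += 1
        if peer in seen:
            collisions += 1          # a redundant sample
        else:
            seen.add(peer)
    # After t uniform draws from n peers, expected collisions ~ t^2 / (2n),
    # so invert: n ~ t^2 / (2 * collisions).
    return draws * draws // (2 * collisions)

random.seed(7)
n = 10_000
est = estimate_size(lambda: random.randrange(n))
print(est)   # typically within ~20% of 10000 for these parameters
```

Raising `target_collisions` lowers the estimator's variance at the cost of more samples, which is the trade-off the paper quantifies against the Random Tour method.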

[Go to top]

M2: Multicasting Mixes for Efficient and Anonymous Communication (PDF)
by Ginger Perng, Michael K. Reiter, and Chenxi Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a technique to achieve anonymous multicasting in mix networks to deliver content from producers to consumers. Employing multicast allows content producers to send (and mixes to forward) information to multiple consumers without repeating work for each individual consumer. In our approach, consumers register interest for content by creating paths in the mix network to the content's producers. When possible, these paths are merged in the network so that paths destined for the same producer share a common path suffix to the producer. When a producer sends content, the content travels this common suffix toward its consumers (in the reverse direction) and "branches" into multiple messages when necessary. We detail the design of this technique and then analyze the unlinkability of our approach against a global, passive adversary who controls both the producer and some mixes. We show that there is a subtle degradation of unlinkability that arises from multicast. We discuss techniques to tune our design to mitigate this degradation while retaining the benefits of multicast

[Go to top]

Valet Services: Improving Hidden Servers with a Personal Touch (PDF)
by Lasse Øverlier and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Location hidden services have received increasing attention as a means to resist censorship and protect the identity of service operators. Research and vulnerability analysis to date has mainly focused on how to locate the hidden service. But while the hiding techniques have improved, almost no progress has been made in increasing the resistance against DoS attacks directly or indirectly on hidden services. In this paper we suggest improvements that should be easy to adopt within the existing hidden service design, improvements that will both reduce vulnerability to DoS attacks and add QoS as a service option. In addition we show how to hide not just the location but the existence of the hidden service from everyone but the users knowing its service address. Not even the public directory servers will know how a private hidden service can be contacted, or know it exists

[Go to top]

On the Security of the Tor Authentication Protocol (PDF)
by Ian Goldberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is a popular anonymous Internet communication system, used by an estimated 250,000 users to anonymously exchange over five terabytes of data per day. The security of Tor depends on properly authenticating nodes to clients, but Tor uses a custom protocol, rather than an established one, to perform this authentication. In this paper, we provide a formal proof of security of this protocol, in the random oracle model, under reasonable cryptographic assumptions

[Go to top]

Privacy for Public Transportation (PDF)
by Thomas S. Heydt-Benjamin, Hee-Jin Chae, Benessa Defend, and Kevin Fu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose an application of recent advances in e-cash, anonymous credentials, and proxy re-encryption to the problem of privacy in public transit systems with electronic ticketing. We discuss some of the interesting features of transit ticketing as a problem domain, and provide an architecture sufficient for the needs of a typical metropolitan transit system. Our system maintains the security required by the transit authority and the user while significantly increasing passenger privacy. Our hybrid approach to ticketing allows use of passive RFID transponders as well as higher powered computing devices such as smartphones or PDAs. We demonstrate security and privacy features offered by our hybrid system that are unavailable in a homogeneous passive transponder architecture, and which are advantageous for users of passive as well as active devices

[Go to top]

Peer to peer size estimation in large and dynamic networks: A comparative study (PDF)
by Erwan Le Merrer, Anne-Marie Kermarrec, and Laurent Massoulié.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

As the size of distributed systems keeps growing, the peer to peer communication paradigm has been identified as the key to scalability. Peer to peer overlay networks are characterized by their self-organizing capabilities, resilience to failure and fully decentralized control. In a peer to peer overlay, no entity has a global knowledge of the system. As much as this property is essential to ensure the scalability, monitoring the system under such circumstances is a complex task. Yet, estimating the size of the system is a core functionality for many distributed applications, for parameter setting or monitoring purposes. In this paper, we propose a comparative study between three algorithms that estimate in a fully decentralized way the size of a peer to peer overlay. Candidate approaches are generally applicable irrespective of the underlying structure of the peer to peer overlay. The paper reports a head to head comparison of system size estimation algorithms. The simulations have been conducted using the same simulation framework and inputs and highlight the differences in cost and accuracy of the estimation between the algorithms, both in static and dynamic settings.

[Go to top]

Linking Anonymous Transactions: The Consistent View Attack (PDF)
by Andreas Pashalidis and Bernd Meyer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we study a particular attack that may be launched by cooperating organisations in order to link the transactions and the pseudonyms of the users of an anonymous credential system. The results of our analysis are both positive and negative. The good (resp. bad) news, from a privacy protection (resp. evidence gathering) viewpoint, is that the attack may be computationally intensive. In particular, it requires solving a problem that is polynomial time equivalent to ALLSAT. The bad (resp. good) news is that a typical instance of this problem may be efficiently solvable.

[Go to top]

Incentive-compatible interdomain routing (PDF)
by Joan Feigenbaum, Vijay Ramachandran, and Michael Schapira.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP). Using BGP, autonomous systems can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains' autonomy in network operations and in business decisions, but it comes at a price: The interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence. Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy. In addition to taking this engineering or "protocol-design" approach, researchers have approached interdomain routing from an economic or "mechanism-design" point of view. It is known that lowest-cost-path (LCP) routing can be implemented in a truthful, BGP-compatible manner but that several other natural classes of routing policies cannot. In this paper, we present a natural class of interdomain-routing policies that is more realistic than LCP routing and admits incentive-compatible, BGP-compatible implementation. We also present several positive steps toward a general theory of incentive-compatible interdomain routing.

[Go to top]

Improving Sender Anonymity in a Structured Overlay with Imprecise Routing (PDF)
by Giuseppe Ciaccio.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In the framework of peer to peer distributed systems, the problem of anonymity in structured overlay networks remains a quite elusive one. It is especially unclear how to evaluate and improve sender anonymity, that is, untraceability of the peers who issue messages to other participants in the overlay. In a structured overlay organized as a chordal ring, we have found that a technique originally developed for recipient anonymity also improves sender anonymity. The technique is based on the use of imprecise entries in the routing tables of each participating peer. Simulations show that the sender anonymity, as measured in terms of average size of anonymity set, decreases slightly if the peers use imprecise routing; yet, the anonymity takes a better distribution, with good anonymity levels becoming more likely at the expense of very high and very low levels. A better quality of anonymity service is thus provided to participants.

[Go to top]

Improving Robustness of Peer-to-Peer Streaming with Incentives (PDF)
by Vinay Pai and Alexander E. Mohr.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper we argue that a robust incentive mechanism is important in a real-world peer-to-peer streaming system to ensure that nodes contribute as much upload bandwidth as they can. We show that simple tit-for-tat mechanisms which work well in file-sharing systems like BitTorrent do not perform well given the additional delay and bandwidth constraints imposed by live streaming. We present preliminary experimental results for an incentive mechanism based on the Iterated Prisoner's Dilemma problem that allows all nodes to download with low packet loss when there is sufficient capacity in the system, but when the system is resource-starved, nodes that contribute upload bandwidth receive better service than those that do not. Moreover, our algorithm does not require a node to rely on any information other than direct observations of its neighbors' behavior towards it.
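A direct-reciprocity allocation rule in this spirit can be sketched as follows: each node splits its upload capacity among neighbors in proportion to the upload it has directly observed from them, with a small baseline so newcomers are not starved. The proportional rule and the baseline parameter are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch: serve neighbors in proportion to directly observed
# contributions (no third-party reports), plus a newcomer baseline.

def allocate_upload(capacity, observed_upload, baseline=0.05):
    """Split upload capacity among neighbors by observed contribution."""
    n = len(observed_upload)
    reserved = capacity * baseline * n            # newcomer allowance
    merit_pool = capacity - reserved              # split by contribution
    total = sum(observed_upload.values()) or 1
    return {peer: capacity * baseline + merit_pool * contrib / total
            for peer, contrib in observed_upload.items()}

shares = allocate_upload(100.0, {"p1": 30.0, "p2": 10.0, "p3": 0.0})
print(shares)   # contributors p1 and p2 receive most of the capacity
```

Under scarcity this rule reproduces the abstract's qualitative behavior: contributors get better service than free-riders, while no peer is cut off entirely.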

[Go to top]

Ignoring the Great Firewall of China (PDF)
by Richard Clayton, Steven J. Murdoch, and Robert N. M. Watson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The so-called Great Firewall of China operates, in part, by inspecting TCP packets for keywords that are to be blocked. If the keyword is present, TCP reset packets (viz: with the RST flag set) are sent to both endpoints of the connection, which then close. However, because the original packets are passed through the firewall unscathed, if the endpoints completely ignore the firewall's resets, then the connection will proceed unhindered. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block further connections from the same machine. This latter behaviour can be leveraged into a denial-of-service attack on third-party machines
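The endpoint countermeasure the paper evaluates, ignoring the firewall's forged resets, can be approximated on a Linux endpoint with a single netfilter rule. This is an illustrative config fragment only; as the paper discusses, discarding all inbound RSTs also drops legitimate resets and has side effects.

```shell
# Drop all inbound TCP packets with the RST flag set, so injected
# resets are ignored by the local TCP stack (illustrative sketch only;
# legitimate resets are discarded too).
iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP
```

Both endpoints must apply such filtering for a keyword-triggered connection to proceed, since the firewall sends resets in both directions.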

[Go to top]

Havelaar: A Robust and Efficient Reputation System for Active Peer-to-Peer Systems (PDF)
by Dominik Grolimund, Luzius Meisser, Stefan Schmid, and Roger Wattenhofer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (p2p) systems have the potential to harness huge amounts of resources. Unfortunately, however, it has been shown that most of today's p2p networks suffer from a large fraction of free-riders, which mostly consume resources without contributing much to the system themselves. This results in an overall performance degradation. One particularly interesting resource is bandwidth. Thereby, a service differentiation approach seems appropriate, where peers contributing higher upload bandwidth are rewarded with higher download bandwidth in return. Keeping track of the contribution of each peer in an open, decentralized environment, however, is not trivial; many systems which have been proposed are susceptible to false reports. Besides being prone to attacks, some solutions have a large communication and computation overhead, which can even be linear in the number of transactions, an unacceptable burden in practical and active systems. In this paper, we propose a reputation system which overcomes this scaling problem. Our analytical and simulation results are promising, indicating that the mechanism is accurate and efficient, especially when applied to systems where there are lots of transactions (e.g., due to erasure coding).

[Go to top]

The Economics of Mass Surveillance and the Questionable Value of Anonymous Communications (PDF)
by George Danezis and Bettina Wittneben.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a model of surveillance based on social network theory, where observing one participant also leaks some information about third parties. We examine how many nodes an adversary has to observe in order to extract information about the network, but also how the method for choosing these nodes (target selection) greatly influences the resulting intelligence. Our results provide important insights into the actual security of anonymous communication, and their ability to minimise surveillance and disruption in a social network. They also allow us to draw interesting policy conclusions from published interception figures, and get a better estimate of the amount of privacy invasion and the actual volume of surveillance taking place

[Go to top]

Breaking the Collusion Detection Mechanism of MorphMix (PDF)
by Parisa Tabriz and Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

MorphMix is a peer-to-peer circuit-based mix network designed to provide low-latency anonymous communication. MorphMix nodes incrementally construct anonymous communication tunnels based on recommendations from other nodes in the system; this P2P approach allows it to scale to millions of users. However, by allowing unknown peers to aid in tunnel construction, MorphMix is vulnerable to colluding attackers that only offer other attacking nodes in their recommendations. To avoid building corrupt tunnels, MorphMix employs a collusion detection mechanism to identify this type of misbehavior. In this paper, we challenge the assumptions of the collusion detection mechanism and demonstrate that colluding adversaries can compromise a significant fraction of all anonymous tunnels, and in some cases, a majority of all tunnels built. Our results suggest that mechanisms based solely on a node's local knowledge of the network are not sufficient to solve the difficult problem of detecting colluding adversarial behavior in a P2P system and that more sophisticated schemes may be needed

[Go to top]

Blending Different Latency Traffic with Alpha-Mixing (PDF)
by Roger Dingledine, Andrei Serjantov, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Currently fielded anonymous communication systems either introduce too much delay and thus have few users and little security, or have many users but too little delay to provide protection against large attackers. By combining the user bases into the same network, and ensuring that all traffic is mixed together, we hope to lower delay and improve anonymity for both sets of users. Alpha-mixing is an approach that can be added to traditional batching strategies to let senders specify for each message whether they prefer security or speed. Here we describe how to add alpha-mixing to various mix designs, and show that mix networks with this feature can provide increased anonymity for all senders in the network. Along the way we encounter subtle issues to do with the attacker's knowledge of the security parameters of the users
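The core alpha-mixing idea can be sketched as a pool mix in which each message carries a sender-chosen alpha: every mixing round decrements all alphas and flushes only the messages that have reached zero, so senders trade latency (high alpha) for security. Batching details and the paper's concrete strategies are omitted; class and method names are illustrative.

```python
# Minimal sketch of an alpha-mix: per-message security parameter alpha,
# decremented each round; a message is flushed when its alpha hits 0.

class AlphaMix:
    def __init__(self):
        self.pool = []                      # list of [alpha, message]

    def accept(self, message, alpha):
        self.pool.append([alpha, message])

    def fire_round(self):
        """One mixing round: decrement alphas, flush the ripe messages."""
        for entry in self.pool:
            entry[0] -= 1
        ripe = [m for a, m in self.pool if a <= 0]
        self.pool = [e for e in self.pool if e[0] > 0]
        return ripe

mix = AlphaMix()
mix.accept("fast", alpha=1)     # latency-sensitive sender
mix.accept("careful", alpha=3)  # security-conscious sender
print(mix.fire_round())         # ['fast']
print(mix.fire_round())         # []
print(mix.fire_round())         # ['careful']
```

Because low-alpha and high-alpha traffic share the same pool, the impatient messages enlarge the anonymity sets of the patient ones, which is the combined-user-base benefit the abstract argues for.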

[Go to top]

Anonymity Loves Company: Usability and the Network Effect (PDF)
by Roger Dingledine and Nick Mathewson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A growing field of literature is studying how usability impacts security [4]. One class of security software is anonymizing networks: overlay networks on the Internet that provide privacy by letting users transact (for example, fetch a web page or send an email) without revealing their communication partners. In this position paper we focus on the network effects of usability on privacy and security: usability is a factor as before, but the size of the user base also becomes a factor. We show that in anonymizing networks, even if you were smart enough and had enough time to use every system perfectly, you would nevertheless be right to choose your system based in part on its usability for other users.

[Go to top]

Locating Hidden Servers (PDF)
by Lasse Øverlier and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Hidden services were deployed on the Tor anonymous communication network in 2004. Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship. We present fast and cheap attacks that reveal the location of a hidden server. Using a single hostile Tor node we have located deployed hidden servers in a matter of minutes. Although we examine hidden services over Tor, our results apply to any client using a variety of anonymity networks. In fact, these are the first actual intersection attacks on any deployed public network: thus confirming general expectations from prior theory and simulation. We recommend changes to route selection design and implementation for Tor. These changes require no operational increase in network overhead and are simple to make; but they prevent the attacks we have demonstrated. They have been implemented

[Go to top]

Deterring Voluntary Trace Disclosure in Re-encryption Mix Networks (PDF)
by Philippe Golle, XiaoFeng Wang, Markus Jakobsson, and Alex Tsow.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mix-networks, a family of anonymous messaging protocols, have been engineered to withstand a wide range of theoretical internal and external adversaries. An undetectable insider threat, voluntary partial trace disclosure by server administrators, remains a troubling source of vulnerability. An administrator's cooperation could be the result of coercion, bribery, or a simple change of interests. While eliminating this insider threat is impossible, it is feasible to deter such unauthorized disclosures by bundling them with additional penalties. We abstract these costs with collateral keys, which grant access to customizable resources. This article introduces the notion of trace-deterring mix-networks, which encode collateral keys for every server-node into every end-to-end message trace. The network reveals no keying material when the input-to-output transitions of individual servers remain secret. Two permutation strategies for encoding key information into traces, mix-and-flip and all-or-nothing, are presented. We analyze their trade-offs with respect to computational efficiency, anonymity sets, and colluding message senders. Our techniques have sufficiently low overhead for deployment in large-scale elections, thereby providing a sort of publicly verifiable privacy guarantee.

[Go to top]

PULSE, a Flexible P2P Live Streaming System (PDF)
by Fabio Pianese, Joaquín Keller, and E W Biersack.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

With the widespread availability of inexpensive broadband Internet connections for home-users, a large number of bandwidth-intensive applications previously not feasible have now become practical. This is the case for multimedia live streaming, for which end-user's dial-up/ISDN modem connections once were the bottleneck. The bottleneck is now mostly found on the server side: the bandwidth required for serving many clients at once is large and thus very costly to the broadcasting entity. Peer-to-peer systems for on-demand and live streaming have proved to be an encouraging solution, since they can shift the burden of content distribution from the server to the users of the network. In this work we introduce PULSE, a P2P system for live streaming whose main goals are flexibility, scalability, and robustness. We present the fundamental concepts that stand behind the design of PULSE along with its intended global behavior, and describe in detail the main algorithms running on its nodes

[Go to top]

Fair Trading of Information: A Proposal for the Economics of Peer-to-Peer Systems (PDF)
by Kenji Saito, Eiichi Morino, and Jun Murai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A P2P currency can be a powerful tool for promoting exchanges in a trusted way that make use of under-utilized resources both in computer networks and in real life. There are three classes of resource that can be exchanged in a P2P system: atoms (ex. physical goods by way of auctions), bits (ex. data files) and presences (ex. time slots for computing resources such as CPU, storage or bandwidth). If these are equally treated as commodities, however, the economy of the system is likely to collapse, because data files can be reproduced at a negligibly small cost whereas time slots for computing resources cannot even be stockpiled for future use. This paper clarifies this point by simulating a small world of traders, and proposes a novel way for applying the "reduction over time" feature[14] of i-WAT[11], a P2P currency. In the proposed new economic order (NEO), bits are freely shared among participants, whereas their producers are supported by peers, being given freedom to issue exchange tickets whose values are reduced over time
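The "reduction over time" feature can be sketched as a simple demurrage schedule: an exchange ticket's face value decays as it ages, discouraging hoarding of tickets the way stockpiling computing-resource time slots is impossible. The linear schedule and lifetime parameter below are illustrative assumptions, not the i-WAT specification.

```python
# Hedged sketch of ticket demurrage: full value when freshly issued,
# decaying linearly to zero over an assumed lifetime.

def ticket_value(face_value, age_days, lifetime_days=1000):
    """Linearly reduced value: full at age 0, zero at lifetime_days."""
    remaining = max(lifetime_days - age_days, 0)
    return face_value * remaining / lifetime_days

print(ticket_value(1000, 0))     # 1000.0
print(ticket_value(1000, 100))   # 900.0
print(ticket_value(1000, 2000))  # 0.0 -- fully reduced
```

A ticket issuer thus owes less the longer redemption is deferred, which shifts the incentive toward circulating tickets rather than accumulating them.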

[Go to top]

Defending the Sybil Attack in P2P Networks: Taxonomy, Challenges, and a Proposal for Self-Registration (PDF)
by Jochen Dinger and Hannes Hartenstein.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The robustness of Peer-to-Peer (P2P) networks, in particular of DHT-based overlay networks, suffers significantly when a Sybil attack is performed. We tackle the issue of Sybil attacks from two sides. First, we clarify, analyze, and classify the P2P identifier assignment process. By clearly separating network participants from network nodes, two challenges of P2P networks under a Sybil attack become obvious: i) stability over time, and ii) identity differentiation. Second, as a starting point for a quantitative analysis of time-stability of P2P networks under Sybil attacks and under some assumptions with respect to identity differentiation, we propose an identity registration procedure called self-registration that makes use of the inherent distribution mechanisms of a P2P network

[Go to top]

Taxonomy of trust: Categorizing P2P reputation systems (PDF)
by Sergio Marti and Hector Garcia-Molina.
In Management in Peer-to-Peer Systems 50(4), March 2006, pages 472-484. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The field of peer-to-peer reputation systems has exploded in the last few years. Our goal is to organize existing ideas and work to facilitate system design. We present a taxonomy of reputation system components, their properties, and discuss how user behavior and technical constraints can conflict. In our discussion, we describe research that exemplifies compromises made to deliver a useable, implementable system

[Go to top]

A survey on networking games in telecommunications (PDF)
by Eitan Altman, Thomas Boulogne, Rachid El-Azouzi, Tania Jiménez, and Laura Wynter.
In Computers & Operations Research 33, February 2006, pages 286-311. (BibTeX entry) (Download bibtex record)
(direct link)

In this survey, we summarize different modeling and solution concepts of networking games, as well as a number of different applications in telecommunications that make use of or can make use of networking games. We identify some of the mathematical challenges and methodologies that are involved in these problems. We include here work that has relevance to networking games in telecommunications from other areas, in particular from transportation planning

[Go to top]

Parameterized graph separation problems (PDF)
by Dániel Marx.
In Theoretical Computer Science 351, February 2006, pages 394-406. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider parameterized problems where some separation property has to be achieved by deleting as few vertices as possible. The following five problems are studied: delete k vertices such that (a) each of the given l terminals is separated from the others, (b) each of the given l pairs of terminals is separated, (c) exactly l vertices are cut away from the graph, (d) exactly l connected vertices are cut away from the graph, (e) the graph is separated into at least l components. We show that if both k and l are parameters, then (a), (b) and (d) are fixed-parameter tractable, while (c) and (e) are W[1]-hard

[Go to top]

On Object Maintenance in Peer-to-Peer Systems (PDF)
by Kiran Tati and Geoffrey M. Voelker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we revisit object maintenance in peer-to-peer systems, focusing on how temporary and permanent churn impact the overheads associated with object maintenance. We have a number of goals: to highlight how different environments exhibit different degrees of temporary and permanent churn; to provide further insight into how churn in different environments affects the tuning of object maintenance strategies; and to examine how object maintenance and churn interact with other constraints such as storage capacity. When possible, we highlight behavior independent of particular object maintenance strategies. When an issue depends on a particular strategy, though, we explore it in the context of a strategy in essence similar to TotalRecall, which uses erasure coding, lazy repair of data blocks, and random indirect placement (we also assume that repairs incorporate remaining blocks rather than regenerating redundancy from scratch)

[Go to top]

An Experimental Study of the Skype Peer-to-Peer VoIP System (PDF)
by Saikat Guha, Neil Daswani, and Ravi Jain.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Despite its popularity, relatively little is known about the traffic characteristics of the Skype VoIP system and how they differ from other P2P systems. We describe an experimental study of Skype VoIP traffic conducted over a one-month period, where over 30 million datapoints were collected regarding the population of online clients, the number of supernodes, and their traffic characteristics. The results indicate that although the structure of the Skype system appears to be similar to other P2P systems, particularly KaZaA, there are several significant differences in traffic. The number of active clients shows diurnal and work-week behavior, correlating with normal working hours regardless of geography. The population of supernodes in the system tends to be relatively stable; thus node churn, a significant concern in other systems, seems less problematic in Skype. The typical bandwidth load on a supernode is relatively low, even if the supernode is relaying VoIP traffic. The paper aims to aid further understanding of a significant, successful P2P VoIP system, as well as provide experimental data that may be useful for design and modeling of such systems. These results also imply that the nature of a VoIP P2P system like Skype differs fundamentally from earlier P2P systems that are oriented toward file-sharing, and music and video download applications, and deserves more attention from the research community

[Go to top]

Curve25519: new Diffie-Hellman speed records (PDF)
by Daniel J. Bernstein.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Network Coding: an Instant Primer (PDF)
by Christina Fragouli, Jean-Yves Le Boudec, and Jörg Widmer.
In SIGCOMM Computer Communication Review 36, January 2006, pages 63-68. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network coding is a new research area that may have interesting applications in practical networking systems. With network coding, intermediate nodes may send out packets that are linear combinations of previously received information. There are two main benefits of this approach: potential throughput improvements and a high degree of robustness. Robustness translates into loss resilience and facilitates the design of simple distributed algorithms that perform well, even if decisions are based only on partial information. This paper is an instant primer on network coding: we explain what network coding does and how it does it. We also discuss the implications of theoretical results on network coding for realistic settings and show how network coding can be used in practice
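
The core mechanism (intermediate nodes forward linear combinations of packets together with their coefficient vectors, and a receiver recovers the originals by Gaussian elimination once it has enough independent combinations) can be sketched over GF(2). This is our own toy illustration, not the paper's construction; payloads are modeled as Python integers and XOR plays the role of addition:

```python
import random

def encode(packets, n_coded, rng):
    """Emit coded packets: (coefficient bitmask, XOR of the selected packets)."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        mask = rng.randrange(1, 1 << k)  # random nonzero GF(2) coefficient vector
        payload = 0
        for i in range(k):
            if (mask >> i) & 1:
                payload ^= packets[i]
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Recover the k originals by Gaussian elimination over GF(2),
    or None if the received combinations do not have full rank."""
    basis = {}  # pivot bit position -> (mask, payload)
    for mask, payload in coded:
        while mask:  # reduce the row against the current basis
            p = mask.bit_length() - 1
            if p not in basis:
                basis[p] = (mask, payload)
                break
            bm, bp = basis[p]
            mask ^= bm
            payload ^= bp
    if len(basis) < k:
        return None
    out = [0] * k
    for p in sorted(basis):  # back-substitute, lowest pivot first
        mask, payload = basis[p]
        for q in range(p):
            if (mask >> q) & 1:
                qm, qp = basis[q]
                mask ^= qm
                payload ^= qp
        basis[p] = (mask, payload)
        out[p] = payload
    return out
```

For example, the three combinations with coefficient masks 0b001, 0b011 and 0b111 are triangular and therefore always decodable, regardless of which node mixed them.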

[Go to top]

i-WAT: The Internet WAT System–An Architecture for Maintaining Trust and Facilitating Peer-to-Peer Barter Relationships (PDF)
by Kenji Saito.
Ph.D. thesis, Keio University, January 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Improving traffic locality in BitTorrent via biased neighbor selection (PDF)
by Ruchir Bindal, Pei Cao, William Chan, Jan Medved, George Suwala, Tony Bates, and Amy Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) applications such as BitTorrent ignore traffic costs at ISPs and generate a large amount of cross-ISP traffic. As a result, ISPs often throttle BitTorrent traffic to control the cost. In this paper, we examine a new approach to enhance BitTorrent traffic locality, biased neighbor selection, in which a peer chooses the majority, but not all, of its neighbors from peers within the same ISP. Using simulations, we show that biased neighbor selection maintains the nearly optimal performance of BitTorrent in a variety of environments, and fundamentally reduces the cross-ISP traffic by eliminating the traffic's linear growth with the number of peers. Key to its performance is the rarest first piece replication algorithm used by BitTorrent clients. Compared with existing locality-enhancing approaches such as bandwidth limiting, gateway peers, and caching, biased neighbor selection requires no dedicated servers and scales to a large number of BitTorrent networks
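
The selection policy itself ("the majority, but not all, of its neighbors from the same ISP") is simple to state in code. The function name, peer representation and the 80/20 split in the usage below are our illustrative assumptions, not parameters from the paper:

```python
import random

def biased_neighbors(my_isp, candidates, k, internal_fraction, rng):
    """Pick k neighbors, drawing roughly `internal_fraction` of them from
    peers inside our own ISP and the remainder from other ISPs.
    Candidates are (peer_id, isp) tuples."""
    internal = [p for p in candidates if p[1] == my_isp]
    external = [p for p in candidates if p[1] != my_isp]
    n_internal = min(len(internal), round(k * internal_fraction))
    chosen = rng.sample(internal, n_internal)
    chosen += rng.sample(external, min(len(external), k - n_internal))
    return chosen
```

Keeping a few external neighbors preserves swarm connectivity while the internal majority keeps most piece exchanges inside the ISP.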

[Go to top]

Energy-aware lossless data compression
by Kenneth Barr and Krste Asanović.
In ACM Trans. Comput. Syst 24(3), January 2006, pages 250-291. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless transmission of a single bit can require over 1000 times more energy than a single computation. It can therefore be beneficial to perform additional computation to reduce the number of bits transmitted. If the energy required to compress data is less than the energy required to send it, there is a net energy savings and an increase in battery life for portable computers. This article presents a study of the energy savings possible by losslessly compressing data prior to transmission. A variety of algorithms were measured on a StrongARM SA-110 processor. This work demonstrates that, with several typical compression algorithms, there is actually a net energy increase when compression is applied before transmission. Reasons for this increase are explained and suggestions are made to avoid it. One such energy-aware suggestion is asymmetric compression, the use of one compression algorithm on the transmit side and a different algorithm for the receive path. By choosing the lowest-energy compressor and decompressor on the test platform, overall energy to send and receive data can be reduced by 11% compared with a well-chosen symmetric pair, or up to 57% over the default symmetric zlib scheme
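
The break-even condition the article analyzes, and the asymmetric-compression idea, can each be captured in a line or two. All names and the numbers in the usage are our illustrative assumptions, not the paper's StrongARM measurements:

```python
def net_saving_joules(raw_bits, compressed_bits, tx_energy_per_bit, compression_energy):
    """Net energy saved by compressing before transmission.
    Positive means compression pays off; negative reproduces the article's
    observation that compression can cost more energy than it saves."""
    return (raw_bits - compressed_bits) * tx_energy_per_bit - compression_energy

def pick_asymmetric(compressors, decompressors):
    """Asymmetric compression: choose the cheapest compressor for the
    transmit side and, independently, the cheapest decompressor for the
    receive side; they need not belong to the same algorithm.
    Both arguments are lists of (name, energy_joules) pairs."""
    tx = min(compressors, key=lambda c: c[1])
    rx = min(decompressors, key=lambda d: d[1])
    return tx[0], rx[0]
```

For instance, with illustrative costs `pick_asymmetric([("zlib", 2.0), ("lzo", 1.1)], [("zlib", 0.4), ("lzo", 0.6)])` selects a different algorithm for each direction.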

[Go to top]

Complementary currency innovations: Self-guarantee in peer-to-peer currencies (PDF)
by Mitra Ardron and Bernard Lietaer.
In International Journal of Community Currency Research 10, January 2006, pages 1-7. (BibTeX entry) (Download bibtex record)
(direct link)

The WAT system, as used in Japan, allows for businesses to issue their own tickets (IOUs) which can circulate as a complementary currency within a community. This paper proposes a variation on that model, where the issuer of a ticket can offer a guarantee, in the form of some goods or services. The difference in value, along with a reasonable acceptance that the issuer is capable of delivering the service or goods, allows for a higher degree of confidence in the ticket, and therefore a greater liquidity

[Go to top]

Verifiable shuffles: a formal model and a Paillier-based three-round construction with provable security
by Lan Nguyen, Rei Safavi-Naini, and Kaoru Kurosawa.
In International Journal of Information Security 5(4), 2006, pages 241-255. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A shuffle takes a list of ciphertexts and outputs a permuted list of re-encryptions of the input ciphertexts. Mix-nets, a popular method for anonymous routing, can be constructed from a sequence of shuffles and decryption. We propose a formal model for security of verifiable shuffles and a new verifiable shuffle system based on the Paillier encryption scheme, and prove its security in the proposed model. The model is general and can be extended to provide provable security for verifiable shuffle decryption
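
The shuffle primitive itself (permute and re-encrypt under Paillier) is easy to sketch; the verifiability proof, which is the paper's contribution, is omitted here. The tiny primes and all names below are our illustrative assumptions, and the parameters are far too small to be secure:

```python
import math
import random

# Toy Paillier setup: tiny primes for illustration only, not secure.
P_PRIME, Q_PRIME = 293, 433
N = P_PRIME * Q_PRIME
N2 = N * N
LAM = math.lcm(P_PRIME - 1, Q_PRIME - 1)
MU = pow(LAM, -1, N)

def _rand_unit(rng):
    while True:
        r = rng.randrange(1, N)
        if math.gcd(r, N) == 1:
            return r

def enc(m, rng):
    """Paillier encryption: c = (1+N)^m * r^N mod N^2."""
    return (pow(N + 1, m, N2) * pow(_rand_unit(rng), N, N2)) % N2

def dec(c):
    """Paillier decryption: L(c^lambda mod N^2) * mu mod N."""
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def reencrypt(c, rng):
    # Multiplying by a fresh encryption of zero changes the ciphertext
    # but not the plaintext (Paillier is homomorphic).
    return (c * pow(_rand_unit(rng), N, N2)) % N2

def shuffle_stage(ciphertexts, rng):
    """One mix stage: re-encrypt every ciphertext, then permute the list."""
    out = [reencrypt(c, rng) for c in ciphertexts]
    rng.shuffle(out)
    return out
```

After a stage, the output ciphertexts are unlinkable to the inputs without the permutation and randomness, yet they decrypt to the same multiset of messages.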

[Go to top]

Unconditionally Secure Constant-Rounds Multi-party Computation for Equality, Comparison, Bits and Exponentiation (PDF)
by Ivan Damgård, Matthias Fitzi, Eike Kiltz, Jesper Buus Nielsen, and Tomas Toft.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We show that if a set of players hold shares of a value a ∈ F_p for some prime p (where the set of shares is written [a]_p), it is possible to compute, in constant rounds and with unconditional security, sharings of the bits of a, i.e., compute sharings [a_0]_p, ..., [a_{l-1}]_p such that l = ⌈log2 p⌉, a_0, ..., a_{l-1} ∈ {0,1} and a = Σ_{i=0}^{l-1} a_i·2^i. Our protocol is secure against active adversaries and works for any linear secret sharing scheme with a multiplication protocol. The complexity of our protocol is O(l log l) invocations of the multiplication protocol for the underlying secret sharing scheme, carried out in O(1) rounds. This result immediately implies solutions to other long-standing open problems such as constant-rounds and unconditionally secure protocols for deciding whether a shared number is zero, comparing shared numbers, raising a shared number to a shared exponent and reducing a shared number modulo a shared modulus
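
The sharing algebra behind "sharings of the bits of a" can be grounded with plain additive secret sharing mod a prime. This shows only the reconstruction identity a = Σ a_i·2^i over shared bits, not the paper's constant-round protocol; the prime and all names are our assumptions:

```python
import random

P = 2_147_483_647  # a Mersenne prime, standing in for the field F_p

def share(x, n_parties, rng):
    """Additively share x mod P among n_parties: shares sum to x mod P."""
    parts = [rng.randrange(P) for _ in range(n_parties - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

def recombine_bits(bit_share_lists):
    """Given share lists for bits a_0..a_{l-1}, recover a = sum a_i * 2^i.
    (Recombination is linear, so it could equally be done share-wise
    without ever reconstructing the individual bits.)"""
    return sum(reconstruct(s) << i for i, s in enumerate(bit_share_lists)) % P
```

Because recombination is a linear function of the bit shares, each party can apply it locally to its own shares, which is what makes bit decomposition so useful as a building block.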

[Go to top]

A Trust Evaluation Framework in Distributed Networks: Vulnerability Analysis and Defense Against Attacks (PDF)
by Yan L. Sun, Zhu Han, Wei Yu, and K. J. Ray Liu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Evaluation of trustworthiness of participating entities is an effective method to stimulate collaboration and improve network security in distributed networks. Similar to other security related protocols, trust evaluation is an attractive target for adversaries. Currently, the vulnerabilities of trust evaluation system have not been well understood. In this paper, we present several attacks that can undermine the accuracy of trust evaluation, and then develop defense techniques. Based on our investigation on attacks and defense, we implement a trust evaluation system in ad hoc networks for securing ad hoc routing and assisting malicious node detection. Extensive simulations are performed to illustrate various attacks, the effectiveness of the proposed defense techniques, and the overall performance of the trust evaluation system

[Go to top]

Storage Tradeoffs in a Collaborative Backup Service for Mobile Devices (PDF)
by Ludovic Courtès, Marc-Olivier Killijian, and David Powell.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile devices are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. We consider a fault-tolerance approach that exploits spontaneous interactions to implement a collaborative backup service. We define the constraints implied by the mobile environment, analyze how they translate into the storage layer of such a backup system and examine various design options. The paper concludes with a presentation of our prototype implementation of the storage layer, an evaluation of the impact of several compression methods, and directions for future work

[Go to top]

Similarity Queries on Structured Data in Structured Overlays
by Marcel Karnstedt, Kai-Uwe Sattler, Manfred Hauswirth, and Roman Schmidt.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Security Considerations in Space and Delay Tolerant Networks
by Stephen Farrell and Vinny Cahill.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper reviews the Internet-inspired security work on delay tolerant networking, in particular, as it might apply to space missions, and identifies some challenges arising, for both the Internet security community and for space missions. These challenges include the development of key management schemes suited for space missions as well as a characterization of the actual security requirements applying. A specific goal of this paper is therefore to elicit feedback from space mission IT specialists in order to guide the development of security mechanisms for delay tolerant networking

[Go to top]

Securing the Scalable Source Routing Protocol (PDF)
by Kendy Kutzner, Christian Wallenta, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Scalable Source Routing (SSR) protocol combines overlay-like routing in a virtual network structure with source routing in the physical network to a single cross-layer architecture. Thereby, it can provide indirect routing in networks that lack a well-crafted structure. SSR is well suited for mobile ad hoc networks, sensor-actuator networks, and especially for mesh networks. Moreover, SSR directly provides the routing semantics of a structured routing overlay, making it an efficient basis for the scalable implementation of fully decentralized applications. In this paper we analyze SSR with regard to security: We show where SSR is prone to attacks, and we describe protocol modifications that make SSR robust in the presence of malicious nodes. The core idea is to introduce cryptographic certificates that allow nodes to discover forged protocol messages. We evaluate our proposed modifications by means of simulations, and thus demonstrate that they are both effective and efficient

[Go to top]

Secure User Identification Without Privacy Erosion (PDF)
by Stefan Brands.
In University of Ottawa Law & Technology Journal 3, 2006, pages 205-223. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Individuals are increasingly confronted with requests to identify themselves when accessing services provided by government organizations, companies, and other service providers. At the same time, traditional transaction mechanisms are increasingly being replaced by electronic mechanisms that underneath their hood automatically capture and record globally unique identifiers. Taken together, these interrelated trends are currently eroding the privacy and security of individuals in a manner unimaginable just a few decades ago. Privacy activists are facing an increasingly hopeless battle against new privacy-invasive identification initiatives: the cost of computerized identification systems is rapidly going down, their accuracy and efficiency is improving all the time, much of the required data communication infrastructure is now in place, forgery of non-electronic user credentials is getting easier all the time, and data sharing imperatives have gone up dramatically. This paper argues that the privacy vs. identification debate should be moved into less polarized territory. Contrary to popular misbelief, identification and privacy are not opposite interests that need to be balanced: the same technological advances that threaten to annihilate privacy can be exploited to save privacy in an electronic age. The aim of this paper is to clarify that premise on the basis of a careful analysis of the concept of user identification itself. Following an examination of user identifiers and their purposes, I classify identification technologies in a manner that enables their privacy and security implications to be clearly articulated and contrasted. I also include an overview of a modern privacy-preserving approach to user identification

[Go to top]

Secure Collaborative Planning, Forecasting, and Replenishment (PDF)
by Mikhail Atallah, Marina Blanton, Vinayak Deshpand, Keith Frikken, Jiangtao Li, and Leroy Schwarz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Although the benefits of information sharing between supply-chain partners are well known, many companies are averse to sharing their private information due to fear of adverse impact of information leakage. This paper uses techniques from Secure Multiparty Computation (SMC) to develop secure protocols for the CPFR (Collaborative Planning, Forecasting, and Replenishment) business process. The result is a process that permits supply-chain partners to capture all of the benefits of information-sharing and collaborative decision-making, but without disclosing their private demand signal (e.g., promotions) and cost information to one another. In our collaborative CPFR scenario, the retailer and supplier engage in SMC protocols that result in: (1) a forecast that uses both the retailer's and the supplier's observed demand signals to better forecast demand; and (2) prescribed order/shipment quantities based on system-wide costs and inventory levels (and on the joint forecasts) that minimize supply-chain expected cost/period. Our contributions are as follows: (1) we demonstrate that CPFR can be securely implemented without disclosing the private information of either partner; (2) we show that the CPFR business process is not incentive compatible without transfer payments and develop an incentive-compatible linear transfer-payment scheme for collaborative forecasting; (3) we demonstrate that our protocols are not only secure (i.e., privacy preserving), but that neither partner is able to make accurate inferences about the other's future demand signals from the outputs of the protocols; and (4) we illustrate the benefits of secure collaboration using simulation

[Go to top]

Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control (PDF)
by Mark Samuel Miller.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference. Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain

[Go to top]

Reputation Mechanisms (PDF)
by Chrysanthos Dellarocas.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Regroup-And-Go mixes to counter the (n-1) attack
by Jin-Qiao Shi, Bin-Xing Fang, and Li-Jie Shao.
In Journal of Internet Research 16(2), 2006, pages 213-223. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The (n-1) attack is the most powerful attack against a mix, which is the basic building block of many modern anonymous systems. This paper aims to present a strategy that can be implemented in mix networks to detect and counter active attacks, especially the (n-1) attack and its variants

[Go to top]

Reactive Clustering in MANETs
by Curt Cramer, Oliver Stanze, Kilian Weniger, and Martina Zitterbart.
In International Journal of Pervasive Computing and Communications 2, 2006, pages 81-90. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many clustering protocols for mobile ad hoc networks (MANETs) have been proposed in the literature. With only one exception so far (1), all these protocols are proactive, thus wasting bandwidth when their function is not currently needed. To reduce the signalling traffic load, reactive clustering may be employed. We have developed a clustering protocol named On-Demand Group Mobility-Based Clustering (ODGMBC) (2), (3) which is reactive. Its goal is to build clusters as a basis for address autoconfiguration and hierarchical routing. In contrast to the protocol described in ref. (1), the design process especially addresses the notions of group mobility and of multi-hop clusters in a MANET. As a result, ODGMBC maps varying physical node groups onto logical clusters. In this paper, ODGMBC is described. It was implemented for the ad hoc network simulator GloMoSim (4) and evaluated using several performance indicators. Simulation results are promising and show that ODGMBC leads to stable clusters. This stability is advantageous for autoconfiguration and routing mechanisms to be employed in conjunction with the clustering algorithm

[Go to top]

Raptor codes (PDF)
by M. Amin Shokrollahi.
In IEEE/ACM Trans. Netw 14(SI), 2006, pages 2551-2567. (BibTeX entry) (Download bibtex record)
(direct link)

LT-codes are a new class of codes introduced by Luby for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper, we introduce Raptor codes, an extension of LT-codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: for a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability. Each output symbol is generated using O(log(1/ε)) operations, and the original symbols are recovered from the collected ones with O(k log(1/ε)) operations. We will also introduce novel techniques for the analysis of the error probability of the decoder for finite length Raptor codes. Moreover, we will introduce and analyze systematic versions of Raptor codes, i.e., versions in which the first output elements of the coding system coincide with the original k elements
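
The LT decoding idea underlying Raptor codes (repeatedly "peel" coded symbols whose neighbor sets have exactly one unknown block) can be sketched as follows. This is a toy illustration: the degree distribution is a stand-in for the Soliton distributions the paper assumes, and all names are ours:

```python
import random

def lt_encode(blocks, n_symbols, rng):
    """Each output symbol XORs a random small subset of the source blocks."""
    out = []
    for _ in range(n_symbols):
        degree = rng.choice([1, 2, 2, 3])  # toy degree distribution
        idxs = set(rng.sample(range(len(blocks)), degree))
        value = 0
        for i in idxs:
            value ^= blocks[i]
        out.append((idxs, value))
    return out

def lt_decode(symbols, k):
    """Peeling decoder: resolve symbols with exactly one unknown neighbor,
    substitute the recovered block everywhere, and repeat until done."""
    symbols = [(set(i), v) for i, v in symbols]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, value in symbols:
            unknown = idxs - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idxs - {i}:
                    value ^= known[j]
                known[i] = value
                progress = True
    return [known[i] for i in range(k)] if len(known) == k else None
```

Raptor codes improve on this by precoding the blocks so that the peeling process succeeds with linear-time work even when it would stall on a plain LT code.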

[Go to top]

The rainbow skip graph: a fault-tolerant constant-degree distributed data structure (PDF)
by Michael T. Goodrich, Michael J. Nelson, and Jonathan Z. Sun.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs: Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes. Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component. We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries be worst case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure

[Go to top]

Pushing Chord into the Underlay: Scalable Routing for Hybrid MANETs (PDF)
by Thomas Fuhrmann, Pengfei Di, Kendy Kutzner, and Curt Cramer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable Source Routing is a novel routing approach for large unstructured networks, for example hybrid mobile ad hoc networks (MANETs), mesh networks, or sensor-actuator networks. It is especially suited for organically growing networks of many resource-limited mobile devices supported by a few fixed-wired nodes. Scalable Source Routing is a full-fledged routing protocol that directly provides the semantics of a structured peer-to-peer overlay. Hence, it can serve as an efficient basis for fully decentralized applications on mobile devices. Scalable Source Routing combines source routing in the physical network with Chord-like routing in the virtual ring formed by the address space. Message forwarding greedily decreases the distance in the virtual ring while preferring physically short paths. Unlike previous approaches, scalability is achieved without imposing artificial hierarchies or assigning location-dependent addresses. Scalable Source Routing enables any-to-any communication in a flat address space without maintaining any-to-any routes. Each node proactively discovers its virtual vicinity using an iterative process. Additionally, it passively caches a limited amount of additional paths. By means of extensive simulation, we show that Scalable Source Routing is resource-efficient and scalable well beyond 10,000 nodes
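
The forwarding rule ("greedily decrease the distance in the virtual ring") can be sketched as a single step. The ring size, tie-breaking and names below are our simplification; real SSR additionally weighs physical path length, which is omitted here:

```python
def ring_distance(a, b, ring_size):
    """Clockwise distance from address a to address b in the virtual ring."""
    return (b - a) % ring_size

def next_hop(current, dest, known_addresses, ring_size):
    """Greedy SSR-style step: forward to the known address closest to the
    destination in the ring, but only if that strictly decreases distance."""
    best = min(known_addresses, key=lambda a: ring_distance(a, dest, ring_size))
    if ring_distance(best, dest, ring_size) < ring_distance(current, dest, ring_size):
        return best
    return None  # locally closest already: deliver, or discover more paths
```

Because progress is measured in the flat virtual address space, no location-dependent addressing or hierarchy is needed for the greedy step to terminate.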

[Go to top]

PlanetLab application management using Plush (PDF)
by J. Albrecht, C. Tuttle, A.C. Snoeren, and A. Vahdat.
In ACM SIGOPS Operating Systems Review 40(1), 2006, pages 33-40. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Performance evaluation of chord in mobile ad hoc networks (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile peer-to-peer applications recently have received growing interest. However, it is often assumed that structured peer-to-peer overlays cannot efficiently operate in mobile ad hoc networks (MANETs). The prevailing opinion is that this is due to the protocols' high overhead cost. In this paper, we show that this opinion is misguided. We present a thorough simulation study evaluating Chord in the well-known MANET simulator GloMoSim. We found the main issue of deploying Chord in a MANET not to be its overhead, but rather the protocol's pessimistic timeout and failover strategy. This strategy enables fast lookup resolution in spite of highly dynamic node membership, which is a significant problem in the Internet context. However, with the inherently higher packet loss rate in a MANET, this failover strategy results in lookups being inconsistently forwarded even if node membership does not change

[Go to top]

PastryStrings: A Comprehensive Content-Based Publish/Subscribe DHT Network
by Ioannis Aekaterinidis and Peter Triantafillou.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Packet coding for strong anonymity in ad hoc networks (PDF)
by Imad Aad, Claude Castelluccia, and Jean-Pierre Hubaux.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several techniques to improve anonymity have been proposed in the literature. They rely basically on multicast or on onion routing to thwart global attackers or local attackers respectively. None of the techniques provide a combined solution due to the incompatibility between the two components, as we show in this paper. We propose novel packet coding techniques that make the combination possible, thus integrating the advantages in a more complete and robust solution

[Go to top]

Our Data, Ourselves: Privacy via Distributed Noise Generation (PDF)
by Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14, 4, 13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ_i f(d_i) (the sum over all rows i in the database of a function f applied to the data in row i), has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution
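
The statistical idea behind splitting the noise across parties can be illustrated in a few lines, using the fact that a sum of n independent N(0, σ²/n) samples is N(0, σ²). This shows only that idea, not the paper's verifiable-secret-sharing protocols, and all names are ours:

```python
import random

def local_report(value, sigma_total, n_parties, rng):
    """Each party perturbs its own value with a small share of the noise."""
    return value + rng.gauss(0.0, sigma_total / n_parties ** 0.5)

def noisy_sum(values, sigma_total, rng):
    """Aggregate the reports: the sum carries N(0, sigma_total^2) noise
    overall, and no trusted curator ever sees an unperturbed value."""
    n = len(values)
    return sum(local_report(v, sigma_total, n, rng) for v in values)
```

The hard part the paper solves is making such noise shares robust against malicious participants who could bias or inflate their contribution.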

[Go to top]

Optimally efficient multi-valued byzantine agreement (PDF)
by Matthias Fitzi and Martin Hirt.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

OmniStore: A system for ubiquitous personal storage management (PDF)
by Alexandros Karypidis and Spyros Lalis.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As personal area networking becomes a reality, the collective management of storage in portable devices such as mobile phones, cameras and music players will grow in importance. The increasing wireless communication capability of such devices makes it possible for them to interact with each other and implement more advanced storage functionality. This paper introduces OmniStore, a system which employs a unified data management approach that integrates portable and backend storage, but also exhibits self-organizing behavior through spontaneous device collaboration

[Go to top]

Nonesuch: a mix network with sender unobservability (PDF)
by Andrei Serjantov and Benessa Defend.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Oblivious submission to anonymity systems is a process by which a message may be submitted in such a way that neither the anonymity network nor a global passive adversary may determine that a valid message has been sent. We present Nonesuch: a mix network with steganographic submission and probabilistic identification and attenuation of cover traffic. In our system messages are submitted as stegotext hidden inside Usenet postings. The steganographic extraction mechanism is such that the vast majority of the Usenet postings which do not contain keyed stegotext will produce meaningless output which serves as cover traffic, thus increasing the anonymity of the real messages. This cover traffic is subject to probabilistic attenuation in which nodes have only a small probability of distinguishing cover messages from "real" messages. This attenuation prevents cover traffic from travelling through the network in an infinite loop, while making it infeasible for an entrance node to distinguish senders

[Go to top]

MyriadStore: A Peer-to-Peer Backup System (PDF)
by Birgir Stefansson, Antonios Thodis, Ali Ghodsi, and Seif Haridi.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional backup methods are error prone, cumbersome and expensive. Distributed backup applications have emerged as promising tools able to avoid these disadvantages by exploiting the unused disk space of remote computers. In this paper we propose MyriadStore, a distributed peer-to-peer backup system. MyriadStore makes use of a trading scheme which ensures that a user has as much storage space available in the system as he/she contributes to it. A mechanism for issuing challenges between the system's nodes ensures that this restriction is fulfilled. Furthermore, MyriadStore minimizes bandwidth requirements and migration costs by treating the storage of the system's meta-data and the storage of the backed-up data separately. This approach also offers great flexibility in the placement of the backed-up data, a property that facilitates the deployment of the trading scheme
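
A storage challenge of this kind can be sketched as a nonce-keyed hash query over the stored data, so answers cannot be precomputed or replayed. The helper names below are hypothetical, not MyriadStore's API:

```python
import hashlib
import os

def make_challenge():
    """Fresh random nonce so the storing peer cannot precompute answers."""
    return os.urandom(16)

def respond(nonce, stored_data):
    """The storing peer proves possession by hashing nonce plus data."""
    return hashlib.sha256(nonce + stored_data).hexdigest()

def verify(nonce, response, original_data):
    """The challenger, who also knows the data, checks the response."""
    return response == hashlib.sha256(nonce + original_data).hexdigest()
```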

[Go to top]

Linyphi: An IPv6-Compatible Implementation of SSR (PDF)
by Pengfei Di, Massimiliano Marcon, and Thomas Fuhrmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a self-organizing routing protocol designed for supporting peer-to-peer applications. It is especially suited for networks that do not have a well-crafted structure, e.g., ad-hoc and mesh networks. SSR is based on the combination of source routes and a virtual ring structure. This ring is used in a Chord-like manner to obtain source routes to destinations that are not yet in the respective router cache. This approach makes SSR more message-efficient than flooding-based ad-hoc routing protocols. Moreover, it directly provides the semantics of a structured routing overlay. In this paper we present Linyphi, an implementation of SSR for wireless access routers. Linyphi combines IPv6 and SSR so that unmodified IPv6 hosts have transparent connectivity to both the Linyphi mesh network and the IPv4/v6 Internet. We give a basic outline of the implementation and demonstrate its suitability in real-world mesh network scenarios. Linyphi is available for download (www.linyphi.net)
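
The Chord-like use of the virtual ring can be illustrated with a toy next-hop rule: forward toward the cached destination that is the closest predecessor of the target address on the ring. The ring size and function names are assumptions of this sketch, not SSR's actual parameters:

```python
RING_SIZE = 2 ** 16  # toy virtual address space (real systems use larger IDs)

def ring_distance(a, b):
    """Clockwise distance from address a to address b on the virtual ring."""
    return (b - a) % RING_SIZE

def next_hop(known_destinations, target):
    """Chord-like step: pick the cached destination with the smallest
    clockwise distance to the target, i.e. its closest ring predecessor."""
    return min(known_destinations, key=lambda d: ring_distance(d, target))
```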

[Go to top]

Less Hashing, Same Performance: Building a Better Bloom Filter (PDF)
by Adam Kirsch and Michael Mitzenmacher.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A standard technique from the hashing literature is to use two hash functions h1(x) and h2(x) to simulate additional hash functions of the form gi(x) = h1(x) + i·h2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for randomness in practice
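
The double-hashing trick can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; here h1 and h2 are simply carved out of one SHA-256 digest):

```python
import hashlib

class TwoHashBloomFilter:
    """Bloom filter deriving its k probe positions from two hashes via
    g_i(x) = h1(x) + i*h2(x) mod m, the technique analyzed in the paper."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _h1_h2(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        return (int.from_bytes(digest[:8], "big"),
                int.from_bytes(digest[8:16], "big"))

    def _positions(self, item):
        h1, h2 = self._h1_h2(item)
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```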

[Go to top]

On Inferring Application Protocol Behaviors in Encrypted Network Traffic (PDF)
by Charles Wright, Fabian Monrose, and Gerald M. Masson.
In Journal of Machine Learning Research 7, 2006, pages 2745-2769. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several fundamental security mechanisms for restricting access to network resources rely on the ability of a reference monitor to inspect the contents of traffic as it traverses the network. However, with the increasing popularity of cryptographic protocols, the traditional means of inspecting packet contents to enforce security policies is no longer a viable approach as message contents are concealed by encryption. In this paper, we investigate the extent to which common application protocols can be identified using only the features that remain intact after encryption—namely packet size, timing, and direction. We first present what we believe to be the first exploratory look at protocol identification in encrypted tunnels which carry traffic from many TCP connections simultaneously, using only post-encryption observable features. We then explore the problem of protocol identification in individual encrypted TCP connections, using much less data than in other recent approaches. The results of our evaluation show that our classifiers achieve accuracy greater than 90% for several protocols in aggregate traffic, and, for most protocols, greater than 80% when making fine-grained classifications on single connections. Moreover, perhaps most surprisingly, we show that one can even estimate the number of live connections in certain classes of encrypted tunnels to within, on average, better than 20%
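
The flavor of such classification can be sketched with a toy nearest-centroid classifier over the three post-encryption features. This is a minimal sketch; the paper uses far more sophisticated machine-learning models, and the feature choice and labels below are assumptions:

```python
def features(packets):
    """Summarize a connection by features that survive encryption:
    mean packet size, mean inter-arrival gap, fraction sent outbound.
    packets: list of (size_bytes, gap_seconds, direction), direction +/-1."""
    n = len(packets)
    mean_size = sum(size for size, _, _ in packets) / n
    mean_gap = sum(gap for _, gap, _ in packets) / n
    frac_out = sum(1 for _, _, d in packets if d > 0) / n
    return (mean_size, mean_gap, frac_out)

def classify(packets, centroids):
    """Assign the protocol label whose feature centroid is nearest
    (Euclidean distance) to this connection's feature vector."""
    f = features(packets)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))
```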

[Go to top]

Increasing Data Resilience of Mobile Devices with a Collaborative Backup Service (PDF)
by Damien Martin-Guillerez, Michel Banâtre, and Paul Couderc.
In CoRR abs/cs/0611016, 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Whoever has had his cell phone stolen knows how frustrating it is to be unable to get his contact list back. To avoid data loss when losing or destroying a mobile device like a PDA or a cell phone, data is usually backed up to a fixed station. However, in the time between the last backup and the failure, important data can have been produced and then lost. To handle this issue, we propose a transparent collaborative backup system. Indeed, by saving data on other mobile devices between two connections to a global infrastructure, we can withstand such scenarios. In this paper, after a general description of such a system, we present a way to replicate data on mobile devices to attain a required level of resilience for the backup

[Go to top]

Improving Lookup Performance Over a Widely-Deployed DHT (PDF)
by Daniel Stutzbach and Reza Rejaie.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

During recent years, Distributed Hash Tables (DHTs) have been extensively studied through simulation and analysis. However, due to their limited deployment, it has not been possible to observe the behavior of a widely-deployed DHT in practice. Recently, the popular eMule file-sharing software incorporated a Kademlia-based DHT, called Kad, which currently has around one million simultaneous users. In this paper, we empirically study the performance of the key DHT operation, lookup, over Kad. First, we analytically derive the benefits of different ways to increase the richness of routing tables in Kademlia-based DHTs. Second, we empirically characterize two aspects of the accuracy of routing tables in Kad, namely completeness and freshness, and characterize their impact on Kad's lookup performance. Finally, we investigate how the efficiency and consistency of lookup in Kad can be improved by performing parallel lookup and maintaining multiple replicas, respectively. Our results pinpoint the best operating point for the degree of lookup parallelism and the degree of replication for Kad
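
Kad inherits Kademlia's XOR metric, under which a parallel lookup repeatedly queries the few contacts closest to the target key. The sketch below illustrates that selection step (names and the tiny IDs are illustrative, not from the Kad implementation):

```python
def xor_distance(node_id, key):
    """Kademlia's metric: the XOR of two node/key IDs, read as an integer."""
    return node_id ^ key

def lookup_candidates(target, contacts, alpha=3):
    """Pick the alpha contacts XOR-closest to the target key; a parallel
    lookup queries these simultaneously, the degree of parallelism being
    exactly the tuning knob the paper studies."""
    return sorted(contacts, key=lambda c: xor_distance(c, target))[:alpha]
```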

[Go to top]

The IGOR File System for Efficient Data Distribution in the GRID (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many GRID applications such as drug discovery in the pharmaceutical industry or simulations in meteorology and generally in the earth sciences rely on large databases. Historically, these databases are flat files on the order of several hundred megabytes each. Today, sites often need to download dozens or hundreds of such files before they can start a simulation or analysis run, even if the respective application accesses only small fractions of the respective files. The IGOR file system, which has been developed within the EU FP6 SIMDAT project, addresses the need for an easy and efficient way to access large files across the Internet. IGOR-FS is especially suited for (potentially globally) distributed sites that read or modify only small portions of the files. IGOR-FS provides fine-grained versioning and backup capabilities, and it is built on strong cryptography to protect confidential data both in the network and on the local sites' storage systems

[Go to top]

iDIBS: An Improved Distributed Backup System (PDF)
by Faruck Morcos, Thidapat Chantem, Philip Little, Tiago Gasiba, and Douglas Thain.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

iDIBS is a peer-to-peer backup system which optimizes the Distributed Internet Backup System (DIBS). iDIBS offers increased reliability by enhancing the robustness of existing packet transmission mechanism. Reed-Solomon erasure codes are replaced with Luby Transform codes to improve computation speed and scalability of large files. Lists of peers are automatically stored onto nodes to reduce recovery time. To realize these optimizations, an acceptable amount of data overhead and an increase in network utilization are imposed on the iDIBS system. Through a variety of experiments, we demonstrate that iDIBS significantly outperforms DIBS in the areas of data computational complexity, backup reliability, and overall performance

[Go to top]

How to win the clonewars: efficient periodic n-times anonymous authentication (PDF)
by Jan Camenisch, Susan Hohenberger, Markulf Kohlweiss, Anna Lysyanskaya, and Mira Meyerovich.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We create a credential system that lets a user anonymously authenticate at most $n$ times in a single time period. A user withdraws a dispenser of n e-tokens. She shows an e-token to a verifier to authenticate herself; each e-token can be used only once, however, the dispenser automatically refreshes every time period. The only prior solution to this problem, due to Damgård et al. [29], uses protocols that are a factor of k slower for the user and verifier, where k is the security parameter. Damgård et al. also only support one authentication per time period, while we support n. Because our construction is based on e-cash, we can use existing techniques to identify a cheating user, trace all of her e-tokens, and revoke her dispensers. We also offer a new anonymity service: glitch protection for basically honest users who (occasionally) reuse e-tokens. The verifier can always recognize a reused e-token; however, we preserve the anonymity of users who do not reuse e-tokens too often

[Go to top]

Free Riding in BitTorrent is Cheap (PDF)
by Thomas Locher, Patrick Moor, Stefan Schmid, and Roger Wattenhofer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While it is well-known that BitTorrent is vulnerable to selfish behavior, this paper demonstrates that even entire files can be downloaded without reciprocating at all in BitTorrent. To this end, we present BitThief, a free riding client that never contributes any real data. First, we show that simple tricks suffice in order to achieve high download rates, even in the absence of seeders. We also illustrate how peers in a swarm react to various sophisticated attacks. Moreover, our analysis reveals that sharing communities (communities originally intended to offer downloads of good quality and to promote cooperation among peers) provide many incentives to cheat

[Go to top]

Fireflies: scalable support for intrusion-tolerant network overlays (PDF)
by Håvard Johansen, André Allavena, and Robbert Van Renesse.
In SIGOPS Oper. Syst. Rev 40(4), 2006, pages 3-13. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes and evaluates Fireflies, a scalable protocol for supporting intrusion-tolerant network overlays. While such a protocol cannot distinguish Byzantine nodes from correct nodes in general, Fireflies provides correct nodes with a reasonably current view of which nodes are live, as well as a pseudo-random mesh for communication. The amount of data sent by correct nodes grows linearly with the aggregate rate of failures and recoveries, even if provoked by Byzantine nodes. The set of correct nodes forms a connected submesh; correct nodes cannot be eclipsed by Byzantine nodes. Fireflies is deployed and evaluated on PlanetLab

[Go to top]

Experiences in building and operating ePOST, a reliable peer-to-peer application (PDF)
by Alan Mislove, Ansley Post, Andreas Haeberlen, and Peter Druschel.
In SIGOPS Oper. Syst. Rev 40(4), 2006, pages 147-159. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (p2p) technology can potentially be used to build highly reliable applications without a single point of failure. However, most of the existing applications, such as file sharing or web caching, have only moderate reliability demands. Without a challenging proving ground, it remains unclear whether the full potential of p2p systems can be realized. To provide such a proving ground, we have designed, deployed and operated a p2p-based email system. We chose email because users depend on it for their daily work and therefore place high demands on the availability and reliability of the service, as well as the durability, integrity, authenticity and privacy of their email. Our system, ePOST, has been actively used by a small group of participants for over two years. In this paper, we report the problems and pitfalls we encountered in this process. We were able to address some of them by applying known principles of system design, while others turned out to be novel and fundamental, requiring us to devise new solutions. Our findings can be used to guide the design of future reliable p2p systems and provide interesting new directions for future research

[Go to top]

Estimation based erasure-coding routing in delay tolerant networks (PDF)
by Yong Liao, Kun Tan, Zhensheng Zhang, and Lixin Gao.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless Delay Tolerant Networks (DTNs) are intermittently connected mobile wireless networks. Some well-known assumptions of traditional networks are no longer true in DTNs, which makes routing in DTNs a challenging problem. We observe that mobile nodes in realistic wireless DTNs may always have some mobility pattern information which can be used to estimate one node's ability to deliver a specific message. This estimation can greatly enhance the routing performance in DTNs. Furthermore, we adopt an alternative way to generate redundancy using erasure coding. With a fixed overhead, the erasure coding can generate a large number of message blocks instead of a few replications, and therefore it allows the transmission of only a portion of a message to a relay. This can greatly increase the routing diversity when combined with estimation-based approaches. We have conducted extensive simulations to evaluate the performance of our scheme. The results demonstrate that our scheme outperforms previously proposed schemes
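
The erasure-coding idea (any sufficiently large subset of blocks reconstructs the message) can be illustrated with the simplest possible code: k data blocks plus one XOR parity block, tolerating the loss of any single block. Real DTN schemes use much stronger codes tolerating many losses; this toy version and its function names are assumptions of the sketch:

```python
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(message, k=3):
    """Split a message into k equal data blocks plus one XOR parity block;
    any k of the k+1 blocks suffice to rebuild (tolerates one loss)."""
    padded = message + b"\x00" * ((-len(message)) % k)
    size = len(padded) // k
    blocks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return blocks + [reduce(xor_blocks, blocks)]

def recover(blocks):
    """blocks: the k+1 blocks with exactly one replaced by None; the lost
    block equals the XOR of all surviving blocks."""
    missing = blocks.index(None)
    survivors = [b for b in blocks if b is not None]
    blocks[missing] = reduce(xor_blocks, survivors)
    return blocks
```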

[Go to top]

Anonymous Secure Communication in Wireless Mobile Ad-hoc Networks (PDF)
by Sk. Md. Mizanur Rahman, Atsuo Inomata, Takeshi Okamoto, and Masahiro Mambo.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The main characteristic of a mobile ad-hoc network is its infrastructure-less, highly dynamic topology, which is subject to malicious traffic analysis. Malicious intermediate nodes in wireless mobile ad-hoc networks are a threat to the security as well as the anonymity of exchanged information. To protect anonymity and achieve security of nodes in mobile ad-hoc networks, an anonymous on-demand routing protocol, termed RIOMO, is proposed. For this purpose, pseudo IDs of the nodes are generated using pairing-based cryptography. Nodes can generate their own pseudo IDs independently. As a result RIOMO reduces pseudo-ID maintenance costs. Only trustworthy nodes are allowed to take part in routing to discover a route. To ensure trustworthiness, each node has to authenticate itself to its neighbors through an anonymous authentication process. Thus RIOMO safely communicates between nodes without disclosing node identities; it also provides various desirable anonymity properties such as identity privacy, location privacy, route anonymity, and robustness against several attacks

[Go to top]

DNS-Based Service Discovery in Ad Hoc Networks: Evaluation and Improvements
by Celeste Campo and Carlos García-Rubio.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In wireless networks, devices must be able to dynamically discover and share services in the environment. The problem of service discovery has attracted great research interest in the last years, particularly for ad hoc networks. Recently, the IETF has proposed the use of the DNS protocol for service discovery. For ad hoc networks, the IETF is working on two proposals for distributed DNS, Multicast DNS and LLMNR, both of which can be used for service discovery. In this paper we describe and compare through simulation the performance of service discovery based on these two proposals for distributed DNS. We also propose four simple improvements that reduce the traffic generated, and thus the power consumption, especially of the most limited, battery-powered devices. We present simulation results that show the impact of our improvements in a typical scenario

[Go to top]

Distributed Routing in Small-World Networks (PDF)
by Oskar Sandberg.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Theoretical basis for the routing protocol of Freenet 0.7

[Go to top]

Distributed Pattern Matching: A Key to Flexible and Efficient P2P Search
by R. Ahmed and R. Boutaba.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Flexibility and efficiency are the prime requirements for any P2P search mechanism. Existing P2P systems do not seem to provide satisfactory solutions for achieving these two conflicting goals. Unstructured search protocols (as adopted in Gnutella and FastTrack) provide search flexibility but exhibit poor performance characteristics. Structured search techniques (mostly distributed hash table (DHT)-based), on the other hand, can efficiently route queries to target peers but support exact-match queries only. In this paper we present a novel P2P system, called the distributed pattern matching system (DPMS), for enabling flexible and efficient search. Distributed pattern matching can be used to solve problems like wildcard searching (for file-sharing P2P systems), partial service description matching (for service discovery systems), etc. DPMS uses a hierarchy of indexing peers for disseminating advertised patterns. Patterns are aggregated and replicated at each level along the hierarchy. Replication improves availability and resilience to peer failure, and aggregation reduces storage overhead. An advertised pattern can be discovered using any subset of its 1-bits; this allows inexact matching and queries in conjunctive normal form. Search complexity (i.e., the number of peers to be probed) in DPMS is O(log N + ζ·log N/log N), where N is the total number of peers and ζ is proportional to the number of matches required in a search result. The impact of the churn problem is less severe in DPMS than in DHT-based systems. Moreover, DPMS provides a guarantee of search completeness for moderately stable networks. We demonstrate the effectiveness of DPMS using mathematical analysis and simulation results
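
The subset-of-1-bits matching rule itself fits in a couple of lines: a query pattern matches an advertisement exactly when every 1-bit of the query is also set in the advertised pattern. The index layout below is a flat illustrative stand-in for DPMS's indexing hierarchy:

```python
def matches(query_bits, advertised_bits):
    """A query matches an advertisement when every 1-bit of the query
    is also set in the advertised pattern (subset-of-1-bits matching)."""
    return query_bits & advertised_bits == query_bits

def search(query_bits, index):
    """index: {peer: advertised pattern as an int bitmask}."""
    return [peer for peer, adv in index.items() if matches(query_bits, adv)]
```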

[Go to top]

A distributed data caching framework for mobile ad hoc networks (PDF)
by Ying-Hong Wang, Chih-Feng Chao, Shih-Wei Lin, and Wei-Ting Chen.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile ad hoc networks (MANETs), enabling multi-hop communication between mobile nodes, are characterized by variable network topology and the demand for efficient dynamic routing protocols. MANETs need no stationary infrastructure or preconstructed base station to coordinate packet transmissions or to advertise information about the network topology for mobile nodes. The objective of this paper is to provide MANETs with a distributed data caching framework, which caches repeatedly used data and data paths, shortens routes and the time needed to access data, and raises the data reuse rate, thereby further reducing bandwidth use and power consumption

[Go to top]

Differential Privacy (PDF)
by Cynthia Dwork.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy
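
The standard calibrated-noise mechanism associated with this line of work can be sketched for a counting query: a count changes by at most 1 when one record is added or removed (sensitivity 1), so Laplace noise of scale 1/ε gives ε-differential privacy. The function names here are illustrative:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverting its CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=1.0):
    """epsilon-differentially-private count: a counting query has
    sensitivity 1, so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```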

[Go to top]

Designing Economic Mechanisms
by Leonid Hurwicz and Stanley Reiter.
Book. (BibTeX entry) (Download bibtex record)
(direct link)

A mechanism is a mathematical structure that models institutions through which economic activity is guided and coordinated. There are many such institutions; markets are the most familiar ones. Lawmakers, administrators and officers of private companies create institutions in order to achieve desired goals. They seek to do so in ways that economize on the resources needed to operate the institutions, and that provide incentives that induce the required behaviors. This book presents systematic procedures for designing mechanisms that achieve specified performance, and economize on the resources required to operate the mechanism. The systematic design procedures are algorithms for designing informationally efficient mechanisms. Most of the book deals with these procedures of design. When there are finitely many environments to be dealt with, and there is a Nash-implementing mechanism, our algorithms can be used to make that mechanism into an informationally efficient one. Informationally efficient dominant strategy implementation is also studied. Leonid Hurwicz, together with Eric Maskin and Roger Myerson, won the 2007 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for laying the foundations of mechanism design theory

[Go to top]

Cryptography from Anonymity (PDF)
by Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai.
In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)-Volume 00, 2006, pages 239-248. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There is a vast body of work on implementing anonymous communication. In this paper, we study the possibility of using anonymous communication as a building block, and show that one can leverage anonymity in a variety of cryptographic contexts. Our results go in two directions. Feasibility: we show that anonymous communication over insecure channels can be used to implement unconditionally secure point-to-point channels, broadcast, and general multi-party protocols that remain unconditionally secure as long as less than half of the players are maliciously corrupted. Efficiency: we show that anonymous channels can yield substantial efficiency improvements for several natural secure computation tasks. In particular, we present the first solution to the problem of private information retrieval (PIR) which can handle multiple users while being close to optimal with respect to both communication and computation. A key observation that underlies these results is that local randomization of inputs, via secret-sharing, when combined with the global mixing of the shares provided by anonymity, allows one to carry out useful computations on the inputs while keeping the inputs private
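
The key observation (secret-share locally, mix globally) can be illustrated with an anonymous sum: each user splits its input into random additive shares, the anonymous channel mixes all shares so none can be linked to a sender, yet the total still equals the sum of the inputs. The modulus and function names are assumptions of this toy sketch, and a shuffle stands in for the anonymity network:

```python
import random

MODULUS = 2 ** 31 - 1  # toy prime modulus (an assumption of this sketch)

def share(value, n):
    """Split value into n random additive shares modulo MODULUS."""
    parts = [random.randrange(MODULUS) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % MODULUS)
    return parts

def anonymous_sum(inputs, shares_per_user=3):
    """Each user secret-shares its input and submits the shares over an
    anonymous channel (modelled here as a shuffle); mixing unlinks shares
    from senders, yet the total still equals the sum of the inputs."""
    pool = [s for v in inputs for s in share(v, shares_per_user)]
    random.shuffle(pool)  # stands in for the anonymity network's mixing
    return sum(pool) % MODULUS
```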

[Go to top]

Compare-by-hash: a reasoned analysis (PDF)
by John Black.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Compare-by-hash is the now-common practice used by systems designers who assume that when the digest of a cryptographic hash function is equal on two distinct files, then those files are identical. This approach has been used in both real projects and in research efforts (for example rsync [16] and LBFS [12]). A recent paper by Henson criticized this practice [8]. The present paper revisits the topic from an advocate's standpoint: we claim that compare-by-hash is completely reasonable, and we offer various arguments in support of this viewpoint in addition to addressing concerns raised by Henson
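
The practice under debate is simple to state in code (a minimal sketch with invented helper names, using SHA-256 as the cryptographic hash):

```python
import hashlib

def digest(block):
    return hashlib.sha256(block).hexdigest()

def needs_transfer(local_block, remote_digest):
    """Compare-by-hash as used by rsync-like tools: skip sending a block
    whose digest the remote side already holds, accepting the negligible
    collision probability of a cryptographic hash."""
    return digest(local_block) != remote_digest
```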

[Go to top]

On the fundamental communication abstraction supplied by P2P overlay networks
by Curt Cramer and Thomas Fuhrmann.
In Communication Networks, 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The disruptive advent of peer-to-peer (P2P) file sharing in 2000 attracted significant interest. P2P networks have matured from their initial form, unstructured overlays, to structured overlays like distributed hash tables (DHTs), which are considered state-of-the-art. There are huge efforts to improve their performance. Various P2P applications like distributed storage and application-layer multicast were proposed. However, little effort was spent on understanding the communication abstraction P2P overlays supply. Only when it is understood will the reach of P2P ideas significantly broaden. Furthermore, this clarification reveals novel approaches and highlights future directions. In this paper, we reconsider well-known P2P overlays, linking them to insights from distributed systems research. We conclude that the main communication abstraction is that of a virtual address space or application-specific naming. On this basis, P2P systems build a functional layer implementing, for example, lookup, indirection and distributed processing. Our insights led us to identify interesting and unexplored points in the design space

[Go to top]

Combining Virtual and Physical Structures for Self-organized Routing (PDF)
by Thomas Fuhrmann.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Our recently proposed scalable source routing (SSR) protocol combines source routing in the physical network with Chord-like routing in the virtual ring that is formed by the address space. Thereby, SSR provides self-organized routing in large unstructured networks of resource-limited devices. Its ability to quickly adapt to changes in the network topology makes it suitable not only for sensor-actuator networks but also for mobile ad-hoc networks. Moreover, SSR directly provides the key-based routing semantics, thereby making it an efficient basis for the scalable implementation of self-organizing, fully decentralized applications. In this paper we review SSR's self-organizing features and demonstrate how the combination of virtual and physical structures leads to emergence of stability and efficiency. In particular, we focus on SSR's resistance against node churn. Following the principle of combining virtual and physical structures, we propose an extension that stabilizes SSR in the face of heavy node churn. Simulations demonstrate the effectiveness of this extension

[Go to top]

Combinatorial Auctions
by Peter Cramton, Yoav Shoham, and Richard Steinberg.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The study of combinatorial auctions – auctions in which bidders can bid on combinations of items or "packages" – draws on the disciplines of economics, operations research, and computer science. This landmark collection integrates these three perspectives, offering a state-of-the-art survey of developments in combinatorial auction theory and practice by leaders in the field. Combinatorial auctions (CAs), by allowing bidders to express their preferences more fully, can lead to improved economic efficiency and greater auction revenues. However, challenges arise in both design and implementation. Combinatorial Auctions addresses each of these challenges. After describing and analyzing various CA mechanisms, the book addresses bidding languages and questions of efficiency. Possible strategies for solving the computationally intractable problem of how to compute the objective-maximizing allocation (known as the winner determination problem) are considered, as are questions of how to test alternative algorithms. The book discusses five important applications of CAs: spectrum auctions, airport takeoff and landing slots, procurement of freight transportation services, the London bus routes market, and industrial procurement. This unique collection makes recent work in CAs available to a broad audience of researchers and practitioners. The integration of work from the three disciplines underlying CAs, using a common language throughout, serves to advance the field in theory and practice
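
The winner determination problem mentioned above can be stated concretely: choose a set of item-disjoint bids maximizing total price. A brute-force sketch (viable only for tiny instances, since the problem is NP-hard; the representation of bids is an assumption):

```python
from itertools import combinations

def winner_determination(bids):
    """bids: list of (set_of_items, price) pairs. Exhaustively search for
    the revenue-maximizing collection of pairwise item-disjoint bids."""
    best_value, best_bids = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            won_items = [item for items, _ in combo for item in items]
            if len(won_items) == len(set(won_items)):  # bids are disjoint
                value = sum(price for _, price in combo)
                if value > best_value:
                    best_value, best_bids = value, list(combo)
    return best_value, best_bids
```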

[Go to top]

A Classification for Privacy Techniques (PDF)
by Carlisle Adams.
In University of Ottawa Law & Technology Journal 3, 2006, pages 35-52. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper proposes a classification for techniques that encourage, preserve, or enhance privacy in online environments. This classification encompasses both automated mechanisms (those that exclusively or primarily use computers and software to implement privacy techniques) and nonautomated mechanisms (those that exclusively or primarily use human means to implement privacy techniques). We give examples of various techniques and show where they fit within this classification. The importance of such a classification is discussed along with its use as a tool for the comparison and evaluation of privacy techniques

[Go to top]

Churn Resistant de Bruijn Networks for Wireless on Demand Systems (PDF)
by Manuel Thiele, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless on demand systems typically need authentication, authorization and accounting (AAA) services. In a peer-to-peer (P2P) environment these AAA services need to be provided in a fully decentralized manner. This excludes many cryptographic approaches, since they rely on a central trusted instance. One way to accomplish AAA in a P2P manner is via de Bruijn networks, since in them data can be routed over multiple non-overlapping paths, thereby hindering malicious nodes from manipulating that data. Originally, de Bruijn networks required a rather fixed network structure, which made them unsuitable for wireless networks. In this paper we generalize de Bruijn networks to an arbitrary number of nodes while keeping all their desired properties. This is achieved by decoupling the link degree and the character set of the native de Bruijn graph. Furthermore, we describe how this makes the resulting network resistant against node churn
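
The shift-based routing that gives de Bruijn graphs their multiple short paths can be sketched in a few lines. This is the classical fixed-structure case with string node labels – not the paper's generalized construction – and the function name is illustrative:

```python
def debruijn_route(src: str, dst: str) -> list[str]:
    """Route in a de Bruijn graph: each hop shifts out the oldest
    symbol and shifts in the next symbol of the destination label."""
    path = [src]
    node = src
    for symbol in dst:
        node = node[1:] + symbol  # left-shift, append next target symbol
        path.append(node)
    return path

# Routing from "000" to "101" over the binary de Bruijn graph of order 3:
print(debruijn_route("000", "101"))  # → ['000', '001', '010', '101']
```

Any destination is reached in at most k hops for labels of length k, which is what bounds path length in the native graph.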

[Go to top]

Building an AS-topology model that captures route diversity (PDF)
by Wolfgang Mühlbauer, Anja Feldmann, Olaf Maennel, Matthew Roughan, and Steve Uhlig.
In SIGCOMM Comput. Commun. Rev 36(4), 2006, pages 195-206. (BibTeX entry) (Download bibtex record)
(direct link) (website)

An understanding of the topological structure of the Internet is needed for quite a number of networking tasks, e.g., making decisions about peering relationships, choice of upstream providers, and inter-domain traffic engineering. One essential component of these tasks is the ability to predict routes in the Internet. However, the Internet is composed of a large number of independent autonomous systems (ASes) resulting in complex interactions, and until now no model of the Internet has succeeded in producing predictions of acceptable accuracy. We demonstrate that there are two limitations of prior models: (i) they have all assumed that an Autonomous System (AS) is an atomic structure – it is not, and (ii) models have tended to oversimplify the relationships between ASes. Our approach uses multiple quasi-routers to capture route diversity within the ASes, and is deliberately agnostic regarding the types of relationships between ASes. The resulting model ensures that its routing is consistent with the observed routes. Exploiting a large number of observation points, we show that our model provides accurate predictions for unobserved routes, a first step towards developing structural models of the Internet that enable real applications

[Go to top]

Bootstrapping Chord in Ad hoc Networks: Not Going Anywhere for a While (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With the growing prevalence of wireless devices, infrastructure-less ad hoc networking is coming closer to reality. Research in this field has mainly been concerned with routing. However, to justify the relevance of ad hoc networks, there have to be applications. Distributed applications require basic services such as naming. In an ad hoc network, these services have to be provided in a decentralized way. We believe that structured peer-to-peer overlays are a good basis for their design. Prior work has been focused on the long-run performance of virtual peer-to-peer overlays over ad hoc networks. In this paper, we consider a vital functionality of any peer-to-peer network: bootstrapping. We formally show that the self-configuration process of a spontaneously deployed Chord network has a time complexity linear in the network size. In addition to that, its centralized bootstrapping procedure causes an unfavorable traffic load imbalance

[Go to top]

Anonymity Protocols as Noisy Channels? (PDF)
by Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Prakash Panangaden.
In Proc. 2nd Symposium on Trustworthy Global Computing, LNCS. Springer 4661/2007, 2006, pages 281-300. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a framework in which anonymity protocols are interpreted as particular kinds of channels, and the degree of anonymity provided by the protocol as the converse of the channel's capacity. We also investigate how the adversary can test the system to try to infer the user's identity, and we study how his probability of success depends on the characteristics of the channel. We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in literature. This work has been partially supported by the INRIA DREI Équipe Associée PRINTEMPS. The work of Konstantinos Chatzikokolakis and Catuscia Palamidessi has been also supported by the INRIA ARC project ProNoBiS

[Go to top]

Algorithms to accelerate multiple regular expressions matching for deep packet inspection
by Sailesh Kumar, Sarang Dharmapurikar, Fang Yu, Patrick Crowley, and Jonathan Turner.
In SIGCOMM Comput. Commun. Rev 36(4), 2006, pages 339-350. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

2005

Anonymity and Privacy in Electronic Services (PDF)
by Claudia Diaz.
Ph.D. thesis, Katholieke Universiteit Leuven, December 2005. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Tracking anonymous peer-to-peer VoIP calls on the internet (PDF)
by Xinyuan Wang, Shiping Chen, and Sushil Jajodia.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer VoIP calls are becoming increasingly popular due to their advantages in cost and convenience. When these calls are encrypted from end to end and anonymized by a low-latency anonymizing network, they are considered by many people to be both secure and anonymous. In this paper, we present a watermark technique that could be used for effectively identifying and correlating encrypted, peer-to-peer VoIP calls even if they are anonymized by low-latency anonymizing networks. This result is in contrast to many people's perception. The key idea is to embed a unique watermark into the encrypted VoIP flow by slightly adjusting the timing of selected packets. Our analysis shows that a timing adjustment of only several milliseconds suffices to make normal VoIP flows highly unique, and the embedded watermark can be preserved across the low-latency anonymizing network if appropriate redundancy is applied. Our analytical results are backed up by real-time experiments performed on a leading peer-to-peer VoIP client and on a commercially deployed anonymizing network. Our results demonstrate that (1) tracking anonymous peer-to-peer VoIP calls on the Internet is feasible and (2) low-latency anonymizing networks are susceptible to timing attacks
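
The timing-watermark idea can be illustrated with a toy model: delay selected packets by a few milliseconds to encode 1-bits, leave them untouched for 0-bits. This sketch assumes the tracker can compare against the original packet timing and omits the redundancy coding the authors apply; all names and the 3 ms delta are illustrative:

```python
DELTA = 0.003  # 3 ms timing adjustment (illustrative value)

def embed(timestamps, positions, bits, delta=DELTA):
    """Embed one watermark bit per selected packet: delay the packet
    by delta to encode a 1, leave it unchanged to encode a 0."""
    out = list(timestamps)
    for pos, bit in zip(positions, bits):
        if bit:
            out[pos] += delta
    return out

def extract(original, watermarked, positions, delta=DELTA):
    """Recover the bits by comparing observed timing with the original."""
    return [1 if watermarked[p] - original[p] > delta / 2 else 0
            for p in positions]

# A 50-packets-per-second flow; the watermark survives because the
# per-packet shift is tiny relative to the inter-packet spacing.
ts = [i * 0.02 for i in range(100)]
wm = embed(ts, positions=[10, 20, 30, 40], bits=[1, 0, 1, 1])
assert extract(ts, wm, [10, 20, 30, 40]) == [1, 0, 1, 1]
```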

[Go to top]

The Pynchon Gate: A Secure Method of Pseudonymous Mail Retrieval (PDF)
by Len Sassaman, Bram Cohen, and Nick Mathewson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe the Pynchon Gate, a practical pseudonymous message retrieval system. Our design uses a simple distributed-trust private information retrieval protocol to prevent adversaries from linking recipients to their pseudonyms, even when some of the infrastructure has been compromised. This approach resists global traffic analysis significantly better than existing deployed pseudonymous email solutions, at the cost of additional bandwidth. We examine security concerns raised by our model, and propose solutions
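
The distributed-trust private information retrieval at the core of the design can be illustrated with the classic two-server XOR scheme, in which neither server alone learns which block the client wants. This is a simplified stand-in, not the Pynchon Gate's exact protocol:

```python
import secrets

def server_answer(db, subset):
    """Each server XORs together the blocks the client selected."""
    ans = bytes(len(db[0]))
    for j in subset:
        ans = bytes(a ^ b for a, b in zip(ans, db[j]))
    return ans

def pir_query(db, i):
    """Two-server XOR PIR: send a random subset to one server and the
    same subset with block i's membership flipped to the other; the
    XOR of the answers is exactly db[i]."""
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}  # random subset
    s2 = s1 ^ {i}                                      # flip membership of i
    a1 = server_answer(db, s1)
    a2 = server_answer(db, s2)
    return bytes(x ^ y for x, y in zip(a1, a2))

db = [bytes([k]) * 8 for k in range(16)]
assert pir_query(db, 5) == db[5]
```

Each query each server sees is a uniformly random subset, so linking a recipient to a block requires compromising both servers; the cost, as the abstract notes, is bandwidth.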

[Go to top]

Provable Anonymity (PDF)
by Flavio D. Garcia, Ichiro Hasuo, Wolter Pieters, and Peter van Rossum.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper provides a formal framework for the analysis of information hiding properties of anonymous communication protocols in terms of epistemic logic.The key ingredient is our notion of observational equivalence, which is based on the cryptographic structure of messages and relations between otherwise random looking messages. Two runs are considered observationally equivalent if a spy cannot discover any meaningful distinction between them.We illustrate our approach by proving sender anonymity and unlinkability for two anonymizing protocols, Onion Routing and Crowds. Moreover, we consider a version of Onion Routing in which we inject a subtle error and show how our framework is capable of capturing this flaw

[Go to top]

Obfuscated Ciphertext Mixing (PDF)
by Ben Adida and Douglas Wikström.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mixnets are a type of anonymous channel composed of a handful of trustees that, each in turn, shuffle and rerandomize a batch of ciphertexts. For applications that require verifiability, each trustee provides a proof of correct mixing. Though mixnets have recently been made quite efficient, they still require secret computation and proof generation after the mixing process. We introduce and implement Obfuscated Ciphertext Mixing, the obfuscation of a mixnet program. Using this technique, all proofs can be performed before the mixing process, even before the inputs are available. In addition, the mixing program does not need to be secret: anyone can publicly compute the shuffle (though not the decryption). We frame this functionality in the strongest obfuscation setting proposed by Barak et al. [4], tweaked for the public-key setting. For applications where the secrecy of the shuffle permutation is particularly important (e.g. voting), we also consider the Distributed Obfuscation of a Mixer, where multiple trustees cooperate to generate an obfuscated mixer program such that no single trustee knows the composed shuffle permutation

[Go to top]

Chainsaw: Eliminating Trees from Overlay Multicast (PDF)
by Vinay Pai, Kapil Kumar, Karthik Tamilmani, Vinay Sambamurthy, and Alexander E. Mohr.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present Chainsaw, a p2p overlay multicast system that completely eliminates trees. Peers are notified of new packets by their neighbors and must explicitly request a packet from a neighbor in order to receive it. This way, duplicate data can be eliminated and a peer can ensure it receives all packets. We show with simulations that Chainsaw has a short startup time, good resilience to catastrophic failure and essentially no packet loss. We support this argument with real-world experiments on Planetlab and compare Chainsaw to Bullet and Splitstream using MACEDON

[Go to top]

Measurements, analysis, and modeling of BitTorrent-like systems (PDF)
by Lei Guo, Songqing Chen, Zhen Xiao, Enhua Tan, Xiaoning Ding, and Xiaodong Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing studies on BitTorrent systems are single-torrent based, while more than 85% of all peers participate in multiple torrents according to our trace analysis. In addition, these studies are not sufficiently insightful and accurate even for single-torrent models, due to some unrealistic assumptions. Our analysis of representative BitTorrent traffic provides several new findings regarding the limitations of BitTorrent systems: (1) Due to the exponentially decreasing peer arrival rate in reality, service availability in such systems becomes poor quickly, after which it is difficult for the file to be located and downloaded. (2) Client performance in the BitTorrent-like systems is unstable, and fluctuates widely with the peer population. (3) Existing systems could provide unfair services to peers, where peers with high downloading speed tend to download more and upload less. In this paper, we study these limitations on torrent evolution in realistic environments. Motivated by the analysis and modeling results, we further build a graph based multi-torrent model to study inter-torrent collaboration. Our model quantitatively provides strong motivation for inter-torrent collaboration instead of directly stimulating seeds to stay longer. We also discuss a system design to show the feasibility of multi-torrent collaboration

[Go to top]

Pastis: A Highly-Scalable Multi-user Peer-to-Peer File System (PDF)
by Jean-Michel Busca, Fabio Picconi, and Pierre Sens.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce Pastis, a completely decentralized multi-user read-write peer-to-peer file system. In Pastis every file is described by a modifiable inode-like structure which contains the addresses of the immutable blocks in which the file contents are stored. All data are stored using the Past distributed hash table (DHT), which we have modified in order to reduce the number of network messages it generates, thus optimizing replica retrieval. Pastis' design is simple compared to other existing systems, as it does not require complex algorithms like Byzantine-fault tolerant (BFT) replication or a central administrative authority. It is also highly scalable in terms of the number of network nodes and users sharing a given file or portion of the file system. Furthermore, Pastis takes advantage of the fault tolerance and good locality properties of its underlying storage layer, the Past DHT. We have developed a prototype based on the FreePastry open-source implementation of the Past DHT. We have used this prototype to evaluate several characteristics of our file system design. Supporting the close-to-open consistency model, plus a variant of the read-your-writes model, our prototype shows that Pastis is between 1.4 and 1.8 times slower than NFS. In comparison, Ivy and Oceanstore are between two to three times slower than NFS

[Go to top]

Local View Attack on Anonymous Communication (PDF)
by Marcin Gogolewski, Marek Klonowski, and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider anonymous communication protocols based on onions: each message is sent in an encrypted form through a path chosen at random by its sender, and the message is re-coded by each server on the path. Recently, it has been shown that if the anonymous paths are long enough, then the protocols provide provable security for some adversary models. However, it was assumed that all users choose intermediate servers uniformly at random from the same set of servers. We show that if a single user chooses only from a constrained subset of possible intermediate servers, the anonymity level may dramatically decrease. A rule of thumb is that if Alice is aware of much less than 50% of the possible intermediate servers, then the anonymity set for her message becomes surprisingly small with high probability. Moreover, for each location in the anonymity set an adversary may compute the probability that it receives a message from Alice. Since there are big differences in these probabilities, in most cases the true destination of the message from Alice is in a small group of locations with the highest probabilities. Our results contradict some beliefs that the protocols mentioned guarantee anonymity provided that the set of possible intermediate servers for each user is large

[Go to top]

Sybilproof reputation mechanisms (PDF)
by Alice Cheng and Eric Friedman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Due to the open, anonymous nature of many P2P networks, new identities – or sybils – may be created cheaply and in large numbers. Given a reputation system, a peer may attempt to falsely raise its reputation by creating fake links between its sybils. Many existing reputation mechanisms are not resistant to these types of strategies. Using a static graph formulation of reputation, we attempt to formalize the notion of sybilproofness. We show that there is no symmetric sybilproof reputation function. For nonsymmetric reputations, following the notion of reputation propagation along paths, we give a general asymmetric reputation function based on flow and give conditions for sybilproofness

[Go to top]

Self-recharging virtual currency (PDF)
by David Irwin, Jeff Chase, Laura Grit, and Aydan Yumerefendi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Market-based control is attractive for networked computing utilities in which consumers compete for shared resources (computers, storage, network bandwidth). This paper proposes a new self-recharging virtual currency model as a common medium of exchange in a computational market. The key idea is to recycle currency through the economy automatically while bounding the rate of spending by consumers. Currency budgets may be distributed among consumers according to any global policy; consumers spend their budgets to schedule their resource usage through time, but cannot hoard their currency or starve. We outline the design and rationale for self-recharging currency in Cereus, a system for market-based community resource sharing, in which participants are authenticated and sanctions are sufficient to discourage fraudulent behavior. Currency transactions in Cereus are accountable: offline third-party audits can detect and prove cheating, so participants may transfer and recharge currency autonomously without involvement of the trusted banking service

[Go to top]

A Quick Introduction to Bloom Filters (PDF)
by Christian Grothoff.
In unknown, August 2005. (BibTeX entry) (Download bibtex record)
(direct link)
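
The entry above carries no abstract; for context, a Bloom filter is a compact probabilistic set that admits false positives but never false negatives. A minimal sketch (the parameters m and k and the double-hashing-by-salt trick are illustrative choices, not taken from the paper):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: each item sets k positions in an m-bit
    array. A lookup reports "present" only if all k bits are set."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k positions by salting the hash with the index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("gnunet")
assert "gnunet" in bf  # never a false negative
```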

[Go to top]

A new mechanism for the free-rider problem (PDF)
by Sujay Sanghavi and Bruce Hajek.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The free-rider problem arises in the provisioning of public resources, when users of the resource have to contribute towards the cost of production. Selfish users may have a tendency to misrepresent preferences – so as to minimize individual contributions – leading to inefficient levels of production of the resource. Groves and Loeb formulated a classic model capturing this problem, and proposed (what later came to be known as) the VCG mechanism as a solution. However, in the presence of heterogeneous users and communication constraints, or in decentralized settings, implementing this mechanism places an unrealistic communication burden. In this paper we propose a class of alternative mechanisms for the same problem as considered by Groves and Loeb, but with the added constraint of severely limited communication between users and the provisioning authority. When these mechanisms are used, efficient production is ensured as a Nash equilibrium outcome, for a broad class of users. Furthermore, a natural bid update strategy is shown to globally converge to efficient Nash equilibria. An extension to multiple public goods with inter-related valuations is also presented

[Go to top]

Influences on cooperation in BitTorrent communities (PDF)
by Nazareno Andrade, Miranda Mowbray, Aliandro Lima, Gustavo Wagner, and Matei Ripeanu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We collect BitTorrent usage data across multiple file-sharing communities and analyze the factors that affect users' cooperative behavior. We find evidence that the design of the BitTorrent protocol results in increased cooperative behavior over other P2P protocols used to share similar content (e.g. Gnutella). We also investigate two additional community-specific mechanisms that foster even more cooperation

[Go to top]

Incentives in BitTorrent Induce Free Riding (PDF)
by Seung Jun and Mustaque Ahamad.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We investigate the incentive mechanism of BitTorrent, which is a peer-to-peer file distribution system. As downloaders in BitTorrent are faced with the conflict between the eagerness to download and the unwillingness to upload, we relate this problem to the iterated prisoner's dilemma, which suggests guidelines to design a good incentive mechanism. Based on these guidelines, we propose a new, simple incentive mechanism. Our analysis and the experimental results using PlanetLab show that the original incentive mechanism of BitTorrent can induce free riding because it is not effective in rewarding and punishing downloaders properly. In contrast, a new mechanism proposed by us is shown to be more robust against free riders
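
The iterated prisoner's dilemma framing used in the analysis can be made concrete with a small simulation (standard payoff matrix; the strategies and round count here are illustrative, not the paper's experiment):

```python
# Payoff for (my_move, their_move); C = cooperate (upload), D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=10):
    """Iterate the prisoner's dilemma; each strategy sees only the
    opponent's previous move (None on the first round)."""
    sa = sb = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        sa += PAYOFF[(a, b)]
        sb += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return sa, sb

tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

print(play(tit_for_tat, tit_for_tat))    # → (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # → (9, 14): defector gains on round 1 only
```

The guideline this illustrates: a mechanism that retaliates promptly limits what a free rider can extract to a one-round advantage.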

[Go to top]

Gossip-based aggregation in large dynamic networks (PDF)
by Márk Jelasity, Alberto Montresor, and Ozalp Babaoglu.
In ACM Transactions on Computer Systems 23, August 2005, pages 219-252. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure—all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures
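
The protocol's core idea – pairwise push-pull averaging whose fixed point is the global mean – can be sketched as a simulation. Counting (here, size estimation) falls out as a special case; the uniform random pairing and round count below are simplifying assumptions:

```python
import random

def gossip_average(values, rounds=50, rng=random.Random(42)):
    """Push-pull averaging: in each exchange two nodes replace both
    their values with the pairwise mean. The global sum is invariant,
    so every node's value converges to the global average."""
    v = list(values)
    n = len(v)
    for _ in range(rounds * n):
        i, j = rng.randrange(n), rng.randrange(n)
        v[i] = v[j] = (v[i] + v[j]) / 2
    return v

# Size estimation as a special case: one node starts at 1, the rest
# at 0; every value converges to 1/n, so each node can estimate n.
n = 64
v = gossip_average([1.0] + [0.0] * (n - 1))
print(round(1 / v[0]))  # → 64
```

Sums, products and extremal values follow the same pattern by swapping the pairwise combination function.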

[Go to top]

A Formal Treatment of Onion Routing (PDF)
by Jan Camenisch and Anna Lysyanskaya.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous channels are necessary for a multitude of privacy-protecting protocols. Onion routing is probably the best known way to achieve anonymity in practice. However, the cryptographic aspects of onion routing have not been sufficiently explored: no satisfactory definitions of security have been given, and existing constructions have only had ad-hoc security analysis for the most part. We provide a formal definition of onion-routing in the universally composable framework, and also discover a simpler definition (similar to CCA2 security for encryption) that implies security in the UC framework. We then exhibit an efficient and easy to implement construction of an onion routing scheme satisfying this definition

[Go to top]

Cooperation among strangers with limited information about reputation (PDF)
by Gary E. Bolton, Elena Katok, and Axel Ockenfels.
In Journal of Public Economics 89, August 2005, pages 1457-1468. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The amount of institutional intervention necessary to secure efficiency-enhancing cooperation in markets and organizations, in circumstances where interactions take place among essentially strangers, depends critically on the amount of information informal reputation mechanisms need transmit. Models based on subgame perfection find that the information necessary to support cooperation is recursive in nature and thus information generating and processing requirements are quite demanding. Models that do not rely on subgame perfection, on the other hand, suggest that the information demands may be quite modest. The experiment we present indicates that even without any reputation information there is a non-negligible amount of cooperation that is, however, quite sensitive to the cooperation costs. For high costs, providing information about a partner's immediate past action increases cooperation. Recursive information about the partners' previous partners' reputation further promotes cooperation, regardless of the cooperation costs

[Go to top]

The Topology of Covert Conflict (PDF)
by Shishir Nagaraja and Ross Anderson.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This is a short talk on the topology of covert conflict, comprising joint work I've been doing with Ross Anderson. The background of this work is the following. We consider a conflict, and there are parties to the conflict. There is communication going on that can be abstracted as a network of nodes (parties) and links (social ties between the nodes). We contend that once you've got a conflict and you've got enough parties to it, these guys start communicating as a result of the conflict. They form connections, that influences the conflict, and the dynamics of the conflict in turn feeds the connectivity of the unfolding network. Modern conflicts often turn on connectivity: consider, for instance, anything from the American army's attack on the Taleban in Afghanistan, and elsewhere, or medics who are trying to battle a disease, like AIDS, or anything else. All of these turn on making strategic decisions about which nodes to go after in the network. For instance, you could consider that a good first place to give condoms out and start any AIDS programme would be with prostitutes

[Go to top]

Selfish Routing with Incomplete Information (PDF)
by Martin Gairing, Burkhard Monien, and Karsten Tiemann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In his seminal work Harsanyi introduced an elegant approach to study non-cooperative games with incomplete information where the players are uncertain about some parameters. To model such games he introduced the Harsanyi transformation, which converts a game with incomplete information to a strategic game where players may have different types. In the resulting Bayesian game players' uncertainty about each other's types is described by a probability distribution over all possible type profiles. In this work, we introduce a particular selfish routing game with incomplete information that we call Bayesian routing game. Here, n selfish users wish to assign their traffic to one of m links. Users do not know each other's traffic. Following Harsanyi's approach, we introduce for each user a set of possible types. This paper presents a comprehensive collection of results for the Bayesian routing game. We prove, with the help of a potential function, that every Bayesian routing game possesses a pure Bayesian Nash equilibrium. For the model of identical links and independent type distribution we give a polynomial time algorithm to compute a pure Bayesian Nash equilibrium. We study structural properties of fully mixed Bayesian Nash equilibria for the model of identical links and show that they maximize individual cost. In general there exists more than one fully mixed Bayesian Nash equilibrium. We characterize the class of fully mixed Bayesian Nash equilibria in the case of independent type distribution. We conclude with results on coordination ratio for the model of identical links for three social cost measures, that is, social cost as expected maximum congestion, sum of individual costs and maximum individual cost. For the latter two we are able to give (asymptotic) tight bounds using our results on fully mixed Bayesian Nash equilibria. To the best of our knowledge this is the first time that mixed Bayesian Nash equilibria have been studied in conjunction with social cost

[Go to top]

Query Forwarding Algorithm Supporting Initiator Anonymity in GNUnet (PDF)
by Kohei Tatara, Y. Hori, and Kouichi Sakurai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Anonymity in a peer-to-peer network means that it is difficult to associate a particular communication with a sender or a recipient. Recently, an anonymous peer-to-peer framework, called GNUnet, was developed. A primary feature of GNUnet is resistance to traffic analysis. However, Kugler analyzed a routing protocol in GNUnet and pointed out the traceability of the initiator. In this paper, we propose an alternative routing protocol applicable in GNUnet, which is resistant to Kugler's shortcut attacks

[Go to top]

Preprocessing techniques for accelerating the DCOP algorithm ADOPT (PDF)
by Syed Ali, Sven Koenig, and Milind Tambe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Methods for solving Distributed Constraint Optimization Problems (DCOP) have emerged as key techniques for distributed reasoning. Yet, their application faces significant hurdles in many multiagent domains due to their inefficiency. Preprocessing techniques have successfully been used to speed up algorithms for centralized constraint satisfaction problems. This paper introduces a framework of different preprocessing techniques that are based on dynamic programming and speed up ADOPT, an asynchronous complete and optimal DCOP algorithm. We investigate when preprocessing is useful and which factors influence the resulting speedups in two DCOP domains, namely graph coloring and distributed sensor networks. Our experimental results demonstrate that our preprocessing techniques are fast and can speed up ADOPT by an order of magnitude

[Go to top]

Overcoming free-riding behavior in peer-to-peer systems (PDF)
by Michal Feldman and John Chuang.
In ACM SIGecom Exchanges 5, July 2005, pages 41-50. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While the fundamental premise of peer-to-peer (P2P) systems is that of voluntary resource sharing among individual peers, there is an inherent tension between individual rationality and collective welfare that threatens the viability of these systems. This paper surveys recent research at the intersection of economics and computer science that targets the design of distributed systems consisting of rational participants with diverse and selfish interests. In particular, we discuss major findings and open questions related to free-riding in P2P systems: factors affecting the degree of free-riding, incentive mechanisms to encourage user cooperation, and challenges in the design of incentive mechanisms for P2P systems

[Go to top]

Determining the Peer Resource Contributions in a P2P Contract (PDF)
by Behrooz Khorshadi, Xin Liu, and Dipak Ghosal.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we study a scheme called P2P contract which explicitly specifies the resource contributions that are required from the peers. In particular, we consider a P2P file sharing system in which when a peer downloads the file it is required to serve the file to up to N other peers within a maximum period of time T. We study the behavior of this contribution scheme in both centralized and decentralized P2P networks. In a centralized architecture, new requests are forwarded to a central server which hands out the contract along with a list of peers from where the file can be downloaded. We show that a simple fixed contract (i.e., fixed values of N and T) is sufficient to create the required server capacity which adapts to the load. Furthermore, we show that T, the time part of the contract, is a more important control parameter than N. In the case of a decentralized P2P architecture, each new request is broadcast to a certain neighborhood determined by the time-to-live (TTL) parameter. Each server receiving the request independently doles out a contract and the requesting peer chooses the one which is least constraining. If there are no servers in the neighborhood, the request fails. To achieve a good request success ratio, we propose an adaptive scheme to set the contracts without requiring global information. Through both analysis and simulation, we show that the proposed scheme adapts to the load and achieves a low request failure rate with high server efficiency

[Go to top]

Decentralized Schemes for Size Estimation in Large and Dynamic Groups (PDF)
by Dionysios Kostoulas, Dimitrios Psaltoulis, Indranil Gupta, Kenneth P. Birman, and Alan Demers.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large-scale and dynamically changing distributed systems such as the Grid, peer-to-peer overlays, etc., need to collect several kinds of global statistics in a decentralized manner. In this paper, we tackle a specific statistic collection problem called Group Size Estimation, for estimating the number of non-faulty processes present in the global group at any given point of time. We present two new decentralized algorithms for estimation in dynamic groups, analyze the algorithms, and experimentally evaluate them using real-life traces. One scheme is active: it spreads a gossip into the overlay first, and then samples the receipt times of this gossip at different processes. The second scheme is passive: it measures the density of processes when their identifiers are hashed into a real interval. Both schemes have low latency, scalable per-process overheads, and provide high levels of probabilistic accuracy for the estimate. They are implemented as part of a size estimation utility called PeerCounter that can be incorporated modularly into standard peer-to-peer overlays. We present experimental results from both the simulations and PeerCounter, running on a cluster of 33 Linux servers
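
The passive scheme admits a compact sketch: hash each identifier to a point in [0, 1) and invert the observed density via an order statistic. The function names below are mine, and the estimator is only a plausible reading of the hash-density idea, not the paper's algorithm:

```python
import hashlib
import random

def hash_to_unit(pid: str) -> float:
    """Map a process identifier pseudo-randomly to a point in [0, 1)."""
    digest = hashlib.sha256(pid.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def estimate_size(points, k=50):
    """Density estimate: among N uniform points in [0, 1), the k-th
    smallest value x_k is about k/(N+1), so N is roughly k/x_k."""
    xk = sorted(points)[k - 1]
    return k / xk

# Synthetic check: 1000 "processes" with uniform hash positions.
rng = random.Random(42)
points = [rng.random() for _ in range(1000)]
```

Here `estimate_size(points)` lands near 1000; the relative error of such order-statistic estimators shrinks on the order of 1/sqrt(k).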

[Go to top]

Some observations on BitTorrent performance (PDF)
by Ashwin R. Bharambe, Cormac Herley, and Venkata N. Padmanabhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present a simulation-based study of BitTorrent. Our results confirm that BitTorrent performs near-optimally in terms of uplink bandwidth utilization and download time, except under certain extreme conditions. On fairness, however, our work shows that low bandwidth peers systematically download more than they upload to the network when high bandwidth peers are present. We find that the rate-based tit-for-tat policy is not effective in preventing unfairness. We show how simple changes to the tracker and a stricter, block-based tit-for-tat policy greatly improve fairness, while maintaining high utilization
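
A block-based tit-for-tat policy of the kind described can be approximated as bounding, per peer, the deficit between blocks sent and blocks received. This is a sketch under assumed names and a made-up slack parameter, not the authors' simulator:

```python
def keep_unchoked(sent: int, received: int, slack: int = 2) -> bool:
    """Block-based tit-for-tat: keep serving a peer only while the
    blocks sent to it exceed blocks received by less than `slack`."""
    return sent - received < slack

# A free-rider that never uploads is cut off after `slack` blocks.
history = []
sent = received = 0
for _ in range(5):
    if keep_unchoked(sent, received):
        sent += 1              # we upload one more block to the peer
        history.append(True)
    else:
        history.append(False)  # peer is choked
```

After the loop, `history` is `[True, True, False, False, False]`: unlike rate-based tit-for-tat, the deficit bound caps how much a slow or selfish peer can download without reciprocating.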

[Go to top]

Reading File Metadata with extract and libextractor
by Christian Grothoff.
In Linux Journal 6-2005, June 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Provable Anonymity for Networks of Mixes (PDF)
by Marek Klonowski and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We analyze networks of mixes used for providing untraceable communication. We consider a network consisting of k mixes working in parallel and exchanging the outputs – which is the most natural architecture for composing mixes of a certain size into networks able to mix a larger number of inputs at once. We prove that after O(log k) rounds the network considered provides a fair level of privacy protection for any number of messages. No mathematical proof of this kind has been published before. We show that if at least one server is corrupted we need substantially more rounds to meet the same requirements of privacy protection

[Go to top]

Off-line Karma: A Decentralized Currency for Peer-to-peer and Grid Applications (PDF)
by Flavio D. Garcia and Jaap-Henk Hoepman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) and grid systems allow their users to exchange information and share resources, with little centralised or hierarchical control, instead relying on the fairness of the users to make roughly as many resources available as they use. To enforce this balance, some kind of currency or barter (called karma) is needed that must be exchanged for resources, thus limiting abuse. We present a completely decentralised, off-line karma implementation for P2P and grid systems, that detects double-spending and other types of fraud under varying adversarial scenarios. The system is based on tracing the spending pattern of coins, and distributing the normally central role of a bank over a predetermined, but random, selection of nodes. The system is designed to allow nodes to join and leave the system at arbitrary times

[Go to top]

Hidden-action in multi-hop routing (PDF)
by Michal Feldman, John Chuang, Ion Stoica, and Scott Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action

[Go to top]

Free Riding on Gnutella Revisited: The Bell Tolls? (PDF)
by Daniel Hughes, Geoff Coulson, and James Walkerdine.
In IEEE Distributed Systems Online 6, June 2005. (BibTeX entry) (Download bibtex record)
(direct link)

Individuals who use peer-to-peer (P2P) file-sharing networks such as Gnutella face a social dilemma. They must decide whether to contribute to the common good by sharing files or to maximize their personal experience by free riding, downloading files while not contributing any to the network. Individuals gain no personal benefits from uploading files (in fact, it's inconvenient), so it's "rational" for users to free ride. However, significant numbers of free riders degrade the entire system's utility, creating a "tragedy of the digital commons." In this article, a new analysis of free riding on the Gnutella network updates data from 2000 and points to increasing degradation of the network's overall performance and the emergence of a "metatragedy" of the commons among Gnutella developers

[Go to top]

Coupon replication systems (PDF)
by Laurent Massoulié and Milan Vojnović.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Motivated by the study of peer-to-peer file swarming systems à la BitTorrent, we introduce a probabilistic model of coupon replication systems. These systems consist of users, aiming to complete a collection of distinct coupons. Users are characterised by their current collection of coupons, and leave the system once they complete their coupon collection. The system evolution is then specified by describing how users of distinct types meet, and which coupons get replicated upon such encounters. For open systems, with exogenous user arrivals, we derive necessary and sufficient stability conditions in a layered scenario, where encounters are between users holding the same number of coupons. We also consider a system where encounters are between users chosen uniformly at random from the whole population. We show that performance, captured by sojourn time, is asymptotically optimal in both systems as the number of coupon types becomes large. We also consider closed systems with no exogenous user arrivals. In a special scenario where users have only one missing coupon, we evaluate the size of the population ultimately remaining in the system, as the initial number of users, N, goes to infinity. We show that this decreases geometrically with the number of coupons, K. In particular, when the ratio K/log(N) is above a critical threshold, we prove that this number of left-overs is of order log(log(N)). These results suggest that performance of file swarming systems does not depend critically on either altruistic user behavior, or on load balancing strategies such as rarest first

[Go to top]

Countering Hidden-action Attacks on Networked Systems (PDF)
by Tyler Moore.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We define an economic category of hidden-action attacks: actions made attractive by a lack of observation. We then consider its implications for computer systems. Rather than structure contracts to compensate for incentive problems, we rely on insights from social capital theory to design network topologies and interactions that undermine the potential for hidden-action attacks

[Go to top]

Compulsion Resistant Anonymous Communications (PDF)
by George Danezis and Jolyon Clulow.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study the effect that compulsion attacks, through which an adversary can request a decryption or key from an honest node, have on the security of mix-based anonymous communication systems. Some specific countermeasures are proposed that increase the cost of compulsion attacks, detect that tracing is taking place and ultimately allow for some anonymity to be preserved even when all nodes are under compulsion. Going beyond the case when a single message is traced, we also analyze the effect of multiple messages being traced and devise some techniques that could retain some anonymity. Our analysis highlights that we can reason about plausible deniability in terms of the information-theoretic anonymity metrics

[Go to top]

Censorship Resistance Revisited (PDF)
by Ginger Perng, Michael K. Reiter, and Chenxi Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Censorship resistant systems attempt to prevent censors from imposing a particular distribution of content across a system. In this paper, we introduce a variation of censorship resistance (CR) that is resistant to selective filtering even by a censor who is able to inspect (but not alter) the internal contents and computations of each data server, excluding only the server's private signature key. This models a service provided by operators who do not hide their identities from censors. Even with such a strong adversarial model, our definition states that CR is only achieved if the censor must disable the entire system to filter selected content. We show that existing censorship resistant systems fail to meet this definition; that Private Information Retrieval (PIR) is necessary, though not sufficient, to achieve our definition of CR; and that CR is achieved through a modification of PIR for which known implementations exist

[Go to top]

On Blending Attacks For Mixes with Memory (PDF)
by Luke O'Connor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Blending attacks are a general class of traffic-based attacks, exemplified by the (n–1)-attack. Adding memory or pools to mixes mitigates such attacks; however, there are few known quantitative results concerning the effect of pools on blending attacks. In this paper we give a precise analysis of the number of rounds required to perform an (n–1)-attack on the pool mix, timed pool mix, timed dynamic pool mix and the binomial mix

[Go to top]

Unmixing Mix Traffic (PDF)
by Ye Zhu and Riccardo Bettati.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We apply blind source separation techniques from statistical signal processing to separate the traffic in a mix network. Our experiments show that this attack is effective and scalable. By combining the flow separation method and frequency spectrum matching method, a passive attacker can get the traffic map of the mix network. We use a non-trivial network to show that the combined attack works. The experiments also show that multicast traffic can be dangerous for anonymity networks

[Go to top]

Privacy Vulnerabilities in Encrypted HTTP Streams (PDF)
by George Dean Bissias, Marc Liberatore, and Brian Neil Levine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Encrypting traffic does not prevent an attacker from performing some types of traffic analysis. We present a straightforward traffic analysis attack against encrypted HTTP streams that is surprisingly effective in identifying the source of the traffic. An attacker starts by creating a profile of the statistical characteristics of web requests from interesting sites, including distributions of packet sizes and inter-arrival times. Later, candidate encrypted streams are compared against these profiles. In our evaluations using real traffic, we find that many web sites are subject to this attack. With a training period of 24 hours and a 1 hour delay afterwards, the attack achieves only 23% accuracy. However, an attacker can easily pre-determine which of the trained sites are easily identifiable. Accordingly, against 25 such sites, the attack achieves 40% accuracy; with three guesses, the attack achieves 100% accuracy for our data. Longer delays after training decrease accuracy, but not substantially. We also propose some countermeasures and improvements to our current method. Previous work analyzed SSL traffic to a proxy, taking advantage of a known flaw in SSL that reveals the length of each web object. In contrast, we exploit the statistical characteristics of web streams that are encrypted as a single flow, which is the case with WEP/WPA, IPsec, and SSH tunnels
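
The profiling step can be illustrated with packet sizes alone (the paper also uses inter-arrival times): build a normalized size histogram per trained site, then attribute a candidate stream to the nearest profile. A sketch with invented bin widths and function names, not the authors' classifier:

```python
from collections import Counter

def profile(sizes, bin_width=100):
    """Normalized histogram of observed packet sizes, bucketed."""
    counts = Counter(s // bin_width for s in sizes)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def l1_distance(p, q):
    """L1 distance between two normalized histograms."""
    bins = set(p) | set(q)
    return sum(abs(p.get(b, 0) - q.get(b, 0)) for b in bins)

def identify(candidate, profiles):
    """Attribute an encrypted stream to the trained site whose
    packet-size profile is closest in L1 distance."""
    return min(profiles, key=lambda site: l1_distance(candidate, profiles[site]))

site_a = profile([100, 150, 1400, 1500])  # mix of small and full-size packets
site_b = profile([40, 60, 80])            # only small packets
candidate = profile([120, 1450])
```

`identify(candidate, {"siteA": site_a, "siteB": site_b})` returns `"siteA"`, since the candidate stream shares site A's bimodal size distribution.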

[Go to top]

Mix-network with Stronger Security
by Jan Camenisch and Anton Mityagin.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider a mix-network as a cryptographic primitive that provides anonymity. A mix-network takes as input a number of ciphertexts and outputs a random shuffle of the corresponding plaintexts. Common applications of mix-nets are electronic voting and anonymous network traffic. In this paper, we present a novel construction of a mix-network, which is based on shuffling ElGamal encryptions. Our scheme is the first mix-net to meet the strongest security requirements: it is robust and secure against chosen ciphertext attacks as well as against active attacks in the Universally Composable model. Our construction allows one to securely execute several mix-net instances concurrently, as well as to run multiple mix-sessions without changing a set of keys. Nevertheless, the scheme is efficient: it requires only linear work (in the number of input messages) per mix-server

[Go to top]

Message Splitting Against the Partial Adversary (PDF)
by Andrei Serjantov and Steven J. Murdoch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We review threat models used in the evaluation of anonymity systems' vulnerability to traffic analysis. We then suggest that, under the partial adversary model, if multiple packets have to be sent through these systems, more anonymity can be achieved if senders route the packets via different paths. This is in contrast to the normal technique of using the same path for them all. We comment on the implications of this for message-based and connection-based anonymity systems. We then proceed to examine the only remaining traffic analysis attack – one which considers the entire system as a black box. We show that it is more difficult to execute than the literature suggests, and attempt to empirically estimate the parameters of the Mixmaster and the Mixminion systems needed in order to successfully execute the attack

[Go to top]

Low-Cost Traffic Analysis of Tor (PDF)
by Steven J. Murdoch and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is the second generation Onion Router, supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as web browsing, but insecure against traffic-analysis attacks by a global passive adversary. We present new traffic-analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams and therefore greatly reduce the anonymity provided by Tor. Furthermore, we show that otherwise unrelated streams can be linked back to the same initiator. Our attack is feasible for the adversary anticipated by the Tor designers. Our theoretical attacks are backed up by experiments performed on the deployed, albeit experimental, Tor network. Our techniques should also be applicable to any low latency anonymous network. These attacks highlight the relationship between the field of traffic-analysis and more traditional computer security issues, such as covert channel analysis. Our research also highlights that the inability to directly observe network links does not prevent an attacker from performing traffic-analysis: the adversary can use the anonymising network as an oracle to infer the traffic load on remote nodes in order to perform traffic-analysis

[Go to top]

Fuzzy Identity-Based Encryption (PDF)
by Amit Sahai and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the set overlap distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term attribute-based encryption. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model
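
The decryption condition itself is simple set overlap, which a few lines can illustrate (no cryptography here, only the threshold test; attribute names are invented):

```python
def can_decrypt(key_identity, ct_identity, d: int) -> bool:
    """Fuzzy IBE decryption condition: a key for identity ω opens a
    ciphertext for identity ω′ iff the overlap |ω ∩ ω′| ≥ d."""
    return len(set(key_identity) & set(ct_identity)) >= d

# A noisy biometric re-sample shares most, but not all, attributes
# with the enrolled identity.
enrolled = {"f1", "f2", "f3", "f4", "f5"}
sampled  = {"f1", "f2", "f3", "f4", "f9"}
```

With threshold d = 4 the noisy sample still decrypts; requiring an exact match (d = 5) would reject it, which is why error tolerance is essential for biometric identities.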

[Go to top]

Anonymity in Structured Peer-to-Peer Networks (PDF)
by Nikita Borisov and Jason Waddle.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing peer-to-peer systems that aim to provide anonymity to their users are based on networks with unstructured or loosely-structured routing algorithms. Structured routing offers performance and robustness guarantees that these systems are unable to achieve. We therefore investigate adding anonymity support to structured peer-to-peer networks. We apply an entropy-based anonymity metric to Chord and use this metric to quantify the improvements in anonymity afforded by several possible extensions. We identify particular properties of Chord that have the strongest effect on anonymity and propose a routing extension that allows a general trade-off between anonymity and performance. Our results should be applicable to other structured peer-to-peer systems
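
An entropy-based anonymity metric of the kind applied here is standard: measure the entropy, in bits, of the attacker's probability distribution over candidate initiators. A minimal sketch (the function name is mine):

```python
import math

def anonymity_bits(distribution) -> float:
    """Entropy-based anonymity metric: H = -Σ p·log2(p) over the
    attacker's distribution on candidate initiators. The maximum,
    log2(n) bits, is reached when all n candidates are equally likely."""
    return -sum(p * math.log2(p) for p in distribution if p > 0)

uniform = anonymity_bits([0.25] * 4)          # 4 equally likely senders
skewed  = anonymity_bits([0.7, 0.1, 0.1, 0.1])  # attacker has a strong guess
```

`uniform` is 2.0 bits, the best achievable with four candidates; `skewed` is lower, quantifying how much a routing property that biases the attacker's view costs in anonymity.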

[Go to top]

An Analysis of Parallel Mixing with Attacker-Controlled Inputs (PDF)
by Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Parallel mixing [7] is a technique for optimizing the latency of a synchronous re-encryption mix network. We analyze the anonymity of this technique when an adversary can learn the output positions of some of the inputs to the mix network. Using probabilistic modeling, we show that parallel mixing falls short of achieving optimal anonymity in this case. In particular, when the number of unknown inputs is small, there are significant anonymity losses in the expected case. This remains true even if all the mixes in the network are honest, and becomes worse as the number of mixes increases. We also consider repeatedly applying parallel mixing to the same set of inputs. We show that an attacker who knows some input–output relationships will learn new information with each mixing and can eventually link previously unknown inputs and outputs

[Go to top]

Peer-to-Peer Communication Across Network Address Translators (PDF)
by Pyda Srisuresh, Bryan Ford, and Dan Kegel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. Several NAT traversal techniques are known, but their documentation is slim, and data about their robustness or relative merits is slimmer. This paper documents and analyzes one of the simplest but most robust and practical NAT traversal techniques, commonly known as hole punching. Hole punching is moderately well-understood for UDP communication, but we show how it can be reliably used to set up peer-to-peer TCP streams as well. After gathering data on the reliability of this technique on a wide variety of deployed NATs, we find that about 82% of the NATs tested support hole punching for UDP, and about 64% support hole punching for TCP streams. As NAT vendors become increasingly conscious of the needs of important P2P applications such as Voice over IP and online gaming protocols, support for hole punching is likely to increase in the future
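
The rendezvous step of UDP hole punching can be sketched without sockets: the server relays to each peer both endpoints of the other, and each peer tries both in parallel. The dict layout and addresses below are illustrative only:

```python
def punch_candidates(peer_a: dict, peer_b: dict):
    """Rendezvous step of hole punching: each peer receives both
    endpoints of the other ('private' as self-reported, 'public' as
    observed by the rendezvous server). Sending to both candidates
    opens the sender's own NAT mapping, letting replies through."""
    return ([peer_b["private"], peer_b["public"]],
            [peer_a["private"], peer_a["public"]])

def likely_same_nat(peer_a: dict, peer_b: dict) -> bool:
    """Identical public IPs suggest a shared NAT, in which case the
    private endpoints are the ones that will actually connect."""
    return peer_a["public"][0] == peer_b["public"][0]

a = {"private": ("10.0.0.2", 4321), "public": ("155.99.25.11", 62000)}
b = {"private": ("10.1.1.3", 4321), "public": ("138.76.29.7", 31000)}
for_a, for_b = punch_candidates(a, b)
```

Trying both candidates is what makes the technique robust: peers behind the same NAT connect via the private endpoints, while peers behind different NATs punch through the public mappings.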

[Go to top]

How good is random linear coding based distributed networked storage? (PDF)
by Szymon Acedański, Supratim Deb, Muriel Médard, and Ralf Koetter.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We consider the problem of storing a large file or multiple large files in a distributed manner over a network. In the framework we consider, there are multiple storage locations, each of which has only very limited storage space for each file. Each storage location chooses a part (or a coded version of the parts) of the file without the knowledge of what is stored in the other locations. We want a file-downloader to connect to as few storage locations as possible and retrieve the entire file. We compare the performance of three strategies: uncoded storage, traditional erasure coding based storage, random linear coding based storage motivated by network coding. We demonstrate that, in principle, a traditional erasure coding based storage (e.g., Reed-Solomon codes) strategy can almost do as well as one can ask for with appropriate choice of parameters. However, the cost is a large amount of additional storage space required at the centralized server before distribution among multiple locations. The random linear coding based strategy performs as well without suffering from any such disadvantage. Further, with a probability close to one, the minimum number of storage locations a downloader needs to connect to (for reconstructing the entire file), can be very close to the case where there is complete coordination between the storage locations and the downloader. We also argue that an uncoded strategy performs poorly
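
Random linear coding can be demonstrated end to end over a small prime field: each location stores a random linear combination of the file blocks, and the downloader recovers the file by Gaussian elimination once its combinations have full rank. A toy sketch (field size and block layout chosen for brevity, not from the paper):

```python
import random

P = 257  # small prime field, large enough to hold byte values

def encode(blocks, rng):
    """One storage location keeps a random linear combination of all
    file blocks, choosing coefficients with no coordination."""
    coeffs = [rng.randrange(P) for _ in blocks]
    combo = [sum(c * blk[i] for c, blk in zip(coeffs, blocks)) % P
             for i in range(len(blocks[0]))]
    return coeffs, combo

def decode(equations):
    """Gauss-Jordan elimination mod P; returns the original blocks
    once the collected combinations have full rank, else None."""
    k = len(equations[0][0])
    rows = [list(c) + list(v) for c, v in equations]
    for col in range(k):
        piv = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if piv is None:
            return None          # rank deficient: need more locations
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % P for x, y in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]

blocks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]      # file split into 3 blocks
rng = random.Random(7)
eqs = [encode(blocks, rng) for _ in range(3)]   # contact 3 locations
decoded = decode(eqs)
while decoded is None:                          # unlucky dependent draw
    eqs.append(encode(blocks, rng))             # contact one more location
    decoded = decode(eqs)
```

This is the paper's point in miniature: with uncoordinated random coefficients the downloader needs, with high probability, barely more than k locations for a k-block file, whereas an uncoded strategy must find each specific block.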

[Go to top]

On Flow Marking Attacks in Wireless Anonymous Communication Networks (PDF)
by Xinwen Fu, Ye Zhu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies the degradation of anonymity in a flow-based wireless mix network under flow marking attacks, in which an adversary embeds a recognizable pattern of marks into wireless traffic flows by electromagnetic interference. We find that traditional mix technologies are not effective in defeating flow marking attacks, and it may take an adversary only a few seconds to recognize the communication relationship between hosts by tracking such artificial marks. Flow marking attacks utilize frequency domain analytical techniques and convert time domain marks into invariant feature frequencies. To counter flow marking attacks, we propose a new countermeasure based on digital filtering technology, and show that this filter-based countermeasure can effectively defend a wireless mix network from flow marking attacks

[Go to top]

P2P Contracts: a Framework for Resource and Service Exchange (PDF)
by Dipak Ghosal, Benjamin K. Poon, and Keith Kong.
In FGCS. Future Generations Computer Systems 21, March 2005, pages 333-347. (BibTeX entry) (Download bibtex record)
(direct link)

A crucial aspect of Peer-to-Peer (P2P) systems is that of providing incentives for users to contribute their resources to the system. Without such incentives, empirical data show that a majority of the participants act as free riders. As a result, a substantial amount of resources goes untapped, and, frequently, P2P systems devolve into client-server systems with attendant issues of performance under high load. We propose to address the free rider problem by introducing the notion of a P2P contract. In it, peers are made aware of the benefits they receive from the system as a function of their contributions. In this paper, we first describe a utility-based framework to determine the components of the contract and formulate the associated resource allocation problem. We consider the resource allocation problem for a flash crowd scenario and show how the contract mechanism implemented using a centralized server can be used to quickly create pseudoservers that can serve out the requests. We then study a decentralized implementation of the P2P contract scheme in which each node implements the contract based on local demand. We show that in such a system, other than contributing storage and bandwidth to serve out requests, it is also important that peer nodes function as application-level routers to connect pools of available pseudoservers. We study the performance of the distributed implementation with respect to the various parameters including the terms of the contract and the triggers to create pseudoservers and routers

[Go to top]

Network coding for large scale content distribution (PDF)
by Christos Gkantsidis and Pablo Rodriguez.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks of information. The randomization introduced by the coding process eases the scheduling of block propagation, and, thus, makes the distribution more efficient. This is particularly important in large unstructured overlay networks, where the nodes need to make block forwarding decisions based on local information only. We compare network coding to other schemes that transmit unencoded information (i.e. blocks of the original file) and, also, to schemes in which only the source is allowed to generate and transmit encoded packets. We study the performance of network coding in heterogeneous networks with dynamic node arrival and departure patterns, clustered topologies, and when incentive mechanisms to discourage free-riding are in place. We demonstrate through simulations of scenarios of practical interest that the expected file download time improves by more than 20-30% with network coding compared to coding at the server only and by more than 2-3 times compared to sending unencoded information. Moreover, we show that network coding improves the robustness of the system and is able to smoothly handle extreme situations where the server and nodes leave the system

[Go to top]

Market-driven bandwidth allocation in selfish overlay networks (PDF)
by Weihong Wang and Baochun Li.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Selfish overlay networks consist of autonomous nodes that develop their own strategies by optimizing towards their local objectives and self-interests, rather than following prescribed protocols. It is thus important to regulate the behavior of selfish nodes, so that system-wide properties are optimized. In this paper, we investigate the problem of bandwidth allocation in overlay networks, and propose to use a market-driven approach to regulate the behavior of selfish nodes that either provide or consume services. In such markets, consumers of services select the best service providers, taking into account both the performance and the price of the service. On the other hand, service providers are encouraged to strategically decide their respective prices in a pricing game, in order to maximize their economic revenues and minimize losses in the long run. In order to overcome the limitations of previous models towards similar objectives, we design a decentralized algorithm that uses reinforcement learning to help selfish nodes to incrementally adapt to the local market, and to make optimized strategic decisions based on past experiences. We have simulated our proposed algorithm in randomly generated overlay networks, and have shown that the behavior of selfish nodes converges to their optimal strategies, and resource allocations in the entire overlay are near-optimal and adapt efficiently to the dynamics of overlay networks

[Go to top]

Exploiting anarchy in networks: a game-theoretic approach to combining fairness and throughput (PDF)
by Sreenivas Gollapudi, D. Sivakumar, and Aidong Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a novel mechanism for routing and bandwidth allocation that exploits the selfish and rational behavior of flows in a network. Our mechanism leads to allocations that simultaneously optimize throughput and fairness criteria. We analyze the performance of our mechanism in terms of the induced Nash equilibrium. We compare the allocations at the Nash equilibrium with throughput-optimal allocations as well as with fairness-optimal allocations. Our mechanism offers a smooth trade-off between these criteria, and allows us to produce allocations that are approximately optimal with respect to both. Our mechanism is also fairly simple and admits an efficient distributed implementation

[Go to top]

Exchange-based incentive mechanisms for peer-to-peer file sharing (PDF)
by Kostas G. Anagnostakis and Michael B. Greenwald.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Performance of peer-to-peer resource sharing networks depends upon the level of cooperation of the participants. To date, cash-based systems have seemed too complex, while lighter-weight credit mechanisms have not provided strong incentives for cooperation. We propose exchange-based mechanisms that provide incentives for cooperation in peer-to-peer file sharing networks. Peers give higher service priority to requests from peers that can provide a simultaneous and symmetric service in return. We generalize this approach to n-way exchanges among rings of peers and present a search algorithm for locating such rings. We have used simulation to analyze the effect of exchanges on performance. Our results show that exchange-based mechanisms can provide strong incentives for sharing, offering significant improvements in service times for sharing users compared to free-riders, without the problems and complexity of cash- or credit-based systems
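
The ring-location step generalizes nicely to a graph search: an edge u → v means u wants an object that v holds, and a cycle through the requester is an n-way exchange in which every peer uploads and downloads simultaneously. A sketch of such a search (bounded depth-first, my own naming; the paper's algorithm may differ):

```python
def find_ring(wants, start, max_len=5):
    """Search for an exchange ring: a cycle through `start` in the
    'wants' digraph, where wants[u] lists peers holding something
    u wants. Every peer on the cycle can trade simultaneously."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in wants.get(node, []):
            if nxt == start and len(path) > 1:
                return path                      # closed the ring
            if nxt not in path and len(path) < max_len:
                stack.append((nxt, path + [nxt]))
    return None                                  # no ring: fall back

wants = {"A": ["B"], "B": ["C"], "C": ["A"]}     # 3-way exchange exists
ring = find_ring(wants, "A")
```

Here `ring` is `["A", "B", "C"]`; a pairwise-only scheme would find nothing, since no two of these peers want each other's objects directly.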

[Go to top]

Designing Incentives for Peer-to-Peer Routing (PDF)
by Alberto Blanc, Yi-Kai Liu, and Amin Vahdat.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In a peer-to-peer network, nodes are typically required to route packets for each other. This leads to a problem of "free-loaders", nodes that use the network but refuse to route other nodes' packets. In this paper we study ways of designing incentives to discourage free-loading. We model the interactions between nodes as a "random matching game", and describe a simple reputation system that provides incentives for good behavior. Under certain assumptions, we obtain a stable subgame-perfect equilibrium. We use simulations to investigate the robustness of this scheme in the presence of noise and malicious nodes, and we examine some of the design trade-offs. We also evaluate some possible adversarial strategies, and discuss how our results might apply to real peer-to-peer systems.

High Availability in DHTs: Erasure Coding vs. Replication (PDF)
by Rodrigo Rodrigues and Barbara Liskov.
In conference proceedings. (BibTeX entry) (Download bibtex record)
(direct link)

High availability in peer-to-peer DHTs requires data redundancy. This paper compares two popular redundancy schemes: replication and erasure coding. Unlike previous comparisons, we take the characteristics of the nodes that comprise the overlay into account, and conclude that in some cases the benefits from coding are limited, and may not be worth its disadvantages.
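The comparison rests on simple availability arithmetic: with n full replicas an object survives if any one replica is up, while an (n, m) erasure code needs any m of its n fragments. A hedged sketch at equal 2x storage overhead (the per-node availability, the independence assumption, and the parameter choices are illustrative, not taken from the paper):

```python
from math import comb

def replication_availability(p: float, n: int) -> float:
    """Object available iff at least one of n full replicas is up."""
    return 1.0 - (1.0 - p) ** n

def erasure_availability(p: float, n: int, m: int) -> float:
    """Object available iff at least m of n coded fragments are up."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m, n + 1))

p = 0.9  # assumed per-node availability, failures independent
# Equal storage overhead of 2x: 2 full replicas vs. a (14, 7) code.
print(f"replication (2 copies): {replication_availability(p, 2):.6f}")
print(f"erasure (14, 7):        {erasure_availability(p, 14, 7):.6f}")
```

At the same overhead the coded object is far more available here; the paper's point is that this advantage shrinks once realistic node behavior (e.g. churn and correlated downtime) replaces the clean independence assumption.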

The Bittorrent P2P File-sharing System: Measurements and Analysis (PDF)
by Johan Pouwelse, Pawel Garbacki, Dick H. J. Epema, and Henk J. Sips.
In conference proceedings. (BibTeX entry) (Download bibtex record)
(direct link)

Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems.

The eMule Protocol Specification (PDF)
by Yoram Kulbak and Danny Bickson.
In unknown(TR-2005-03), January 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

Anonymous Communication with On-line and Off-line Onion Encoding (PDF)
by Marek Klonowski, Miroslaw Kutylowski, and Filip Zagorski.
In conference proceedings. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communication with onions requires that a user application determines the whole routing path of an onion. This scenario has certain disadvantages: it might be dangerous in some situations, and it does not fit well with the current layered architecture of dynamic communication networks. We show that applying encoding based on universal re-encryption can solve many of these problems by providing much flexibility – the onions can be created on-the-fly or in advance by different parties.
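Universal re-encryption (in the sense of Golle et al.) is the key primitive here: any relay can re-randomize an ElGamal ciphertext without knowing the public key, because the ciphertext carries a second pair that encrypts 1. A toy sketch over a tiny group (the group parameters are illustrative only; real systems use ~2048-bit groups, and this is not the paper's exact onion construction):

```python
import secrets

# Toy subgroup: safe prime p = 2q + 1 with q = 11; g = 4 generates the
# order-q subgroup of quadratic residues mod p.
P, Q, G = 23, 11, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1             # secret key in 1..q-1
    return x, pow(G, x, P)                       # (x, y = g^x)

def encrypt(m, y):
    k0 = secrets.randbelow(Q - 1) + 1
    k1 = secrets.randbelow(Q - 1) + 1
    # First pair encrypts m; second pair encrypts 1 and carries the
    # key material needed for key-less re-randomization.
    return [(m * pow(y, k0, P) % P, pow(G, k0, P)),
            (pow(y, k1, P), pow(G, k1, P))]

def reencrypt(ct):
    """Re-randomize WITHOUT the public key, using only the ciphertext."""
    (a0, b0), (a1, b1) = ct
    k0 = secrets.randbelow(Q - 1) + 1
    k1 = secrets.randbelow(Q - 1) + 1
    return [(a0 * pow(a1, k0, P) % P, b0 * pow(b1, k0, P) % P),
            (pow(a1, k1, P), pow(b1, k1, P))]

def decrypt(ct, x):
    (a0, b0), (a1, b1) = ct
    assert a1 == pow(b1, x, P), "not encrypted under this key"
    return a0 * pow(b0, P - 1 - x, P) % P        # a0 / b0^x via Fermat

x, y = keygen()
ct = reencrypt(reencrypt(encrypt(5, y)))         # two relays re-randomize
print(decrypt(ct, x))                            # -> 5
```

This is exactly why the abstract's onions can be extended "on-the-fly": an intermediate party can add or refresh a layer without being told whose key the ciphertext is under.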

Using redundancy to cope with failures in a delay tolerant network (PDF)
by Sushant Jain, Michael J. Demmer, Rabin K. Patra, and Kevin Fall.
In conference proceedings. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of routing in a delay tolerant network (DTN) in the presence of path failures. Previous work on DTN routing has focused on using precisely known network dynamics, which does not account for message losses due to link failures, buffer overruns, path selection errors, unscheduled delays, or other problems. We show how to split, replicate, and erasure code message fragments over multiple delivery paths to optimize the probability of successful message delivery. We provide a formulation of this problem and solve it for two cases: a 0/1 (Bernoulli) path delivery model where messages are either fully lost or delivered, and a Gaussian path delivery model where only a fraction of a message may be delivered. Ideas from the modern portfolio theory literature are borrowed to solve the underlying optimization problem. Our approach is directly relevant to solving similar problems that arise in replica placement in distributed file systems and virtual node placement in DHTs. In three different simulated DTN scenarios covering a wide range of applications, we show the effectiveness of our approach in handling failures.
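Under the 0/1 (Bernoulli) path model, the success probability of a given fragment allocation can be computed by enumerating path outcomes. A small sketch (the path probabilities, allocation vectors, and replication factor are illustrative assumptions, not the paper's optimization itself):

```python
from itertools import product

def delivery_probability(alloc, probs, r):
    """P(success) under the 0/1 (Bernoulli) path model: path i delivers
    its whole share alloc[i] of the coded blocks with probability
    probs[i], or nothing.  An erasure code with replication factor r
    recovers the message once a fraction >= 1/r of the blocks arrives."""
    need = 1.0 / r
    total = 0.0
    for outcome in product([0, 1], repeat=len(alloc)):
        p = 1.0            # probability of this exact success/fail pattern
        got = 0.0          # fraction of coded blocks delivered
        for bit, share, q in zip(outcome, alloc, probs):
            p *= q if bit else (1.0 - q)
            got += share * bit
        if got >= need - 1e-12:
            total += p
    return total

probs = [0.7, 0.7, 0.7]  # assumed per-path delivery probabilities
r = 2                    # 2x coding overhead: any half of the blocks suffices
# One path carries everything vs. splitting evenly over three paths:
print(delivery_probability([1.0, 0.0, 0.0], probs, r))   # ~0.7
print(delivery_probability([1/3, 1/3, 1/3], probs, r))   # ~0.784
```

With r = 2 and equal thirds, any two of the three paths suffice, so spreading beats the single best path (0.784 vs. 0.7): this diversification gain is the portfolio-theory effect the abstract alludes to.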

The Use of Scalable Source Routing for Networked Sensors (PDF)
by