The GNUnet Bibliography | Selected Papers in Meshnetworking



Publications by topic

2fast

2Fast: Collaborative Downloads in P2P Networks (PDF)
by Pawel Garbacki, Alexandru Iosup, Dick H. J. Epema, and Maarten van Steen.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P systems that rely on the voluntary contribution of bandwidth by the individual peers may suffer from free riding. To address this problem, mechanisms enforcing fairness in bandwidth sharing have been designed, usually by limiting the download bandwidth to the available upload bandwidth. As in real environments the latter is much smaller than the former, these mechanisms severely affect the download performance of most peers. In this paper we propose a system called 2Fast, which solves this problem while preserving the fairness of bandwidth sharing. In 2Fast, we form groups of peers that collaborate in downloading a file on behalf of a single group member, which can thus use its full download bandwidth. A peer in our system can use its currently idle bandwidth to help other peers in their ongoing downloads, and get in return help during its own downloads. We assess the performance of 2Fast analytically and experimentally, the latter in both real and simulated environments. We find that in realistic bandwidth limit settings, 2Fast improves the download speed by up to a factor of 3.5 in comparison to state-of-the-art P2P download protocols


802.11

Raptor codes (PDF)
by M. Amin Shokrollahi.
In IEEE/ACM Transactions on Networking 14(SI), 2006, pages 2551-2567. (BibTeX entry) (Download bibtex record)
(direct link)

LT-codes are a new class of codes introduced by Luby for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper, we introduce Raptor codes, an extension of LT-codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: for a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability. Each output symbol is generated using O(log(1/ε)) operations, and the original symbols are recovered from the collected ones with O(k log(1/ε)) operations. We will also introduce novel techniques for the analysis of the error probability of the decoder for finite length Raptor codes. Moreover, we will introduce and analyze systematic versions of Raptor codes, i.e., versions in which the first output elements of the coding system coincide with the original k elements
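The encoding step that LT and Raptor codes share is easy to sketch. The following illustration is a minimal sketch, not the paper's construction: a real deployment would use Luby's robust soliton degree distribution and, for Raptor codes, an outer pre-code; the helper name and the toy degree distribution are assumptions of this example.

    import random

    def lt_encode_symbol(source_blocks, degree_weights, rng=random):
        # Draw a degree d from the (toy) distribution, then XOR d distinct
        # randomly chosen source blocks (here: integers) into one output symbol.
        d = rng.choices(range(1, len(degree_weights) + 1),
                        weights=degree_weights)[0]
        neighbors = rng.sample(range(len(source_blocks)), d)
        symbol = 0
        for i in neighbors:
            symbol ^= source_blocks[i]
        # The receiver must learn the neighbor set, e.g. via a shared PRNG seed.
        return neighbors, symbol

The decoder repeatedly finds a received symbol with exactly one unknown neighbor, recovers that block, and substitutes it into the remaining symbols (belief propagation on the erasure channel).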


On Flow Marking Attacks in Wireless Anonymous Communication Networks (PDF)
by Xinwen Fu, Ye Zhu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies the degradation of anonymity in a flow-based wireless mix network under flow marking attacks, in which an adversary embeds a recognizable pattern of marks into wireless traffic flows by electromagnetic interference. We find that traditional mix technologies are not effective in defeating flow marking attacks, and it may take an adversary only a few seconds to recognize the communication relationship between hosts by tracking such artificial marks. Flow marking attacks utilize frequency domain analytical techniques and convert time domain marks into invariant feature frequencies. To counter flow marking attacks, we propose a new countermeasure based on digital filtering technology, and show that this filter-based counter-measure can effectively defend a wireless mix network from flow marking attacks


Energy-efficiency and storage flexibility in the blue file system (PDF)
by Edmund B. Nightingale and Jason Flinn.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental vision driving pervasive computing research is access to personal and shared data anywhere at anytime. In many ways, this vision is close to being realized. Wireless networks such as 802.11 offer connectivity to small, mobile devices. Portable storage, such as mobile disks and USB keychains, let users carry several gigabytes of data in their pockets. Yet, at least three substantial barriers to pervasive data access remain. First, power-hungry network and storage devices tax the limited battery capacity of mobile computers. Second, the danger of viewing stale data or making inconsistent updates grows as objects are replicated across more computers and portable storage devices. Third, mobile data access performance can suffer due to variable storage access times caused by dynamic power management, mobility, and use of heterogeneous storage devices. To overcome these barriers, we have built a new distributed file system called BlueFS. Compared to the Coda file system, BlueFS reduces file system energy usage by up to 55% and provides up to 3 times faster access to data replicated on portable storage


Seven Degrees of Separation in Mobile Ad Hoc Networks (PDF)
by Maria Papadopouli and Henning G. Schulzrinne.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an architecture that enables the sharing of information among mobile, wireless, collaborating hosts that experience intermittent connectivity to the Internet. Participants in the system obtain data objects from Internet-connected servers, cache them and exchange them with others who are interested in them. The system exploits the fact that there is a high locality of information access within a geographic area. It aims to increase the data availability to participants with lost connectivity to the Internet. We discuss the main components of the system and possible applications. Finally, we present simulation results that show that the ad hoc networks can be very effective in distributing popular information. In a few years, a large percentage of the population in metropolitan areas will be equipped with PDAs, laptops or cell phones with built-in web browsers. Thus, access to information and entertainment will become as important as voice communications


ABE

Attribute-based encryption with non-monotonic access structures (PDF)
by Rafail Ostrovsky, Amit Sahai, and Brent Waters.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes


ADOPT algorithm

Preprocessing techniques for accelerating the DCOP algorithm ADOPT (PDF)
by Syed Ali, Sven Koenig, and Milind Tambe.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Methods for solving Distributed Constraint Optimization Problems (DCOP) have emerged as key techniques for distributed reasoning. Yet, their application faces significant hurdles in many multiagent domains due to their inefficiency. Preprocessing techniques have successfully been used to speed up algorithms for centralized constraint satisfaction problems. This paper introduces a framework of different preprocessing techniques that are based on dynamic programming and speed up ADOPT, an asynchronous complete and optimal DCOP algorithm. We investigate when preprocessing is useful and which factors influence the resulting speedups in two DCOP domains, namely graph coloring and distributed sensor networks. Our experimental results demonstrate that our preprocessing techniques are fast and can speed up ADOPT by an order of magnitude


AI

A Survey of Monte Carlo Tree Search Methods (PDF)
by Cameron Browne, Edward Powley, Daniel Whitehouse, Simon Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton.
In IEEE Transactions on Computational Intelligence and AI in Games 4, March 2012, pages 1-43. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work
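Most of the surveyed variants instantiate MCTS with the UCT tree policy, which applies the UCB1 rule at each node during selection. A minimal sketch (the node fields visits, total_reward, and children are assumptions of this illustration, not the survey's notation):

    import math

    def uct_select(node, c=math.sqrt(2)):
        # Choose the child maximizing exploitation (average reward) plus a
        # UCB1 exploration bonus that shrinks as the child is visited more.
        return max(node.children,
                   key=lambda ch: ch.total_reward / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))

Unvisited children are typically expanded before this rule is applied, which avoids the division by zero for a child with zero visits.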


ALC

Design and evaluation of a low density generator matrix (PDF)
by Vincent Roca, Zainab Khallouf, and Julien Laboure.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long. We also explain how the iterative decoding feature of LDGM/LDPC can be used to protect a large number of small independent objects during time-limited partially-reliable sessions. We illustrate this feature with an example derived from a video streaming scheme over ALC. We then evaluate our LDGM codec and compare its performance with a well known RSE codec. Tests focus on the global efficiency and on encoding/decoding performance. This paper deliberately skips theoretical aspects to focus on practical results. It shows that LDGM/LDPC open many opportunities in the area of bulk data multicasting


AN.ON

Malice versus AN.ON: Possible Risks of Missing Replay and Integrity Protection (PDF)
by Benedikt Westermann and Dogan Kesdogan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we investigate the impact of missing replay protection as well as missing integrity protection concerning a local attacker in AN.ON. AN.ON is a low latency anonymity network mostly used to anonymize web traffic. We demonstrate that both protection mechanisms are important by presenting two attacks that become feasible as soon as the mechanisms are missing. We mount both attacks on the AN.ON network which neither implements replay protection nor integrity protection yet


AOMDV

Performance Evaluation of On-Demand Multipath Distance Vector Routing Protocol under Different Traffic Models (PDF)
by B. Malarkodi, P. Rakesh, and B. Venkataramani.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Traffic models are the heart of any performance evaluation of telecommunication networks. Understanding the nature of traffic in high speed, high bandwidth communication systems is essential for effective operation and performance evaluation of the networks. Many routing protocols reported in the literature for mobile ad hoc networks (MANETs) have been primarily designed and analyzed under the assumption of CBR traffic models, which is unable to capture the statistical characteristics of the actual traffic. It is necessary to evaluate the performance properties of MANETs in the context of more realistic traffic models. In an effort towards this end, this paper evaluates the performance of the ad hoc on-demand multipath distance vector (AOMDV) routing protocol in the presence of Poisson and bursty self-similar traffic and compares them with that of CBR traffic. Different metrics are considered in analyzing the performance of the routing protocol, including packet delivery ratio, throughput and end-to-end delay. Our simulation results indicate that the packet delivery fraction and throughput in AOMDV are increased in the presence of self-similar traffic compared to other traffic. Moreover, it is observed that the end-to-end delay in the presence of self-similar traffic is less than that of CBR and higher than that of Poisson traffic


APFS

Responder Anonymity and Anonymous Peer-to-Peer File Sharing (PDF)
by Vincent Scarlata, Brian Neil Levine, and Clay Shields.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Data transfer over TCP/IP provides no privacy for network users. Previous research in anonymity has focused on the provision of initiator anonymity. We explore methods of adapting existing initiator-anonymous protocols to provide responder anonymity and mutual anonymity. We present Anonymous Peer-to-peer File Sharing (APFS) protocols, which provide mutual anonymity for peer-to-peer file sharing. APFS addresses the problem of long-lived Internet services that may outlive the degradation present in current anonymous protocols. One variant of APFS makes use of unicast communication, but requires a central coordinator to bootstrap the protocol. A second variant takes advantage of multicast routing to remove the need for any central coordination point. We compare the TCP performance of APFS protocol to existing overt file sharing systems such as Napster. In providing anonymity, APFS can double transfer times and requires that additional traffic be carried by peers, but this overhead is constant with the size of the session


API

A Security API for Distributed Social Networks (PDF)
by Michael Backes, Matteo Maffei, and Kim Pecina.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a cryptographic framework to achieve access control, privacy of social relations, secrecy of resources, and anonymity of users in social networks. We illustrate our technique on a core API for social networking, which includes methods for establishing social relations and for sharing resources. The cryptographic protocols implementing these methods use pseudonyms to hide user identities, signatures on these pseudonyms to establish social relations, and zero-knowledge proofs of knowledge of such signatures to demonstrate the existence of social relations without sacrificing user anonymity. As we do not put any constraints on the underlying social network, our framework is generally applicable and, in particular, constitutes an ideal plug-in for decentralized social networks. We analyzed the security of our protocols by developing formal definitions of the aforementioned security properties and by verifying them using ProVerif, an automated theorem prover for cryptographic protocols. Finally, we built a prototypical implementation and conducted an experimental evaluation to demonstrate the efficiency and the scalability of our framework


POSIX–Portable Operating System Interface
by The Open Group and IEEE.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)


Towards a Common API for Structured Peer-to-Peer Overlays (PDF)
by Frank Dabek, Ben Y. Zhao, Peter Druschel, John Kubiatowicz, and Ion Stoica.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community
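The heart of the proposal is a small key-based routing (KBR) interface. A rough Python rendering, following the paper's route/forward/deliver terminology (the signatures are simplified assumptions of this sketch):

    class Application:
        def forward(self, key, msg, next_hop):
            # Upcall at each intermediate node; the application may
            # modify the message or override the next hop.
            return msg, next_hop

        def deliver(self, key, msg):
            # Upcall at the node currently responsible (the "root") for key.
            pass

    class Overlay:
        def route(self, key, msg, hint=None):
            # Forward msg toward the key's root, optionally seeding
            # the route with a hint node.
            pass

DHTs, multicast trees, and the other services in the paper's upper tiers can all be expressed on top of this single primitive.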


Amazon

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon


Analysis of algorithms

On the False-positive Rate of Bloom Filters (PDF)
by Prosenjit Bose, Hua Guo, Evangelos Kranakis, Anil Maheshwari, Pat Morin, Jason Morrison, Michiel Smid, and Yihui Tang.
In Inf. Process. Lett 108, 2008, pages 210-213. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Bloom filters are a randomized data structure for membership queries dating back to 1970. Bloom filters sometimes give erroneous answers to queries, called false positives. Bloom analyzed the probability of such erroneous answers, called the false-positive rate, and Bloom's analysis has appeared in many publications throughout the years. We show that Bloom's analysis is incorrect and give a correct analysis
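For context, the analysis in question is the textbook estimate: with m bits, k hash functions and n stored elements, the false-positive probability is commonly given as

    p \approx \left(1 - \left(1 - \tfrac{1}{m}\right)^{kn}\right)^{k} \approx \left(1 - e^{-kn/m}\right)^{k}

The paper's point is that this expression understates the true false-positive rate (the k probed bit positions are not independent), although the gap becomes negligible for large filters.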


And-Or trees

Analysis of random processes via And-Or tree evaluation (PDF)
by Michael Luby, Michael Mitzenmacher, and M. Amin Shokrollahi.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formulas using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above
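The flavor of the main tool, sketched under the usual tree-independence assumptions (notation here is illustrative, not quoted from the paper): for a random And-Or tree of depth 2l in which the child counts of Or- and And-nodes have probability generating functions A(x) and B(x), the probability y_l that the root evaluates to 0 satisfies a recursion of the form

    y_l = A\bigl(1 - B(1 - y_{l-1})\bigr)

since an Or-node is 0 only if all its children are 0, and an And-node is 1 only if all its children are 1. Fixed points of this map characterize, for example, the decoding threshold of loss-resilient codes.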


Architecture

Zur Idee herrschaftsfreier kooperativer Internetdienste (PDF)
by Christian Ricardo Kühne.
In FIfF-Kommunikation, 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)


BACKLIT

Exposing Invisible Timing-based Traffic Watermarks with BACKLIT (PDF)
by Xiapu Luo, Peng Zhou, Junjie Zhang, Roberto Perdisci, Wenke Lee, and Rocky K. C. Chang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traffic watermarking is an important element in many network security and privacy applications, such as tracing botnet C&C communications and deanonymizing peer-to-peer VoIP calls. The state-of-the-art traffic watermarking schemes are usually based on packet timing information and they are notoriously difficult to detect. In this paper, we show for the first time that even the most sophisticated timing-based watermarking schemes (e.g., RAINBOW and SWIRL) are not invisible by proposing a new detection system called BACKLIT. BACKLIT is designed according to the observation that any practical timing-based traffic watermark will cause noticeable alterations in the intrinsic timing features typical of TCP flows. We propose five metrics that are sufficient for detecting four state-of-the-art traffic watermarks for bulk transfer and interactive traffic. BACKLIT can be easily deployed in stepping stones and anonymity networks (e.g., Tor), because it does not rely on strong assumptions and can be realized in an active or passive mode. We have conducted extensive experiments to evaluate BACKLIT's detection performance using the PlanetLab platform. The results show that BACKLIT can detect watermarked network flows with high accuracy and few false positives


BANDWIDTH

Personalized Web search for improving retrieval effectiveness (PDF)
by Fang Liu, C. Yu, and Weiyi Meng.
In Knowledge and Data Engineering, IEEE Transactions on 16, January 2004, pages 28-40. (BibTeX entry) (Download bibtex record)
(direct link)

Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient


Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh (PDF)
by Dejan Kostić, Adolfo Rodriguez, Jeannie Albrecht, and Amin Vahdat.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent delivers a high rate of application data to each child. In Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate


BDH

Attribute-based encryption with non-monotonic access structures (PDF)
by Rafail Ostrovsky, Amit Sahai, and Brent Waters.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes


BEC

Capacity-achieving ensembles for the binary erasure channel with bounded complexity (PDF)
by Henry D. Pfister, Igal Sason, and Rüdiger L. Urbanke.
In IEEE Transactions on Information Theory 51(7), 2005, pages 2352-2379. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present two sequences of ensembles of nonsystematic irregular repeat–accumulate (IRA) codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded complexity result is achieved by puncturing bits, and allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric (MBIOS) channel and is refined for the binary erasure channel
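As a reminder of the quantities involved: a BEC that erases each bit independently with probability p has capacity

    C = 1 - p

so an ensemble with rate R = (1 - \varepsilon)(1 - p) has multiplicative gap \varepsilon to capacity. The prior constructions referenced above need per-information-bit complexity on the order of \log(1/\varepsilon) as \varepsilon \to 0, whereas the punctured IRA ensembles here keep it bounded.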


Finite-length analysis of low-density parity-check codes on the binary erasure channel (PDF)
by Changyan Di, David Proietti, I. Emre Telatar, Thomas J. Richardson, and Rüdiger L. Urbanke.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area


BFS-Tree

Anytime local search for distributed constraint optimization (PDF)
by Roie Zivan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most former studies of Distributed Constraint Optimization Problems (DisCOPs) search considered only complete search algorithms, which are practical only for relatively small problems. Distributed local search algorithms can be used for solving DisCOPs. However, because of the differences between the global evaluation of a system's state and the private evaluation of states by agents, agents are unaware of the global best state which is explored by the algorithm. Previous attempts to use local search algorithms for solving DisCOPs reported the state held by the system at the termination of the algorithm, which was not necessarily the best state explored. A general framework for implementing distributed local search algorithms for DisCOPs is proposed. The proposed framework makes use of a BFS-tree in order to accumulate the costs of the system's state in its different steps and to propagate the detection of a new best step when it is found. The resulting framework enhances local search algorithms for DisCOPs with the anytime property. The proposed framework does not require additional network load. Agents are required to hold a small (linear) additional space (beside the requirements of the algorithm in use). The proposed framework preserves privacy at a higher level than complete DisCOP algorithms which make use of a pseudo-tree (ADOPT, DPOP)


BNymble

BNymble: More anonymous blacklisting at almost no cost (PDF)
by Peter Lofgren and Nicholas J. Hopper.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous blacklisting schemes allow online service providers to prevent future anonymous access by abusive users while preserving the privacy of all anonymous users (both abusive and non-abusive). The first scheme proposed for this purpose was Nymble, an extremely efficient scheme based only on symmetric primitives; however, Nymble relies on trusted third parties who can collude to de-anonymize users of the scheme. Two recently proposed schemes, Nymbler and Jack, reduce the trust placed in these third parties at the expense of using less-efficient asymmetric crypto primitives. We present BNymble, a scheme which matches the anonymity guarantees of Nymbler and Jack while (nearly) maintaining the efficiency of the original Nymble. The key insight of BNymble is that we can achieve the anonymity goals of these more recent schemes by replacing only the infrequent User Registration protocol from Nymble with asymmetric primitives. We prove the security of BNymble, and report on its efficiency


Backup Systems

A Practical Study of Regenerating Codes for Peer-to-Peer Backup Systems (PDF)
by Alessandro Duminuco and E W Biersack.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In distributed storage systems, erasure codes represent an attractive solution to add redundancy to stored data while limiting the storage overhead. They are able to provide the same reliability as replication requiring much less storage space. Erasure coding breaks the data into pieces that are encoded and then stored on different nodes. However, when storage nodes permanently abandon the system, new redundant pieces must be created. For erasure codes, generating a new piece requires the transmission of k pieces over the network, resulting in a k times higher reconstruction traffic as compared to replication. Dimakis proposed a new class of codes, called Regenerating Codes, which are able to provide both the storage efficiency of erasure codes and the communication efficiency of replication. However, Dimakis gave only a theoretical description of the codes without discussing implementation issues or computational costs. We have done a real implementation of Random Linear Regenerating Codes that allows us to measure their computational cost, which can be significant if the parameters are not chosen properly. However, we also find that there exist parameter values that result in a significant reduction of the communication overhead at the expense of a small increase in storage cost and computation, which makes these codes very attractive for distributed storage systems
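To make the trade-off concrete (the formula below is the standard minimum-storage regenerating point from the Dimakis et al. framework this paper implements, quoted here as background rather than taken from the paper itself): repairing one lost fragment of a file of size M stored with an (n, k) MDS erasure code costs M units of traffic (k fragments of size M/k), while a newcomer contacting d >= k helpers needs only

    \gamma_{\mathrm{MSR}} = \frac{M}{k} \cdot \frac{d}{d - k + 1}

For example, with k = 4 and d = 7 this is about 0.44 M instead of M.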


Bayesian Nash equilibrium

An Introduction to Auction Theory (PDF)
by Flavio M. Menezes and Paulo K. Monteiro.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This book presents an in-depth discussion of auction theory. It introduces the concept of Bayesian Nash equilibrium and the idea of studying auctions as games. Private, common, and affiliated values models and multi-object auction models are described. A general version of the Revenue Equivalence Theorem is derived and the optimal auction is characterized to relate the field of mechanism design to auction theory
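A standard worked instance of these concepts (a textbook example, not taken from the book's text): in a first-price sealed-bid auction with n bidders whose values are i.i.d. uniform on [0, 1], the symmetric Bayesian Nash equilibrium bid is

    b(v) = \frac{n-1}{n}\, v

and the seller's expected revenue equals the expected second-highest value, (n-1)/(n+1), the same as in the second-price auction, illustrating revenue equivalence.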


BitTorrent

Incentive-driven QoS in peer-to-peer overlays (PDF)
by Raul Leonardo Landa Gamiochipi.
Ph.D. thesis, University College London, May 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A well known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time and space deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be adjudicated completely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation


Detecting BitTorrent Blocking (PDF)
by Marcel Dischinger, Alan Mislove, Andreas Haeberlen, and P. Krishna Gummadi.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently, it has been reported that certain access ISPs are surreptitiously blocking their customers from uploading data using the popular BitTorrent file-sharing protocol. The reports have sparked an intense and wide-ranging policy debate on network neutrality and ISP traffic management practices. However, to date, end users lack access to measurement tools that can detect whether their access ISPs are blocking their BitTorrent traffic. And since ISPs do not voluntarily disclose their traffic management policies, no one knows how widely BitTorrent traffic blocking is deployed in the current Internet. In this paper, we address this problem by designing an easy-to-use tool to detect BitTorrent blocking and by presenting results from a widely used public deployment of the tool


BitTorrent is an Auction: Analyzing and Improving BitTorrent's Incentives (PDF)
by Dave Levin, Katrina LaCurts, Neil Spring, and Bobby Bhattacharjee.
In SIGCOMM Computer Communication Review 38, August 2008, pages 243-254. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Incentives play a crucial role in BitTorrent, motivating users to upload to others to achieve fast download times for all peers. Though long believed to be robust to strategic manipulation, recent work has empirically shown that BitTorrent does not provide its users incentive to follow the protocol. We propose an auction-based model to study and improve upon BitTorrent's incentives. The insight behind our model is that BitTorrent uses, not tit-for-tat as widely believed, but an auction to decide which peers to serve. Our model not only captures known, performance-improving strategies, it shapes our thinking toward new, effective strategies. For example, our analysis demonstrates, counter-intuitively, that BitTorrent peers have incentive to intelligently under-report what pieces of the file they have to their neighbors. We implement and evaluate a modification to BitTorrent in which peers reward one another with proportional shares of bandwidth. Within our game-theoretic model, we prove that a proportional-share client is strategy-proof. With experiments on PlanetLab, a local cluster, and live downloads, we show that a proportional-share unchoker yields faster downloads against BitTorrent and BitTyrant clients, and that under-reporting pieces yields prolonged neighbor interest
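A minimal sketch of the proportional-share idea evaluated in the paper (the even split for the no-history case is a placeholder of this illustration, not the paper's bootstrapping rule):

    def allocate_upload(capacity, uploaded_by_peer):
        # Reward each neighbor with upload bandwidth proportional to
        # what that neighbor recently uploaded to us.
        total = sum(uploaded_by_peer.values())
        if total == 0:
            n = len(uploaded_by_peer)
            return {p: capacity / n for p in uploaded_by_peer} if n else {}
        return {p: capacity * share / total
                for p, share in uploaded_by_peer.items()}

Against such an allocator, a peer maximizes its download rate by actually uploading, which is the intuition behind the strategy-proofness result in the authors' game-theoretic model.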


Experimental Analysis of Super-Seeding in BitTorrent (PDF)
by Zhijia Chen, Yang Chen, Chuang Lin, Vaibhav Nivargi, and Pei Cao.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

With the popularity of BitTorrent, improving its performance has been an active research area. Super-seeding, a special upload policy for initial seeds, improves the efficiency in producing multiple seeds and reduces the uploading cost of the initial seeders. However, the overall benefit of super-seeding remains a question. In this paper, we conduct an experimental study of the performance of the super-seeding scheme of BitTornado. We attempt to answer the following questions: whether and how much super-seeding saves uploading cost, whether the download time of all peers is decreased by super-seeding, and in which scenario super-seeding performs worse. With varying seed bandwidth and peer behavior, we analyze the overall download time and upload cost of the super-seeding scheme during random period tests over 250 widely distributed PlanetLab nodes. The results show that the benefits of super-seeding depend highly on the upload bandwidth of the initial seeds and the behavior of individual peers. Our work not only provides a reference for the potential adoption of super-seeding in BitTorrent, but also many insights on balancing the enhancement of Quality of Experience (QoE) against cost savings in a large-scale BitTorrent-like P2P commercial application


Lightweight emulation to study peer-to-peer systems (PDF)
by Lucas Nussbaum and Olivier Richard.
In Concurrency and Computation: Practice and Experience 20(6), 2008, pages 735-749. (BibTeX entry) (Download bibtex record)
(direct link) (website)


Do incentives build robustness in BitTorrent? (PDF)
by Michael Piatek, Tomas Isdal, Thomas Anderson, Arvind Krishnamurthy, and Arun Venkataramani.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental problem with many peer-to-peer systems is the tendency for users to "free ride"–to consume resources without contributing to the system. The popular file distribution tool BitTorrent was explicitly designed to address this problem, using a tit-for-tat reciprocity strategy to provide positive incentives for nodes to contribute resources to the swarm. While BitTorrent has been extremely successful, we show that its incentive mechanism is not robust to strategic clients. Through performance modeling parameterized by real world traces, we demonstrate that all peers contribute resources that do not directly improve their performance. We use these results to drive the design and implementation of BitTyrant, a strategic BitTorrent client that provides a median 70% performance gain for a 1 Mbit client on live Internet swarms. We further show that when applied universally, strategic clients can hurt average per-swarm performance compared to today's BitTorrent client implementations


Understanding churn in peer-to-peer networks (PDF)
by Daniel Stutzbach and Reza Rejaie.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file sharing/distribution systems, and exploring new aspects of churn


Improving traffic locality in BitTorrent via biased neighbor selection (PDF)
by Ruchir Bindal, Pei Cao, William Chan, Jan Medved, George Suwala, Tony Bates, and Amy Zhang.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) applications such as BitTorrent ignore traffic costs at ISPs and generate a large amount of cross-ISP traffic. As a result, ISPs often throttle BitTorrent traffic to control the cost. In this paper, we examine a new approach to enhance BitTorrent traffic locality, biased neighbor selection, in which a peer chooses the majority, but not all, of its neighbors from peers within the same ISP. Using simulations, we show that biased neighbor selection maintains the nearly optimal performance of BitTorrent in a variety of environments, and fundamentally reduces the cross-ISP traffic by eliminating the traffic's linear growth with the number of peers. Key to its performance is the rarest first piece replication algorithm used by BitTorrent clients. Compared with existing locality-enhancing approaches such as bandwidth limiting, gateway peers, and caching, biased neighbor selection requires no dedicated servers and scales to a large number of BitTorrent networks


Free Riding in BitTorrent is Cheap (PDF)
by Thomas Locher, Patrick Moor, Stefan Schmid, and Roger Wattenhofer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While it is well-known that BitTorrent is vulnerable to selfish behavior, this paper demonstrates that even entire files can be downloaded without reciprocating at all in BitTorrent. To this end, we present BitThief, a free riding client that never contributes any real data. First, we show that simple tricks suffice in order to achieve high download rates, even in the absence of seeders. We also illustrate how peers in a swarm react to various sophisticated attacks. Moreover, our analysis reveals that sharing communities, communities originally intended to offer downloads of good quality and to promote cooperation among peers, provide many incentives to cheat


Influences on cooperation in BitTorrent communities (PDF)
by Nazareno Andrade, Miranda Mowbray, Aliandro Lima, Gustavo Wagner, and Matei Ripeanu.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We collect BitTorrent usage data across multiple file-sharing communities and analyze the factors that affect users' cooperative behavior. We find evidence that the design of the BitTorrent protocol results in increased cooperative behavior over other P2P protocols used to share similar content (e.g. Gnutella). We also investigate two additional community-specific mechanisms that foster even more cooperation


Incentives in BitTorrent Induce Free Riding (PDF)
by Seung Jun and Mustaque Ahamad.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We investigate the incentive mechanism of BitTorrent, which is a peer-to-peer file distribution system. As downloaders in BitTorrent are faced with the conflict between the eagerness to download and the unwillingness to upload, we relate this problem to the iterated prisoner's dilemma, which suggests guidelines to design a good incentive mechanism. Based on these guidelines, we propose a new, simple incentive mechanism. Our analysis and the experimental results using PlanetLab show that the original incentive mechanism of BitTorrent can induce free riding because it is not effective in rewarding and punishing downloaders properly. In contrast, a new mechanism proposed by us is shown to be more robust against free riders


Some observations on BitTorrent performance (PDF)
by Ashwin R. Bharambe, Cormac Herley, and Venkata N. Padmanabhan.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present a simulation-based study of BitTorrent. Our results confirm that BitTorrent performs near-optimally in terms of uplink bandwidth utilization and download time, except under certain extreme conditions. On fairness, however, our work shows that low bandwidth peers systematically download more than they upload to the network when high bandwidth peers are present. We find that the rate-based tit-for-tat policy is not effective in preventing unfairness. We show how simple changes to the tracker and a stricter, block-based tit-for-tat policy, greatly improves fairness, while maintaining high utilization


The BitTorrent P2P File-sharing System: Measurements and Analysis (PDF)
by Johan Pouwelse, Pawel Garbacki, Dick H. J. Epema, and Henk J. Sips.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems


Dissecting BitTorrent: Five Months in a Torrent's Lifetime (PDF)
by Mikel Izal, Guillaume Urvoy-Keller, E W Biersack, Pascal Felber, Anwar Al Hamra, and L Garcés-Erice.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer based replication systems, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month-long period that involved thousands of peers


Incentives build robustness in BitTorrent (PDF)
by Bram Cohen.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

The BitTorrent file distribution system uses tit-for-tat as a method of seeking Pareto efficiency. It achieves a higher level of robustness and resource utilization than any currently known cooperative technique. We explain what BitTorrent does, and how economic methods are used to achieve that goal


Bitwise Sharing

Multiparty Computation for Interval, Equality, and Comparison Without Bit-Decomposition Protocol (PDF)
by Takashi Nishide and Kazuo Ohta.
Book. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Damgård et al. [11] showed a novel technique to convert a polynomial sharing of secret a into the sharings of the bits of a in constant rounds, which is called the bit-decomposition protocol. The bit-decomposition protocol is a very powerful tool because it enables bit-oriented operations even if shared secrets are given as elements in the field. However, the bit-decomposition protocol is relatively expensive. In this paper, we present a simplified bit-decomposition protocol by analyzing the original protocol. Moreover, we construct more efficient protocols for a comparison, interval test and equality test of shared secrets without relying on the bit-decomposition protocol though it seems essential to such bit-oriented operations. The key idea is that we do computation on secret a with c and r where c = a + r, c is a revealed value, and r is a random bitwise-shared secret. The outputs of these protocols are also shared without being revealed. The realized protocols as well as the original protocol are constant-round and run with fewer communication rounds and less data communication than those of [11]. For example, the round complexities are reduced by a factor of approximately 3 to 10


Bloom filter

Bloom filters and overlays for routing in pocket switched networks (PDF)
by Christoph P. Mayer.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Pocket Switched Networks (PSN) [3] have become a promising approach for providing communication between scarcely connected human-carried devices. Such devices, e.g. mobile phones or sensor nodes, are exposed to human mobility and can therewith leverage inter-human contacts for store-and-forward routing. Efficiently routing in such delay tolerant networks is complex due to incomplete knowledge about the network, and high dynamics of the network. In this work we want to develop an extension of Bloom filters for resource-efficient routing in pocket switched networks. Furthermore, we argue that PSNs may become densely populated in special situations. We want to exploit such situations to perform collaborative calculations of forwarding-decisions. In this paper we present a simple scheme for distributed decision calculation using overlays and a DHT-based distributed variant of Bloom filters


A Quick Introduction to Bloom Filters (PDF)
by Christian Grothoff.
August 2005. (BibTeX entry) (Download bibtex record)
(direct link)


Probabilistic Location and Routing (PDF)
by Sean C. Rhea and John Kubiatowicz.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link)

We propose probabilistic location to enhance the performance of existing peer-to-peer location mechanisms in the case where a replica for the queried data item exists close to the query source. We introduce the attenuated Bloom filter, a lossy distributed index data structure. We describe how to use these data structures for document location and how to maintain them despite document motion. We include a detailed performance study which indicates that our algorithm performs as desired, both finding closer replicas and finding them faster than deterministic algorithms alone
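A compact sketch of the attenuated Bloom filter described above (representing each level's bit array as an integer and taking the hash functions as parameters are simplifications of this illustration): level i summarizes the names reachable within i + 1 hops through a given neighbor, so a query prefers the neighbor whose filter matches at the shallowest level.

    class AttenuatedBloomFilter:
        def __init__(self, depth, m, hash_funcs):
            self.m = m                    # bits per level
            self.hash_funcs = hash_funcs  # functions mapping names to ints
            self.levels = [0] * depth     # each level is one Bloom filter

        def insert(self, name, distance):
            # Record that `name` is reachable at the given hop distance.
            for h in self.hash_funcs:
                self.levels[distance] |= 1 << (h(name) % self.m)

        def may_contain(self, name, distance):
            # May return a false positive, never a false negative.
            return all((self.levels[distance] >> (h(name) % self.m)) & 1
                       for h in self.hash_funcs)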


Space/Time Trade-offs in Hash Coding with Allowable Errors
by Burton H. Bloom.
In Communications of the ACM 13, 1970, pages 422-426. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency
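The trade-off Bloom analyzed has a standard closed form: for m bits of hash area and n stored messages, the error-minimizing number of hash probes and the resulting false-positive frequency are approximately

    k^{*} = \frac{m}{n} \ln 2, \qquad p^{*} \approx 2^{-k^{*}} \approx 0.6185^{\,m/n}

so, for example, 10 bits per message yield roughly a 1% false-positive rate.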


Bluetooth

On Flow Marking Attacks in Wireless Anonymous Communication Networks (PDF)
by Xinwen Fu, Ye Zhu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
Conference paper. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies the degradation of anonymity in a flow-based wireless mix network under flow marking attacks, in which an adversary embeds a recognizable pattern of marks into wireless traffic flows by electromagnetic interference. We find that traditional mix technologies are not effective in defeating flow marking attacks, and it may take an adversary only a few seconds to recognize the communication relationship between hosts by tracking such artificial marks. Flow marking attacks utilize frequency domain analytical techniques and convert time domain marks into invariant feature frequencies. To counter flow marking attacks, we propose a new countermeasure based on digital filtering technology, and show that this filter-based counter-measure can effectively defend a wireless mix network from flow marking attacks


Using Bluetooth for Informationally Enhanced Environments
by Thomas Fuhrmann and Till Harbaum.
Booklet. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The continued miniaturization in computing and wireless communication is about to make informationally enhanced environments become a reality. Already today, devices like a notebook computer or a personal digital assistant (PDA) can easily connect to the Internet via IEEE 802.11 networks (WaveLAN) or similar technologies provided at so-called hot-spots. In the near future, even smaller devices can join a wireless network to exchange status information or send and receive commands. In this paper, we present sample uses of a generic Bluetooth component that we have developed and that has been successfully integrated into various miniature devices to transmit sensor data or exchange control commands. The use of standard protocols like TCP/IP, Obex, and HTTP simplifies the use of those devices with conventional devices (notebook, PDA, cell-phone) without even requiring special drivers or applications for these devices. While such scenarios have already often been dreamt of, we are able to present a working solution based on small and cost-effective standard elements. We describe two applications that illustrate the power of this approach in the broad area of e-commerce, e-learning, and e-government: the BlueWand, a small, pen-like device that can control Bluetooth devices in its vicinity by simple gestures, and a door plate that can display messages that are posted to it e.g. by a Bluetooth PDA. Keywords: Human-Computer Interaction, Ubiquitous Computing, Wireless Communications (Bluetooth)


BnB-ADOPT

BnB-ADOPT: an asynchronous branch-and-bound DCOP algorithm (PDF)
by William Yeoh, Ariel Felner, and Sven Koenig.
In Journal of Artificial Intelligence Research 38, 2010, pages 85-133. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed constraint optimization (DCOP) problems are a popular way of formulating and solving agent-coordination problems. It is often desirable to solve DCOP problems optimally with memory-bounded and asynchronous algorithms. We introduce Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP algorithm that uses the message passing and communication framework of ADOPT, a well known memory-bounded asynchronous DCOP algorithm, but changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search. Our experimental results show that BnB-ADOPT is up to one order of magnitude faster than ADOPT on a variety of large DCOP problems and faster than NCBB, a memory-bounded synchronous DCOP algorithm, on most of these DCOP problems

[Go to top]

Botnet

Adapting Blackhat Approaches to Increase the Resilience of Whitehat Application Scenarios (PDF)
by Bartlomiej Polot.
Masters, Technische Universität München, 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Broadcast

Lightweight probabilistic broadcast (PDF)
by Patrick Eugster, Rachid Guerraoui, Sidath B. Handurukande, Petr Kouznetsov, and Anne-Marie Kermarrec.
In ACM Trans. Comput. Syst 21, November 2003, pages 341-374. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Byzantine Resilient Sampling

Brahms: Byzantine Resilient Random Membership Sampling (PDF)
by Edward Bortnikov, Maxim Gurevich, Idit Keidar, Gabriel Kliot, and Alexander Shraer.
In Computer Networks Journal (COMNET), Special Issue on Gossiping in Distributed Systems, April 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Byzantine storage

Building secure file systems out of Byzantine storage (PDF)
by David Mazières and Dennis Shasha.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper shows how to implement a trusted network file system on an untrusted server. While cryptographic storage techniques exist that allow users to keep data secret from untrusted servers, this work concentrates on the detection of tampering attacks and stale data. Ideally, users of an untrusted storage server would immediately and unconditionally notice any misbehavior on the part of the server. This ideal is unfortunately not achievable. However, we define a notion of data integrity called fork consistency in which, if the server delays just one user from seeing even a single change by another, the two users will never again see one another's changes—a failure easily detectable with on-line communication. We give a practical protocol for a multi-user network file system called SUNDR, and prove that SUNDR offers fork consistency whether or not the server obeys the protocol
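
A minimal Python sketch of the fork-consistency idea, assuming two clients can compare the operation histories the server showed them out of band; the plain-list histories and names are illustrative stand-ins for SUNDR's signed version structures.

    def is_prefix(a, b):
        """True if history a is a prefix of history b."""
        return len(a) <= len(b) and b[:len(a)] == a

    def detect_fork(history_a, history_b):
        """Under fork consistency, once the server forks two users,
        their histories can never again be prefix-ordered."""
        return not (is_prefix(history_a, history_b) or
                    is_prefix(history_b, history_a))

    # The server hides Alice's third operation from Bob, then accepts a
    # write from Bob: the two views have permanently diverged.
    alice_view = ["op1", "op2", "op3"]
    bob_view = ["op1", "op2", "op4"]
    print(detect_fork(alice_view, bob_view))  # -> True, fork detected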

[Go to top]

CADET

Improving Voice over GNUnet (PDF)
by Christian Ulrich.
B.S., TU Berlin, July 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In contrast to ubiquitous cloud-based solutions, the telephony application GNUnet conversation provides fully-decentralized, secure voice communication and thus impedes mass surveillance. The aim of this thesis is to investigate why GNUnet conversation currently provides poor Quality of Experience under typical wide area network conditions and to propose optimization measures. After network shaping and the initialization of two isolated GNUnet peers had been automated, delay measurements were done. With emulated network characteristics, network delay, cryptography delays and audio codec delays were measured and transmitted speech was recorded. An analysis of the measurement results and a subjective assessment of the speech recordings revealed that extreme outliers occur in most scenarios and impair QoE. Moreover, it was shown that GNUnet conversation introduces a large delay that confines the environment in which good QoE is possible. In the measurement environment, a delay of at least 23 ms always occurred, large parts of which were caused by cryptography. It was shown that optimizations in the cryptography part and other components are possible. Finally, the conditions for currently reaching good QoE were determined and ideas for further investigations were presented

[Go to top]

CADET: Confidential Ad-hoc Decentralized End-to-End Transport (PDF)
by Bartlomiej Polot and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes CADET, a new transport protocol for confidential and authenticated data transfer in decentralized networks. This transport protocol is designed to operate in restricted-route scenarios such as friend-to-friend or ad-hoc wireless networks. We have implemented CADET and evaluated its performance in various network scenarios, compared it to the well-known TCP/IP stack and tested its response to rapidly changing network topologies. While our current implementation is still significantly slower in high-speed low-latency networks, for typical Internet-usage our system provides much better connectivity and security with comparable performance to TCP/IP

[Go to top]

CAN

Selected DHT Algorithms (PDF)
by Stefan Götz, Simon Rieche, and Klaus Wehrle.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different approaches to realizing the basic principles of DHTs have emerged over the last few years. Although they rely on the same fundamental idea, there is a large diversity of methods for both organizing the identifier space and performing routing. The particular properties of each approach can thus be exploited by specific application scenarios and requirements. This overview focuses on the three DHT systems that have received the most attention in the research community: Chord, Pastry, and Content Addressable Networks (CAN). Furthermore, the systems Symphony, Viceroy, and Kademlia are discussed because they exhibit interesting mechanisms and properties beyond those of the first three systems

[Go to top]

A scalable content-addressable network (PDF)
by Sylvia Paul Ratnasamy.
PhD, University of California, Berkeley, 2002. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Exploiting network proximity in distributed hash tables (PDF)
by Miguel Castro, Peter Druschel, and Y. Charlie Hu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Self-organizing peer-to-peer (p2p) overlay networks like CAN, Chord, Pastry and Tapestry (also called distributed hash tables or DHTs) offer a novel platform for a variety of scalable and decentralized distributed applications. These systems provide efficient and fault-tolerant routing, object location, and load balancing within a self-organizing overlay network. One important aspect of these systems is how they exploit network proximity in the underlying Internet. Three basic approaches have been proposed to exploit network proximity in DHTs: geographic layout, proximity routing and proximity neighbor selection. In this position paper, we briefly discuss the three approaches, contrast their strengths and shortcomings, and consider their applicability in the different DHT routing protocols. We conclude that proximity neighbor selection, when used in DHTs with prefix-based routing like Pastry and Tapestry, is highly effective and appears to dominate the other approaches
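
As a rough Python illustration of proximity neighbor selection, the sketch below fills one routing-table slot by picking, among all candidates that satisfy the identifier (prefix) constraint, the node with the lowest measured latency; the node ids and latency table are made up.

    def pns_pick(candidates, slot_prefix, rtt_ms):
        """Pick the topologically closest node among the valid ones."""
        valid = [n for n in candidates if n.startswith(slot_prefix)]
        return min(valid, key=rtt_ms.get) if valid else None

    nodes = ["0101", "0110", "0111", "1010"]
    rtt_ms = {"0101": 80, "0110": 12, "0111": 45, "1010": 5}
    print(pns_pick(nodes, "01", rtt_ms))  # -> "0110", nearest valid neighbor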

[Go to top]

A scalable content-addressable network (PDF)
by Sylvia Paul Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Hash tables–which map "keys" onto "values"–are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation
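
A minimal Python sketch of the greedy routing step underlying CAN, assuming a 2-dimensional unit torus and reducing each node's zone to a single point; the coordinates and neighbor lists are invented for illustration.

    def torus_dist(p, q):
        """Euclidean distance on the unit torus (per-coordinate wrap-around)."""
        return sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                   for a, b in zip(p, q)) ** 0.5

    def route_hop(key_point, current, neighbors):
        """One greedy hop: forward to the neighbor closest to the key,
        or stop if no neighbor improves on the current node."""
        best = min(neighbors[current], key=lambda n: torus_dist(n, key_point))
        if torus_dist(best, key_point) < torus_dist(current, key_point):
            return best
        return current   # current node owns the key's zone

    # Hypothetical 4-node overlay; each node knows only its zone neighbors.
    neighbors = {
        (0.25, 0.25): [(0.75, 0.25), (0.25, 0.75)],
        (0.75, 0.25): [(0.25, 0.25), (0.75, 0.75)],
        (0.25, 0.75): [(0.25, 0.25), (0.75, 0.75)],
        (0.75, 0.75): [(0.75, 0.25), (0.25, 0.75)],
    }
    print(route_hop((0.8, 0.7), (0.25, 0.25), neighbors))  # -> (0.75, 0.25)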

[Go to top]

Application-Level Multicast Using Content-Addressable Networks (PDF)
by Sylvia Paul Ratnasamy, Mark Handley, Richard Karp, and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most currently proposed solutions to application-level multicast organise the group members into an application-level mesh over which a Distance-Vector routing protocol, or a similar algorithm, is used to construct source-rooted distribution trees. The use of a global routing protocol limits the scalability of these systems. Other proposed solutions that scale to larger numbers of receivers do so by restricting the multicast service model to be single-sourced. In this paper, we propose an application-level multicast scheme capable of scaling to large group sizes without restricting the service model to a single source. Our scheme builds on recent work on Content-Addressable Networks (CANs). Extending the CAN framework to support multicast comes at trivial additional cost and, because of the structured nature of CAN topologies, obviates the need for a multicast routing algorithm. Given the deployment of a distributed infrastructure such as a CAN, we believe our CAN-based multicast scheme offers the dual advantages of simplicity and scalability

[Go to top]

CPFR

Secure Collaborative Planning, Forecasting, and Replenishment (PDF)
by Mikhail Atallah, Marina Blanton, Vinayak Deshpand, Keith Frikken, Jiangtao Li, and Leroy Schwarz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Although the benefits of information sharing between supply-chain partners are well known, many companies are averse to share their private information due to fear of adverse impact of information leakage. This paper uses techniques from Secure Multiparty Computation (SMC) to develop secure protocols for the CPFR (Collaborative Planning, Forecasting, and Replenishment) business process. The result is a process that permits supply-chain partners to capture all of the benefits of information-sharing and collaborative decision-making, but without disclosing their private demand signal (e.g., promotions) and cost information to one another. In our collaborative CPFR scenario, the retailer and supplier engage in SMC protocols that result in: (1) a forecast that uses both the retailer's and the supplier's observed demand signals to better forecast demand; and (2) prescribed order/shipment quantities based on system-wide costs and inventory levels (and on the joint forecasts) that minimize supply-chain expected cost/period. Our contributions are as follows: (1) we demonstrate that CPFR can be securely implemented without disclosing the private information of either partner; (2) we show that the CPFR business process is not incentive compatible without transfer payments and develop an incentive-compatible linear transfer-payment scheme for collaborative forecasting; (3) we demonstrate that our protocols are not only secure (i.e., privacy preserving), but that neither partner is able to make accurate inferences about the other's future demand signals from the outputs of the protocols; and (4) we illustrate the benefits of secure collaboration using simulation

[Go to top]

CPU

POSIX–Portable Operating System Interface
by The Open Group and IEEE.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Celeste

Deleting files in the Celeste peer-to-peer storage system (PDF)
by Gal Badishi, Germano Caronni, Idit Keidar, Raphael Rom, and Glenn Scott.
In Journal of Parallel and Distributed Computing 69, July 2009, pages 613-622. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Celeste is a robust peer-to-peer object store built on top of a distributed hash table (DHT). Celeste is a working system, developed by Sun Microsystems Laboratories. During the development of Celeste, we faced the challenge of complete object deletion, and moreover, of deleting ''files'' composed of several different objects. This important problem is not solved by merely deleting meta-data, as there are scenarios in which all file contents must be deleted, e.g., due to a court order. Complete file deletion in a realistic peer-to-peer storage system has not been previously dealt with due to the intricacy of the problem–the system may experience high churn rates, nodes may crash or have intermittent connectivity, and the overlay network may become partitioned at times. We present an algorithm that eventually deletes all file contents, data and meta-data, in the aforementioned complex scenarios. The algorithm is fully functional and has been successfully integrated into Celeste

[Go to top]

Chord

Performance evaluation of chord in mobile ad hoc networks (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile peer-to-peer applications recently have received growing interest. However, it is often assumed that structured peer-to-peer overlays cannot efficiently operate in mobile ad hoc networks (MANETs). The prevailing opinion is that this is due to the protocols' high overhead cost. In this paper, we show that this opinion is misguided. We present a thorough simulation study evaluating Chord in the well-known MANET simulator GloMoSim. We found the main issue of deploying Chord in a MANET not to be its overhead, but rather the protocol's pessimistic timeout and failover strategy. This strategy enables fast lookup resolution in spite of highly dynamic node membership, which is a significant problem in the Internet context. However, with the inherently higher packet loss rate in a MANET, this failover strategy results in lookups being inconsistently forwarded even if node membership does not change

[Go to top]

Combining Virtual and Physical Structures for Self-organized Routing (PDF)
by Thomas Fuhrmann.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Our recently proposed scalable source routing (SSR) protocol combines source routing in the physical network with Chord-like routing in the virtual ring that is formed by the address space. Thereby, SSR provides self-organized routing in large unstructured networks of resource-limited devices. Its ability to quickly adapt to changes in the network topology makes it suitable not only for sensor-actuator networks but also for mobile ad-hoc networks. Moreover, SSR directly provides the key-based routing semantics, thereby making it an efficient basis for the scalable implementation of self-organizing, fully decentralized applications. In this paper we review SSR's self-organizing features and demonstrate how the combination of virtual and physical structures leads to emergence of stability and efficiency. In particular, we focus on SSR's resistance against node churn. Following the principle of combining virtual and physical structures, we propose an extension that stabilizes SSR in face of heavy node churn. Simulations demonstrate the effectiveness of this extension

[Go to top]

A Self-Organizing Job Scheduling Algorithm for a Distributed VDR (PDF)
by Kendy Kutzner, Curt Cramer, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In [CKF04], we have reported on our concept of a peer-to-peer extension to the popular video disk recorder (VDR) [Sch04], the Distributed Video Disk Recording (DVDR) system. The DVDR is a collaboration system of existing video disk recorders via a peer-to-peer network. There, the VDRs communicate about the tasks to be done and distribute the recordings afterwards. In this paper, we report on lessons learnt during its implementation and explain the considerations leading to the design of a new job scheduling algorithm. DVDR is an application which is based on a distributed hash table (DHT) employing proximity route selection (PRS)/proximity neighbor selection (PNS). For our implementation, we chose to use Chord [SMK+01, GGG+03]. Using a DHT with PRS/PNS yields two important features: (1) Each hashed key is routed to exactly one destination node within the system. (2) PRS/PNS forces messages originating in one region of the network destined to the same key to be routed through exactly one node in that region (route convergence). The first property enables per-key aggregation trees, with a tree being rooted at the node which is responsible for the respective key. This node serves as a rendezvous point. The second property leads to locality (i.e., low latency) in this aggregation tree

[Go to top]

Selected DHT Algorithms (PDF)
by Stefan Götz, Simon Rieche, and Klaus Wehrle.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different approaches to realizing the basic principles of DHTs have emerged over the last few years. Although they rely on the same fundamental idea, there is a large diversity of methods for both organizing the identifier space and performing routing. The particular properties of each approach can thus be exploited by specific application scenarios and requirements. This overview focuses on the three DHT systems that have received the most attention in the research community: Chord, Pastry, and Content Addressable Networks (CAN). Furthermore, the systems Symphony, Viceroy, and Kademlia are discussed because they exhibit interesting mechanisms and properties beyond those of the first three systems

[Go to top]

Non-transitive connectivity and DHTs (PDF)
by Michael J. Freedman, Karthik Lakshminarayanan, Sean C. Rhea, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The most basic functionality of a distributed hash table, or DHT, is to partition a key space across the set of nodes in a distributed system such that all nodes agree on the partitioning. For example, the Chord DHT assigns each node an identifier from a circular key space and maps each key to the node whose identifier most closely follows it on the circle

[Go to top]

Making chord robust to byzantine attacks (PDF)
by Amos Fiat, Jared Saia, and Maxwell Young.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Chord is a distributed hash table (DHT) that requires only O(log n) links per node and performs searches with latency and message cost O(log n), where n is the number of peers in the network. Chord assumes all nodes behave according to protocol. We give a variant of Chord which is robust with high probability for any time period during which: 1) there are always at least z total peers in the network for some integer z; 2) there are never more than (1/4 − ε)z Byzantine peers in the network for a fixed ε > 0; and 3) the number of peer insertion and deletion events is no more than z^k for some tunable parameter k. We assume there is an adversary controlling the Byzantine peers and that the IP-addresses of all the Byzantine peers and the locations where they join the network are carefully selected by this adversary. Our notion of robustness is rather strong in that we not only guarantee that searches can be performed but also that we can enforce any set of proper behavior such as contributing new material, etc. In comparison to Chord, the resources required by this new variant are only a polylogarithmic factor greater in communication, messaging, and linking costs

[Go to top]

The Hybrid Chord Protocol: A Peer-to-peer Lookup Service for Context-Aware Mobile Applications (PDF)
by Stefan Zöls, Rüdiger Schollmeier, Wolfgang Kellerer, and Anthony Tarlano.
In IEEE ICN, Reunion Island, April 2005. LNCS 3421, 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental problem in Peer-to-Peer (P2P) overlay networks is how to efficiently find a node that shares a requested object. The Chord protocol is a distributed lookup protocol addressing this problem using hash keys to identify the nodes in the network and also the shared objects. However, when a node joins or leaves the Chord ring, object references have to be rearranged in order to maintain the hash key mapping rules. This leads to a heavy traffic load, especially when nodes stay in the Chord ring only for a short time. In mobile scenarios storage capacity, transmission data rate and battery power are limited resources, so the heavy traffic load generated by the shifting of object references can lead to severe problems when using Chord in a mobile scenario. In this paper, we present the Hybrid Chord Protocol (HCP). HCP solves the problem of frequent joins and leaves of nodes. As a further improvement of an efficient search, HCP supports the grouping of shared objects in interest groups. Our concept of using information profiles to describe shared objects allows defining special interest groups (context spaces) and a shared object to be available in multiple context spaces

[Go to top]

Heterogeneity and Load Balance in Distributed Hash Tables (PDF)
by Brighten Godfrey and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing solutions to balance load in DHTs incur a high overhead either in terms of routing state or in terms of load movement generated by nodes arriving or departing the system. In this paper, we propose a set of general techniques and use them to develop a protocol based on Chord, called Y0, that achieves load balancing with minimal overhead under the typical assumption that the load is uniformly distributed in the identifier space. In particular, we prove that Y0 can achieve near-optimal load balancing, while moving little load to maintain the balance and increasing the size of the routing tables by at most a constant factor

[Go to top]

Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications (PDF)
by Ion Stoica, Robert Morris, David Karger, Frans M. Kaashoek, and Hari Balakrishnan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Efficiently determining the node that stores a data item in a distributed network is an important and challenging problem. This paper describes the motivation and design of the Chord system, a decentralized lookup service that stores key/value pairs for such networks. The Chord protocol takes as input an m-bit identifier (derived by hashing a higher-level application specific key), and returns the node that stores the value corresponding to that key. Each Chord node is identified by an m-bit identifier and each node stores the key identifiers in the system closest to the node's identifier. Each node maintains an m-entry routing table that allows it to look up keys efficiently. Results from theoretical analysis, simulations, and experiments show that Chord is incrementally scalable, with insertion and lookup costs scaling logarithmically with the number of Chord nodes
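
The lookup described above can be sketched in Python on a toy 6-bit identifier circle: fingers follow the rule finger[i] = successor(n + 2^i), and a lookup repeatedly jumps to the closest finger preceding the key. The node set is made up, and real Chord resolves each hop with a message rather than shared state.

    M = 6                                    # 6-bit ids, circle of size 64
    nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]

    def successor(k):
        """First node at or clockwise after identifier k."""
        k %= 2 ** M
        return next((n for n in nodes if n >= k), nodes[0])

    def fingers(n):
        return [successor(n + 2 ** i) for i in range(M)]

    def in_interval(x, a, b):
        """x strictly inside (a, b) on the circle."""
        return (a < x < b) if a < b else (x > a or x < b)

    def find_successor(n, key):
        path = [n]
        while True:
            nxt = successor(n + 1)           # n's immediate successor
            if key == nxt or in_interval(key, n, nxt):
                return nxt, path
            # jump to the closest finger preceding the key
            n = next(f for f in reversed(fingers(n)) if in_interval(f, n, key))
            path.append(n)

    print(find_successor(8, 54))             # -> (56, [8, 42, 51])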

[Go to top]

Circuits

Towards Empirical Aspects of Secure Scalar Product (PDF)
by I-Cheng Wang, Chih-Hao Shen, Tsan-sheng Hsu, Churn-Chung Liao, Da-Wei Wang, and J. Zhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Privacy is ultimately important, and there is a fair amount of research about it. However, few empirical studies about the cost of privacy have been conducted. In the area of secure multiparty computation, the scalar product has long been reckoned as one of the most promising building blocks in place of the classic logic gates. The reason is not only that the scalar product is complete, which makes it as expressive as logic gates, but also that it is much more efficient than logic gates. As a result, we set out to study the computation and communication resources needed for some of the most well-known and frequently cited secure scalar-product protocols, including the composite-residuosity, the invertible-matrix, the polynomial-sharing, and the commodity-based approaches. Besides implementation remarks on these approaches, we analyze and compare their execution time, computation time, and random number consumption, which are the resources of greatest concern when talking about secure protocols. Moreover, Fairplay, the benchmark approach implementing Yao's famous circuit evaluation protocol, is included in our experiments in order to demonstrate the potential for the scalar product to replace logic gates
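
Of the protocol families compared here, the commodity-based approach is the easiest to sketch: a semi-trusted commodity server deals correlated randomness offline, after which two messages let Alice learn the scalar product while neither party sees the other's vector. The Python toy below works over a prime field; the modulus and all names are illustrative, and the server is assumed not to collude with either party.

    import random

    P = 2 ** 61 - 1                          # illustrative prime modulus

    def rand_vec(n):
        return [random.randrange(P) for _ in range(n)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % P

    def commodity_server(n):
        """Deal (Ra, ra) to Alice and (Rb, rb) to Bob with ra + rb = Ra.Rb."""
        Ra, Rb = rand_vec(n), rand_vec(n)
        ra = random.randrange(P)
        rb = (dot(Ra, Rb) - ra) % P
        return (Ra, ra), (Rb, rb)

    def secure_dot(x, y):
        (Ra, ra), (Rb, rb) = commodity_server(len(x))
        X = [(a + r) % P for a, r in zip(x, Ra)]   # Alice -> Bob, x masked
        Y = [(b + r) % P for b, r in zip(y, Rb)]   # Bob -> Alice, y masked
        s = (dot(X, y) + rb) % P                   # Bob -> Alice, blinded by rb
        # Identity: x.y = X.y - Ra.Y + ra + rb, and Alice knows each term.
        return (s - dot(Ra, Y) + ra) % P           # Alice's result

    x, y = [3, 1, 4], [1, 5, 9]
    assert secure_dot(x, y) == dot(x, y)           # 3 + 5 + 36 = 44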

[Go to top]

CliqueNet

CliqueNet: A Self-Organizing, Scalable, Peer-to-Peer Anonymous Communication Substrate (PDF)
by Emin Gün Sirer, Milo Polte, and Mark Robson.
In unknown, 2001. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is critical for many networked applications. Yet current Internet protocols provide no support for masking the identity of communication endpoints. This paper outlines a design for a peer-to-peer, scalable, tamper-resilient communication protocol that provides strong anonymity and privacy. Called CliqueNet, our protocol provides an information-theoretic guarantee: an omnipotent adversary that can wiretap at any location in the network cannot determine the sender of a packet beyond a clique, that is, a set of k hosts, where k is an anonymizing factor chosen by the participants. CliqueNet is resilient to jamming by malicious hosts and can scale with the number of participants. This paper motivates the need for an anonymous communication layer and describes the self-organizing, novel divide-and-conquer approach that enables CliqueNet to scale while offering a strong anonymity guarantee. CliqueNet is widely applicable as a communication substrate for peer-to-peer applications that require anonymity, privacy and anti-censorship guarantees

[Go to top]

Clustering algorithms

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Communication

A Secure and Resilient Communication Infrastructure for Decentralized Networking Applications (PDF)
by Matthias Wachs.
PhD, Technische Universität München, February 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis provides the design and implementation of a secure and resilient communication infrastructure for decentralized peer-to-peer networks. The proposed communication infrastructure tries to overcome limitations to unrestricted communication on today's Internet and has the goal of re-establishing unhindered communication between users. With the GNU name system, we present a fully decentralized, resilient, and privacy-preserving alternative to DNS and existing security infrastructures

[Go to top]

Complexity

Designing Economics Mechanisms
by Leonid Hurwicz and Stanley Reiter.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

A mechanism is a mathematical structure that models institutions through which economic activity is guided and coordinated. There are many such institutions; markets are the most familiar ones. Lawmakers, administrators and officers of private companies create institutions in order to achieve desired goals. They seek to do so in ways that economize on the resources needed to operate the institutions, and that provide incentives that induce the required behaviors. This book presents systematic procedures for designing mechanisms that achieve specified performance, and economize on the resources required to operate the mechanism. The systematic design procedures are algorithms for designing informationally efficient mechanisms. Most of the book deals with these procedures of design. When there are finitely many environments to be dealt with, and there is a Nash-implementing mechanism, our algorithms can be used to make that mechanism into an informationally efficient one. Informationally efficient dominant strategy implementation is also studied. Leonid Hurwicz was awarded the 2007 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with colleagues Eric Maskin and Roger Myerson, for his work on the effectiveness of markets

[Go to top]

Computational Geometry

Designing Economics Mechanisms
by Leonid Hurwicz and Stanley Reiter.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

A mechanism is a mathematical structure that models institutions through which economic activity is guided and coordinated. There are many such institutions; markets are the most familiar ones. Lawmakers, administrators and officers of private companies create institutions in order to achieve desired goals. They seek to do so in ways that economize on the resources needed to operate the institutions, and that provide incentives that induce the required behaviors. This book presents systematic procedures for designing mechanisms that achieve specified performance, and economize on the resources required to operate the mechanism. The systematic design procedures are algorithms for designing informationally efficient mechanisms. Most of the book deals with these procedures of design. When there are finitely many environments to be dealt with, and there is a Nash-implementing mechanism, our algorithms can be used to make that mechanism into an informationally efficient one. Informationally efficient dominant strategy implementation is also studied. Leonid Hurwicz was awarded the 2007 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with colleagues Eric Maskin and Roger Myerson, for his work on the effectiveness of markets

[Go to top]

Computational efficiency

Towards Empirical Aspects of Secure Scalar Product (PDF)
by I-Cheng Wang, Chih-Hao Shen, Tsan-sheng Hsu, Churn-Chung Liao, Da-Wei Wang, and J. Zhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Privacy is ultimately important, and there is a fair amount of research about it. However, few empirical studies about the cost of privacy have been conducted. In the area of secure multiparty computation, the scalar product has long been reckoned as one of the most promising building blocks in place of the classic logic gates. The reason is not only that the scalar product is complete, which makes it as expressive as logic gates, but also that it is much more efficient than logic gates. As a result, we set out to study the computation and communication resources needed for some of the most well-known and frequently cited secure scalar-product protocols, including the composite-residuosity, the invertible-matrix, the polynomial-sharing, and the commodity-based approaches. Besides implementation remarks on these approaches, we analyze and compare their execution time, computation time, and random number consumption, which are the resources of greatest concern when talking about secure protocols. Moreover, Fairplay, the benchmark approach implementing Yao's famous circuit evaluation protocol, is included in our experiments in order to demonstrate the potential for the scalar product to replace logic gates

[Go to top]

Computer Algebra

Designing Economics Mechanisms
by Leonid Hurwicz and Stanley Reiter.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

A mechanism is a mathematical structure that models institutions through which economic activity is guided and coordinated. There are many such institutions; markets are the most familiar ones. Lawmakers, administrators and officers of private companies create institutions in order to achieve desired goals. They seek to do so in ways that economize on the resources needed to operate the institutions, and that provide incentives that induce the required behaviors. This book presents systematic procedures for designing mechanisms that achieve specified performance, and economize on the resources required to operate the mechanism. The systematic design procedures are algorithms for designing informationally efficient mechanisms. Most of the book deals with these procedures of design. When there are finitely many environments to be dealt with, and there is a Nash-implementing mechanism, our algorithms can be used to make that mechanism into an informationally efficient one. Informationally efficient dominant strategy implementation is also studied. Leonid Hurwicz was awarded the 2007 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with colleagues Eric Maskin and Roger Myerson, for his work on the effectiveness of markets

[Go to top]

Computer Science - Cryptography and Security

reclaimID: Secure, Self-Sovereign Identities using Name Systems and Attribute-Based Encryption
by M. Schanzenbach, G. Bramm, and J. Schütte.
In the Proceedings of 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present reclaimID: An architecture that allows users to reclaim their digital identities by securely sharing identity attributes without the need for a centralised service provider. We propose a design where user attributes are stored in and shared over a name system under user-owned namespaces. Attributes are encrypted using attribute-based encryption (ABE), allowing the user to selectively authorize and revoke access of requesting parties to subsets of his attributes. We present an implementation based on the decentralised GNU Name System (GNS) in combination with ciphertext-policy ABE using type-1 pairings. To show the practicality of our implementation, we carried out experimental evaluations of selected implementation aspects including attribute resolution performance. Finally, we show that our design can be used as a standard OpenID Connect Identity Provider allowing our implementation to be integrated into standard-compliant services

[Go to top]

Computer applications

A Collusion-Resistant Distributed Scalar Product Protocol with Application to Privacy-Preserving Computation of Trust (PDF)
by C.A. Melchor, B. Ait-Salem, and P. Gaborit.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private scalar product protocols have proved to be interesting in various applications such as data mining, data integration, trust computing, etc. In 2007, Yao et al. proposed a distributed scalar product protocol with application to privacy-preserving computation of trust [1]. This protocol is split into two phases: a homomorphic encryption computation and a private multi-party summation protocol. The summation protocol has two drawbacks: first, it generates a non-negligible communication overhead; and second, it introduces a security flaw. The contribution of this present paper is two-fold. We first prove that the protocol of [1] is not secure in the semi-honest model by showing that it is not resistant to collusion attacks, and we give an example of a collusion attack with only four participants. Second, we propose to use a superposed sending round as an alternative to the multi-party summation protocol, which results in better security properties and in a reduction of the communication costs. In particular, regarding security, we show that the previous scheme was vulnerable to collusions of three users, whereas in our proposal we can take any t ∈ [1..n−1] and define a protocol resisting collusions of up to t users
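
A superposed sending round is easy to sketch in Python under the assumption that every pair of participants already shares a random key: each participant publishes its input blinded by its pairwise keys with opposite signs, so the keys cancel in the total and only the sum of inputs remains. This is a toy single-round version; key distribution and the paper's collusion analysis are out of scope.

    import random

    P = 2 ** 61 - 1                              # illustrative prime modulus

    def superposed_sum(inputs):
        n = len(inputs)
        # pairwise keys k[i][j] = k[j][i], assumed agreed in advance
        k = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                k[i][j] = k[j][i] = random.randrange(P)
        announcements = []
        for i in range(n):
            blind = (sum(k[i][j] for j in range(i + 1, n))
                     - sum(k[i][j] for j in range(i)))
            announcements.append((inputs[i] + blind) % P)  # random in isolation
        return sum(announcements) % P                      # keys cancel pairwise

    values = [17, 4, 23, 8]
    assert superposed_sum(values) == sum(values)           # -> 52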

[Go to top]

Computer science

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Conferences

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Content Addressable Networks

Selected DHT Algorithms (PDF)
by Stefan Götz, Simon Rieche, and Klaus Wehrle.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different approaches to realizing the basic principles of DHTs have emerged over the last few years. Although they rely on the same fundamental idea, there is a large diversity of methods for both organizing the identifier space and performing routing. The particular properties of each approach can thus be exploited by specific application scenarios and requirements. This overview focuses on the three DHT systems that have received the most attention in the research community: Chord, Pastry, and Content Addressable Networks (CAN). Furthermore, the systems Symphony, Viceroy, and Kademlia are discussed because they exhibit interesting mechanisms and properties beyond those of the first three systems

[Go to top]

Costs

Towards Empirical Aspects of Secure Scalar Product (PDF)
by I-Cheng Wang, Chih-Hao Shen, Tsan-sheng Hsu, Churn-Chung Liao, Da-Wei Wang, and J. Zhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Privacy is ultimately important, and there is a fair amount of research about it. However, few empirical studies about the cost of privacy have been conducted. In the area of secure multiparty computation, the scalar product has long been reckoned as one of the most promising building blocks in place of the classic logic gates. The reason is not only that the scalar product is complete, which makes it as expressive as logic gates, but also that it is much more efficient than logic gates. As a result, we set out to study the computation and communication resources needed for some of the most well-known and frequently cited secure scalar-product protocols, including the composite-residuosity, the invertible-matrix, the polynomial-sharing, and the commodity-based approaches. Besides implementation remarks on these approaches, we analyze and compare their execution time, computation time, and random number consumption, which are the resources of greatest concern when talking about secure protocols. Moreover, Fairplay, the benchmark approach implementing Yao's famous circuit evaluation protocol, is included in our experiments in order to demonstrate the potential for the scalar product to replace logic gates

[Go to top]

SURF-2: A program for dependability evaluation of complex hardware and software systems
by C. Beounes, M. Aguera, J. Arlat, S. Bachmann, C. Bourdeau, J. -. Doucet, K. Kanoun, J. -. Laprie, S. Metge, J. Moreira de Souza, D. Powell, and P. Spiesser.
In the Proceedings of FTCS-23 The Twenty-Third International Symposium on Fault-Tolerant Computing, June 1993, pages 668-673. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SURF-2, a software tool for evaluating system dependability, is described. It is especially designed for an evaluation-based system design approach in which multiple design solutions need to be compared from the dependability viewpoint. System behavior may be modeled either by Markov chains or by generalized stochastic Petri nets. The tool supports the evaluation of different measures of dependability, including pointwise measures, asymptotic measures, mean sojourn times and, by superposing a reward structure on the behavior model, reward measures such as expected performance or cost

[Go to top]

Covariance matrix

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Crowds

The Wisdom of Crowds: Attacks and Optimal Constructions (PDF)
by George Danezis, Claudia Diaz, Emilia Käsper, and Carmela Troncoso.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a traffic analysis of the ADU anonymity scheme presented at ESORICS 2008, and the related RADU scheme. We show that optimal attacks are able to de-anonymize messages more effectively than believed before. Our analysis applies to single messages as well as long term observations using multiple messages. The search for a better scheme is bound to fail, since we prove that the original Crowds anonymity system provides the best security for any given mean messaging latency. Finally we present D-Crowds, a scheme that supports any path length distribution, while leaking the least possible information, and quantify the optimal attacks against it

[Go to top]

An Analysis of the Degradation of Anonymous Protocols (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There have been a number of protocols proposed for anonymous network communication. In this paper we investigate attacks by corrupt group members that degrade the anonymity of each protocol over time. We prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, Web Mixes, and DC-Net, can maintain anonymity in the face of the attacks described. Our results show that fully-connected DC-Net is the most resilient to these attacks, but it suffers from scalability issues that keep anonymity group sizes small. Additionally, we show how violating an assumption of the attack allows malicious users to set up other participants to falsely appear to be the initiator of a connection

[Go to top]

Crowds: Anonymity for web transactions (PDF)
by Michael K. Reiter and Aviel D. Rubin.
In ACM Transactions on Information and System Security 1, 1998, pages 66-92. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Crowds is a system that allows anonymous web-surfing. For each host, a random static path through the crowd is formed, which then acts as a sequence of proxies, indirecting requests and replies. Crowds is vulnerable to adversaries that can perform traffic analysis at the local node and does not provide responder anonymity, but it is highly scalable and efficient
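
The forwarding rule is simple to simulate: each member passes the request to a randomly chosen member with probability p_f and otherwise delivers it to the server, so path lengths are geometrically distributed. The Python sketch below assumes a fixed membership list and p_f = 0.75.

    import random

    def build_path(members, p_f=0.75):
        path = [random.choice(members)]          # initiator picks first jondo
        while random.random() < p_f:
            path.append(random.choice(members))  # keep forwarding in the crowd
        return path                              # last member contacts server

    random.seed(1)
    print(build_path(list(range(10))))
    # expected members on a path: 1 / (1 - p_f) = 4 for p_f = 0.75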

[Go to top]

Curve25519

Curve25519: new Diffie-Hellman speed records (PDF)
by Daniel J. Bernstein.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

DC-net

The dining cryptographers in the disco: unconditional sender and recipient untraceability with computationally secure serviceability (PDF)
by Michael Waidner and Birgit Pfitzmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In Journal of Cryptology 1/1 (1988) 65-75 (= [Chau_88]), David Chaum describes a beautiful technique, the DC-net, which should allow participants to send and receive messages anonymously in an arbitrary network. The untraceability of the senders is proved to be unconditional, but that of the recipients implicitly assumes a reliable broadcast network. This assumption is unrealistic in some networks, but it can be removed completely by using the fail-stop key generation schemes by Waidner (these proceedings, = [Waid_89]). In both cases, however, each participant can untraceably and permanently disrupt the entire DC-net. We present a protocol which guarantees unconditional untraceability, the original goal of the DC-net, on the inseparability assumption (i.e. the attacker must be unable to prevent honest participants from communicating, which is considerably less than reliable broadcast), and computationally secure serviceability: computationally restricted disrupters can be identified and removed from the DC-net. On the one hand, our solution is based on the lovely idea by David Chaum [Chau_88 2.5] of setting traps for disrupters. He suggests a scheme to guarantee unconditional untraceability and computationally secure serviceability, too, but on the reliable broadcast assumption. The same scheme seems to be used by Bos and den Boer (these proceedings, = [BoBo_89]). We show that this scheme needs some changes and refinements before being secure, even on the reliable broadcast assumption. On the other hand, our solution is based on the idea of digital signatures whose forgery by an unexpectedly powerful attacker is provable, which might be of independent interest. We propose such a (one-time) signature scheme based on claw-free permutation pairs; the forgery of signatures is equivalent to finding claws, thus in a special case to the factoring problem. In particular, with such signatures we can, for the first time, realize fail-stop Byzantine Agreement, and also adaptive Byzantine Agreement, i.e. Byzantine Agreement which can only be disrupted by an attacker who controls at least a third of all participants and who can forge signatures. We also sketch applications of these signatures to a payment system, solving disputes about shared secrets, and signatures which cannot be shown round
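
A toy round of the underlying DC-net may help fix ideas: each adjacent pair of participants shares a secret coin, every participant announces the XOR of its two coins, and the sender additionally XORs in the message bit, so the XOR of all announcements yields the bit without identifying the sender. The ring-shaped key sharing in this Python sketch is one simple choice, not the paper's general setting.

    import random

    def dc_round(n, sender, bit):
        coins = [random.randrange(2) for _ in range(n)]   # coin i: pair (i, i+1)
        announce = [coins[i] ^ coins[(i - 1) % n] for i in range(n)]
        announce[sender] ^= bit                           # sender flips its output
        result = 0
        for a in announce:
            result ^= a                                   # every coin cancels
        return result

    assert dc_round(3, sender=1, bit=1) == 1              # bit is recovered...
    assert dc_round(3, sender=2, bit=0) == 0              # ...sender stays hidden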

[Go to top]

DC-network

CliqueNet: A Self-Organizing, Scalable, Peer-to-Peer Anonymous Communication Substrate (PDF)
by Emin Gün Sirer, Milo Polte, and Mark Robson.
In unknown, 2001. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is critical for many networked applications. Yet current Internet protocols provide no support for masking the identity of communication endpoints. This paper outlines a design for a peer-to-peer, scalable, tamper-resilient communication protocol that provides strong anonymity and privacy. Called CliqueNet, our protocol provides an information-theoretic guarantee: an omnipotent adversary that can wiretap at any location in the network cannot determine the sender of a packet beyond a clique, that is, a set of k hosts, where k is an anonymizing factor chosen by the participants. CliqueNet is resilient to jamming by malicious hosts and can scale with the number of participants. This paper motivates the need for an anonymous communication layer and describes the self-organizing, novel divide-and-conquer approach that enables CliqueNet to scale while offering a strong anonymity guarantee. CliqueNet is widely applicable as a communication substrate for peer-to-peer applications that require anonymity, privacy and anti-censorship guarantees

[Go to top]

DCOP

BnB-ADOPT: an asynchronous branch-and-bound DCOP algorithm (PDF)
by William Yeoh, Ariel Felner, and Sven Koenig.
In Journal of Artificial Intelligence Research 38, 2010, pages 85-133. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed constraint optimization (DCOP) problems are a popular way of formulating and solving agent-coordination problems. It is often desirable to solve DCOP problems optimally with memory-bounded and asynchronous algorithms. We introduce Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP algorithm that uses the message passing and communication framework of ADOPT, a well-known memory-bounded asynchronous DCOP algorithm, but changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search. Our experimental results show that BnB-ADOPT is up to one order of magnitude faster than ADOPT on a variety of large DCOP problems and faster than NCBB, a memory-bounded synchronous DCOP algorithm, on most of these DCOP problems

[Go to top]

Evaluating the performance of DCOP algorithms in a real world, dynamic problem (PDF)
by Robert Junges and Ana L. C. Bazzan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Complete algorithms have been proposed to solve problems modelled as distributed constraint optimization (DCOP). However, there are only few attempts to address real world scenarios using this formalism, mainly because of the complexity associated with those algorithms. In the present work we compare three complete algorithms for DCOP, aiming at studying how they perform in complex and dynamic scenarios of increasing sizes. In order to assess their performance we measure not only standard quantities such as number of cycles to arrive at a solution, size and quantity of exchanged messages, but also computing time and quality of the solution, which is related to the particular domain we use. This study can shed light on the issues of how the algorithms perform when applied to problems other than those reported in the literature (graph coloring, meeting scheduling, and distributed sensor network)

[Go to top]

Anytime local search for distributed constraint optimization (PDF)
by Roie Zivan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most former studies of Distributed Constraint Optimization Problem (DisCOP) search considered only complete search algorithms, which are practical only for relatively small problems. Distributed local search algorithms can be used for solving DisCOPs. However, because of the differences between the global evaluation of a system's state and the private evaluation of states by agents, agents are unaware of the global best state which is explored by the algorithm. Previous attempts to use local search algorithms for solving DisCOPs reported the state held by the system at the termination of the algorithm, which was not necessarily the best state explored. A general framework for implementing distributed local search algorithms for DisCOPs is proposed. The proposed framework makes use of a BFS-tree in order to accumulate the costs of the system's state in its different steps and to propagate the detection of a new best step when it is found. The resulting framework enhances local search algorithms for DisCOPs with the anytime property. The proposed framework does not require additional network load. Agents are required to hold a small (linear) amount of additional space (besides the requirements of the algorithm in use). The proposed framework preserves privacy at a higher level than complete DisCOP algorithms which make use of a pseudo-tree (ADOPT, DPOP)

[Go to top]

Preprocessing techniques for accelerating the DCOP algorithm ADOPT (PDF)
by Syed Ali, Sven Koenig, and Milind Tambe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Methods for solving Distributed Constraint Optimization Problems (DCOP) have emerged as key techniques for distributed reasoning. Yet, their application faces significant hurdles in many multiagent domains due to their inefficiency. Preprocessing techniques have successfully been used to speed up algorithms for centralized constraint satisfaction problems. This paper introduces a framework of different preprocessing techniques that are based on dynamic programming and speed up ADOPT, an asynchronous complete and optimal DCOP algorithm. We investigate when preprocessing is useful and which factors influence the resulting speedups in two DCOP domains, namely graph coloring and distributed sensor networks. Our experimental results demonstrate that our preprocessing techniques are fast and can speed up ADOPT by an order of magnitude

[Go to top]

Distributed Constraint Optimization as a Formal Model of Partially Adversarial Cooperation (PDF)
by Makoto Yokoo and Edmund H. Durfee.
In unknown (CSE-TR-101-9), 1991. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we argue that partially adversarial and partially cooperative (PARC) problems in distributed artificial intelligence can be mapped into a formalism called distributed constraint optimization problems (DCOPs), which generalize distributed constraint satisfaction problems [Yokoo, et al. 90] by introducing weak constraints (preferences). We discuss several solution criteria for DCOP and clarify the relation between these criteria and different levels of agent rationality [Rosenschein and Genesereth 85], and show the algorithms for solving DCOPs in which agents incrementally exchange only necessary information to converge on a mutually satisfiable solution

[Go to top]

DFA

Decentralized Evaluation of Regular Expressions for Capability Discovery in Peer-to-Peer Networks (PDF)
by Maximilian Szengel.
Masters, Technische Universität München, November 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents a novel approach for decentralized evaluation of regular expressions for capability discovery in DHT-based overlays. The system provides support for announcing capabilities expressed as regular expressions and discovering participants offering adequate capabilities. The idea behind our approach is to convert regular expressions into finite automata and store the corresponding states and transitions in a DHT. We show how locally constructed DFAs are merged in the DHT into an NFA without the knowledge of any NFA already present in the DHT and without the need for any central authority. Furthermore, we present options for optimizing the DFA. There exist several possible applications for this general approach of decentralized regular expression evaluation. However, in this thesis we focus on the application of discovering users that are willing to provide network access using a specified protocol to a particular destination. We have implemented the system for our proposed approach and conducted a simulation. Moreover, we present the results of an emulation of the implemented system in a cluster
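
A minimal Python sketch of the core idea, with a dict standing in for the DHT and a hand-written DFA for the expression "ab*c": each state is published under a key derived from it, and matching proceeds with one lookup per consumed symbol. The key derivation and state naming are invented; the thesis's actual encoding and NFA merging are considerably more involved.

    import hashlib

    def key(state):
        return hashlib.sha256(state.encode()).hexdigest()[:16]

    # DFA for "ab*c", published as key -> (transitions, accepting?)
    dfa = {
        "S": ({"a": "B"}, False),
        "B": ({"b": "B", "c": "F"}, False),
        "F": ({}, True),
    }
    dht = {key(s): v for s, v in dfa.items()}   # dict as stand-in DHT

    def matches(string, start="S"):
        """Evaluate the string with one DHT lookup per symbol."""
        k = key(start)
        for ch in string:
            transitions, _ = dht[k]
            if ch not in transitions:
                return False
            k = key(transitions[ch])
        return dht[k][1]

    print(matches("abbbc"), matches("ac"), matches("abb"))  # True True False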

[Go to top]

Algorithms to accelerate multiple regular expressions matching for deep packet inspection
by Sailesh Kumar, Sarang Dharmapurikar, Fang Yu, Patrick Crowley, and Jonathan Turner.
In SIGCOMM Comput. Commun. Rev 36(4), 2006, pages 339-350. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

DHT

Experimental comparison of Byzantine fault tolerant distributed hash tables (PDF)
by Supriti Singh.
Masters, Saarland University, September 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are a key data structure for the construction of peer-to-peer systems. They provide an efficient way to distribute the storage and retrieval of key-data pairs among the participating peers. DHTs should be scalable, robust against churn and resilient to attacks. X-Vine is a DHT protocol which offers security against Sybil attacks. All communication among peers is performed over social network links, with the presumption that a friend can be trusted. This trust can be extended to a friend of a friend. It uses the tested Chord ring topology as an overlay, which has been proven to be scalable and robust. The aim of the thesis is to experimentally compare two DHTs, R5N and X-Vine. GNUnet is a free software secure peer-to-peer framework which uses R5N. In this thesis, we present the implementation of X-Vine on GNUnet and compare the performance of R5N and X-Vine

[Go to top]

DKS

Distributed k-ary System: Algorithms for Distributed Hash Tables (PDF)
by Ali Ghodsi.
Doctoral, KTH/Royal Institute of Technology, December 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This dissertation presents algorithms for data structures called distributed hash tables (DHT) or structured overlay networks, which are used to build scalable self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results in the presence of nodes joining and leaving. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm, which unlike previous algorithms avoids redundant messages, reaching all nodes in O(log n) time, while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operation, which enables a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending at most O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies. However, costly iterative lookups have previously been used to do parallel lookups. Moreover, joins or leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described
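
The symmetric replication idea lends itself to a compact illustration. A sketch under the assumption that the replication degree f divides the size N of the identifier space: the f replica identifiers of a key are spaced N/f apart on the ring, so any node can compute all of them locally and issue parallel lookups.

    def replica_ids(key, N=2**32, f=4):
        # Symmetric replication: the f replicas of `key` sit at
        # identifiers spaced N/f apart on the ring, so every node
        # can derive them locally and look them up in parallel.
        step = N // f          # assumes f divides N
        return [(key + i * step) % N for i in range(f)]

    print(replica_ids(123456))   # four identifiers, 2**30 apart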

[Go to top]

Keso–a Scalable, Reliable and Secure Read/Write Peer-to-Peer File System (PDF)
by Mattias Amnefelt and Johanna Svenningsson.
Master's Thesis, KTH/Royal Institute of Technology, May 2004. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this thesis we present the design of Keso, a distributed and completely decentralized file system based on the peer-to-peer overlay network DKS. While designing Keso we have taken into account many of the problems that exist in today's distributed file systems. Traditionally, distributed file systems have been built around dedicated file servers which often use expensive hardware to minimize the risk of breakdown and to handle the load. System administrators are required to monitor the load and disk usage of the file servers and to manually add clients and servers to the system. Another drawback of centralized file systems is that a lot of storage space on clients goes unused. Measurements we have taken on existing computer systems have shown that a large part of the storage capacity of workstations is unused. In the system we looked at, there was three times as much storage space available on workstations as was stored in the distributed file system. We have also shown that much data stored in a production-use distributed file system is redundant. The main goals for the design of Keso have been that it should make use of spare resources, avoid storing unnecessarily redundant data, scale well, be self-organizing and be a secure file system suitable for a real world environment. By basing Keso on peer-to-peer techniques it becomes highly scalable, fault tolerant and self-organizing. Keso is intended to run on ordinary workstations and can make use of the previously unused storage space. Keso also provides means for access control and data privacy despite being built on top of untrusted components. The file system utilizes the fact that a lot of data stored in traditional file systems is redundant by letting all files that contain a datablock with the same contents reference the same datablock in the file system. This is achieved while still maintaining access control and data privacy

[Go to top]

DNS

Decentralized Authentication for Self-Sovereign Identities using Name Systems (PDF)
by Christian Grothoff, Martin Schanzenbach, Annett Laube, and Emmanuel Benoist.
In journal:??(847382), October 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The GNU Name System (GNS) is a fully decentralized public key infrastructure and name system with private information retrieval semantics. It offers a holistic approach for interacting seamlessly with IoT ecosystems and enables people and their smart objects to prove their identity, membership and privileges, compatible with existing technologies. In this report we demonstrate how a wide range of private authentication and identity management scenarios are addressed by GNS in a cost-efficient, usable and secure manner. This simple, secure and privacy-friendly authentication method is a significant breakthrough when cyber peace, privacy and liability are the priorities for the benefit of a wide range of the population. After an introduction to GNS itself, we show how GNS can be used to authenticate servers, replacing the Domain Name System (DNS) and X.509 certificate authorities (CAs) with a more privacy-friendly but equally usable protocol which is trustworthy, human-centric and includes group authentication. We also built a demonstrator to highlight how GNS can be used in medical computing to simplify privacy-sensitive data processing in the Swiss health-care system. Combining GNS with attribute-based encryption, we created ReclaimID, a robust and reliable OpenID Connect-compatible authorization system. It includes simple, secure and privacy-friendly single sign-on to seamlessly share selected attributes with Web services and cloud ecosystems. Further, we demonstrate how ReclaimID can be used to solve the problem of addressing, authentication and data sharing for IoT devices. These applications are just the beginning for GNS; the versatility and extensibility of the protocol will lend itself to an even broader range of use-cases. GNS is an open standard with a complete free software reference implementation created by the GNU project. It can therefore be easily audited, adapted, enhanced, tailored, developed and/or integrated, as anyone is allowed to use the core protocols and implementations free of charge, and to adapt them to their needs under the terms of the GNU Affero General Public License, a free software license approved by the Free Software Foundation.

[Go to top]

NSA's MORECOWBELL: Knell for DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Le programme MORECOWBELL de la NSA: Sonne le glas du DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Ludovic Courtès.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Il programma MORECOWBELL della NSA: Campane a morto per il DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Luca Saiu.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

El programa MORECOWBELL de la NSA: Doblan las campanas para el DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names
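
A highly simplified sketch of delegation-based resolution in the spirit of GNS (a dict standing in for the DHT, plain hashing instead of GNS's actual record encryption and key derivation; the names, labels and record types are illustrative): each zone publishes records under keys derived from its zone key and a label, a PKEY record delegates a label to another zone, and names are resolved right to left by following delegations.

    import hashlib

    def query_key(zone_key, label):
        # Simplified: GNS actually derives keys from the zone's
        # public key and encrypts records; here we only hash.
        return hashlib.sha256((zone_key + "/" + label).encode()).hexdigest()

    dht = {}

    def publish(zone_key, label, record):
        dht[query_key(zone_key, label)] = record

    # Alice delegates the label "bob" to Bob's zone; Bob publishes "www".
    publish("alice-pub", "bob", ("PKEY", "bob-pub"))
    publish("bob-pub", "www", ("A", "192.0.2.1"))

    def resolve(name, zone_key):
        # Resolve right to left, following PKEY delegations until a
        # terminal record is reached.
        for label in reversed(name.split(".")):
            rtype, rdata = dht[query_key(zone_key, label)]
            if rtype == "PKEY":
                zone_key = rdata     # descend into the delegated zone
            else:
                return rdata         # terminal record, e.g. an address
        return zone_key

    assert resolve("www.bob", "alice-pub") == "192.0.2.1"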

[Go to top]

Design and Implementation of a Censorship Resistant and Fully Decentralized Name System (PDF)
by Martin Schanzenbach.
Master's, TU Munich, September 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents the design and implementation of the GNU Alternative Domain System (GADS), a decentralized, secure name system providing memorable names for the Internet as an alternative to the Domain Name System (DNS). The system builds on ideas from Rivest's Simple Distributed Security Infrastructure (SDSI) to address a central issue with providing a decentralized mapping of secure identifiers to memorable names: providing a global, secure and memorable mapping is impossible without a trusted authority. SDSI offers an alternative by linking local name spaces; GADS uses the transitivity provided by the SDSI design to build a decentralized and censorship resistant name system without a trusted root based on secure delegation of authority. Additional details need to be considered in order to enable GADS to integrate smoothly with the World Wide Web. While following links on the Web matches following delegations in GADS, the existing HTTP-based infrastructure makes many assumptions about globally unique names; however, proxies can be used to enable legacy applications to function with GADS. This work presents the fundamental goals and ideas behind GADS, provides technical details on how GADS has been implemented and discusses deployment issues for using GADS with existing systems. We discuss how GADS and legacy DNS can interoperate during a transition period and what additional security advantages GADS offers over DNS with Security Extensions (DNSSEC). Finally, we present the results of a survey into surfing behavior, which suggests that the manual introduction of new direct links in GADS will be infrequent

[Go to top]

Bootstrapping of Peer-to-Peer Networks (PDF)
by Chris GauthierDickey and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present the first heuristic for fully distributed bootstrapping of peer-to-peer networks. Our heuristic generates a stream of promising IP addresses to be probed as entry points. This stream is generated from statistical profiles of the IP ranges of start-of-authorities (SOAs) in the domain name system (DNS). We present experimental results demonstrating that with this approach it is efficient and practical to bootstrap Gnutella-sized peer-to-peer networks — without the need for centralized services or the public exposure of end-users' private IP addresses
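
The flavor of the heuristic can be sketched in a few lines of Python (the profile below is hypothetical and hard-coded, standing in for statistics derived from DNS SOA records): weight address ranges by how promising they have proven, and emit an endless stream of candidate entry points.

    import random

    # Hypothetical profile: (prefix, weight) pairs standing in for
    # statistics over the IP ranges of DNS start-of-authorities.
    profile = [("203.0.113", 5), ("198.51.100", 3), ("192.0.2", 1)]

    def candidate_addresses(profile, rng=random.Random(42)):
        # Yield an endless stream of addresses to probe, biased
        # toward the historically more promising ranges.
        weighted = [p for p, w in profile for _ in range(w)]
        while True:
            yield "%s.%d" % (rng.choice(weighted), rng.randrange(1, 255))

    stream = candidate_addresses(profile)
    probes = [next(stream) for _ in range(5)]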

[Go to top]

DNS-Based Service Discovery in Ad Hoc Networks: Evaluation and Improvements
by Celeste Campo and Carlos García-Rubio.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In wireless networks, devices must be able to dynamically discover and share services in the environment. The problem of service discovery has attracted great research interest in the last years, particularly for ad hoc networks. Recently, the IETF has proposed the use of the DNS protocol for service discovery. For ad hoc networks, the IETF works on two proposals for distributed DNS, Multicast DNS and LLMNR, that can both be used for service discovery. In this paper we describe and compare through simulation the performance of service discovery based on these two proposals for distributed DNS. We also propose four simple improvements that reduce the traffic generated, and thus the power consumption, especially of the most limited, battery-powered devices. We present simulation results that show the impact of our improvements in a typical scenario

[Go to top]

Availability, Usage, and Deployment Characteristics of the Domain Name System (PDF)
by Jeffrey Pang, James Hendricks, Aditya Akella, Bruce Maggs, Roberto De Prisco, and Srinivasan Seshan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is a critical part of the Internet's infrastructure, and is one of the few examples of a robust, highly-scalable, and operational distributed system. Although a few studies have been devoted to characterizing its properties, such as its workload and the stability of the top-level servers, many key components of DNS have not yet been examined. Based on large-scale measurements taken from servers in a large content distribution network, we present a detailed study of key characteristics of the DNS infrastructure, such as load distribution, availability, and deployment patterns of DNS servers. Our analysis includes both local DNS servers and servers in the authoritative hierarchy. We find that (1) the vast majority of users use a small fraction of deployed name servers, (2) the availability of most name servers is high, and (3) there exists a larger degree of diversity in local DNS server deployment and usage than for authoritative servers. Furthermore, we use our DNS measurements to draw conclusions about federated infrastructures in general. We evaluate and discuss the impact of federated deployment models on future systems, such as Distributed Hash Tables

[Go to top]

A Key-Management Scheme for Distributed Sensor Networks (PDF)
by Laurent Eschenauer and Virgil D. Gligor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs
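
The core of this random key-predistribution scheme is compact enough to sketch (the pool and ring sizes below are illustrative, not the paper's recommended parameters): every sensor is preloaded with a random key ring drawn from a large pool, and two neighbors secure their link with any key their rings share.

    import random

    POOL_SIZE, RING_SIZE = 10000, 250     # illustrative sizes
    pool = range(POOL_SIZE)               # the global key pool

    def key_ring(rng):
        # Preload a node with a random subset of the pool.
        return set(rng.sample(pool, RING_SIZE))

    rng = random.Random(1)
    node_a, node_b = key_ring(rng), key_ring(rng)

    # Shared-key discovery: neighbors exchange key identifiers and
    # use any common key to secure the link between them.
    shared = node_a & node_b
    if shared:
        link_key = min(shared)   # pick one shared key deterministically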

[Go to top]

DNSSEC

NSA's MORECOWBELL: Knell for DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Le programme MORECOWBELL de la NSA: Sonne le glas du DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Ludovic Courtès.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Il programma MORECOWBELL della NSA: Campane a morto per il DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Luca Saiu.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

El programa MORECOWBELL de la NSA: Doblan las campanas para el DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

DOLR

A survey of peer-to-peer content distribution technologies (PDF)
by Stephanos Androutsellis-Theotokis and Diomidis Spinellis.
In ACM Computing Surveys 36, December 2004, pages 335-371. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed computer architectures labeled "peer-to-peer" are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in—and affected by—the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes

[Go to top]

DPOP

PC-DPOP: a new partial centralization algorithm for distributed optimization (PDF)
by Adrian Petcu, Boi Faltings, and Roger Mailler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Fully decentralized algorithms for distributed constraint optimization often require excessive amounts of communication when applied to complex problems. The OptAPO algorithm of [Mailler and Lesser, 2004] uses a strategy of partial centralization to mitigate this problem. We introduce PC-DPOP, a new partial centralization technique, based on the DPOP algorithm of [Petcu and Faltings, 2005]. PC-DPOP provides better control over what parts of the problem are centralized and allows this centralization to be optimal with respect to the chosen communication structure. Unlike OptAPO, PC-DPOP allows for a priori, exact predictions about privacy loss, communication, memory and computational requirements on all nodes and links in the network. Upper bounds on communication and memory requirements can be specified. We also report strong efficiency gains over OptAPO in experiments on three problem domains

[Go to top]

DVB

A Software and Hardware IPTV Architecture for Scalable DVB Distribution (PDF)
by unknown.
In International Journal of Digital Multimedia Broadcasting 2009, 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders

[Go to top]

Distributed Job Scheduling in a Peer-to-Peer Video Recording System (PDF)
by Curt Cramer, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the advent of Gnutella, Peer-to-Peer (P2P) protocols have matured into a fundamental design element for large-scale, self-organising distributed systems. Many research efforts have been invested to improve various aspects of P2P systems, like their performance, scalability, and so on. However, little experience has been gathered from the actual deployment of such P2P systems apart from the typical file sharing applications. To bridge this gap and to gain more experience in making the transition from theory to practice, we started building advanced P2P applications whose explicit goal is to be deployed in the wild. In this paper, we describe a fully decentralised P2P video recording system. Every node in the system is a networked computer (desktop PC or set-top box) capable of receiving and recording DVB-S, i.e. digital satellite TV. Like a normal video recorder, users can program their machines to record certain programmes. With our system, they will be able to schedule multiple recordings in parallel. It is the task of the system to assign the recordings to different machines in the network. Moreover, users can record broadcasts from the past, i.e. the system serves as a short-term archival storage

[Go to top]

Data mining

Multi-objective optimization based privacy preserving distributed data mining in Peer-to-Peer networks (PDF)
by Kamalika Das, Kanishka Bhaduri, and Hillol Kargupta.
In Peer-to-Peer Networking and Applications 4, 2011, pages 192-209. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper proposes a scalable, local privacy-preserving algorithm for distributed Peer-to-Peer (P2P) data aggregation useful for many advanced data mining/analysis tasks such as average/sum computation, decision tree induction, feature selection, and more. Unlike most multi-party privacy-preserving data mining algorithms, this approach works in an asynchronous manner through local interactions and it is highly scalable. It particularly deals with the distributed computation of the sum of a set of numbers stored at different peers in a P2P network in the context of a P2P web mining application. The proposed optimization-based privacy-preserving technique for computing the sum allows different peers to specify different privacy requirements without having to adhere to a global set of parameters for the chosen privacy model. Since distributed sum computation is a frequently used primitive, the proposed approach is likely to have significant impact on many data mining tasks such as multi-party privacy-preserving clustering, frequent itemset mining, and statistical aggregate computation
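
For background, the reason distributed sum is such a useful primitive can be seen in the classic additive-masking construction (a generic sketch, not the paper's optimization-based algorithm): each peer splits its private value into random shares, so the total can be reconstructed while no individual share leaks anything.

    import random

    def shares(value, n, modulus, rng):
        # Split `value` into n additive shares modulo `modulus`;
        # any n-1 shares are uniformly random and reveal nothing.
        parts = [rng.randrange(modulus) for _ in range(n - 1)]
        parts.append((value - sum(parts)) % modulus)
        return parts

    rng = random.Random(0)
    modulus = 2**61 - 1
    values = [12, 7, 30]               # each peer's private input
    n = len(values)

    all_shares = [shares(v, n, modulus, rng) for v in values]
    # Peer j receives the j-th share from every peer and publishes
    # only the sum of what it received.
    partials = [sum(all_shares[i][j] for i in range(n)) % modulus
                for j in range(n)]
    assert sum(partials) % modulus == sum(values) % modulus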

[Go to top]

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Decentralisation

Managing and Presenting User Attributes over a Decentralized Secure Name System
by Martin Schanzenbach and Christian Banse.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, user attributes are managed at centralized identity providers. However, two centralized identity providers dominate digital identity and access management on the web. This is increasingly becoming a privacy problem in times of mass surveillance and data mining for targeted advertisement. Existing systems for attribute sharing or credential presentation either rely on a trusted third party service or require the presentation to be online and synchronous. In this paper we propose a concept that allows the user to manage and share his attributes asynchronously with a requesting party using a secure, decentralized name system

[Go to top]

Decisional bilinear diffie-hellman

Attribute-based encryption with non-monotonic access structures (PDF)
by Rafail Ostrovsky, Amit Sahai, and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes

[Go to top]

DefenestraTor

DefenestraTor: Throwing out Windows in Tor (PDF)
by Mashael AlSabah, Kevin Bauer, Ian Goldberg, Dirk Grunwald, Damon McCoy, Stefan Savage, and Geoffrey M. Voelker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is one of the most widely used privacy enhancing technologies for achieving online anonymity and resisting censorship. While conventional wisdom dictates that the level of anonymity offered by Tor increases as its user base grows, the most significant obstacle to Tor adoption continues to be its slow performance. We seek to enhance Tor's performance by offering techniques to control congestion and improve flow control, thereby reducing unnecessary delays. To reduce congestion, we first evaluate small fixed-size circuit windows and a dynamic circuit window that adaptively re-sizes in response to perceived congestion. While these solutions improve web page response times and require modification only to exit routers, they generally offer poor flow control and slower downloads relative to Tor's current design. To improve flow control while reducing congestion, we implement N23, an ATM-style per-link algorithm that allows Tor routers to explicitly cap their queue lengths and signal congestion via back-pressure. Our results show that N23 offers better congestion and flow control, resulting in improved web page response times and faster page loads compared to Tor's current design and other window-based approaches. We also argue that our proposals do not enable any new attacks on Tor users' privacy
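
A toy model of ATM-style per-link credit flow control in the spirit of N23 (parameter names and values hypothetical): a cell may be forwarded only while the sender holds credit, and the receiver returns credit in batches as cells drain, which caps the queue at roughly N2 + N3 cells.

    from collections import deque

    class CreditLink:
        # Per-link back-pressure: the sender starts with N2 + N3
        # credits and spends one per cell; the receiver returns N3
        # credits for every N3 cells delivered, so no more than
        # N2 + N3 cells can ever sit in the queue.
        def __init__(self, n2=10, n3=5):
            self.n3 = n3
            self.credit = n2 + n3
            self.queue = deque()
            self.delivered = 0

        def send(self, cell):
            if self.credit == 0:
                return False          # back-pressure: sender must wait
            self.credit -= 1
            self.queue.append(cell)
            return True

        def deliver(self):
            if self.queue:
                self.queue.popleft()
                self.delivered += 1
                if self.delivered % self.n3 == 0:
                    self.credit += self.n3    # batched credit return

    link = CreditLink()
    sent = sum(link.send(i) for i in range(20))   # stalls at 15 cells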

[Go to top]

Differential Privacy

Lower Bounds in Differential Privacy (PDF)
by Anindya De.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper is about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. A line of study initiated in [7] examines the amount of distortion needed to prevent privacy violations of various kinds. The results in the literature vary according to several parameters, including the size of the database, the size of the universe from which data elements are drawn, the amount of privacy desired, and for the purposes of the current work, the arity of the query. In this paper we sharpen and unify these bounds. Our foremost result combines the techniques of Hardt and Talwar [11] and McGregor et al. [13] to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries. (A query has low sensitivity if the data of a single individual has small effect on the answer.) Several structural results follow as immediate corollaries: We separate so-called counting queries from arbitrary low-sensitivity queries, proving the latter requires more noise, or distortion, than does the former; We separate (ε,0)-differential privacy from its well-studied relaxation (ε,δ)-differential privacy, even when δ = 2^(-o(n)) is negligible in the size n of the database, proving the latter requires less distortion than the former; We demonstrate that (ε,δ)-differential privacy is much weaker than (ε,0)-differential privacy in terms of mutual information of the transcript of the mechanism with the database, even when δ = 2^(-o(n)) is negligible in the size n of the database. We also simplify the lower bounds on noise for counting queries in [11] and also make them unconditional. Further, we use a characterization of (ε,δ)-differential privacy from [13] to obtain lower bounds on the distortion needed to ensure (ε,δ)-differential privacy for ε, δ > 0. We next revisit the LP decoding argument of [10] and combine it with a recent result of Rudelson [15] to improve on a result of Kasiviswanathan et al. [12] on noise lower bounds for privately releasing ℓ-way marginals

[Go to top]

Private Similarity Computation in Distributed Systems: From Cryptography to Differential Privacy (PDF)
by Mohammad Alaggan, Sébastien Gambs, and Anne-Marie Kermarrec.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we address the problem of computing the similarity between two users (according to their profiles) while preserving their privacy in a fully decentralized system and for the passive adversary model. First, we introduce a two-party protocol for privately computing a threshold version of the similarity and apply it to well-known similarity measures such as the scalar product and the cosine similarity. The output of this protocol is only one bit of information telling whether or not two users are similar beyond a predetermined threshold. Afterwards, we explore the computation of the exact and threshold similarity within the context of differential privacy. Differential privacy is a recent notion developed within the field of private data analysis guaranteeing that an adversary that observes the output of the differentially private mechanism, will only gain a negligible advantage (up to a privacy parameter) from the presence (or absence) of a particular item in the profile of a user. This provides a strong privacy guarantee that holds independently of the auxiliary knowledge that the adversary might have. More specifically, we design several differentially private variants of the exact and threshold protocols that rely on the addition of random noise tailored to the sensitivity of the considered similarity measure. We also analyze their complexity as well as their impact on the utility of the resulting similarity measure. Finally, we provide experimental results validating the effectiveness of the proposed approach on real datasets
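
A rough single-machine sketch of the threshold idea (simplified: Laplace noise added directly to the similarity value, with hypothetical parameter choices; the paper's two-party protocols avoid ever pooling the raw profiles): release only one bit, namely whether the noised cosine similarity exceeds a threshold.

    import math, random

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def laplace(scale, rng):
        # Inverse-CDF sampling from Laplace(0, scale).
        x = rng.random() - 0.5
        return -scale * math.copysign(1.0, x) * math.log(1 - 2 * abs(x))

    def similar_bit(u, v, threshold, epsilon, sensitivity, rng):
        # Release one bit: is the noised similarity above the
        # threshold? `sensitivity` must bound the effect of a
        # single profile item on the similarity.
        return cosine(u, v) + laplace(sensitivity / epsilon, rng) >= threshold

    alice, bob = [1, 0, 2, 1], [1, 1, 2, 0]
    print(similar_bit(alice, bob, 0.7, 1.0, 0.5, random.Random(3)))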

[Go to top]

How Much Is Enough? Choosing ε for Differential Privacy (PDF)
by Jaewoo Lee and Chris Clifton.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Differential privacy is a recent notion, and while it is nice conceptually it has been difficult to apply in practice. The parameters of differential privacy have an intuitive theoretical interpretation, but the implications and impacts on the risk of disclosure in practice have not yet been studied, and choosing appropriate values for them is non-trivial. Although the privacy parameter ε in differential privacy is used to quantify the privacy risk posed by releasing statistics computed on sensitive data, ε is not an absolute measure of privacy but rather a relative measure. In effect, even for the same value of ε, the privacy guarantees enforced by differential privacy are different based on the domain of attribute in question and the query supported. We consider the probability of identifying any particular individual as being in the database, and demonstrate the challenge of setting the proper value of ε given the goal of protecting individuals in the database with some fixed probability
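
To see why ε is a relative measure, recall that an ε-differentially private mechanism changes the probability of any output by at most a factor of e^ε when one record is added or removed; the same ε therefore yields very different posterior odds depending on the adversary's prior. A small numeric illustration:

    import math

    # e^eps bounds the factor by which the probability of any output
    # may change when one individual's record is added or removed.
    for eps in (0.01, 0.1, 1.0, math.log(3)):
        print("eps = %.2f -> probabilities may shift by a factor of %.2f"
              % (eps, math.exp(eps)))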

[Go to top]

Private Record Matching Using Differential Privacy (PDF)
by Ali Inan, Murat Kantarcioglu, Gabriel Ghinita, and Elisa Bertino.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts

[Go to top]

Privacy Integrated Queries: An Extensible Platform for Privacy-preserving Data Analysis (PDF)
by Frank D. McSherry.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We report on the design and implementation of the Privacy Integrated Queries (PINQ) platform for privacy-preserving data analysis. PINQ provides analysts with a programming interface to unscrubbed data through a SQL-like language. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's unconditional structural guarantees require no trust placed in the expertise or diligence of the analysts, substantially broadening the scope for design and deployment of privacy-preserving data analysis, especially by non-experts
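
The guarantee such a platform enforces can be illustrated with the standard Laplace mechanism (a generic Python sketch; PINQ itself exposes a SQL-like LINQ interface on .NET): a counting query has sensitivity 1, so adding Laplace noise with scale 1/ε yields an ε-differentially private answer.

    import math, random

    def laplace(scale, rng):
        # Inverse-CDF sampling from Laplace(0, scale).
        x = rng.random() - 0.5
        return -scale * math.copysign(1.0, x) * math.log(1 - 2 * abs(x))

    def noisy_count(records, predicate, epsilon, rng=random.Random()):
        # A counting query has sensitivity 1: adding or removing one
        # record changes the count by at most 1, so Laplace noise
        # with scale 1/epsilon gives epsilon-differential privacy.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace(1.0 / epsilon, rng)

    ages = [23, 35, 41, 29, 52, 37]
    print(noisy_count(ages, lambda a: a >= 30, epsilon=0.5))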

[Go to top]

Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders (PDF)
by Frank McSherry and Ilya Mironov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty–i.e., noise–to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides

[Go to top]

DisCOPs

Anytime local search for distributed constraint optimization (PDF)
by Roie Zivan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most former studies of search in Distributed Constraint Optimization Problems (DisCOPs) considered only complete search algorithms, which are practical only for relatively small problems. Distributed local search algorithms can be used for solving DisCOPs. However, because of the differences between the global evaluation of a system's state and the private evaluation of states by agents, agents are unaware of the global best state which is explored by the algorithm. Previous attempts to use local search algorithms for solving DisCOPs reported the state held by the system at the termination of the algorithm, which was not necessarily the best state explored. A general framework for implementing distributed local search algorithms for DisCOPs is proposed. The proposed framework makes use of a BFS-tree in order to accumulate the costs of the system's state in its different steps and to propagate the detection of a new best step when it is found. The resulting framework enhances local search algorithms for DisCOPs with the anytime property. The proposed framework does not require additional network load. Agents are required to hold a small (linear) additional space (beside the requirements of the algorithm in use). The proposed framework preserves privacy at a higher level than complete DisCOP algorithms which make use of a pseudo-tree (ADOPT, DPOP)

[Go to top]

DisCSP algorithm

Privacy guarantees through distributed constraint satisfaction (PDF)
by Boi Faltings, Thomas Leaute, and Adrian Petcu.
In unknown(12), April 2008. (BibTeX entry) (Download bibtex record)
(direct link)

In Distributed Constraint Satisfaction Problems, agents often desire to find a solution while revealing as little as possible about their variables and constraints. So far, most algorithms for DisCSP do not guarantee privacy of this information. This paper describes some simple obfuscation techniques that can be used with DisCSP algorithms such as DPOP, and provide sensible privacy guarantees based on the distributed solving process without sacrificing its efficiency

[Go to top]

Displays

Personalized Web search for improving retrieval effectiveness (PDF)
by Fang Liu, C. Yu, and Weiyi Meng.
In Knowledge and Data Engineering, IEEE Transactions on 16, January 2004, pages 28-40. (BibTeX entry) (Download bibtex record)
(direct link)

Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient

[Go to top]

Drac

Drac: An Architecture for Anonymous Low-Volume Communications (PDF)
by George Danezis, Claudia Diaz, Carmela Troncoso, and Ben Laurie.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

ECC

High-speed high-security signatures (PDF)
by Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
In Journal of Cryptographic Engineering 2, September 2011, pages 77-89. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Curve25519: new Diffie-Hellman speed records (PDF)
by Daniel J. Bernstein.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

ECDH

Curve25519: new Diffie-Hellman speed records (PDF)
by Daniel J. Bernstein.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

ECRS

Developing Peer-to-Peer Web Applications (PDF)
by Toni Ruottu.
Master's Thesis, University of Helsinki, September 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the virtual world grows more complex, finding a standard way for storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References for data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, but also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, on the user's request. In our research we tried to find a model that would make data manageable to users, and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure names used for the content, and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data

[Go to top]

Efficient Sharing of Encrypted Data (PDF)
by Krista Bennett, Christian Grothoff, Tzvetan Horozov, and Ioana Patrascu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

EGOIST

EGOIST: Overlay Routing using Selfish Neighbor Selection (PDF)
by Georgios Smaragdakis, Vassilis Lekakis, Nikolaos Laoutaris, Azer Bestavros, John W. Byers, and Mema Roussopoulos.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

A foundational issue underlying many overlay network applications ranging from routing to peer-to-peer file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretic models in the design of Egoist – a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications

[Go to top]

Swarming on Optimized Graphs for n-way Broadcast (PDF)
by Georgios Smaragdakis, Nikolaos Laoutaris, Pietro Michiardi, Azer Bestavros, John W. Byers, and Mema Roussopoulos.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and as a consequence deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first one strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes
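
The two peer-selection policies can be stated directly in code. The sketch below (our toy model, with made-up bandwidth estimates and brute-force search) scores a candidate neighbor set by the two-hop bottleneck bandwidth it provides to each destination: Max-Min maximizes the bandwidth to the slowest destination, Max-Sum the aggregate.

    from itertools import combinations

    # Made-up symmetric bandwidth estimates among five nodes; node 0
    # must choose k = 2 overlay neighbors to reach the other four.
    bw = {(0, 1): 10, (0, 2): 4, (0, 3): 7, (0, 4): 2,
          (1, 2): 9,  (1, 3): 1, (1, 4): 8,
          (2, 3): 6,  (2, 4): 5, (3, 4): 3}

    def b(i, j):
        return bw[(min(i, j), max(i, j))]

    me, others, k = 0, [1, 2, 3, 4], 2

    def reach(dest, chosen):
        # Best bottleneck bandwidth from `me` to `dest` through any
        # chosen neighbor (direct edge if `dest` itself is chosen).
        return max(b(me, dest) if s == dest else min(b(me, s), b(s, dest))
                   for s in chosen)

    def pick(objective):
        return max(combinations(others, k),
                   key=lambda S: objective(reach(d, S) for d in others))

    max_min = pick(min)   # maximize bandwidth to the slowest destination
    max_sum = pick(sum)   # maximize the aggregate output rate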

[Go to top]

Implications of Selfish Neighbor Selection in Overlay Networks (PDF)
by Nikolaos Laoutaris, Georgios Smaragdakis, Azer Bestavros, and John W. Byers.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Economics: general interest

Designing Economics Mechanisms
by Leonid Hurwicz and Stanley Reiter.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

A mechanism is a mathematical structure that models institutions through which economic activity is guided and coordinated. There are many such institutions; markets are the most familiar ones. Lawmakers, administrators and officers of private companies create institutions in order to achieve desired goals. They seek to do so in ways that economize on the resources needed to operate the institutions, and that provide incentives that induce the required behaviors. This book presents systematic procedures for designing mechanisms that achieve specified performance, and economize on the resources required to operate the mechanism. The systematic design procedures are algorithms for designing informationally efficient mechanisms. Most of the book deals with these procedures of design. When there are finitely many environments to be dealt with, and there is a Nash-implementing mechanism, our algorithms can be used to make that mechanism into an informationally efficient one. Informationally efficient dominant strategy implementation is also studied. Leonid Hurwicz, together with Eric Maskin and Roger Myerson, was awarded the 2007 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for his work on the effectiveness of markets

[Go to top]

Ed25519

High-speed high-security signatures (PDF)
by Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
In Journal of Cryptographic Engineering 2, September 2011, pages 77-89. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

EdDSA

High-speed high-security signatures (PDF)
by Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
In Journal of Cryptographic Engineering 2, September 2011, pages 77-89. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

ExperimenTor

ExperimenTor: A Testbed for Safe and Realistic Tor Experimentation (PDF)
by Kevin Bauer, Micah Sherr, Damon McCoy, and Dirk Grunwald.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is one of the most widely-used privacy enhancing technologies for achieving online anonymity and resisting censorship. Simultaneously, Tor is also an evolving research network on which investigators perform experiments to improve the network's resilience to attacks and enhance its performance. Existing methods for studying Tor have included analytical modeling, simulations, small-scale network emulations, small-scale PlanetLab deployments, and measurement and analysis of the live Tor network. Despite the growing body of work concerning Tor, there is no widely accepted methodology for conducting Tor research in a manner that preserves realism while protecting live users' privacy. In an effort to propose a standard, rigorous experimental framework for conducting Tor research in a way that ensures safety and realism, we present the design of ExperimenTor, a large-scale Tor network emulation toolkit and testbed. We also report our early experiences with prototype testbeds currently deployed at four research institutions

[Go to top]

F2F

Drac: An Architecture for Anonymous Low-Volume Communications (PDF)
by George Danezis, Claudia Diaz, Carmela Troncoso, and Ben Laurie.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

FEC

Design and evaluation of a low density generator matrix (PDF)
by Vincent Roca, Zainab Khallouf, and Julien Laboure.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long. We also explain how the iterative decoding feature of LDGM/LDPC can be used to protect a large number of small independent objects during time-limited partially-reliable sessions. We illustrate this feature with an example derived from a video streaming scheme over ALC. We then evaluate our LDGM codec and compare its performances with a well known RSE codec. Tests focus on the global efficiency and on encoding/decoding performances. This paper deliberately skips theoretical aspects to focus on practical results. It shows that LDGM/LDPC open many opportunities in the area of bulk data multicasting
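
The appeal of LDGM codes is how cheap encoding is; a minimal sketch (illustrative parameters, not the evaluated codec): each parity symbol is the XOR of a few randomly chosen source symbols, so each row of the generator matrix stays sparse and encoding costs only a handful of XORs per symbol.

    import random

    rng = random.Random(7)
    source = [rng.randrange(256) for _ in range(8)]   # 8 source symbols

    def make_parity(source, degree=3):
        # One parity symbol: XOR of `degree` randomly chosen source
        # symbols; the chosen indices are one sparse row of the
        # generator matrix.
        idx = rng.sample(range(len(source)), degree)
        value = 0
        for i in idx:
            value ^= source[i]
        return idx, value

    parities = [make_parity(source) for _ in range(4)]
    # Iterative (peeling) decoding repeatedly finds an equation with
    # exactly one unknown source symbol left and recovers it.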

[Go to top]

FPGA

Wireless Sensor Networks: A Survey
by Vidyasagar Potdar, Atif Sharif, and Elizabeth Chang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless Sensor Networks (WSN), an element of pervasive computing, are presently being used on a large scale to monitor real-time environmental status. However these sensors operate under extreme energy constraints and are designed by keeping an application in mind. Designing a new wireless sensor node is extremely challenging task and involves assessing a number of different parameters required by the target application, which includes range, antenna type, target technology, components, memory, storage, power, life time, security, computational capability, communication technology, power, size, programming interface and applications. This paper analyses commercially (and research prototypes) available wireless sensor nodes based on these parameters and outlines research directions in this area

[Go to top]

FTP-ALG

NTALG–TCP NAT traversal with application-level gateways (PDF)
by M. Wander, S. Holzapfel, A. Wacker, and T. Weis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Consumer computers or home communication devices are usually connected to the Internet via a Network Address Translation (NAT) router. This imposes restrictions for networking applications that require inbound connections. Existing solutions for NAT traversal can remedy the restrictions, but still there is a fraction of home users which lack support of it, especially when it comes to TCP. We present a framework for traversing NAT routers by exploiting their built-in FTP and IRC application-level gateways (ALG) for arbitrary TCP-based applications. While this does not work in every scenario, it significantly improves the success chance without requiring any user interaction at all. To demonstrate the framework, we show a small test setup with laptop computers and home NAT routers

[Go to top]

Freedom

Freedom Systems 2.1 Security Issues and Analysis (PDF)
by Adam Back, Ian Goldberg, and Adam Shostack.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

We describe attacks to which Freedom, or Freedom users, may be vulnerable. These attacks are those that reduce the privacy of a Freedom user, through exploiting cryptographic, design or implementation issues. We include issues which may not be Freedom security issues proper, but which arise when the system is not properly used. This disclosure includes all known design or implementation flaws, as well as places where various trade-offs made while creating the system have privacy implications. We also discuss cryptographic points that are needed for a complete understanding of how Freedom works, including ones we don't believe can be used to reduce anyone's privacy

[Go to top]

Traffic Analysis Attacks and Trade-Offs in Anonymity Providing Systems (PDF)
by Adam Back, Ulf Möller, and Anton Stiglic.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We discuss problems and trade-offs with systems providing anonymity for web browsing (or more generally any communication system that requires low latency interaction). We focus on two main systems: the Freedom network [12] and PipeNet [8]. Although Freedom is efficient and reasonably secure against denial of service attacks, it is vulnerable to some generic traffic analysis attacks, which we describe. On the other hand, we look at PipeNet, a simple theoretical model which protects against the traffic analysis attacks we point out, but is vulnerable to denial of service attacks and has efficiency problems. In light of these observations, we discuss the trade-offs that one faces when trying to construct an efficient low latency communication system that protects users' anonymity

[Go to top]

Freeloading

Incentive-driven QoS in peer-to-peer overlays (PDF)
by Raul Leonardo Landa Gamiochipi.
Ph.D. thesis, University College London, May 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A well known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time and space deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed completely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation
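
Since the thesis' allocation model builds on Vickrey auctions, a tiny toy version conveys the key incentive property: the highest bidder wins the service slot but pays the second-highest bid, which makes truthful bidding a dominant strategy (a sketch of the auction rule only, not the thesis' mechanism):

    def vickrey(bids):
        # bids: peer -> bid; returns (winner, price paid)
        ranked = sorted(bids, key=bids.get, reverse=True)
        price = bids[ranked[1]] if len(ranked) > 1 else 0
        return ranked[0], price

    print(vickrey({"p1": 7, "p2": 5, "p3": 9}))  # ('p3', 7)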

[Go to top]

Freenet

Methods for Secure Decentralized Routing in Open Networks (PDF)
by Nathan S Evans.
Ph.D. thesis, Technische Universität München, August 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The contribution of this thesis is the study and improvement of secure, decentralized, robust routing algorithms for open networks including ad-hoc networks and peer-to-peer (P2P) overlay networks. The main goals for our secure routing algorithm are openness, efficiency, scalability and resilience to various types of attacks. Common P2P routing algorithms trade-off decentralization for security; for instance by choosing whether or not to require a centralized authority to allow peers to join the network. Other algorithms trade scalability for security, for example employing random search or flooding to prevent certain types of attacks. Our design attempts to meet our security goals in an open system, while limiting the performance penalties incurred. The first step we took towards designing our routing algorithm was an analysis of the routing algorithm in Freenet. This algorithm is relevant because it achieves efficient (order O(log n)) routing in realistic network topologies in a fully decentralized open network. However, we demonstrate why their algorithm is not secure, as malicious participants are able to severely disrupt the operation of the network. The main difficulty with the Freenet routing algorithm is that for performance it relies on information received from untrusted peers. We also detail a range of proposed solutions, none of which we found to fully fix the problem. A related problem for efficient routing in sparsely connected networks is the difficulty in sufficiently populating routing tables. One way to improve connectivity in P2P overlay networks is by utilizing modern NAT traversal techniques. We employ a number of standard NAT traversal techniques in our approach, and also developed and experimented with a novel method for NAT traversal based on ICMP and UDP hole punching. Unlike other NAT traversal techniques ours does not require a trusted third party. Another technique we use in our implementation to help address the connectivity problem in sparse networks is the use of distance vector routing in a small local neighborhood. The distance vector variant used in our system employs onion routing to secure the resulting indirect connections. Materially to this design, we discovered a serious vulnerability in the Tor protocol which allowed us to use a DoS attack to reduce the anonymity of the users of this extant anonymizing P2P network. This vulnerability is based on allowing paths of unrestricted length for onion routes through the network. Analyzing Tor and implementing this attack gave us valuable knowledge which helped when designing the distance vector routing protocol for our system. Finally, we present the design of our new secure randomized routing algorithm that does not suffer from the various problems we discovered in previous designs. Goals for the algorithm include providing efficiency and robustness in the presence of malicious participants for an open, fully decentralized network without trusted authorities. We provide a mathematical analysis of the algorithm itself and have created and deployed an implementation of this algorithm in GNUnet. In this thesis we also provide a detailed overview of a distributed emulation framework capable of running a large number of nodes using our full code base as well as some of the challenges encountered in creating and using such a testing framework. 
We present extensive experimental results showing that our routing algorithm outperforms the dominant DHT design in target topologies, and performs comparably in other scenarios

[Go to top]

Developing Peer-to-Peer Web Applications (PDF)
by Toni Ruottu.
Master's Thesis, University of Helsinki, September 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the virtual world grows more complex, finding a standard way for storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References for data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, but also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, on the user's request. In our research we tried to find a model that would make data manageable to users, and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure names used for the content, and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data

[Go to top]

Routing in the Dark: Pitch Black (PDF)
by Nathan S Evans, Chris GauthierDickey, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In many networks, such as mobile ad-hoc networks and friend-to-friend overlay networks, direct communication between nodes is limited to specific neighbors. Often these networks have a small-world topology; while short paths exist between any pair of nodes in small-world networks, it is non-trivial to determine such paths with a distributed algorithm. Recently, Clarke and Sandberg proposed the first decentralized routing algorithm that achieves efficient routing in such small-world networks. This paper is the first independent security analysis of Clarke and Sandberg's routing algorithm. We show that a relatively weak participating adversary can render the overlay ineffective without being detected, resulting in significant data loss due to the resulting load imbalance. We have measured the impact of the attack in a testbed of 800 nodes using minor modifications to Clarke and Sandberg's implementation of their routing algorithm in Freenet. Our experiments show that the attack is highly effective, allowing a small number of malicious nodes to cause rapid loss of data on the entire network. We also discuss various proposed countermeasures designed to detect, thwart or limit the attack. While we were unable to find effective countermeasures, we hope that the presented analysis will be a first step towards the design of secure distributed routing algorithms for restricted-route topologies
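
The routing algorithm under attack can be sketched compactly: node locations live on a circle, and each node greedily forwards a request to the neighbour whose location is circularly closest to the key, which is exactly what makes poisoned location values so damaging (a minimal sketch of the routing rule, not Freenet's implementation):

    def circ_dist(a, b):
        # distance between two locations on the unit circle [0, 1)
        d = abs(a - b)
        return min(d, 1.0 - d)

    def next_hop(neighbours, key):
        # neighbours: node id -> location; forward to the circularly
        # closest neighbour (toy values below are illustrative)
        return min(neighbours, key=lambda n: circ_dist(neighbours[n], key))

    print(next_hop({"n1": 0.10, "n2": 0.55, "n3": 0.93}, key=0.05))  # n1

An adversary who tricks honest nodes into adopting clustered locations skews exactly this distance computation, so requests converge on the wrong region of the network.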

[Go to top]

Freenet: A Distributed Anonymous Information Storage and Retrieval System (PDF)
by Ian Clarke, Oskar Sandberg, Brandon Wiley, and Theodore W. Hong.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node

[Go to top]

Future Internet

Toward secure name resolution on the internet
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In Computers & Security, 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) provides crucial name resolution functions for most Internet services. As a result, DNS traffic provides an important attack vector for mass surveillance, as demonstrated by the QUANTUMDNS and MORECOWBELL programs of the NSA. This article reviews how DNS works and describes security considerations for next generation name resolution systems. We then describe DNS variations and analyze their impact on security and privacy. We also consider Namecoin, the GNU Name System and RAINS, which are more radical re-designs of name systems in that they both radically change the wire protocol and also eliminate the existing global consensus on TLDs provided by ICANN. Finally, we assess how the different systems stack up with respect to the goal of improving security and privacy of name resolution for the future Internet

[Go to top]

Fuzzy IBE

Fuzzy Identity-Based Encryption (PDF)
by Amit Sahai and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the set-overlap distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term attribute-based encryption. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model
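
Stripped of the pairing-based machinery, the decryption condition is just a threshold on attribute overlap; a toy sketch of the semantics only (no actual cryptography, attribute names invented):

    def can_decrypt(key_attrs, ct_attrs, d):
        # a key for identity key_attrs opens a ciphertext for ct_attrs
        # iff the attribute sets overlap in at least d elements
        return len(set(key_attrs) & set(ct_attrs)) >= d

    # e.g. a noisy biometric re-sample still matching 4 of 5 features
    print(can_decrypt({"f1", "f2", "f3", "f4", "f5"},
                      {"f1", "f2", "f3", "f4", "f9"}, d=4))  # True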

[Go to top]

GENI

Managing Distributed Applications Using Gush (PDF)
by Jeannie R. Albrecht and Danny Yuxing Huang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

GNU Name System

Towards Self-sovereign, decentralized personal data sharing and identity management (PDF)
by Martin Schanzenbach.
Dissertation, Technische Universität München, 2020. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, identity management is a key element for commercial and private services on the Internet. Over the past decade, digital identities evolved away from decentralized, pseudonymous, user-controlled personas towards centralized, unambiguous identities managed at and provided through service providers. This development was sparked by the requirement of real identities in the context of electronic commerce. However, it was particularly fuelled later by the emergence of social media and the possibilities it provides to people in order to establish social connections. The following centralization of identities at a handful of service providers significantly improved usability and reliability of identity services. Those benefits come at the expense of other, arguably equally important areas. For users, it is privacy and the permanent threat of being tracked and analyzed. For service providers, it is liability and the risk of facing significant punishment caused by strict privacy regulations which try to counteract the former. In this thesis, we investigate state-of-the-art approaches to modern identity management. We take a look at existing standards and recent research in order to understand the status quo and how it can be improved. As a result of our research, we present the following contributions: In order to allow users to reclaim control over their identities and personal data, we propose a design for a decentralized, self-sovereign directory service. This service allows users to share personal data with services without the need for a trusted third party. Unlike existing research in this area, we propose mechanisms which allow users to efficiently enforce access control on their data. Further, we investigate how trust can be established in user-managed, self-sovereign identities. We propose a trust establishment mechanism through the use of secure name systems. It allows users and organizations to establish trust relationships and identity assertions without the need for centralized public key infrastructures (PKIs). Additionally, we show how recent advancements in the area of non-interactive zero-knowledge (NIZK) protocols can be leveraged in order to create privacy-preserving attribute-based credentials (PP-ABCs) suitable for use in self-sovereign identity systems including our proposed directory service. We provide proof of concept implementations of our designs and evaluate them to show that they are suitable for practical applications.

[Go to top]

Decentralized Authentication for Self-Sovereign Identities using Name Systems (PDF)
by Christian Grothoff, Martin Schanzenbach, Annett Laube, and Emmanuel Benoist.
In journal:??(847382), October 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The GNU Name System (GNS) is a fully decentralized public key infrastructure and name system with private information retrieval semantics. It offers a holistic approach to interacting seamlessly with IoT ecosystems and enables people and their smart objects to prove their identity, membership and privileges - compatible with existing technologies. In this report we demonstrate how a wide range of private authentication and identity management scenarios are addressed by GNS in a cost-efficient, usable and secure manner. This simple, secure and privacy-friendly authentication method is a significant breakthrough when cyber peace, privacy and liability are the priorities for the benefit of a wide range of the population. After an introduction to GNS itself, we show how GNS can be used to authenticate servers, replacing the Domain Name System (DNS) and X.509 certificate authorities (CAs) with a more privacy-friendly but equally usable protocol which is trustworthy, human-centric and includes group authentication. We also built a demonstrator to highlight how GNS can be used in medical computing to simplify privacy-sensitive data processing in the Swiss health-care system. Combining GNS with attribute-based encryption, we created ReclaimID, a robust and reliable OpenID Connect-compatible authorization system. It includes simple, secure and privacy-friendly single sign-on to seamlessly share selected attributes with Web services and cloud ecosystems. Further, we demonstrate how ReclaimID can be used to solve the problem of addressing, authentication and data sharing for IoT devices. These applications are just the beginning for GNS; the versatility and extensibility of the protocol will lend itself to an even broader range of use-cases. GNS is an open standard with a complete free software reference implementation created by the GNU project. It can therefore be easily audited, adapted, enhanced, tailored, developed and/or integrated, as anyone is allowed to use the core protocols and implementations free of charge, and to adapt them to their needs under the terms of the GNU Affero General Public License, a free software license approved by the Free Software Foundation.

[Go to top]

A Secure and Resilient Communication Infrastructure for Decentralized Networking Applications (PDF)
by Matthias Wachs.
PhD, Technische Universität München, February 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis provides the design and implementation of a secure and resilient communication infrastructure for decentralized peer-to-peer networks. The proposed communication infrastructure tries to overcome limitations to unrestricted communication on today's Internet and has the goal of re-establishing unhindered communication between users. With the GNU name system, we present a fully decentralized, resilient, and privacy-preserving alternative to DNS and existing security infrastructures

[Go to top]

A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names
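
The delegation principle at the heart of GNS can be sketched as a walk over zone records: each zone maps labels either to data or to another zone's public key, and a name like "www.carol.bob" is resolved right to left by following the delegations (an illustrative toy, not the GNS wire protocol; the zone keys and records below are made up):

    zones = {
        "Kroot":  {"bob":   ("PKEY", "Kbob")},
        "Kbob":   {"carol": ("PKEY", "Kcarol")},
        "Kcarol": {"www":   ("A", "192.0.2.1")},
    }

    def resolve(name, zone):
        labels = name.split(".")
        for label in reversed(labels[1:]):   # delegate right to left
            rtype, value = zones[zone][label]
            assert rtype == "PKEY", "inner labels must be delegations"
            zone = value
        return zones[zone][labels[0]]        # final lookup in the last zone

    print(resolve("www.carol.bob", "Kroot"))  # ('A', '192.0.2.1')

Because resolution always starts from the resolver's own zone key, there is no trusted root: "bob" means whatever the local zone delegated it to, which is precisely the petname idea described above.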

[Go to top]

Design and Implementation of a Censorship Resistant and Fully Decentralized Name System (PDF)
by Martin Schanzenbach.
Master's, TU Munich, September 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents the design and implementation of the GNU Alternative Domain System (GADS), a decentralized, secure name system providing memorable names for the Internet as an alternative to the Domain Name System (DNS). The system builds on ideas from Rivest's Simple Distributed Security Infrastructure (SDSI) to address a central issue with providing a decentralized mapping of secure identifiers to memorable names: providing a global, secure and memorable mapping is impossible without a trusted authority. SDSI offers an alternative by linking local name spaces; GADS uses the transitivity provided by the SDSI design to build a decentralized and censorship resistant name system without a trusted root based on secure delegation of authority. Additional details need to be considered in order to enable GADS to integrate smoothly with the World Wide Web. While following links on the Web matches following delegations in GADS, the existing HTTP-based infrastructure makes many assumptions about globally unique names; however, proxies can be used to enable legacy applications to function with GADS. This work presents the fundamental goals and ideas behind GADS, provides technical details on how GADS has been implemented and discusses deployment issues for using GADS with existing systems. We discuss how GADS and legacy DNS can interoperate during a transition period and what additional security advantages GADS offers over DNS with Security Extensions (DNSSEC). Finally, we present the results of a survey into surfing behavior, which suggests that the manual introduction of new direct links in GADS will be infrequent

[Go to top]

GNUnet

Towards Self-sovereign, decentralized personal data sharing and identity management (PDF)
by Martin Schanzenbach.
Dissertation, Technische Universität München, 2020. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, identity management is a key element for commercial and private services on the Internet. Over the past decade, digital identities evolved away from decentralized, pseudonymous, user-controlled personas towards centralized, unambiguous identities managed at and provided through service providers. This development was sparked by the requirement of real identities in the context of electronic commerce. However, it was particularly fuelled later by the emergence of social media and the possibilities it provides to people in order to establish social connections. The following centralization of identities at a handful of service providers significantly improved usability and reliability of identity services. Those benefits come at the expense of other, arguably equally important areas. For users, it is privacy and the permanent threat of being tracked and analyzed. For service providers, it is liability and the risk of facing significant punishment caused by strict privacy regulations which try to counteract the former. In this thesis, we investigate state-of-the-art approaches to modern identity management. We take a look at existing standards and recent research in order to understand the status quo and how it can be improved. As a result of our research, we present the following contributions: In order to allow users to reclaim control over their identities and personal data, we propose a design for a decentralized, self-sovereign directory service. This service allows users to share personal data with services without the need for a trusted third party. Unlike existing research in this area, we propose mechanisms which allow users to efficiently enforce access control on their data. Further, we investigate how trust can be established in user-managed, self-sovereign identities. We propose a trust establishment mechanism through the use of secure name systems. It allows users and organizations to establish trust relationships and identity assertions without the need for centralized public key infrastructures (PKIs). Additionally, we show how recent advancements in the area of non-interactive zero-knowledge (NIZK) protocols can be leveraged in order to create privacy-preserving attribute-based credentials (PP-ABCs) suitable for use in self-sovereign identity systems including our proposed directory service. We provide proof of concept implementations of our designs and evaluate them to show that they are suitable for practical applications.

[Go to top]

Decentralized Authentication for Self-Sovereign Identities using Name Systems (PDF)
by Christian Grothoff, Martin Schanzenbach, Annett Laube, and Emmanuel Benoist.
In journal:??(847382), October 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The GNU Name System (GNS) is a fully decentralized public key infrastructure and name system with private information retrieval semantics. It offers a holistic approach to interacting seamlessly with IoT ecosystems and enables people and their smart objects to prove their identity, membership and privileges - compatible with existing technologies. In this report we demonstrate how a wide range of private authentication and identity management scenarios are addressed by GNS in a cost-efficient, usable and secure manner. This simple, secure and privacy-friendly authentication method is a significant breakthrough when cyber peace, privacy and liability are the priorities for the benefit of a wide range of the population. After an introduction to GNS itself, we show how GNS can be used to authenticate servers, replacing the Domain Name System (DNS) and X.509 certificate authorities (CAs) with a more privacy-friendly but equally usable protocol which is trustworthy, human-centric and includes group authentication. We also built a demonstrator to highlight how GNS can be used in medical computing to simplify privacy-sensitive data processing in the Swiss health-care system. Combining GNS with attribute-based encryption, we created ReclaimID, a robust and reliable OpenID Connect-compatible authorization system. It includes simple, secure and privacy-friendly single sign-on to seamlessly share selected attributes with Web services and cloud ecosystems. Further, we demonstrate how ReclaimID can be used to solve the problem of addressing, authentication and data sharing for IoT devices. These applications are just the beginning for GNS; the versatility and extensibility of the protocol will lend itself to an even broader range of use-cases. GNS is an open standard with a complete free software reference implementation created by the GNU project. It can therefore be easily audited, adapted, enhanced, tailored, developed and/or integrated, as anyone is allowed to use the core protocols and implementations free of charge, and to adapt them to their needs under the terms of the GNU Affero General Public License, a free software license approved by the Free Software Foundation.

[Go to top]

Toward secure name resolution on the internet
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In Computers & Security, 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) provides crucial name resolution functions for most Internet services. As a result, DNS traffic provides an important attack vector for mass surveillance, as demonstrated by the QUANTUMDNS and MORECOWBELL programs of the NSA. This article reviews how DNS works and describes security considerations for next generation name resolution systems. We then describe DNS variations and analyze their impact on security and privacy. We also consider Namecoin, the GNU Name System and RAINS, which are more radical re-designs of name systems in that they both radically change the wire protocol and also eliminate the existing global consensus on TLDs provided by ICANN. Finally, we assess how the different systems stack up with respect to the goal of improving security and privacy of name resolution for the future Internet

[Go to top]

The GNUnet System
by Christian Grothoff.
Habilitation à diriger des recherches, Université de Rennes 1, December 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet. This habilitation provides an overview of the GNUnet architecture, including the development process, the network architecture and the software architecture. The goal of Part 1 is to provide an overview of how the various parts of the project work together today, and to then give ideas for future directions. The text is a first attempt to provide this kind of synthesis, and in return does not go into extensive technical depth on any particular topic. Part 2 then gives selected technical details based on eight publications covering many of the core components. This is a harsh selection; on the GNUnet website there are more than 50 published research papers and theses related to GNUnet, providing extensive and in-depth documentation. Finally, Part 3 gives an overview of current plans and future work

[Go to top]

Improving Voice over GNUnet (PDF)
by Christian Ulrich.
B.S, TU Berlin, July 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In contrast to ubiquitous cloud-based solutions, the telephony application GNUnet conversation provides fully-decentralized, secure voice communication and thus impedes mass surveillance. The aim of this thesis is to investigate why GNUnet conversation currently provides poor Quality of Experience under typical wide area network conditions and to propose optimization measures. After network shaping and the initialization of two isolated GNUnet peers had been automated, delay measurements were done. With emulated network characteristics, network delay, cryptography delays and audio codec delays were measured and transmitted speech was recorded. An analysis of the measurement results and a subjective assessment of the speech recordings revealed that extreme outliers occur in most scenarios and impair QoE. Moreover it was shown that GNUnet conversation introduces a large delay that confines the environment in which good QoE is possible. In the measurement environment a delay of at least 23 ms always occurred, large parts of which were caused by cryptography. It was shown that optimization in the cryptography part and other components is possible. Finally the conditions for currently reaching good QoE were determined and ideas for further investigations were presented

[Go to top]

Implementing Privacy Preserving Auction Protocols (PDF)
by Markus Teich.
Ph.D. thesis, TUM, February 2017. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this thesis we translate Brandt's privacy preserving sealed-bid online auction protocol from RSA to elliptic curve arithmetic and analyze the theoretical and practical benefits. With Brandt's protocol, the auction outcome is completely resolved by the bidders and the seller without the need for a trusted third party. Losing bids are not revealed to anyone. We present libbrandt, our implementation of four algorithms with different outcome and pricing properties, and describe how they can be incorporated in a real-world online auction system. Our performance measurements show a reduction of computation time and prospective bandwidth cost of over 90% compared to an implementation of the RSA version of the same algorithms. We also evaluate how libbrandt scales in different dimensions and conclude that the system we have presented is promising with respect to an adoption in the real world
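
A quick back-of-the-envelope comparison illustrates why the move to elliptic curves pays off in bandwidth (element sizes below are typical figures at roughly 128-bit security, not numbers taken from the thesis):

    rsa_element_bits = 3072   # modular group element (illustrative)
    ec_element_bits = 257     # compressed elliptic-curve point (illustrative)
    print(f"{1 - ec_element_bits / rsa_element_bits:.0%} fewer bits per element")

Since the protocol exchanges many group elements per bid, shrinking each element by roughly an order of magnitude compounds into the large savings the thesis reports.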

[Go to top]

Enabling Secure Web Payments with GNU Taler (PDF)
by Jeffrey Burdges, Florian Dold, Christian Grothoff, and Marcello Stanisci.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

GNU Taler is a new electronic online payment system which provides privacy for customers and accountability for merchants. It uses an exchange service to issue digital coins using blind signatures, and is thus not subject to the performance issues that plague Byzantine fault-tolerant consensus-based solutions. The focus of this paper is addressing the challenges payment systems face in the context of the Web. We discuss how to address Web-specific challenges, such as handling bookmarks and sharing of links, as well as supporting users that have disabled JavaScript. Web payment systems must also navigate various constraints imposed by modern Web browser security architecture, such as same-origin policies and the separation between browser extensions and Web pages. While our analysis focuses on how Taler operates within the security infrastructure provided by the modern Web, the results partially generalize to other payment systems. We also include the perspective of merchants, as existing systems have often struggled with securing payment information at the merchant's side. Here, challenges include avoiding database transactions for customers that do not actually go through with the purchase, as well as cleanly separating security-critical functions of the payment system from the rest of the Web service

[Go to top]

Privacy-Preserving Abuse Detection in Future Decentralised Online Social Networks (PDF)
by Álvaro García-Recuero, Jeffrey Burdges, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Future online social networks need to not only protect sensitive data of their users, but also protect them from abusive behavior coming from malicious participants in the network. We investigate the use of supervised learning techniques to detect abusive behavior and describe privacy-preserving protocols to compute the feature set required by abuse classification algorithms in a secure and privacy-preserving way. While our method is not yet fully resilient against a strong adaptive adversary, our evaluation suggests that it will be useful to detect abusive behavior with a minimal impact on privacy

[Go to top]

Managing and Presenting User Attributes over a Decentralized Secure Name System
by Martin Schanzenbach and Christian Banse.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, user attributes are managed at centralized identity providers. However, two centralized identity providers dominate digital identity and access management on the web. This is increasingly becoming a privacy problem in times of mass surveillance and data mining for targeted advertisement. Existing systems for attribute sharing or credential presentation either rely on a trusted third party service or require the presentation to be online and synchronous. In this paper we propose a concept that allows the user to manage and share his attributes asynchronously with a requesting party using a secure, decentralized name system

[Go to top]

Byzantine Set-Union Consensus using Efficient Set Reconciliation (PDF)
by Florian Dold and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Applications of secure multiparty computation such as certain electronic voting or auction protocols require Byzantine agreement on large sets of elements. Implementations proposed in the literature so far have relied on state machine replication, and reach agreement on each individual set element in sequence. We introduce set-union consensus, a specialization of Byzantine consensus that reaches agreement over whole sets. This primitive admits an efficient and simple implementation by the composition of Eppstein's set reconciliation protocol with Ben-Or's ByzConsensus protocol. A free software implementation of this construction is available in GNUnet. Experimental results indicate that our approach results in an efficient protocol for very large sets, especially in the absence of Byzantine faults. We show the versatility of set-union consensus by using it to implement distributed key generation, ballot collection and cooperative decryption for an electronic voting protocol implemented in GNUnet
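
The set-reconciliation half of the construction is worth a sketch: all elements go into a small invertible Bloom filter (a counter and an XOR of keys per cell, one side inserted positively and the other negatively, equivalent to subtracting two filters), and "peeling" cells holding exactly one key yields the symmetric difference at a cost proportional to the difference rather than the set size. A toy Python version (illustrative parameters; the rare decode failures a real implementation must handle are ignored):

    import hashlib

    CELLS, HASHES = 32, 3   # illustrative, untuned parameters

    def cell_indices(key):
        return [int.from_bytes(hashlib.sha256(f"{i}:{key}".encode())
                               .digest()[:4], "big") % CELLS
                for i in range(HASHES)]

    def insert(table, key, sign):
        for c in cell_indices(key):
            table[c][0] += sign   # counter
            table[c][1] ^= key    # xor of keys in this cell

    def symmetric_difference(set_a, set_b):
        table = [[0, 0] for _ in range(CELLS)]
        for k in set_a: insert(table, k, +1)
        for k in set_b: insert(table, k, -1)
        only_a, only_b = set(), set()
        progress = True
        while progress:               # peel cells holding a single key
            progress = False
            for count, x in table:
                if abs(count) == 1 and x:
                    (only_a if count == 1 else only_b).add(x)
                    insert(table, x, -count)
                    progress = True
        return only_a, only_b

    print(symmetric_difference({1, 2, 3, 4}, {3, 4, 5}))  # ({1, 2}, {5})

Shared elements cancel out in the filter, which is why the message size depends only on how much the two sets differ.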

[Go to top]

GNUnet und Informationsmacht: Analyse einer P2P-Technologie und ihrer sozialen Wirkung (PDF)
by Christian Ricardo Kühne.
Diplomarbeit, Humboldt-Universität zu Berlin, April 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis studies the GNUnet project, comprising its history, ideas and the P2P network technology. It specifically investigates the question of emancipatory potentials with regard to forms of information power due to a widely deployed new Internet technology and tries to identify essential suspensions of power within the scope of an impact assessment. Moreover, we will see by contrasting the GNUnet project with the critical data protection project, founded on social theory, that both are heavily concerned about the problem of illegitimate and unrestrained information power, giving us additional insights for the assessment. Last but not least, I'll try to present a scheme of how both approaches may interact to realize their goals

[Go to top]

Zur Idee herrschaftsfreier kooperativer Internetdienste (PDF)
by Christian Ricardo Kühne.
In FIfF-Kommunikation, 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Byzantine Fault Tolerant Set Consensus with Efficient Set Reconciliation (PDF)
by Florian Dold.
Master, Technische Universität München, December 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Byzantine consensus is a fundamental and well-studied problem in the area of distributed systems. It requires a group of peers to reach agreement on some value, even if a fraction of the peers is controlled by an adversary. This thesis proposes set union consensus, an efficient generalization of Byzantine consensus from single elements to sets. This is practically motivated by Secure Multiparty Computation protocols such as electronic voting, where a large set of elements must be collected and agreed upon. Existing practical implementations of Byzantine consensus are typically based on state machine replication and not well-suited for agreement on sets, since they must process individual agreements on all set elements in sequence. We describe and evaluate our implementation of set union consensus in GNUnet, which is based on a composition of Eppstein's set reconciliation protocol with the simple gradecast consensus protocol described by Ben-Or

[Go to top]

A Secure and Resilient Communication Infrastructure for Decentralized Networking Applications (PDF)
by Matthias Wachs.
PhD, Technische Universität München, February 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis provides the design and implementation of a secure and resilient communication infrastructure for decentralized peer-to-peer networks. The proposed communication infrastructure tries to overcome limitations to unrestricted communication on today's Internet and has the goal of re-establishing unhindered communication between users. With the GNU name system, we present a fully decentralized, resilient, and privacy-preserving alternative to DNS and existing security infrastructures

[Go to top]

A Decentralized and Autonomous Anomaly Detection Infrastructure for Decentralized Peer-to-Peer Networks (PDF)
by Omar Tarabai.
Master, October 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In decentralized networks, collecting and analysing information from the network is useful for developers and operators to monitor the behaviour and detect anomalies such as attacks or failures in both the overlay and underlay networks. But realizing such an infrastructure is hard due to the decentralized nature of the network, especially if the anomaly occurs on systems not operated by developers or if participants get separated from the collection points. In this thesis a decentralized monitoring infrastructure using a decentralized peer-to-peer network is developed to collect information and detect anomalies in a collaborative way, without coordination by and in absence of a centralized infrastructure, and to report detected incidents to a monitoring infrastructure. We start by introducing background information about peer-to-peer networks, anomalies and anomaly detection techniques in the literature. Then we present some of the related work regarding monitoring decentralized networks, anomaly detection and data aggregation in decentralized networks. We then perform an analysis of the system objectives, target environment and the desired properties of the system, design the system in terms of its overall structure and individual components, and follow with details about the system implementation. Lastly, we evaluate the final system implementation against our desired objectives

[Go to top]

Automatic Transport Selection and Resource Allocation for Resilient Communication in Decentralised Networks (PDF)
by Matthias Wachs, Fabian Oehlmann, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Making communication more resilient is a main focus for modern decentralised networks. A current development to increase connectivity between participants and to be resilient against service degradation attempts is to support different communication protocols, and to switch between these protocols in case degradation or censorship are detected. Supporting multiple protocols with different properties and having to share resources for communication with multiple partners creates new challenges with respect to protocol selection and resource allocation to optimally satisfy the applications' requirements for communication. This paper presents a novel approach for automatic transport selection and resource allocation with a focus on decentralised networks. Our goal is to evaluate the communication mechanisms available for each communication partner and then allocate resources in line with the requirements of the applications. We begin by detailing the overall requirements for an algorithm for transport selection and resource allocation, and then compare three different solutions using (1) a heuristic, (2) linear optimisation, and (3) machine learning. To show the suitability and the specific benefits of each approach, we evaluate their performance with respect to usability, scalability and quality of the solution found in relation to application requirements
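
Of the three solver variants compared, the heuristic is the simplest to sketch: per peer, pick the best-quality transport mechanism, then split the outbound quota in proportion to application preferences (a toy stand-in for the paper's solvers; the field names and quality scores are invented):

    def allocate(peers, quota):
        # peers: name -> {"transports": {mechanism: quality}, "pref": weight}
        total_pref = sum(p["pref"] for p in peers.values()) or 1
        plan = {}
        for name, p in peers.items():
            best = max(p["transports"], key=p["transports"].get)
            plan[name] = (best, quota * p["pref"] / total_pref)
        return plan

    print(allocate({"alice": {"transports": {"tcp": 0.9, "udp": 0.6}, "pref": 2},
                    "bob":   {"transports": {"wlan": 0.8}, "pref": 1}},
                   quota=1000))
    # {'alice': ('tcp', 666.66...), 'bob': ('wlan', 333.33...)}

The linear-optimisation and machine-learning variants replace this greedy choice with a global objective, trading solver runtime for solution quality.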

[Go to top]

An Approach for Home Routers to Securely Erase Sensitive Data (PDF)
by Nicolas Beneš.
Bachelor Thesis, Technische Universität München, October 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Home routers are always-on low power embedded systems and part of the Internet infrastructure. In addition to the basic router functionality, they can be used to operate sensitive personal services, such as for private web and email servers, secure peer-to-peer networking services like GNUnet and Tor, and encrypted network file system services. These services naturally involve cryptographic operations with the cleartext keys being stored in RAM. This makes router devices possible targets to physical attacks by home intruders. Attacks include interception of unprotected data on bus wires, alteration of firmware through exposed JTAG headers, or recovery of cryptographic keys through the cold boot attack. This thesis presents Panic!, a combination of open hardware design and free software to detect physical integrity attacks and to react by securely erasing cryptographic keys and other sensitive data from memory. To improve auditability and to allow cheap reproduction, the components of Panic! are kept simple in terms of conceptual design and lines of code. First, the motivation to use home routers for services besides routing and the need to protect their physical integrity is discussed. Second, the idea and functionality of the Panic! system is introduced and the high-level interactions between its components explained. Third, the software components to be run on the router are described. Fourth, the requirements of the measurement circuit are declared and a prototype is presented. Fifth, some characteristics of pressurized environments are discussed and the difficulties for finding adequate containments are explained. Finally, an outlook to tasks left for the future is given

[Go to top]

Experimental comparison of Byzantine fault tolerant distributed hash tables (PDF)
by Supriti Singh.
Masters, Saarland University, September 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are a key data structure for the construction of peer-to-peer systems. They provide an efficient way to distribute the storage and retrieval of key-data pairs among the participating peers. DHTs should be scalable, robust against churn and resilient to attacks. X-Vine is a DHT protocol which offers security against Sybil attacks. All communication among peers is performed over social network links, with the presumption that a friend can be trusted. This trust can be extended to a friend of a friend. It uses the tested Chord ring topology as an overlay, which has been proven to be scalable and robust. The aim of the thesis is to experimentally compare two DHTs, R5N and X-Vine. GNUnet is a free software secure peer-to-peer framework which uses R5N. In this thesis, we present the implementation of X-Vine on GNUnet, and compare the performance of R5N and X-Vine
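
The Chord ring that X-Vine reuses as its overlay fits in a few lines: identifiers live on a ring mod 2^M, every node keeps "fingers" at power-of-two distances, and a lookup repeatedly jumps to the closest finger preceding the key, roughly halving the remaining distance each hop. A toy sketch with a static node set (real DHTs additionally handle churn, replication and security):

    M = 6                                  # identifier bits (toy ring)
    RING = 2 ** M
    nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]

    def successor(k):
        # first node at or after k on the ring
        return min(nodes, key=lambda n: (n - k) % RING)

    fingers = {n: [successor((n + 2 ** i) % RING) for i in range(M)]
               for n in nodes}

    def between(x, a, b):
        # x in the ring interval (a, b]
        return (a < x <= b) if a < b else (x > a or x <= b)

    def lookup(n, key, hops=0):
        succ = fingers[n][0]               # finger 0 is the direct successor
        if between(key, n, succ):
            return succ, hops + 1          # succ is responsible for key
        for f in reversed(fingers[n]):
            if between(f, n, key):         # closest finger preceding the key
                return lookup(f, key, hops + 1)
        return succ, hops + 1              # unreached with complete fingers

    print(lookup(8, key=54))  # (56, 3): node 56 stores key 54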

[Go to top]

Cryptographically Secure, Distributed Electronic Voting (PDF)
by Florian Dold.
Bachelor's, Technische Universität München, August 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Elections are a vital tool for decision-making in democratic societies. The past decade has witnessed a handful of attempts to apply modern technology to the election process in order to make it faster and more cost-effective. Most of the practical efforts in this area have focused on replacing traditional voting booths with electronic terminals, but did not attempt to apply cryptographic techniques able to guarantee critical properties of elections such as secrecy of ballot and verifiability. While such techniques were extensively researched in the past 30 years, practical implementations of cryptographically secure remote electronic voting schemes are not readily available. All existing implementations we are aware of either exhibit critical security flaws, are proprietary black-box systems, or require additional physical assumptions such as a preparatory key ceremony executed by the election officials. The latter makes such systems unusable for purely digital communities. This thesis describes the design and implementation of an electronic voting system in GNUnet, a framework for secure and decentralized networking. We provide a short survey of voting schemes and existing implementations. The voting scheme we implemented makes use of threshold cryptography, a technique which requires agreement among a large subset of the election officials to execute certain cryptographic operations. Since such protocols have applications outside of electronic voting, we describe their design and implementation in GNUnet separately
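
The threshold idea can be illustrated with plain Shamir secret sharing: the secret becomes the constant term of a random polynomial, each official holds one evaluation, and any t of n shares reconstruct it by Lagrange interpolation at zero. A self-contained toy with a trusted dealer (the protocols in the thesis avoid exactly that dealer via distributed key generation):

    import random

    P = 2 ** 61 - 1   # prime field size (illustrative)

    def share(secret, t, n):
        # random degree-(t-1) polynomial with the secret as constant term
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 over the prime field
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(1234, t=3, n=5)
    print(reconstruct(shares[:3]) == 1234,   # any 3 of the 5 shares suffice
          reconstruct(shares[2:]) == 1234)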

[Go to top]

Control Flow Analysis for Event-Driven Programs (PDF)
by Florian Scheibner.
Bachelors, Technical University of Munich, July 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Static analysis is often used to automatically check for common bugs in programs. Compilers already check for some common programming errors and issue warnings; however, they do not do a very deep analysis because this would slow the compilation of the program down. Specialized tools like Coverity or the Clang Static Analyzer look at possible runs of a program and track the state of variables with respect to function calls. This information helps to identify possible bugs. In event-driven programs like GNUnet, callbacks are registered for later execution. Normal static analysis cannot track these function calls. This thesis is an attempt to extend different static analysis tools so that they can handle this case as well. Different solutions were thought of and executed with Coverity and Clang. This thesis describes the theoretical background of model checking and static analysis, the practical usage of widespread static analysis tools, and how these tools can be extended in order to improve their usefulness
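
The analysis problem is easiest to see in code: once a callback is registered as data and invoked later by an event loop, the call edge is invisible to a tool that only follows direct calls (an illustrative Python miniature; GNUnet's scheduler API is in C and the names below are made up):

    pending = []

    def schedule(cb, *args):
        pending.append((cb, args))     # the callback is stored, not called

    def cleanup(handle):
        print("closing", handle)

    schedule(cleanup, "socket-42")

    def run_event_loop():
        while pending:
            cb, args = pending.pop(0)
            cb(*args)                  # the only call site, bound at run time

    run_event_loop()                   # prints: closing socket-42

A naive call graph sees `cleanup` as dead code; the extensions discussed in the thesis teach the analyzers that registration implies a later invocation.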

[Go to top]

Cryogenic: Enabling Power-Aware Applications on Linux (PDF)
by Alejandra Morales.
Masters, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As a means of reducing power consumption, hardware devices are capable of entering sleep-states that have low power consumption. Waking up from those states in order to return to work is typically a rather energy-intensive activity. Some existing applications have non-urgent tasks that currently force hardware to wake up needlessly or prevent it from going to sleep. It would be better if such non-urgent activities could be scheduled to execute when the respective devices are active, to maximize the duration of sleep-states. This requires cooperation between applications and the kernel in order to determine when the execution of a task will not be expensive in terms of power consumption. This work presents the design and implementation of Cryogenic, a POSIX-compatible API that enables clustering tasks based on the hardware activity state. Specifically, Cryogenic's API allows applications to defer their execution until other tasks use the device they want to use. As a result, two actions that contribute to reducing the device's energy consumption are achieved: reducing the number of hardware wake-ups and maximizing the idle periods. The energy measurements enacted at the end of this thesis demonstrate that, for the specific setup and conditions present during our experimentation, Cryogenic is capable of achieving savings between 1% and 10% for a USB WiFi device. Although we ideally target mobile platforms, Cryogenic has been developed by means of a new Linux module that integrates with the existing POSIX event loop system calls. This allows Cryogenic to be used on many different platforms as long as they use a GNU/Linux distribution as the main operating system. Evidence of this can be found in this thesis, where we demonstrate the power savings on a single-board computer
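
The scheduling policy itself is easy to sketch: a task declares how long it may be deferred, runs early if the device becomes active anyway, and runs at its deadline otherwise. A hypothetical user-space Python sketch of that policy (the real system is a Linux kernel module reached through POSIX event-loop calls, not a class like this):

    import heapq, itertools, time

    class DeferredScheduler:
        def __init__(self):
            self._tasks, self._seq = [], itertools.count()

        def defer(self, task, max_delay_s):
            # the task tolerates up to max_delay_s of postponement
            deadline = time.monotonic() + max_delay_s
            heapq.heappush(self._tasks, (deadline, next(self._seq), task))

        def on_device_active(self):
            # piggy-back all pending work on a wake-up that happened anyway
            while self._tasks:
                heapq.heappop(self._tasks)[2]()

        def poll(self):
            # fallback: deadlines must still be honoured
            now = time.monotonic()
            while self._tasks and self._tasks[0][0] <= now:
                heapq.heappop(self._tasks)[2]()

    s = DeferredScheduler()
    s.defer(lambda: print("batched background sync"), max_delay_s=30)
    s.on_device_active()   # runs immediately, riding on existing activity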

[Go to top]

CADET: Confidential Ad-hoc Decentralized End-to-End Transport (PDF)
by Bartlomiej Polot and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes CADET, a new transport protocol for confidential and authenticated data transfer in decentralized networks. This transport protocol is designed to operate in restricted-route scenarios such as friend-to-friend or ad-hoc wireless networks. We have implemented CADET and evaluated its performance in various network scenarios, compared it to the well-known TCP/IP stack and tested its response to rapidly changing network topologies. While our current implementation is still significantly slower in high-speed low-latency networks, for typical Internet usage our system provides much better connectivity and security with performance comparable to TCP/IP

[Go to top]

A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names
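
A toy sketch of the delegation walk described above, with zones modeled as dictionaries (labels, record types and keys are invented placeholders; the real GNS resolves cryptographically signed records out of a DHT):

    # Each zone maps labels either to a delegation (PKEY, another zone's key)
    # or to a terminal record. "www.bob.alice" is resolved relative to MY zone:
    # follow delegations right-to-left, then look up the leftmost label.
    zones = {
        "my-pubkey":    {"alice": ("PKEY", "alice-pubkey")},
        "alice-pubkey": {"bob":   ("PKEY", "bob-pubkey")},
        "bob-pubkey":   {"www":   ("A",    "192.0.2.1")},
    }

    def resolve(name, zone):
        labels = list(reversed(name.split(".")))   # ['alice', 'bob', 'www']
        for label in labels[:-1]:
            rtype, rdata = zones[zone][label]
            assert rtype == "PKEY", "expected a delegation record"
            zone = rdata                           # hop into the delegated zone
        return zones[zone][labels[-1]]

    print(resolve("www.bob.alice", "my-pubkey"))   # ('A', '192.0.2.1')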

[Go to top]

Decentralized Evaluation of Regular Expressions for Capability Discovery in Peer-to-Peer Networks (PDF)
by Maximilian Szengel.
Masters, Technische Universität München, November 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents a novel approach for decentralized evaluation of regular expressions for capability discovery in DHT-based overlays. The system provides support for announcing capabilities expressed as regular expressions and discovering participants offering adequate capabilities. The idea behind our approach is to convert regular expressions into finite automata and store the corresponding states and transitions in a DHT. We show how locally constructed DFAs are merged in the DHT into an NFA without knowledge of any NFA already present in the DHT and without the need for any central authority. Furthermore, we present options for optimizing the DFA. There exist several possible applications for this general approach of decentralized regular expression evaluation. However, in this thesis we focus on the application of discovering users that are willing to provide network access using a specified protocol to a particular destination. We have implemented the system for our proposed approach and conducted a simulation. Moreover, we present the results of an emulation of the implemented system in a cluster
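
The core idea, storing automaton states under DHT keys and evaluating by walking transitions, can be sketched as follows (hand-built DFA for the regex "ab*c"; the real system derives keys from path prefixes and merges automata, both of which this omits):

    import hashlib

    def key(state_id):
        # DHT key under which a state's transition table is published
        return hashlib.sha512(state_id.encode()).hexdigest()[:16]

    # Hand-built DFA for "ab*c", published state by state into a "DHT".
    dfa = {"s0": {"a": "s1"}, "s1": {"b": "s1", "c": "s2"}, "s2": {}}
    accepting = {"s2"}
    dht = {key(s): (edges, s in accepting) for s, edges in dfa.items()}

    def accepts(string, start="s0"):
        state = start
        for ch in string:
            edges, _ = dht[key(state)]
            if ch not in edges:
                return False
            state = edges[ch]      # in the real system: one DHT lookup per transition
        return dht[key(state)][1]

    assert accepts("abbbc") and accepts("ac") and not accepts("abb")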

[Go to top]

Design and Implementation of a Censorship Resistant and Fully Decentralized Name System (PDF)
by Martin Schanzenbach.
Master's thesis, Technische Universität München, September 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents the design and implementation of the GNU Alternative Domain System (GADS), a decentralized, secure name system providing memorable names for the Internet as an alternative to the Domain Name System (DNS). The system builds on ideas from Rivest's Simple Distributed Security Infrastructure (SDSI) to address a central issue with providing a decentralized mapping of secure identifiers to memorable names: providing a global, secure and memorable mapping is impossible without a trusted authority. SDSI offers an alternative by linking local name spaces; GADS uses the transitivity provided by the SDSI design to build a decentralized and censorship resistant name system without a trusted root based on secure delegation of authority. Additional details need to be considered in order to enable GADS to integrate smoothly with the World Wide Web. While following links on the Web matches following delegations in GADS, the existing HTTP-based infrastructure makes many assumptions about globally unique names; however, proxies can be used to enable legacy applications to function with GADS. This work presents the fundamental goals and ideas behind GADS, provides technical details on how GADS has been implemented and discusses deployment issues for using GADS with existing systems. We discuss how GADS and legacy DNS can interoperate during a transition period and what additional security advantages GADS offers over DNS with Security Extensions (DNSSEC). Finally, we present the results of a survey into surfing behavior, which suggests that the manual introduction of new direct links in GADS will be infrequent

[Go to top]

Efficient and Secure Decentralized Network Size Estimation (PDF)
by Nathan S Evans, Bartlomiej Polot, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The size of a Peer-to-Peer (P2P) network is an important parameter for performance tuning of P2P routing algorithms. This paper introduces and evaluates a new efficient method for participants in an unstructured P2P network to establish the size of the overall network. The presented method is highly efficient, propagating information about the current size of the network to all participants using O(|E|) operations where |E| is the number of edges in the network. Afterwards, all nodes have the same network size estimate, which can be made arbitrarily accurate by averaging results from multiple rounds of the protocol. Security measures are included which make it prohibitively expensive for a typical active participating adversary to significantly manipulate the estimates. This paper includes experimental results that demonstrate the viability, efficiency and accuracy of the protocol
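
The flavor of the estimation technique can be reproduced in a few lines (a simulation sketch only: the flooding, proof-of-work defenses and the paper's exact bias correction are omitted):

    import hashlib, statistics

    def matching_bits(a, b):
        # number of leading bits shared by two hash values
        n = 0
        for x, y in zip(a, b):
            if x == y:
                n += 8
            else:
                return n + 8 - (x ^ y).bit_length()
        return n

    def estimate_size(num_peers, rounds=64):
        peers = [hashlib.sha256(b"peer%d" % i).digest() for i in range(num_peers)]
        best = []
        for r in range(rounds):
            target = hashlib.sha256(b"round%d" % r).digest()  # round-specific target
            best.append(max(matching_bits(p, target) for p in peers))
        # E[max matching bits] grows like log2(n); averaging rounds sharpens it.
        return 2 ** statistics.mean(best)

    print(estimate_size(1000))   # roughly on the order of 1000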

[Go to top]

Efficient and Secure Decentralized Network Size Estimation (PDF)
by Nathan S Evans, Bartlomiej Polot, and Christian Grothoff.
In unknown, May 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The size of a Peer-to-Peer (P2P) network is an important parameter for performance tuning of P2P routing algorithms. This paper introduces and evaluates a new efficient method for participants in an unstructured P2P network to establish the size of the overall network. The presented method is highly efficient, propagating information about the current size of the network to all participants using O(|E|) operations where |E| is the number of edges in the network. Afterwards, all nodes have the same network size estimate, which can be made arbitrarily accurate by averaging results from multiple rounds of the protocol. Security measures are included which make it prohibitively expensive for a typical active participating adversary to significantly manipulate the estimates. This paper includes experimental results that demonstrate the viability, efficiency and accuracy of the protocol

[Go to top]

R5N : Randomized Recursive Routing for Restricted-Route Networks (PDF)
by Nathan S Evans and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes a new secure DHT routing algorithm for open, decentralized P2P networks operating in a restricted-route environment with malicious participants. We have implemented our routing algorithm and have evaluated its performance under various topologies and in the presence of malicious peers. For small-world topologies, our algorithm provides significantly better performance when compared to existing methods. In more densely connected topologies, our performance is better than or on par with other designs

[Go to top]

Performance Regression Monitoring with Gauger
by Bartlomiej Polot and Christian Grothoff.
In Linux Journal (209), September 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

High-speed high-security signatures (PDF)
by Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
In Journal of Cryptographic Engineering 2, September 2011, pages 77-89. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Methods for Secure Decentralized Routing in Open Networks (PDF)
by Nathan S Evans.
Ph.D. thesis, Technische Universität München, August 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The contribution of this thesis is the study and improvement of secure, decentralized, robust routing algorithms for open networks including ad-hoc networks and peer-to-peer (P2P) overlay networks. The main goals for our secure routing algorithm are openness, efficiency, scalability and resilience to various types of attacks. Common P2P routing algorithms trade off decentralization for security; for instance, by choosing whether or not to require a centralized authority to allow peers to join the network. Other algorithms trade scalability for security, for example employing random search or flooding to prevent certain types of attacks. Our design attempts to meet our security goals in an open system, while limiting the performance penalties incurred. The first step we took towards designing our routing algorithm was an analysis of the routing algorithm in Freenet. This algorithm is relevant because it achieves efficient (O(log n)) routing in realistic network topologies in a fully decentralized open network. However, we demonstrate why this algorithm is not secure, as malicious participants are able to severely disrupt the operation of the network. The main difficulty with the Freenet routing algorithm is that for performance it relies on information received from untrusted peers. We also detail a range of proposed solutions, none of which we found to fully fix the problem. A related problem for efficient routing in sparsely connected networks is the difficulty in sufficiently populating routing tables. One way to improve connectivity in P2P overlay networks is by utilizing modern NAT traversal techniques. We employ a number of standard NAT traversal techniques in our approach, and also developed and experimented with a novel method for NAT traversal based on ICMP and UDP hole punching. Unlike other NAT traversal techniques, ours does not require a trusted third party. Another technique we use in our implementation to help address the connectivity problem in sparse networks is the use of distance-vector routing in a small local neighborhood. The distance-vector variant used in our system employs onion routing to secure the resulting indirect connections. In the course of this design work, we discovered a serious vulnerability in the Tor protocol which allowed us to use a DoS attack to reduce the anonymity of the users of this extant anonymizing P2P network. This vulnerability is based on allowing paths of unrestricted length for onion routes through the network. Analyzing Tor and implementing this attack gave us valuable knowledge which helped when designing the distance-vector routing protocol for our system. Finally, we present the design of our new secure randomized routing algorithm that does not suffer from the various problems we discovered in previous designs. Goals for the algorithm include providing efficiency and robustness in the presence of malicious participants for an open, fully decentralized network without trusted authorities. We provide a mathematical analysis of the algorithm itself and have created and deployed an implementation of this algorithm in GNUnet. In this thesis we also provide a detailed overview of a distributed emulation framework capable of running a large number of nodes using our full code base, as well as some of the challenges encountered in creating and using such a testing framework. 
We present extensive experimental results showing that our routing algorithm outperforms the dominant DHT design in target topologies, and performs comparably in other scenarios

[Go to top]

Scalability & Paranoia in a Decentralized Social Network (PDF)
by Carlo v. Loesch, Gabor X Toth, and Mathias Baumann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There's a lot of buzz out there about "replacing" Facebook with a privacy-enhanced, decentralized, ideally open source something. In this talk we'll focus on how much privacy we should plan for (specifically about how we cannot entrust our privacy to modern virtual machine technology) and the often underestimated problem of getting such a monster network to function properly. These issues can be considered together or separately: Even if you're not as concerned about privacy as we are, the scalability problem still persists

[Go to top]

What's the difference?: efficient set reconciliation without prior context (PDF)
by David Eppstein, Michael T. Goodrich, Frank Uyeda, and George Varghese.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

The Free Secure Network Systems Group: Secure Peer-to-Peer Networking and Beyond (PDF)
by Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper introduces the current research and future plans of the Free Secure Network Systems Group at the Technische Universität München. In particular, we provide some insight into the development process and architecture of the GNUnet P2P framework and the challenges we are currently working on

[Go to top]

Beyond Simulation: Large-Scale Distributed Emulation of P2P Protocols (PDF)
by Nathan S Evans and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents details on the design and implementation of a scalable framework for evaluating peer-to-peer protocols. Unlike systems based on simulation, emulation-based systems enable the experimenter to obtain data that reflects directly on the concrete implementation in much greater detail. This paper argues that emulation is a better model for experiments with peer-to-peer protocols since it can provide scalability and high flexibility while eliminating the cost of moving from experimentation to deployment. We discuss our unique experience with large-scale emulation using the GNUnet peer-to-peer framework and provide experimental results to support these claims

[Go to top]

Developing Peer-to-Peer Web Applications (PDF)
by Toni Ruottu.
Master's thesis, University of Helsinki, September 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the virtual world grows more complex, finding a standard way for storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References for data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, but also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, on the user's request. In our research we tried to find a model that would make data manageable to users, and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure names used for the content, and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data

[Go to top]

Cryptographic Extraction and Key Derivation: The HKDF Scheme (PDF)
by Hugo Krawczyk.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario. Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)
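
The extract-then-expand structure is compact enough to transcribe directly from RFC 5869 (SHA-256 assumed; shown here only to make the two phases concrete):

    import hashlib, hmac

    def hkdf_extract(salt, ikm, h=hashlib.sha256):
        # Extract: concentrate the input keying material into a pseudorandom key.
        if salt is None:
            salt = b"\x00" * h().digest_size
        return hmac.new(salt, ikm, h).digest()

    def hkdf_expand(prk, info, length, h=hashlib.sha256):
        # Expand: stretch the PRK into `length` bytes bound to the context `info`.
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]), h).digest()
            out += block
            counter += 1
        return out[:length]

    okm = hkdf_expand(hkdf_extract(b"salt", b"input key material"),
                      b"application context", 32)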

[Go to top]

Autonomous NAT Traversal (PDF)
by Andreas Müller, Nathan S Evans, Christian Grothoff, and Samy Kamkar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for Autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice
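
The fake-ICMP trick can be sketched with raw sockets (requires root; addresses are placeholders and the exact message layout in the paper's implementations may differ): the NATed peer keeps pinging a fixed unreachable address, and the outside peer fabricates the matching ICMP Time Exceeded error, which the NAT forwards inward:

    import socket, struct

    SERVER, DUMMY = "198.51.100.7", "192.0.2.1"   # placeholder addresses

    def checksum(data):
        if len(data) % 2:
            data += b"\x00"
        s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        s = (s >> 16) + (s & 0xFFFF)
        return ~(s + (s >> 16)) & 0xFFFF

    # The ICMP echo request the NATed server keeps sending towards DUMMY:
    echo = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
    echo = struct.pack("!BBHHH", 8, 0, checksum(echo), 0x1234, 1)

    # IPv4 header of that in-flight datagram, quoted inside the fake error
    # (IP header checksum left zero for brevity):
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(echo), 0, 0, 64, 1, 0,
                     socket.inet_aton(SERVER), socket.inet_aton(DUMMY))

    # Fabricated ICMP Time Exceeded (type 11) carrying the quoted datagram:
    body = ip + echo
    te = struct.pack("!BBHI", 11, 0, 0, 0) + body
    te = struct.pack("!BBHI", 11, 0, checksum(te), 0) + body

    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    s.sendto(te, (SERVER, 0))   # the server's NAT should route this inward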

[Go to top]

Adapting Blackhat Approaches to Increase the Resilience of Whitehat Application Scenarios (PDF)
by Bartlomiej Polot.
Master's thesis, Technische Universität München, 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Curve25519: new Diffie-Hellman speed records (PDF)
by Daniel J. Bernstein.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

A Quick Introduction to Bloom Filters (PDF)
by Christian Grothoff.
In unknown, August 2005. (BibTeX entry) (Download bibtex record)
(direct link)
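
This entry carries no abstract here; as a reminder of the structure the article introduces, a minimal Bloom filter sketch (bit-array size and hash count are arbitrary):

    import hashlib

    class BloomFilter:
        def __init__(self, m=1024, k=4):
            self.m, self.k, self.bits = m, k, bytearray(m // 8)

        def _positions(self, item):
            # derive k positions from slices of one strong hash
            h = hashlib.sha512(item.encode()).digest()
            return [int.from_bytes(h[4*i:4*i+4], "big") % self.m
                    for i in range(self.k)]

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("GNUnet")
    assert "GNUnet" in bf   # no false negatives; rare false positives are possible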

[Go to top]

Query Forwarding Algorithm Supporting Initiator Anonymity in GNUnet (PDF)
by Kohei Tatara, Y. Hori, and Kouichi Sakurai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Anonymity in a peer-to-peer network means that it is difficult to associate a particular communication with a sender or a recipient. Recently, an anonymous peer-to-peer framework called GNUnet was developed. A primary feature of GNUnet is resistance to traffic analysis. However, Kügler analyzed a routing protocol in GNUnet and pointed out the traceability of the initiator. In this paper, we propose an alternative routing protocol applicable in GNUnet which is resistant to Kügler's shortcut attacks

[Go to top]

Reading File Metadata with extract and libextractor
by Christian Grothoff.
In Linux Journal 6-2005, June 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Enhancing Web privacy and anonymity in the digital era (PDF)
by Stefanos Gritzalis.
In Information Management & Computer Security 12, January 2004, pages 255-287. (BibTeX entry) (Download bibtex record)
(direct link)

This paper presents a state-of-the-art review of the Web privacy and anonymity enhancing security mechanisms, tools, applications and services, with respect to their architecture, operational principles and vulnerabilities. Furthermore, to facilitate a detailed comparative analysis, the appropriate parameters have been selected and grouped in classes of comparison criteria, in the form of an integrated comparison framework. The main concern during the design of this framework was to cover the confronted security threats, applied technological issues and the satisfaction of users' demands. GNUnet's Anonymity Protocol (GAP), Freedom, Hordes, Crowds, Onion Routing, Platform for Privacy Preferences (P3P), TRUSTe, Lucent Personalized Web Assistant (LPWA), and Anonymizer have been reviewed and compared. The comparative review has clearly highlighted that the pros and cons of each system do not coincide, mainly due to the fact that each one exhibits different design goals and thus adopts dissimilar techniques for protecting privacy and anonymity

[Go to top]

An Excess-Based Economic Model for Resource Allocation in Peer-to-Peer Networks (PDF)
by Christian Grothoff.
In Wirtschaftsinformatik 3-2003, June 2003. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes economic aspects of GNUnet, a peer-to-peer framework for anonymous distributed file-sharing. GNUnet is decentralized; all nodes are equal peers. In particular, there are no trusted entities in the network. This paper describes an economic model to perform resource allocation and defend against malicious participants in this context. The approach presented does not use credentials or payments; rather, it is based on trust. The design is much like that of a cooperative game in which peers take the role of players. Nodes must cooperate to achieve individual goals. In such a scenario, it is important to be able to distinguish between nodes exhibiting friendly behavior and those exhibiting malicious behavior. GNUnet aims to provide anonymity for its users. Its design makes it hard to link a transaction to the node where it originated from. While anonymity requirements make a global view of the end-points of a transaction infeasible, the local link-to-link messages can be fully authenticated. Our economic model is based entirely on this local view of the network and takes only local decisions
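
A cartoon of the excess-based accounting described above (constants and method names invented for illustration): trust is tracked per link, everyone is served while capacity is idle, and trust is only charged, and thus only matters, under load:

    class Peer:
        def __init__(self):
            self.trust = {}                       # link-local view; no global state

        def on_contribution(self, neighbor, amount):
            # A neighbor supplied useful work: it earns trust.
            self.trust[neighbor] = self.trust.get(neighbor, 0) + amount

        def admit(self, neighbor, cost, busy):
            if not busy:
                return True                       # excess capacity is free for all
            if self.trust.get(neighbor, 0) >= cost:
                self.trust[neighbor] -= cost      # under load, requests spend trust
                return True
            return False                          # unknown peers are dropped first

    p = Peer()
    p.on_contribution("n1", 5)
    assert p.admit("n1", cost=3, busy=True) and not p.admit("n2", cost=3, busy=True)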

[Go to top]

An Analysis of GNUnet and the Implications for Anonymous, Censorship-Resistant Networks (PDF)
by Dennis Kügler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

A Transport Layer Abstraction for Peer-to-Peer Networks (PDF)
by Ronaldo A. Ferreira, Christian Grothoff, and Paul Ruth.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The initially unrestricted host-to-host communication model provided by the Internet Protocol has deteriorated due to political and technical changes caused by Internet growth. While this is not a problem for most client-server applications, peer-to-peer networks frequently struggle with peers that are only partially reachable. We describe how a peer-to-peer framework can hide diversity and obstacles in the underlying Internet and provide peer-to-peer applications with abstractions that hide transport specific details. We present the details of an implementation of a transport service based on SMTP. Small-scale benchmarks are used to compare transport services over UDP, TCP, and SMTP
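
The abstraction boils down to a tiny interface that hides the concrete transport from the peer-to-peer core; a sketch (names invented, UDP as the trivial backend):

    import socket
    from abc import ABC, abstractmethod

    class Transport(ABC):
        """All the P2P core sees: bytes to and from opaque addresses."""
        @abstractmethod
        def send(self, addr, payload: bytes): ...
        @abstractmethod
        def receive(self) -> bytes: ...

    class UDPTransport(Transport):
        def __init__(self, port):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind(("", port))
        def send(self, addr, payload):
            self.sock.sendto(payload, addr)
        def receive(self):
            data, _ = self.sock.recvfrom(65535)
            return data

    # An SMTP transport would send() by mailing an encoded payload and receive()
    # by polling a mailbox; peers behind restrictive firewalls remain reachable,
    # and the core above never knows the difference.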

[Go to top]

gap–Practical Anonymous Networking (PDF)
by Krista Bennett and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes how anonymity is achieved in GNUnet, a framework for anonymous distributed and secure networking. The main focus of this work is gap, a simple protocol for anonymous transfer of data which can achieve better anonymity guarantees than many traditional indirection schemes and is additionally more efficient. gap is based on a new perspective on how to achieve anonymity. Based on this new perspective it is possible to relax the requirements stated in traditional indirection schemes, allowing individual nodes to balance anonymity with efficiency according to their specific needs

[Go to top]

The GNet Whitepaper (PDF)
by Krista Bennett, Tiberius Stef, Christian Grothoff, Tzvetan Horozov, and Ioana Patrascu.
In unknown, June 2002. (BibTeX entry) (Download bibtex record)
(direct link)

This paper describes GNet, a reliable anonymous distributed backup system with reasonable defenses against malicious hosts and low overhead in traffic and CPU time. The system design is described and compared to other publicly used services with similar goals. Additionally, the implementation and the protocols of GNet are presented

[Go to top]

Efficient Sharing of Encrypted Data (PDF)
by Krista Bennett, Christian Grothoff, Tzvetan Horozov, and Ioana Patrascu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

GPS

Keyless Jam Resistance (PDF)
by Leemon C. Baird, William L. Bahn, Michael D. Collins, Martin C. Carlisle, and Sean C. Butler.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Historically, communication has been made resistant to jamming by the use of a secret key that is shared by the sender and receiver. There are no known methods for achieving jam resistance without that shared key. Unfortunately, wireless communication is now reaching a scale and a level of importance where such secret-key systems are becoming impractical. For example, the civilian side of the Global Positioning System (GPS) cannot use a shared secret, since that secret would have to be given to all 6.5 billion potential users, and so would no longer be secret. So civilian GPS cannot currently be protected from jamming. But the FAA has stated that the civilian airline industry will transition to using GPS for all navigational aids, even during landings. A terrorist with a simple jamming system could wreak havoc at a major airport. No existing system can solve this problem, and the problem itself has not even been widely discussed. The problem of keyless jam resistance is important. There is a great need for a system that can broadcast messages without any prior secret shared between the sender and receiver. We propose the first system for keyless jam resistance: the BBC algorithm. We describe the encoding, decoding, and broadcast algorithms. We then analyze it for expected resistance to jamming and error rates. We show that BBC can achieve the same level of jam resistance as traditional spread spectrum systems, at just under half the bit rate, and with no shared secret. Furthermore, a hybrid system can achieve the same average bit rate as traditional systems
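
The BBC idea, marking a position derived by hashing each prefix of the checksum-padded message and decoding by growing consistent prefixes, can be sketched directly (parameters illustrative; the paper's analysis of mark density and error rates is omitted):

    import hashlib

    M = 1 << 20                       # size of the mark space (illustrative)

    def loc(prefix):
        return int.from_bytes(hashlib.sha256(prefix.encode()).digest(), "big") % M

    def encode(message, checksum=16):
        padded = message + "0" * checksum
        return {loc(padded[:i]) for i in range(1, len(padded) + 1)}

    def decode(marks, length, checksum=16):
        candidates = [""]
        for _ in range(length + checksum):
            candidates = [c + b for c in candidates for b in "01"
                          if loc(c + b) in marks]
        return [c[:length] for c in candidates if c.endswith("0" * checksum)]

    # A jammer can only ADD marks; random extra marks rarely extend a consistent
    # prefix, so both messages survive the superposition of their mark sets:
    marks = encode("1011001110001111") | encode("0000111100001111")
    print(sorted(decode(marks, 16)))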

[Go to top]

GRID

The IGOR File System for Efficient Data Distribution in the GRID (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many GRID applications such as drug discovery in the pharmaceutical industry or simulations in meteorology and generally in the earth sciences rely on large databases. Historically, these databases are flat files on the order of several hundred megabytes each. Today, sites often need to download dozens or hundreds of such files before they can start a simulation or analysis run, even if the respective application accesses only small fractions of the respective files. The IGOR file system (which has been developed within the EU FP6 SIMDAT project) addresses the need for an easy and efficient way to access large files across the Internet. IGOR-FS is especially suited for (potentially globally) distributed sites that read or modify only small portions of the files. IGOR-FS provides fine-grained versioning and backup capabilities; and it is built on strong cryptography to protect confidential data both in the network and on the local sites' storage systems

[Go to top]

Off-line Karma: A Decentralized Currency for Peer-to-peer and Grid Applications (PDF)
by Flavio D. Garcia and Jaap-Henk Hoepman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) and grid systems allow their users to exchange information and share resources, with little centralised or hierarchical control, instead relying on the fairness of the users to make roughly as many resources available as they use. To enforce this balance, some kind of currency or barter (called karma) is needed that must be exchanged for resources, thus limiting abuse. We present a completely decentralised, off-line karma implementation for P2P and grid systems that detects double-spending and other types of fraud under varying adversarial scenarios. The system is based on tracing the spending pattern of coins, and distributing the normally central role of a bank over a predetermined, but random, selection of nodes. The system is designed to allow nodes to join and leave the system at arbitrary times

[Go to top]

Peer-to-Peer Overlays and Data Integration in a Life Science Grid (PDF)
by Curt Cramer, Andrea Schafferhans, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Databases and Grid computing are a good match. With the service orientation of Grid computing, the complexity of maintaining and integrating databases can be kept away from the actual users. Data access and integration is performed via services, which also allow access control to be enforced. While it is our perception that many proposed Grid applications rely on a centralized and static infrastructure, Peer-to-Peer (P2P) technologies might help to dynamically scale and enhance Grid applications. The focus does not lie on publicly available P2P networks here, but on the self-organizing capabilities of P2P networks in general. A P2P overlay could, e.g., be used to improve the distribution of queries in a data Grid. For studying the combination of these three technologies, Grid computing, databases, and P2P, in this paper, we use an existing application from the life sciences, drug target validation, as an example. In its current form, this system has several drawbacks. We believe that they can be alleviated by using a combination of the service-based architecture of Grid computing and P2P technologies for implementing the services. The work presented in this paper is in progress. We mainly focus on the description of the current system state, its problems and the proposed new architecture. For a better understanding, we also outline the main topics related to the work presented here

[Go to top]

Gauger

Performance Regression Monitoring with Gauger
by Bartlomiej Polot and Christian Grothoff.
In Linux Journal (209), September 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Gnutella

GAS: Overloading a File Sharing Network as an Anonymizing System (PDF)
by Elias Athanasopoulos, Mema Roussopoulos, Kostas G. Anagnostakis, and Evangelos P. Markatos.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is considered a valuable property as far as everyday transactions in the Internet are concerned. Users care about their privacy and seek new ways to keep as much of their personal information as possible secret from third parties. Anonymizing systems exist nowadays that provide users with the technology to hide their origin when they use applications such as the World Wide Web or Instant Messaging. However, all these systems are vulnerable to a number of attacks and some of them may collapse under a low-strength adversary. In this paper we explore anonymity from a different perspective. Instead of building a new anonymizing system, we try to overload an existing file sharing system, Gnutella, and use it for a different purpose. We develop a technique that transforms Gnutella into an Anonymizing System (GAS) for a single download from the World Wide Web

[Go to top]

Understanding churn in peer-to-peer networks (PDF)
by Daniel Stutzbach and Reza Rejaie.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file-sharing/distribution systems, and exploring new aspects of churn

[Go to top]

Free Riding on Gnutella Revisited: The Bell Tolls? (PDF)
by Daniel Hughes, Geoff Coulson, and James Walkerdine.
In IEEE Distributed Systems Online 6, June 2005. (BibTeX entry) (Download bibtex record)
(direct link)

Individuals who use peer-to-peer (P2P) file-sharing networks such as Gnutella face a social dilemma. They must decide whether to contribute to the common good by sharing files or to maximize their personal experience by free riding, downloading files while not contributing any to the network. Individuals gain no personal benefits from uploading files (in fact, it's inconvenient), so it's "rational" for users to free ride. However, significant numbers of free riders degrade the entire system's utility, creating a "tragedy of the digital commons." In this article, a new analysis of free riding on the Gnutella network updates data from 2000 and points to an increasing downgrade in the network's overall performance and the emergence of a "metatragedy" of the commons among Gnutella developers

[Go to top]

Making gnutella-like P2P systems scalable (PDF)
by Yatin Chawathe, Lee Breslau, Nick Lanham, and Scott Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed

[Go to top]

Gossip-based protocols

Gossip-based Peer Sampling (PDF)
by Márk Jelasity, Spyros Voulgaris, Rachid Guerraoui, Anne-Marie Kermarrec, and Maarten van Steen.
In ACM Trans. Comput. Syst 25, 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this service provides every node with peers to gossip with. We promote this service to the level of a first-class abstraction of a large-scale distributed system, similar to a name service being a first-class abstraction of a local-area system. We present a generic framework to implement a peer-sampling service in a decentralized manner by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. Our framework generalizes existing approaches and makes it easy to discover new ones. We use this framework to empirically explore and compare several implementations of the peer sampling service. Through extensive simulation experiments we show that—although all protocols provide a good quality uniform random stream of peers to each node locally—traditional theoretical assumptions about the randomness of the unstructured overlays as a whole do not hold in any of the instances. We also show that different design decisions result in severe differences from the point of view of two crucial aspects: load balancing and fault tolerance. Our simulations are validated by means of a wide-area implementation
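
The essence of a gossip-based peer-sampling service is a periodic push-pull exchange of bounded partial views; a simulation sketch (view size and bootstrap topology are arbitrary):

    import random

    VIEW = 8   # bound on the partial view size (illustrative)

    def gossip_round(views):
        """Every node merges views with one random neighbor, then truncates."""
        for node in list(views):
            peer = random.choice(views[node])
            merged = set(views[node]) | set(views[peer]) | {node, peer}
            views[node] = random.sample(sorted(merged - {node}),
                                        min(VIEW, len(merged) - 1))
            views[peer] = random.sample(sorted(merged - {peer}),
                                        min(VIEW, len(merged) - 1))

    # Bootstrap 100 nodes in a ring; a few rounds later each node's view
    # approximates a random sample of the whole network.
    views = {i: [(i + 1) % 100] for i in range(100)}
    for _ in range(30):
        gossip_round(views)
    print(views[0])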

[Go to top]

Gossip-based aggregation in large dynamic networks (PDF)
by Márk Jelasity, Alberto Montresor, and Ozalp Babaoglu.
In ACM Transactions on Computer Systems 23, August 2005, pages 219-252. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure—all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures
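
The protocol's core, pairwise averaging that conserves the global sum, takes a few lines to simulate (complete graph for brevity; the paper of course runs this over sparse, dynamic overlays):

    import random

    def average_step(values, neighbors):
        """One push-pull exchange: both ends move to the midpoint of their values."""
        i = random.choice(list(values))
        j = random.choice(neighbors[i])
        values[i] = values[j] = (values[i] + values[j]) / 2

    # Counting via averaging: one node starts at 1.0, the rest at 0.0; the common
    # value converges to 1/n, so each node can locally estimate n = 1/value.
    n = 50
    values = {i: 1.0 if i == 0 else 0.0 for i in range(n)}
    neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}
    for _ in range(5000):
        average_step(values, neighbors)
    print(round(1 / values[7]))   # ~= 50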

[Go to top]

Guard

Privacy-Implications of Performance-Based Peer Selection by Onion-Routers: A Real-World Case Study using I2P (PDF)
by Michael Herrmann and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

I2P is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This paper presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP services (Eepsite) in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This paper provides the necessary background on I2P, gives details on the attack — including experimental data from measurements against the actual I2P network — and discusses possible solutions

[Go to top]

Locating Hidden Servers (PDF)
by Lasse Øverlier and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Hidden services were deployed on the Tor anonymous communication network in 2004. Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship. We present fast and cheap attacks that reveal the location of a hidden server. Using a single hostile Tor node we have located deployed hidden servers in a matter of minutes. Although we examine hidden services over Tor, our results apply to any client using a variety of anonymity networks. In fact, these are the first actual intersection attacks on any deployed public network: thus confirming general expectations from prior theory and simulation. We recommend changes to route selection design and implementation for Tor. These changes require no operational increase in network overhead and are simple to make; but they prevent the attacks we have demonstrated. They have been implemented

[Go to top]

HKDF

Cryptographic Extraction and Key Derivation: The HKDF Scheme (PDF)
by Hugo Krawczyk.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario. Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)

[Go to top]

HMAC

Cryptographic Extraction and Key Derivation: The HKDF Scheme (PDF)
by Hugo Krawczyk.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario. Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)

[Go to top]

HTTP

New Covert Channels in HTTP: Adding Unwitting Web Browsers to Anonymity Sets (PDF)
by Matthias Bauer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents new methods enabling anonymous communication on the Internet. We describe a new protocol that allows us to create an anonymous overlay network by exploiting the web browsing activities of regular users. We show that the overlay network provides an anonymity set greater than the set of senders and receivers in a realistic threat model. In particular, the protocol provides unobservability in our threat model

[Go to top]

Hardware

SURF-2: A program for dependability evaluation of complex hardware and software systems
by C. Beounes, M. Aguera, J. Arlat, S. Bachmann, C. Bourdeau, J.-E. Doucet, K. Kanoun, J.-C. Laprie, S. Metge, J. Moreira de Souza, D. Powell, and P. Spiesser.
In the Proceedings of FTCS-23 The Twenty-Third International Symposium on Fault-Tolerant Computing, June 1993, pages 668-673. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SURF-2, a software tool for evaluating system dependability, is described. It is especially designed for an evaluation-based system design approach in which multiple design solutions need to be compared from the dependability viewpoint. System behavior may be modeled either by Markov chains or by generalized stochastic Petri nets. The tool supports the evaluation of different measures of dependability, including pointwise measures, asymptotic measures, mean sojourn times and, by superposing a reward structure on the behavior model, reward measures such as expected performance or cost

[Go to top]

History

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Personalized Web search for improving retrieval effectiveness (PDF)
by Fang Liu, C. Yu, and Weiyi Meng.
In IEEE Transactions on Knowledge and Data Engineering 16, January 2004, pages 28-40. (BibTeX entry) (Download bibtex record)
(direct link)

Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient

[Go to top]

Hordes

An Analysis of the Degradation of Anonymous Protocols (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There have been a number of protocols proposed for anonymous network communication. In this paper we investigate attacks by corrupt group members that degrade the anonymity of each protocol over time. We prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, Web Mixes, and DC-Net, can maintain anonymity in the face of the attacks described. Our results show that fully-connected DC-Net is the most resilient to these attacks, but it suffers from scalability issues that keep anonymity group sizes small. Additionally, we show how violating an assumption of the attack allows malicious users to setup other participants to falsely appear to be the initiator of a connection

[Go to top]

Hordes — A Multicast Based Protocol for Anonymity (PDF)
by Brian Neil Levine and Clay Shields.
In Journal of Computer Security 10(3), 2002, pages 213-240. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With widespread acceptance of the Internet as a public medium for communication and information retrieval, there has been rising concern that the personal privacy of users can be eroded by cooperating network entities. A technical solution to maintaining privacy is to provide anonymity. We present a protocol for initiator anonymity called Hordes, which uses forwarding mechanisms similar to those used in previous protocols for sending data, but is the first protocol to make use of multicast routing to anonymously receive data. We show this results in shorter transmission latencies and requires less work of the protocol participants, in terms of the messages processed. We also present a comparison of the security and anonymity of Hordes with previous protocols, using the first quantitative definition of anonymity and unlinkability

[Go to top]

A Protocol for Anonymous Communication Over the Internet (PDF)
by Clay Shields and Brian Neil Levine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents a new protocol for initiator anonymity called Hordes, which uses forwarding mechanisms similar to those used in previous protocols for sending data, but is the first protocol to make use of the anonymity inherent in multicast routing to receive data. We show this results in shorter transmission latencies and requires less work of the protocol participants, in terms of the messages processed. We also present a comparison of the security and anonymity of Hordes with previous protocols, using the first quantitative definition of anonymity and unlinkability. Our analysis shows that Hordes provides anonymity in a degree similar to that of Crowds and Onion Routing, but also that Hordes has numerous performance advantages

[Go to top]

Human-computer interaction

Personalization and privacy: a survey of privacy risks and remedies in personalization-based systems (PDF)
by Eran Toch, Yang Wang, and Lorrie Faith Cranor.
In User Modeling and User-Adapted Interaction 22, 2012, pages 203-220. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Personalization technologies offer powerful tools for enhancing the user experience in a wide variety of systems, but at the same time raise new privacy concerns. For example, systems that personalize advertisements according to the physical location of the user or according to the user's friends' search history, introduce new privacy risks that may discourage wide adoption of personalization technologies. This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-based personalization. We survey user attitudes towards privacy and personalization, as well as technologies that can help reduce privacy risks. We conclude with a discussion that frames risks and technical solutions in the intersection between personalization and privacy, as well as areas for further investigation. This framework can help designers and researchers to contextualize privacy challenges and solutions when designing personalization systems

[Go to top]

Humans

SURF-2: A program for dependability evaluation of complex hardware and software systems
by C. Beounes, M. Aguera, J. Arlat, S. Bachmann, C. Bourdeau, J. -. Doucet, K. Kanoun, J.-C. Laprie, S. Metge, J. Moreira de Souza, D. Powell, and P. Spiesser.
In the Proceedings of FTCS-23 The Twenty-Third International Symposium on Fault-Tolerant Computing, June 1993, pages 668-673. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SURF-2, a software tool for evaluating system dependability, is described. It is especially designed for an evaluation-based system design approach in which multiple design solutions need to be compared from the dependability viewpoint. System behavior may be modeled either by Markov chains or by generalized stochastic Petri nets. The tool supports the evaluation of different measures of dependability, including pointwise measures, asymptotic measures, mean sojourn times and, by superposing a reward structure on the behavior model, reward measures such as expected performance or cost

[Go to top]

Hunch

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon
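
A minimal sketch of the inference idea, not the authors' actual algorithm: given auxiliary knowledge of some items in a target's history, watch the public related-items lists over time and flag any item that newly becomes "related" to many of the known items at once. All data structures here are hypothetical.

    def infer_purchases(snapshots, aux_items, threshold=3):
        """snapshots: time-ordered list of dicts mapping item -> set of
        related items; aux_items: items known to be in the target's history.
        Returns, per transition, the items plausibly bought by the target."""
        inferred = []
        for before, after in zip(snapshots, snapshots[1:]):
            votes = {}
            for known in aux_items:
                # items that newly entered this known item's related list
                for item in after.get(known, set()) - before.get(known, set()):
                    votes[item] = votes.get(item, 0) + 1
            # an item newly related to many of the target's known items
            # at once is a strong candidate for a fresh purchase
            inferred.append({i for i, v in votes.items() if v >= threshold})
        return inferred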

[Go to top]

Hydra

The rainbow skip graph: a fault-tolerant constant-degree distributed data structure (PDF)
by Michael T. Goodrich, Michael J. Nelson, and Jonathan Z. Sun.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs: Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes. Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component. We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries worst-case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure

[Go to top]

I2P

Privacy-Implications of Performance-Based Peer Selection by Onion-Routers: A Real-World Case Study using I2P (PDF)
by Michael Herrmann and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

I2P is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This paper presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP services (Eepsite) in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This paper provides the necessary background on I2P, gives details on the attack — including experimental data from measurements against the actual I2P network — and discusses possible solutions

[Go to top]

Privacy-Implications of Performance-Based Peer Selection by Onion-Routers: A Real-World Case Study using I2P (PDF)
by Michael Herrmann.
M.S. thesis, Technische Universität München, March 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Invisible Internet Project (I2P) is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This thesis presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP (Eepsite) services in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This thesis provides the necessary background on I2P, gives details on the attack — including experimental data from measurements against the actual I2P network — and discusses possible solutions

[Go to top]

Peer Profiling and Selection in the I2P Anonymous Network (PDF)
by Lars Schimmer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

IBBE

Identity-based broadcast encryption with constant size ciphertexts and private keys (PDF)
by Cécile Delerablée.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes the first identity-based broadcast encryption scheme (IBBE) with constant size ciphertexts and private keys. In our scheme, the public key is of size linear in the maximal size m of the set of receivers, which is smaller than the number of possible users (identities) in the system. Compared with a recent broadcast encryption system introduced by Boneh, Gentry and Waters (BGW), our system has comparable properties, but with a better efficiency: the public key is shorter than in BGW. Moreover, the total number of possible users in the system does not have to be fixed in the setup

[Go to top]

IBE

Identity-based encryption with efficient revocation (PDF)
by Alexandra Boldyreva, Vipul Goyal, and Virendra Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). The senders using an IBE do not need to look up the public keys and the corresponding certificates of the receivers, the identities (e.g. emails or IP addresses) of the latter are sufficient to encrypt. Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However in the setting of IBE, there has been little work on studying the revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well – as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure
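
The logarithmic key-update cost stems from covering all non-revoked leaves of a binary tree of users with as few subtrees as possible; update material is published only for the cover. A toy version of such a cover computation (heap-style node numbering; a sketch of the general technique, not the paper's construction):

    def cover(node, lo, hi, revoked):
        """Return a minimal set of node ids covering all non-revoked leaves in [lo, hi)."""
        if not any(lo <= r < hi for r in revoked):
            return [node]                     # whole subtree is clean
        if hi - lo == 1:
            return []                         # a single revoked leaf
        mid = (lo + hi) // 2
        return cover(2 * node, lo, mid, revoked) + cover(2 * node + 1, mid, hi, revoked)

    # 8 users at leaves 0..7, user 5 revoked: the cover stays logarithmic
    print(cover(1, 0, 8, {5}))   # [2, 12, 7] in heap numbering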

[Go to top]

Fuzzy Identity-Based Encryption (PDF)
by Amit Sahai and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the set overlap distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term attribute-based encryption. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model
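
A toy illustration of just the decryption condition, with no cryptography: a key for attribute set ω opens a ciphertext for set ω′ exactly when their overlap reaches the tolerance threshold d.

    def can_decrypt(key_attrs, ct_attrs, d):
        return len(set(key_attrs) & set(ct_attrs)) >= d

    key = {"iris_a", "iris_b", "iris_c", "iris_d"}   # noisy biometric reading
    ct  = {"iris_a", "iris_b", "iris_c", "iris_e"}   # enrollment reading
    print(can_decrypt(key, ct, d=3))                 # True: 3 of 4 attributes match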

[Go to top]

ICMP

Autonomous NAT Traversal (PDF)
by Andreas Müller, Nathan S Evans, Christian Grothoff, and Samy Kamkar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for Autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice
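
A hedged sketch of one half of the mechanism: the NATed peer periodically emits ICMP echo requests toward the prospective initiator so its NAT installs state that a matching (fake) reply can later traverse. This requires root privileges; the peer address and payload are placeholders, and the initiator side is omitted.

    import socket, struct, time

    def checksum(data):
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def send_echo(sock, dst, ident=0x1234, seq=1):
        payload = b"nat-traversal-hello"                   # placeholder marker
        hdr = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # type 8 = echo request
        hdr = struct.pack("!BBHHH", 8, 0, checksum(hdr + payload), ident, seq)
        sock.sendto(hdr + payload, (dst, 0))

    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    for seq in range(1, 4):                      # periodic probes refresh NAT state
        send_echo(sock, "192.0.2.1", seq=seq)    # hypothetical address of the other peer
        time.sleep(5)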

[Go to top]

IDs distribution

Efficient DHT attack mitigation through peers' ID distribution (PDF)
by Thibault Cholez, Isabelle Chrisment, and Olivier Festor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a new solution to protect the widely deployed KAD DHT against localized attacks which can take control over DHT entries. We show through measurements that the IDs distribution of the best peers found after a lookup process follows a geometric distribution. We then use this result to detect DHT attacks by comparing real peers' ID distributions to the theoretical one thanks to the Kullback-Leibler divergence. When an attack is detected, we propose countermeasures that progressively remove suspicious peers from the list of possible contacts to provide a safe DHT access. Evaluations show that our method detects the most efficient attacks with a very small false-negative rate, while countermeasures successfully filter almost all malicious peers involved in an attack. Moreover, our solution completely fits the current design of the KAD network and introduces no network overhead
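
The detection test is easy to prototype. A sketch with made-up numbers and an uncalibrated threshold: compare the empirical distribution of best-peer distances against the expected geometric one via the Kullback-Leibler divergence.

    import math

    def kl_divergence(p, q):
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def geometric(n, rho=0.5):
        probs = [rho * (1 - rho) ** k for k in range(n)]
        total = sum(probs)                        # truncate and renormalize
        return [x / total for x in probs]

    # empirical share of best peers at prefix-distance 0,1,2,... (made up);
    # mass piled on distance 0 suggests IDs forged to sit next to the target
    observed = [0.90, 0.05, 0.03, 0.01, 0.01]
    expected = geometric(len(observed))
    print(kl_divergence(observed, expected))      # large divergence: likely attack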

[Go to top]

IEEE 802.11

Investigating the energy consumption of a wireless network interface in an ad hoc networking environment (PDF)
by Laura Marie Feeney and Martin Nilsson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Energy-aware design and evaluation of network protocols requires knowledge of the energy consumption behavior of actual wireless interfaces. But little practical information is available about the energy consumption behavior of well-known wireless network interfaces and device specifications do not provide information in a form that is helpful to protocol developers. This paper describes a series of experiments which obtained detailed measurements of the energy consumption of an IEEE 802.11 wireless network interface operating in an ad hoc networking environment. The data is presented as a collection of linear equations for calculating the energy consumed in sending, receiving and discarding broadcast and point-to-point data packets of various sizes. Some implications for protocol design and evaluation in ad hoc networks are discussed

[Go to top]

IGOR

Proximity Neighbor Selection and Proximity Route Selection for the Overlay-Network IGOR (PDF)
by Yves Philippe Kising.
Diplomarbeit, Technische Universität München, June 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Unfortunately, of all known "Distributed Hash Table"-based overlay networks, only a few relate to proximity in terms of latency. Query routing can therefore come with high latency when very distant hops are used; one can imagine hops going from one continent to another and back, even though the target node may be located close to the requesting node. Such cases increase query latency to a great extent and are responsible for performance bottlenecks in query routing. There exist two main strategies to reduce latency in the query routing process: Proximity Neighbor Selection (PNS) and Proximity Route Selection. As a new PNS proposal for the IGOR overlay network, Merivaldi is developed. Merivaldi combines two basic ideas: the first is the Meridian framework and its Closest-Node-Discovery (CND) without synthetic coordinates; the second is Vivaldi, a distributed algorithm for predicting Internet latency between arbitrary Internet hosts. Merivaldi is quite similar to Meridian, but differs in that it uses no direct round-trip-time measurements to obtain latency characteristics between hosts; instead, it obtains them from the latency prediction derived from the Vivaldi coordinates. A Merivaldi node forms exponentially growing latency rings, i.e., the rings correspond to latency distances from the Merivaldi node itself. Node references are inserted into these rings according to their latency characteristics; these references are obtained through a special protocol. A Merivaldi node finds latency-closest nodes by periodically querying its ring members for closer nodes. If a closer node is found by a ring member, the query is forwarded to it until no closer one can be found; the closest node found this way reports itself to the Merivaldi node. Exemplary analyses show that Merivaldi places only a modest burden on the network: Merivaldi uses at most O(log N) CND hops to recognize a closest node, where N is the number of nodes, and the per-node overhead is modest. Empirical tests confirm this analysis and show that Merivaldi's Vivaldi component works well with the PING message type used
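
Since Merivaldi rests on Vivaldi's latency prediction, a minimal Vivaldi update step may help fix ideas; the constants and coordinates below are illustrative only.

    def vivaldi_update(my_pos, peer_pos, rtt, delta=0.25):
        """Nudge our coordinate along the error vector toward/away from the peer."""
        dist = sum((a - b) ** 2 for a, b in zip(my_pos, peer_pos)) ** 0.5
        if dist == 0:
            return my_pos                        # degenerate case: skip the update
        error = rtt - dist                       # positive: prediction too close
        direction = [(a - b) / dist for a, b in zip(my_pos, peer_pos)]
        return [a + delta * error * d for a, d in zip(my_pos, direction)]

    pos = [1.0, 0.0]
    pos = vivaldi_update(pos, peer_pos=[4.0, 4.0], rtt=20.0)   # measured 20 ms
    print(pos)   # moved away from the peer: predicted distance 5 < measured 20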

[Go to top]

IPTV

A Software and Hardware IPTV Architecture for Scalable DVB Distribution (PDF)
by unknown.
In International Journal of Digital Multimedia Broadcasting 2009, 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders

[Go to top]

IRA

Capacity-achieving ensembles for the binary erasure channel with bounded complexity (PDF)
by Henry D. Pfister, Igal Sason, and Rüdiger L. Urbanke.
In IEEE TRANS. INFORMATION THEORY 51(7), 2005, pages 2352-2379. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present two sequences of ensembles of nonsystematic irregular repeat–accumulate (IRA) codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded complexity result is achieved by puncturing bits, and allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric (MBIOS) channel and is refined for the binary erasure channel

[Go to top]

IRC

Bootstrapping Peer-to-Peer Systems Using IRC
by Mirko Knoll, Matthias Helling, Arno Wacker, Sebastian Holzapfel, and Torben Weis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Research in the area of peer-to-peer systems is mainly focused on structuring the overlay network. Little attention is paid to the process of setting up and joining a peer-to-peer overlay network, i.e. the bootstrapping of peer-to-peer networks. The major challenge is to get hold of one peer that is already in the overlay. Otherwise, the first peer must be able to detect that the overlay is currently empty. Successful P2P applications either provide a centralized server for this task (Skype) or they simply put the burden on the user (eMule). We propose an automatic solution which does not require any user intervention and does not exhibit a single point of failure. Such decentralized bootstrapping protocols are especially important for open non-commercial peer-to-peer systems which cannot provide a server infrastructure for bootstrapping. The algorithm we are proposing builds on the Internet Relay Chat (IRC), a highly available, open, and distributed network of chat servers. Our algorithm is designed to put only a very minimal load on the IRC servers. In measurements we show that our bootstrapping protocol scales very well, handles flash crowds, and puts only a constant load on the IRC system regardless of the peer-to-peer overlay size
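
A rough sketch of the announce-and-collect pattern over plain IRC, with a hypothetical server, channel and message format; the authors' protocol additionally minimizes server load, which this sketch ignores.

    import socket

    def bootstrap(my_endpoint, server="irc.example.org", channel="#p2p-boot"):
        sock = socket.create_connection((server, 6667))
        f = sock.makefile("rwb", buffering=0)
        f.write(b"NICK boot12345\r\nUSER boot 0 * :bootstrap\r\n")
        f.write(b"JOIN %s\r\nPRIVMSG %s :PEER %s\r\n"
                % (channel.encode(), channel.encode(), my_endpoint.encode()))
        peers = set()
        for raw in f:                                    # read server lines
            line = raw.decode(errors="replace").strip()
            if line.startswith("PING"):                  # keep-alive required by IRC
                f.write(("PONG" + line[4:] + "\r\n").encode())
            elif " PRIVMSG " in line and ":PEER " in line:
                peers.add(line.rsplit(":PEER ", 1)[1])   # another peer's host:port
            if len(peers) >= 5:                          # enough contacts to join
                return peers

    print(bootstrap("203.0.113.5:4001"))   # our externally visible endpoint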

[Go to top]

ISDN

Networks Without User Observability – Design Options
by Andreas Pfitzmann and Michael Waidner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In present-day communication networks, the network operator or an intruder could easily observe when, how much and with whom the users communicate (traffic analysis), even if the users employ end-to-end encryption. With the increasing use of ISDNs, this becomes a severe threat. Therefore, we summarize basic concepts to keep the recipient and sender or at least their relationship unobservable, consider some possible implementations and necessary hierarchical extensions, and propose some suitable performance and reliability enhancements

[Go to top]

Identity and Access Management

Managing and Presenting User Attributes over a Decentralized Secure Name System
by Martin Schanzenbach and Christian Banse.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Today, user attributes are managed at centralized identity providers. However, two centralized identity providers dominate digital identity and access management on the web. This is increasingly becoming a privacy problem in times of mass surveillance and data mining for targeted advertisement. Existing systems for attribute sharing or credential presentation either rely on a trusted third party service or require the presentation to be online and synchronous. In this paper we propose a concept that allows the user to manage and share his attributes asynchronously with a requesting party using a secure, decentralized name system

[Go to top]

Inference algorithms

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Information science

Towards Empirical Aspects of Secure Scalar Product (PDF)
by I-Cheng Wang, Chih-Hao Shen, Tsan-sheng Hsu, Churn-Chung Liao, Da-Wei Wang, and J. Zhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Privacy is ultimately important, and there is a fair amount of research about it. However, few empirical studies about the cost of privacy have been conducted. In the area of secure multiparty computation, the scalar product has long been reckoned as one of the most promising building blocks in place of the classic logic gates. The reason is not only that the scalar product is complete, making it as expressive as logic gates, but also that it is much more efficient than logic gates. As a result, we set out to study the computation and communication resources needed for some of the most well-known and frequently referenced secure scalar-product protocols, including the composite-residuosity, the invertible-matrix, the polynomial-sharing, and the commodity-based approaches. Besides implementation remarks on these approaches, we analyze and compare their execution time, computation time, and random number consumption, which are the resources of greatest concern for secure protocols. Moreover, Fairplay, the benchmark approach implementing Yao's famous circuit evaluation protocol, is included in our experiments in order to demonstrate the potential for the scalar product to replace logic gates
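
Of the benchmarked families, the commodity-based approach is the simplest to sketch. Below is a toy single-process rendition of a Du-Atallah-style commodity scalar product, in which a semi-trusted server only supplies correlated randomness; in a real deployment the three roles run on separate machines.

    import random
    P = 2**31 - 1   # arithmetic modulo a prime

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b)) % P

    def commodity_server(n):
        """Correlated randomness: ra + rb = Ra . Rb (mod P)."""
        Ra = [random.randrange(P) for _ in range(n)]
        Rb = [random.randrange(P) for _ in range(n)]
        ra = random.randrange(P)
        rb = (dot(Ra, Rb) - ra) % P
        return (Ra, ra), (Rb, rb)

    def scalar_product(X, Y):
        (Ra, ra), (Rb, rb) = commodity_server(len(X))   # trusted only for randomness
        X_hat = [(x + r) % P for x, r in zip(X, Ra)]    # Alice -> Bob (masks X)
        Y_hat = [(y + r) % P for y, r in zip(Y, Rb)]    # Bob -> Alice (masks Y)
        v = (dot(X_hat, Y) + rb) % P                    # Bob's additive share
        u = (ra - dot(Ra, Y_hat)) % P                   # Alice's additive share
        return (u + v) % P                              # shares sum to X . Y

    print(scalar_product([1, 2, 3], [4, 5, 6]))         # 32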

[Go to top]

Internet

Zur Idee herrschaftsfreier kooperativer Internetdienste (PDF)
by Christian Ricardo Kühne.
In FIfF-Kommunikation, 2016. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Auction, but don't block (PDF)
by Xiaowei Yang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper argues that ISPs' recent actions to block certain applications (e.g. BitTorrent) and attempts to differentiate traffic could be a signal of bandwidth scarcity. Bandwidth-intensive applications such as VoD could have driven the traffic demand to the capacity limit of their networks. This paper proposes to let ISPs auction their bandwidth, instead of blocking or degrading applications. A user places a bid in a packet header based on how much he values the communication. When congestion occurs, ISPs allocate bandwidth to those users that value their packets the most, and charge them the Vickrey auction price. We outline a design that addresses the technical challenges to support this auction and analyze its feasibility. Our analysis suggests that the design has reasonable overhead and could be feasible with modern hardware
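
The pricing rule is a standard k-unit Vickrey auction and fits in a few lines: serve the k highest bids and charge each winner the highest losing bid, which makes truthful bidding a dominant strategy. The numbers below are illustrative.

    def vickrey_allocate(bids, k):
        """bids: {user: bid}; capacity for k packets. Returns (winners, price_each)."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        winners = ranked[:k]
        price = bids[ranked[k]] if len(ranked) > k else 0.0   # highest losing bid
        return winners, price

    bids = {"a": 9.0, "b": 7.0, "c": 4.0, "d": 1.0}
    print(vickrey_allocate(bids, k=2))   # (['a', 'b'], 4.0): both pay c's bid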

[Go to top]

PRIME: Peer-to-Peer Receiver-drIven MEsh-based Streaming (PDF)
by Nazanin Magharei and Reza Rejaie.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The success of file swarming mechanisms such as BitTorrent has motivated a new approach for scalable streaming of live content that we call mesh-based Peer-to-Peer (P2P) streaming. In this approach, participating end-systems (or peers) form a randomly connected mesh and incorporate swarming content delivery to stream live content. Despite the growing popularity of this approach, neither the fundamental design tradeoffs nor the basic performance bottlenecks in mesh-based P2P streaming are well understood. In this paper, we follow a performance-driven approach to design PRIME, a scalable mesh-based P2P streaming mechanism for live content. The main design goal of PRIME is to minimize two performance bottlenecks, namely bandwidth bottleneck and content bottleneck. We show that the global pattern of delivery for each segment of live content should consist of a diffusion phase which is followed by a swarming phase. This leads to effective utilization of available resources to accommodate scalability and also minimizes content bottleneck. Using packet level simulations, we carefully examine the impact of overlay connectivity, packet scheduling scheme at individual peers and source behavior on the overall performance of the system. Our results reveal fundamental design tradeoffs of mesh-based P2P streaming for live content

[Go to top]

A Network Positioning System for the Internet (PDF)
by T. S. Eugene Ng and Hui Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network positioning has recently been demonstrated to be a viable concept to represent the network distance relationships among Internet end hosts. Several subsequent studies have examined the potential benefits of using network position in applications, and proposed alternative network positioning algorithms. In this paper, we study the problem of designing and building a network positioning system (NPS). We identify several key system-building issues such as the consistency, adaptivity and stability of host network positions over time. We propose a hierarchical network positioning architecture that maintains consistency while enabling decentralization, a set of adaptive decentralized algorithms to compute and maintain accurate, stable network positions, and finally present a prototype system deployed on PlanetLab nodes that can be used by a variety of applications. We believe our system is a viable first step to provide a network positioning capability in the Internet

[Go to top]

Privacy-enhancing Technologies for the Internet (PDF)
by Ian Goldberg, David Wagner, and Eric Brewer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The increased use of the Internet for everyday activities is bringing new threats to personal privacy. This paper gives an overview of existing and potential privacy-enhancing technologies for the Internet, as well as motivation and challenges for future work in this field

[Go to top]

Internet communication

Decoy Routing: Toward Unblockable Internet Communication (PDF)
by Josh Karlin, Daniel Ellard, Alden W. Jackson, Christine E. Jones, Greg Lauer, David P. Mankins, and W. Timothy Strayer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present decoy routing, a mechanism capable of circumventing common network filtering strategies. Unlike other circumvention techniques, decoy routing does not require a client to connect to a specific IP address (which is easily blocked) in order to provide circumvention. We show that if it is possible for a client to connect to any unblocked host/service, then decoy routing could be used to connect them to a blocked destination without cooperation from the host. This is accomplished by placing the circumvention service in the network itself – where a single device could proxy traffic between a significant fraction of hosts – instead of at the edge

[Go to top]

Internet exchange

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries (PDF)
by Steven J. Murdoch and Piotr Zieliński.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Existing low-latency anonymity networks are vulnerable to traffic analysis, so location diversity of nodes is essential to defend against attacks. Previous work has shown that simply ensuring geographical diversity of nodes does not resist, and in some cases exacerbates, the risk of traffic analysis by ISPs. Ensuring high autonomous-system (AS) diversity can resist this weakness. However, ISPs commonly connect to many other ISPs in a single location, known as an Internet eXchange (IX). This paper shows that IXes are a single point where traffic analysis can be performed. We examine to what extent this is true, through a case study of Tor nodes in the UK. Also, some IXes sample packets flowing through them for performance analysis reasons, and this data could be exploited to de-anonymize traffic. We then develop and evaluate Bayesian traffic analysis techniques capable of processing this sampled data

[Go to top]

Internet user

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Internetworking

A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing (PDF)
by Sally Floyd, Van Jacobson, Ching-Gung Liu, Steven McCanne, and Lixia Zhang.
In IEEE/ACM Trans. Netw 5, 1997, pages 784-803. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for light-weight sessions and application level framing. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The SRM framework has been prototyped in wb, a distributed whiteboard application, which has been used on a global scale with sessions ranging from a few to a few hundred participants. The paper describes the principles that have guided the SRM design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies

[Go to top]

JAP

How to Achieve Blocking Resistance for Existing Systems Enabling Anonymous Web Surfing (PDF)
by Stefan Köpsell and Ulf Hilling.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We are developing a blocking-resistant, practical and usable system for anonymous web surfing. This means the system tries to provide as much reachability and availability as possible, even to users in countries where the free flow of information is legally, organizationally and physically restricted. The proposed solution is an add-on to existing anonymity systems. First we give a classification of blocking criteria and some general countermeasures. Using these techniques, we outline a concrete design, which is based on the JAP-Web Mixes (aka AN.ON)

[Go to top]

Join-leave attacks

Robust Random Number Generation for Peer-to-Peer Systems (PDF)
by Baruch Awerbuch and Christian Scheideler.
In Theor. Comput. Sci 410, 2009, pages 453-466. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of designing an efficient and robust distributed random number generator for peer-to-peer systems that is easy to implement and works even if all communication channels are public. A robust random number generator is crucial for avoiding adversarial join-leave attacks on peer-to-peer overlay networks. We show that our new generator together with a light-weight rule recently proposed in [B. Awerbuch, C. Scheideler, Towards a scalable and robust DHT, in: Proc. of the 18th ACM Symp. on Parallel Algorithms and Architectures, SPAA, 2006. See also http://www14.in.tum.de/personen/scheideler] for keeping peers well distributed can keep various structured overlay networks in a robust state even under a constant fraction of adversarial peers
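
For contrast with the robust generator studied here, the naive baseline is plain commit-reveal, which an adversary can bias simply by refusing to reveal after seeing the other values. A toy version of that baseline, to show the structure the paper improves on:

    import hashlib, os

    def commit(value, nonce):
        return hashlib.sha256(nonce + value).hexdigest()

    # phase 1: every peer publishes a commitment to a local random value
    peers = [(os.urandom(16), os.urandom(16)) for _ in range(5)]   # (value, nonce)
    commitments = [commit(v, n) for v, n in peers]

    # phase 2: peers reveal; anyone can verify reveals against commitments
    result = 0
    for (value, nonce), c in zip(peers, commitments):
        assert commit(value, nonce) == c, "reveal does not match commitment"
        result ^= int.from_bytes(value, "big")
    print(hex(result))    # shared random number, if everyone actually reveals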

[Go to top]

KAD

Efficient DHT attack mitigation through peers' ID distribution (PDF)
by Thibault Cholez, Isabelle Chrisment, and Olivier Festor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a new solution to protect the widely deployed KAD DHT against localized attacks which can take control over DHT entries. We show through measurements that the IDs distribution of the best peers found after a lookup process follows a geometric distribution. We then use this result to detect DHT attacks by comparing real peers' ID distributions to the theoretical one thanks to the Kullback-Leibler divergence. When an attack is detected, we propose countermeasures that progressively remove suspicious peers from the list of possible contacts to provide a safe DHT access. Evaluations show that our method detects the most efficient attacks with a very small false-negative rate, while countermeasures successfully filter almost all malicious peers involved in an attack. Moreover, our solution completely fits the current design of the KAD network and introduces no network overhead

[Go to top]

Poisoning the Kad network (PDF)
by Thomas Locher, David Mysicka, Stefan Schmid, and Roger Wattenhofer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the demise of the Overnet network, the Kad network has become not only the most popular but also the only widely used peer-to-peer system based on a distributed hash table. It is likely that its user base will continue to grow in numbers over the next few years as, unlike the eDonkey network, it does not depend on central servers, which increases scalability and reliability. Moreover, the Kad network is more efficient than unstructured systems such as Gnutella. However, we show that today's Kad network can be attacked in several ways by carrying out several (well-known) attacks on the Kad network. The presented attacks could be used either to hamper the correct functioning of the network itself, to censor contents, or to harm other entities in the Internet not participating in the Kad network such as ordinary web servers. While there are simple heuristics to reduce the impact of some of the attacks, we believe that the presented attacks cannot be thwarted easily in any fully decentralized peer-to-peer system without some kind of a centralized certification and verification authority

[Go to top]

Evaluation of Sybil Attacks Protection Schemes in KAD (PDF)
by Thibault Cholez, Isabelle Chrisment, and Olivier Festor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we assess the protection mechanisms implemented in recent clients to fight against the Sybil attack in KAD, a widely deployed Distributed Hash Table. We study three main mechanisms: a protection against flooding through packet tracking, an IP address limitation and a verification of identities. We evaluate their efficiency by designing and adapting an attack for several KAD clients with different levels of protection. Our results show that the new security rules mitigate the Sybil attacks previously launched. However, we prove that it is still possible to control a small part of the network, despite the newly inserted defenses, with a distributed eclipse attack and limited resources

[Go to top]

Long term study of peer behavior in the KAD DHT (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
In IEEE/ACM Transactions on Networking 17, May 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling a representative subset of KAD every five minutes for six months and obtained information about geographical distribution of peers, session times, daily usage, and peer lifetime. We have found that session times are Weibull distributed and we show how this information can be exploited to make the publishing mechanism much more efficient. Peers are identified by the so-called KAD ID, which up to now was assumed to be persistent. However, we observed that a fraction of peers changes their KAD ID as frequently as once a session. This change of KAD IDs makes it difficult to characterize end-user behavior. For this reason we have been crawling the entire KAD network once a day for more than a year to track end-users with static IP addresses, which allows us to estimate end-user lifetime and the fraction of end-users changing their KAD ID
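
One practical consequence of Weibull session times with shape below 1 is that expected residual uptime grows with observed uptime, which is what makes age-aware republishing profitable. A small numeric illustration with hypothetical parameters, not the paper's fitted values:

    import math

    def weibull_survival(t, k, lam):
        return math.exp(-((t / lam) ** k))

    def conditional_survival(t, extra, k, lam):
        """P(session > t + extra | session > t)."""
        return weibull_survival(t + extra, k, lam) / weibull_survival(t, k, lam)

    k, lam = 0.5, 60.0   # shape < 1: decreasing hazard rate
    for t in (1, 10, 100):
        print(t, round(conditional_survival(t, 30, k, lam), 3))
    # the chance of lasting 30 more minutes grows with elapsed uptime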

[Go to top]

Analyzing Peer Behavior in KAD (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
In unknown(RR-07-205), October 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey2000, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling KAD continuously for about six months and obtained information about geographical distribution of peers, session times, peer availability, and peer lifetime. We also evaluated to what extent information about past peer uptime can be used to predict the remaining uptime of the peer. Peers are identified by the so-called KAD ID, which was up to now assumed to remain the same across sessions. However, we observed that this is not the case: There is a large number of peers, in particular in China, that change their KAD ID, sometimes as frequently as after each session. This change of KAD IDs makes it difficult to characterize end-user availability or membership turnover. By tracking end-users with static IP addresses, we could measure the rate of change of KAD ID per end-user

[Go to top]

Understanding churn in peer-to-peer networks (PDF)
by Daniel Stutzbach and Reza Rejaie.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file-sharing and distribution systems, and exploring new aspects of churn

[Go to top]

Kademlia

Long term study of peer behavior in the KAD DHT (PDF)
by Moritz Steiner, Taoufik En-Najjary, and E W Biersack.
In IEEE/ACM Transactions on Networking 17, May 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables (DHTs) have been actively studied in literature and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling a representative subset of KAD every five minutes for six months and obtained information about geographical distribution of peers, session times, daily usage, and peer lifetime. We have found that session times are Weibull distributed and we show how this information can be exploited to make the publishing mechanism much more efficient. Peers are identified by the so-called KAD ID, which up to now was assumed to be persistent. However, we observed that a fraction of peers changes their KAD ID as frequently as once a session. This change of KAD IDs makes it difficult to characterize end-user behavior. For this reason we have been crawling the entire KAD network once a day for more than a year to track end-users with static IP addresses, which allows us to estimate end-user lifetime and the fraction of end-users changing their KAD ID

[Go to top]

Selected DHT Algorithms (PDF)
by Stefan Götz, Simon Rieche, and Klaus Wehrle.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different approaches to realizing the basic principles of DHTs have emerged over the last few years. Although they rely on the same fundamental idea, there is a large diversity of methods for both organizing the identifier space and performing routing. The particular properties of each approach can thus be exploited by specific application scenarios and requirements. This overview focuses on the three DHT systems that have received the most attention in the research community: Chord, Pastry, and Content Addressable Networks (CAN). Furthermore, the systems Symphony, Viceroy, and Kademlia are discussed because they exhibit interesting mechanisms and properties beyond those of the first three systems

[Go to top]

Kademlia: A Peer-to-peer Information System Based on the XOR Metric (PDF)
by Petar Maymounkov and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users
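
The XOR metric itself fits in a few lines: distance is the bitwise XOR of two IDs read as an integer, and the index of the highest differing bit selects the routing-table bucket.

    def xor_distance(a, b):
        return a ^ b

    def bucket_index(my_id, peer_id):
        return xor_distance(my_id, peer_id).bit_length() - 1

    my_id, peer_id = 0b10110100, 0b10011000
    print(xor_distance(my_id, peer_id))    # 44 (0b00101100)
    print(bucket_index(my_id, peer_id))    # 5: the IDs first differ at bit 5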

[Go to top]

Kernel

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Keso

Keso–a Scalable, Reliable and Secure Read/Write Peer-to-Peer File System (PDF)
by Mattias Amnefelt and Johanna Svenningsson.
Master's Thesis, KTH/Royal Institute of Technology, May 2004. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this thesis we present the design of Keso, a distributed and completely decentralized file system based on the peer-to-peer overlay network DKS. While designing Keso we have taken into account many of the problems that exist in today's distributed file systems. Traditionally, distributed file systems have been built around dedicated file servers which often use expensive hardware to minimize the risk of breakdown and to handle the load. System administrators are required to monitor the load and disk usage of the file servers and to manually add clients and servers to the system. Another drawback with centralized file systems is that a lot of storage space is unused on clients. Measurements we have taken on existing computer systems have shown that a large part of the storage capacity of workstations is unused. In the system we looked at there was three times as much storage space available on workstations than was stored in the distributed file system. We have also shown that much data stored in a production use distributed file system is redundant. The main goals for the design of Keso have been that it should make use of spare resources, avoid storing unnecessarily redundant data, scale well, be self-organizing and be a secure file system suitable for a real world environment. By basing Keso on peer-to-peer techniques it becomes highly scalable, fault tolerant and self-organizing. Keso is intended to run on ordinary workstations and can make use of the previously unused storage space. Keso also provides means for access control and data privacy despite being built on top of untrusted components. The file system utilizes the fact that a lot of data stored in traditional file systems is redundant by letting all files that contains a datablock with the same contents reference the same datablock in the file system. This is achieved while still maintaining access control and data privacy
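
The redundancy-avoidance idea amounts to a content-addressed block store: blocks hash to their key, so identical contents are stored once regardless of how many files reference them. A toy sketch (hash choice and API hypothetical):

    import hashlib

    class BlockStore:
        def __init__(self):
            self.blocks = {}
        def put(self, data):
            key = hashlib.sha256(data).hexdigest()
            self.blocks[key] = data           # identical content -> same key
            return key
        def get(self, key):
            return self.blocks[key]

    store = BlockStore()
    a = store.put(b"same contents")
    b = store.put(b"same contents")           # referenced from a different file
    print(a == b, len(store.blocks))          # True 1: stored only once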

[Go to top]

Koorde

Koorde: A Simple degree-optimal distributed hash table (PDF)
by Frans M. Kaashoek and David Karger.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Koorde is a new distributed hash table (DHT) based on Chord [15] and de Bruijn graphs [2]. While inheriting the simplicity of Chord, Koorde meets various lower bounds, such as O(log n) hops per lookup request with only 2 neighbors per node (where n is the number of nodes in the DHT), and O(log n/log log n) hops per lookup request with O(log n) neighbors per node
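
The de Bruijn routing step behind these bounds: shift the current identifier left by one bit and append the next bit of the target, so any of the 2^b identifiers is reached in at most b hops with only two neighbors per node. A sketch:

    def de_bruijn_hop(node, next_bit, b):
        return ((node << 1) | next_bit) & ((1 << b) - 1)

    def route(src, dst, b):
        node, path = src, [src]
        for i in reversed(range(b)):              # consume dst's bits, MSB first
            node = de_bruijn_hop(node, (dst >> i) & 1, b)
            path.append(node)
        return path

    print(route(0b1011, 0b0110, b=4))   # [11, 6, 13, 11, 6]: ends at dst 0b0110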

[Go to top]

LA

Decentralized Learning in Markov Games (PDF)
by Peter Vrancx, Katja Verbeeck, and Ann Nowé.
In IEEE Transactions on Systems, Man, and Cybernetics, Part B 38, August 2008, pages 976-981. (BibTeX entry) (Download bibtex record)
(direct link)

Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games, a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies
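
The basic building block is a simple stochastic update rule; the classic linear reward-inaction scheme shown below shifts probability toward actions that were just rewarded and leaves the vector unchanged otherwise. The learning rate is hypothetical.

    def lri_update(probs, chosen, reward, lam=0.1):
        """Linear reward-inaction: update only on a rewarding outcome."""
        if reward:
            return [p + lam * (1 - p) if i == chosen else p * (1 - lam)
                    for i, p in enumerate(probs)]
        return probs

    probs = [0.5, 0.5]
    for _ in range(10):
        probs = lri_update(probs, chosen=0, reward=1)   # action 0 keeps paying off
    print(probs)   # probability of action 0 approaches 1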

[Go to top]

LAP

LAP: Lightweight Anonymity and Privacy (PDF)
by Hsu-Chun Hsiao, Tiffany Hyun-Jin Kim, Adrian Perrig, Akira Yamada, Sam Nelson, Marco Gruteser, and Wei Ming.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Popular anonymous communication systems often require sending packets through a sequence of relays on dilated paths for strong anonymity protection. As a result, increased end-to-end latency renders such systems inadequate for the majority of Internet users who seek an intermediate level of anonymity protection while using latency-sensitive applications, such as Web applications. This paper serves to bridge the gap between communication systems that provide strong anonymity protection but with intolerable latency and non-anonymous communication systems by considering a new design space for the setting. More specifically, we explore how to achieve near-optimal latency while achieving an intermediate level of anonymity with a weaker yet practical adversary model (i.e., protecting an end-host's identity and location from servers) such that users can choose between the level of anonymity and usability. We propose Lightweight Anonymity and Privacy (LAP), an efficient network-based solution featuring lightweight path establishment and stateless communication, by concealing an end-host's topological location to enhance anonymity against remote tracking. To show practicality, we demonstrate that LAP can work on top of the current Internet and proposed future Internet architectures

[Go to top]

LDGM

Design and evaluation of a low density generator matrix (PDF)
by Vincent Roca, Zainab Khallouf, and Julien Laboure.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long. We also explain how the iterative decoding feature of LDGM/LDPC can be used to protect a large number of small independent objects during time-limited partially-reliable sessions. We illustrate this feature with an example derived from a video streaming scheme over ALC. We then evaluate our LDGM codec and compare its performances with a well known RSE codec. Tests focus on the global efficiency and on encoding/decoding performances. This paper deliberately skips theoretical aspects to focus on practical results. It shows that LDGM/LDPC open many opportunities in the area of bulk data multicasting
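
The iterative decoding the codec relies on reduces, for erasure channels, to repeatedly solving any XOR parity equation with exactly one missing symbol. A minimal sketch with a toy two-constraint code, not the paper's codec:

    def decode(symbols, constraints):
        """symbols: list of int payloads, or None if erased.
        constraints: index tuples whose symbols XOR to zero."""
        progress = True
        while progress:
            progress = False
            for eq in constraints:
                missing = [i for i in eq if symbols[i] is None]
                if len(missing) == 1:              # solvable equation found
                    acc = 0
                    for i in eq:
                        if i != missing[0]:
                            acc ^= symbols[i]
                    symbols[missing[0]] = acc
                    progress = True
        return symbols

    # source s0..s2, parities p3 = s0^s1 and p4 = s1^s2; s1 is erased
    syms = [5, None, 7, 5 ^ 3, 3 ^ 7]
    print(decode(syms, [(0, 1, 3), (1, 2, 4)]))    # s1 recovered as 3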

[Go to top]

LDPC

Impacts of packet scheduling and packet loss distribution on FEC Performances: observations and recommendations (PDF)
by Christoph Neumann, Aurélien Francillon, and David Furodet.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Forward Error Correction (FEC) is commonly used for content broadcasting. The performance of FEC codes varies widely, depending in particular on the code used and on the object size, and these parameters have already been studied in detail by the community. However, FEC performance is also largely dependent on the packet scheduling used during transmission and on the loss pattern introduced by the channel. Little attention has been devoted to these aspects so far. Therefore the present paper analyzes their impacts on three FEC codes: LDGM Staircase and LDGM Triangle, two large block codes, and Reed-Solomon. Thanks to this analysis, we define several recommendations on how to best use these codes, depending on the test case and on the channel, which turns out to be of utmost importance

[Go to top]

On the Practical Use of LDPC Erasure Codes for Distributed Storage Applications (PDF)
by James S. Plank and Michael G. Thomason.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As peer-to-peer and widely distributed storage systems proliferate, the need to perform efficient erasure coding, instead of replication, is crucial to performance and efficiency. Low-Density Parity-Check (LDPC) codes have arisen as alternatives to standard erasure codes, such as Reed-Solomon codes, trading off vastly improved decoding performance for inefficiencies in the amount of data that must be acquired to perform decoding. The scores of papers written on LDPC codes typically analyze their collective and asymptotic behavior. Unfortunately, their practical application requires the generation and analysis of individual codes for finite systems. This paper attempts to illuminate the practical considerations of LDPC codes for peer-to-peer and distributed storage systems. The three main types of LDPC codes are detailed, and a huge variety of codes are generated, then analyzed using simulation. This analysis focuses on the performance of individual codes for finite systems, and addresses several important heretofore unanswered questions about employing LDPC codes in real-world systems

[Go to top]

Design and evaluation of a low density generator matrix (PDF)
by Vincent Roca, Zainab Khallouf, and Julien Laboure.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long. We also explain how the iterative decoding feature of LDGM/LDPC can be used to protect a large number of small independent objects during time-limited partially-reliable sessions. We illustrate this feature with an example derived from a video streaming scheme over ALC. We then evaluate our LDGM codec and compare its performance with a well-known RSE codec. Tests focus on the global efficiency and on encoding/decoding performance. This paper deliberately skips theoretical aspects to focus on practical results. It shows that LDGM/LDPC open many opportunities in the area of bulk data multicasting
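
The iterative (peeling) decoder that makes LDGM/LDPC erasure codes attractive is simple enough to sketch. The following Python fragment is a toy illustration, not the codec evaluated in the paper: it builds purely random XOR parities over byte-sized symbols and decodes by repeatedly solving any parity equation with exactly one missing symbol; with these made-up parameters decoding usually, but not always, succeeds.

import random

def ldgm_encode(source, n_parity, degree, seed=1):
    """Each parity symbol is the XOR of `degree` randomly chosen source symbols."""
    rng = random.Random(seed)
    eqs = [rng.sample(range(len(source)), degree) for _ in range(n_parity)]
    parity = []
    for eq in eqs:
        p = 0
        for i in eq:
            p ^= source[i]
        parity.append(p)
    return eqs, parity

def peel_decode(eqs, parity, received, k):
    """Iterative decoding: keep solving parity equations with one unknown."""
    data = dict(received)              # source index -> value, as received
    progress = True
    while progress and len(data) < k:
        progress = False
        for eq, p in zip(eqs, parity):
            missing = [i for i in eq if i not in data]
            if len(missing) == 1:
                v = p
                for i in eq:
                    if i != missing[0]:
                        v ^= data[i]
                data[missing[0]] = v
                progress = True
    return data if len(data) == k else None

k = 1000
src = [random.randrange(256) for _ in range(k)]
eqs, par = ldgm_encode(src, n_parity=500, degree=8)
# Erase 15% of the source symbols; parities are assumed received intact.
rx = {i: s for i, s in enumerate(src) if random.random() > 0.15}
out = peel_decode(eqs, par, rx, k)
print("recovered" if out == dict(enumerate(src)) else "decoding failed")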

[Go to top]

LOF outlier detection

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

LP decoding

Lower Bounds in Differential Privacy (PDF)
by Anindya De.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper is about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. A line of study initiated in [7] examines the amount of distortion needed to prevent privacy violations of various kinds. The results in the literature vary according to several parameters, including the size of the database, the size of the universe from which data elements are drawn, the amount of privacy desired, and for the purposes of the current work, the arity of the query. In this paper we sharpen and unify these bounds. Our foremost result combines the techniques of Hardt and Talwar [11] and McGregor et al. [13] to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries. (A query has low sensitivity if the data of a single individual has small effect on the answer.) Several structural results follow as immediate corollaries: We separate so-called counting queries from arbitrary low-sensitivity queries, proving the latter requires more noise, or distortion, than does the former; We separate (ε,0)-differential privacy from its well-studied relaxation (ε,δ)-differential privacy, even when δ = 2^{-o(n)} is negligible in the size n of the database, proving the latter requires less distortion than the former; We demonstrate that (ε,δ)-differential privacy is much weaker than (ε,0)-differential privacy in terms of mutual information of the transcript of the mechanism with the database, even when δ = 2^{-o(n)} is negligible in the size n of the database. We also simplify the lower bounds on noise for counting queries in [11] and also make them unconditional. Further, we use a characterization of (ε,δ)-differential privacy from [13] to obtain lower bounds on the distortion needed to ensure (ε,δ)-differential privacy for ε, δ > 0. We next revisit the LP decoding argument of [10] and combine it with a recent result of Rudelson [15] to improve on a result of Kasiviswanathan et al. [12] on noise lower bounds for privately releasing ℓ-way marginals
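
For context, the counting queries discussed above admit a particularly simple upper bound: a count has sensitivity 1, so the standard Laplace mechanism achieves (ε,0)-differential privacy with noise of scale 1/ε. A minimal Python sketch (illustrative only; the paper's contribution is lower bounds showing how little such distortion can be reduced):

import math, random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(rows, predicate, eps):
    """An (eps, 0)-differentially private counting query.

    A count has sensitivity 1: adding or removing one row changes it by
    at most 1, so Laplace noise of scale 1/eps suffices.
    """
    return sum(1 for r in rows if predicate(r)) + laplace_noise(1.0 / eps)

ages = [23, 35, 41, 17, 52, 29, 64, 15]
print(private_count(ages, lambda a: a >= 18, eps=0.5))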

[Go to top]

LPLC

Local Production, Local Consumption: Peer-to-Peer Architecture for a Dependable and Sustainable Social Infrastructure (PDF)
by Kenji Saito, Eiichi Morino, Yoshihiko Suko, Takaaki Suzuki, and Jun Murai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) is a system of overlay networks such that participants can potentially take symmetrical roles. This translates itself into a design based on the philosophy of Local Production, Local Consumption (LPLC), originally an agricultural concept to promote sustainable local economy. This philosophy helps enhance the survivability of a society by providing a dependable economic infrastructure and promoting the power of individuals. This paper attempts to put existing work on P2P designs into the perspective of the five-layer architecture model to realize LPLC, and proposes future research directions toward integration of P2P studies for actualization of a dependable and sustainable social infrastructure

[Go to top]

LSD

The LSD Broadcast Encryption Scheme (PDF)
by Dani Halevy and Adi Shamir.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Broadcast Encryption schemes enable a center to broadcast encrypted programs so that only designated subsets of users can decrypt each program. The stateless variant of this problem provides each user with a fixed set of keys which is never updated. The best scheme published so far for this problem is the "subset difference" (SD) technique of Naor, Naor and Lotspiech, in which each one of the n users is initially given O(log^2(n)) symmetric encryption keys. This allows the broadcaster to define at a later stage any subset of up to r users as "revoked", and to make the program accessible only to their complement by sending O(r) short messages before the encrypted program, and asking each user to perform an O(log(n)) computation. In this paper we describe the "Layered Subset Difference" (LSD) technique, which achieves the same goal with O(log^{1+ε}(n)) keys, O(r) messages, and O(log(n)) computation. This reduces the number of keys given to each user by almost a square root factor without affecting the other parameters. In addition, we show how to use the same LSD keys in order to address any subset defined by a nested combination of inclusion and exclusion conditions with a number of messages which is proportional to the complexity of the description rather than to the size of the subset. The LSD scheme is truly practical, and makes it possible to broadcast an unlimited number of programs to 256,000,000 possible customers by giving each new customer a smart card with one kilobyte of tamper-resistant memory. It is then possible to address any subset defined by t nested inclusion and exclusion conditions by sending less than 4t short messages, and the scheme remains secure even if all the other users form an adversarial coalition

[Go to top]

Laboratories

A Collusion-Resistant Distributed Scalar Product Protocol with Application to Privacy-Preserving Computation of Trust (PDF)
by C.A. Melchor, B. Ait-Salem, and P. Gaborit.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private scalar product protocols have proved to be interesting in various applications such as data mining, data integration, trust computing, etc. In 2007, Yao et al. proposed a distributed scalar product protocol with application to privacy-preserving computation of trust [1]. This protocol is split into two phases: a homomorphic encryption computation and a private multi-party summation protocol. The summation protocol has two drawbacks: first, it generates a non-negligible communication overhead; and second, it introduces a security flaw. The contribution of the present paper is two-fold. We first prove that the protocol of [1] is not secure in the semi-honest model by showing that it is not resistant to collusion attacks and we give an example of a collusion attack, with only four participants. Second, we propose to use a superposed sending round as an alternative to the multi-party summation protocol, which results in better security properties and in a reduction of the communication costs. In particular, regarding security, we show that the previous scheme was vulnerable to collusions of three users whereas in our proposal we can choose t ∈ [1..n–1] and define a protocol resisting collusions of up to t users
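
A superposed sending round can be sketched in a few lines: every pair of parties shares a one-time pad, each party announces its value plus (or minus) its pads, and the pads cancel in the sum. This is a minimal Python illustration of the idea, not the paper's protocol; a real deployment would derive the pairwise pads from key agreement rather than a local random generator.

import random

M = 2**32  # all arithmetic is mod M

def superposed_sum(secrets):
    """Superposed sending: pairwise pads cancel, only the sum is revealed.

    Each pair (i, j) shares a one-time pad; party i adds pads toward
    higher-numbered parties and subtracts pads toward lower-numbered ones,
    so the pads vanish from the total. A coalition of up to n-1 parties
    learns nothing beyond what the published sum itself implies.
    """
    n = len(secrets)
    pads = [[random.randrange(M) for _ in range(n)] for _ in range(n)]
    announcements = []
    for i in range(n):
        a = secrets[i]
        for j in range(n):
            if i < j:
                a = (a + pads[i][j]) % M
            elif i > j:
                a = (a - pads[j][i]) % M
        announcements.append(a)   # safe to publish
    return sum(announcements) % M

trust_scores = [7, 3, 12, 5]
assert superposed_sum(trust_scores) == sum(trust_scores) % M
print("sum revealed, individual scores masked:", superposed_sum(trust_scores))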

[Go to top]

Lagrangean conditions

The Theory of Moral Hazard and Unobservable Behaviour: Part I
by James A. Mirrlees.
In Review of Economic Studies 66(1), January 1999, pages 3-21. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This article presents information on principal-agent models in which outcomes conditional on the agent's action are uncertain, and the agent's behavior is therefore unobservable. For a model with bounded agent's utility, conditions are given under which the first-best equilibrium can be approximated arbitrarily closely by contracts relating payment to observable outcomes. For general models, it is shown that the solution may not always be obtained by using the agent's first-order conditions as constraints. General conditions of Lagrangean type are given for problems in which contracts are finite-dimensional

[Go to top]

Last.fm

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Library Thing

"You Might Also Like:" Privacy Risks of Collaborative Filtering (PDF)
by J.A. Calandrino, A. Kilzer, A. Narayanan, E.W. Felten, and V. Shmatikov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon

[Go to top]

Linux

Cryogenic: Enabling Power-Aware Applications on Linux (PDF)
by Alejandra Morales.
Masters, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As a means of reducing power consumption, hardware devices can enter sleep states with low power consumption. Waking up from those states in order to return to work is typically a rather energy-intensive activity. Some existing applications have non-urgent tasks that currently force hardware to wake up needlessly or prevent it from going to sleep. It would be better if such non-urgent activities could be scheduled to execute when the respective devices are active, to maximize the duration of sleep states. This requires cooperation between applications and the kernel in order to determine when the execution of a task will not be expensive in terms of power consumption. This work presents the design and implementation of Cryogenic, a POSIX-compatible API that enables clustering tasks based on the hardware activity state. Specifically, Cryogenic's API allows applications to defer their execution until other tasks use the device they want to use. As a result, two actions that contribute to reducing the device energy consumption are achieved: reducing the number of hardware wake-ups and maximizing the idle periods. The energy measurements enacted at the end of this thesis demonstrate that, for the specific setup and conditions present during our experimentation, Cryogenic is capable of achieving savings between 1% and 10% for a USB WiFi device. Although we ideally target mobile platforms, Cryogenic has been developed by means of a new Linux module that integrates with the existing POSIX event loop system calls. This allows Cryogenic to be used on many different platforms as long as they use a GNU/Linux distribution as the main operating system. Evidence of this can be found in this thesis, where we demonstrate the power savings on a single-board computer

[Go to top]

Location-based services

Personalization and privacy: a survey of privacy risks and remedies in personalization-based systems (PDF)
by Eran Toch, Yang Wang, and LorrieFaith Cranor.
In User Modeling and User-Adapted Interaction 22, 2012, pages 203-220. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Personalization technologies offer powerful tools for enhancing the user experience in a wide variety of systems, but at the same time raise new privacy concerns. For example, systems that personalize advertisements according to the physical location of the user or according to the user's friends' search history introduce new privacy risks that may discourage wide adoption of personalization technologies. This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-based personalization. We survey user attitudes towards privacy and personalization, as well as technologies that can help reduce privacy risks. We conclude with a discussion that frames risks and technical solutions in the intersection between personalization and privacy, as well as areas for further investigation. This framework can help designers and researchers contextualize privacy challenges and solutions when designing personalization systems

[Go to top]

Low-Energy Adaptive Clustering Hierarchy

Energy-Efficient Communication Protocol for Wireless Microsensor Networks (PDF)
by Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari Balakrishnan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated
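
The randomized rotation at the core of LEACH is driven by a simple threshold: a node that has not served as cluster head in the current epoch elects itself with probability T(r) = P / (1 − P · (r mod 1/P)), which rises to 1 by the end of the epoch. A small Python sketch of just this election step (the network parameters are made up for illustration):

import random

P = 0.05               # desired fraction of cluster heads per round
NODES = 100
EPOCH = round(1 / P)   # every node serves as head once per EPOCH rounds

def leach_threshold(r):
    # Threshold from the LEACH paper for nodes that have not yet been a
    # cluster head in the current epoch; it rises to 1 as the epoch ends,
    # so every node serves exactly once per epoch.
    return P / (1 - P * (r % EPOCH))

eligible = set(range(NODES))   # nodes that have not been head this epoch
for r in range(EPOCH):
    heads = {n for n in eligible if random.random() < leach_threshold(r)}
    eligible -= heads
    if not eligible:           # epoch complete: reset eligibility
        eligible = set(range(NODES))
    print(f"round {r:2d}: {len(heads):2d} cluster heads")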

[Go to top]

MBIOS

Capacity-achieving ensembles for the binary erasure channel with bounded complexity (PDF)
by Henry D. Pfister, Igal Sason, and Rüdiger L. Urbanke.
In IEEE TRANS. INFORMATION THEORY 51(7), 2005, pages 2352-2379. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present two sequences of ensembles of nonsystematic irregular repeat–accumulate (IRA) codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded complexity result is achieved by puncturing bits, and allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric (MBIOS) channel and is refined for the binary erasure channel

[Go to top]

MCTS

A Survey of Monte Carlo Tree Search Methods (PDF)
by Cameron Browne, Edward Powley, Daniel Whitehouse, Simon Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton.
In IEEE Transactions on Computational Intelligence and AI in Games 4, March 2012, pages 1-43. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work
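
As a concrete reference point for the survey's core algorithm, the following is a minimal UCT (MCTS with UCB1 selection) implementation in Python for a toy single-pile Nim game; the game rules and constants are chosen only for illustration, not taken from the survey.

import math, random

TAKE = (1, 2, 3)   # toy game: single-pile Nim, whoever takes the last stone wins

class Node:
    def __init__(self, pile, player):
        self.pile, self.player = pile, player   # `player` moves next
        self.children = {}                      # move -> child Node
        self.wins = 0                           # wins for the player who moved here
        self.visits = 0

def legal(pile):
    return [t for t in TAKE if t <= pile]

def uct_select(node, c=1.4):
    # UCB1: exploit the average reward, explore rarely tried moves.
    return max(node.children.items(),
               key=lambda kv: kv[1].wins / kv[1].visits
               + c * math.sqrt(math.log(node.visits) / kv[1].visits))

def rollout(pile, player):
    # Play uniformly random moves to the end; return the winner.
    while pile > 0:
        pile -= random.choice(legal(pile))
        player = 1 - player
    return 1 - player   # the player who just moved took the last stone

def mcts(root_pile, iters=5000):
    root = Node(root_pile, player=0)
    for _ in range(iters):
        node, path = root, [root]
        while node.pile > 0 and len(node.children) == len(legal(node.pile)):
            _, node = uct_select(node)          # selection
            path.append(node)
        if node.pile > 0:                       # expansion
            move = random.choice([t for t in legal(node.pile)
                                  if t not in node.children])
            child = Node(node.pile - move, 1 - node.player)
            node.children[move] = child
            path.append(child)
            node = child
        winner = rollout(node.pile, node.player)
        for n in path:                          # backpropagation
            n.visits += 1
            if winner != n.player:              # win for the player who moved into n
                n.wins += 1
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("best first move from a pile of 10:", mcts(10))   # optimal play takes 2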

[Go to top]

Monte-Carlo Search Techniques in the Modern Board Game Thurn and Taxis (PDF)
by Frederik Christiaan Schadd.
Master Thesis, Maastricht University, December 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Modern board games present a new and challenging field when researching search techniques in the field of Artificial Intelligence. These games differ from classic board games, such as chess, in that they can be non-deterministic, have imperfect information or more than two players. While tree-search approaches, such as alpha-beta pruning, have been quite successful in playing classic board games, for instance defeating the then reigning world champion Garry Kasparov in chess, these techniques are not as effective when applied to modern board games. This thesis investigates the effectiveness of Monte-Carlo Tree Search when applied to a modern board game, for which the board game Thurn and Taxis was used. This is a non-deterministic modern board game with imperfect information that can be played with more than 2 players, and is hence suitable for research. First, the state-space and game-tree complexities of this game are computed, from which the conclusion can be drawn that the two-player version of the game has a complexity similar to the game Shogi. Several techniques are investigated in order to improve the sampling process, for instance by adding domain knowledge. Given the results of the experiments, one can conclude that Monte-Carlo Tree Search gives a slight performance increase over standard Monte-Carlo search. In addition, the most effective improvements appeared to be the application of pseudo-random simulations and limiting simulation lengths, while other techniques have been shown to be less effective or even ineffective. Overall, when applying the best performing techniques, an AI with advanced playing strength has been created, such that further research is likely to push this performance to a strength of expert level

[Go to top]

Efficient selectivity and backup operators in Monte-Carlo tree search (PDF)
by Rémi Coulom.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides a fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9×9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament

[Go to top]

MCTS heuristic search

Progressive Strategies for Monte-Carlo Tree Search (PDF)
by Guillaume M. J-B. Chaslot, Mark H. M. Winands, H. Jaap van den Herik, Jos W. H. M. Uiterwijk, and Bruno Bouzy.
In New Mathematics and Natural Computation 4, 2008, pages 343-357. (BibTeX entry) (Download bibtex record)
(direct link)

Monte-Carlo Tree Search (MCTS) is a new best-first search guided by the results of Monte-Carlo simulations. In this article, we introduce two progressive strategies for MCTS, called progressive bias and progressive unpruning. They enable the use of relatively time-expensive heuristic knowledge without speed reduction. Progressive bias directs the search according to heuristic knowledge. Progressive unpruning first reduces the branching factor, and then increases it gradually again. Experiments show that the two progressive strategies significantly improve the level of our Go program Mango. Moreover, we see that the combination of both strategies performs even better on larger board sizes
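
In UCB1 terms, progressive bias is commonly formulated (assuming a standard UCT selection rule; this is a sketch, not necessarily the exact rule used in Mango) as an extra heuristic term that fades as the child's visit count grows:

\[
  k^{*} = \arg\max_i \left( \frac{w_i}{n_i} + C\sqrt{\frac{\ln N}{n_i}} + \frac{H_i}{n_i + 1} \right)
\]

where w_i/n_i is the observed win rate of child i, N is the parent's visit count, and H_i is the heuristic value of move i; the third term dominates while n_i is small and vanishes as simulations accumulate, so the expensive heuristic guides only the early, poorly sampled part of the search.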

[Go to top]

MDS Codes

Low Density MDS Codes and Factors of Complete Graphs (PDF)
by Lihao Xu, Vasken Bohossian, Jehoshua Bruck, and David Wagner.
In IEEE Trans. on Information Theory 45, 1998, pages 1817-1826. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We reveal an equivalence relation between the construction of a new class of low density MDS array codes, that we call B-Code, and a combinatorial problem known as perfect one-factorization of complete graphs. We use known perfect one-factors of complete graphs to create constructions and decoding algorithms for both B-Code and its dual code. B-Code and its dual are optimal in the sense that (i) they are MDS, (ii) they have an optimal encoding property, i.e., the number of the parity bits that are affected by change of a single information bit is minimal and (iii) they have optimal length. The existence of perfect one-factorizations for every complete graph with an even number of nodes is a 35-year-old conjecture in graph theory. The construction of B-codes of arbitrary odd length will provide an affirmative answer to the conjecture
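
One-factorizations themselves are easy to construct for any even order with the classical round-robin arrangement; the hard part, which the B-Code connection addresses, is perfectness (the union of any two factors forming a Hamiltonian cycle), which this particular construction only achieves for certain orders. A short Python sketch of the round-robin construction, with a sanity check that every edge of K_{2n} is covered exactly once:

def one_factorization(n):
    """One-factorization of the complete graph K_{2n} (round-robin construction).

    Vertices are 0..2n-2 on a circle plus a hub vertex 2n-1. Factor k pairs
    the hub with k and matches the remaining vertices symmetrically around k.
    Every edge of K_{2n} appears in exactly one of the 2n-1 perfect matchings.
    """
    m = 2 * n - 1
    factors = []
    for k in range(m):
        matching = [(k, m)]  # hub vertex is labeled m = 2n-1
        matching += [((k - j) % m, (k + j) % m) for j in range(1, n)]
        factors.append(matching)
    return factors

# Sanity check: all C(2n, 2) edges covered exactly once.
n = 4
edges = set()
for f in one_factorization(n):
    for a, b in f:
        e = (min(a, b), max(a, b))
        assert e not in edges
        edges.add(e)
assert len(edges) == (2 * n) * (2 * n - 1) // 2
print(f"K_{2*n}: {2*n - 1} perfect matchings, {len(edges)} edges covered")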

[Go to top]

MORECOWBELL

NSA's MORECOWBELL: Knell for DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Le programme MORECOWBELL de la NSA Sonne le glas du DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Ludovic Courtès.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Il programma MORECOWBELL della NSA: Campane a morto per il DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Luca Saiu.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

El programa MORECOWBELL de la NSA: Doblan las campanas para el DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Malugo

Malugo: A peer-to-peer storage system (PDF)
by Yu-Wei Chan, Tsung-Hsuan Ho, Po-Chi Shih, and Yeh-Ching Chung.
In unknown, 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of routing locality in peer-to-peer storage systems where peers store and exchange data among themselves. With global information, peers take data locality into consideration when they implement their replication mechanisms to keep a number of file replicas across the system. In this paper, we propose a peer-to-peer storage system, Malugo. Algorithms for the implementation of the peers' locating and file operation processes are also presented. Simulation results show that the proposed system successfully constructs an efficient and stable peer-to-peer storage environment with considerations of data and routing locality among peers

[Go to top]

Markov chain

Self-organized Data Redundancy Management for Peer-to-Peer Storage Systems (PDF)
by Yaser Houri, Manfred Jobmann, and Thomas Fuhrmann.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In peer-to-peer storage systems, peers can freely join and leave the system at any time. Ensuring high data availability in such an environment is a challenging task. In this paper we analyze the costs of achieving data availability in fully decentralized peer-to-peer systems. We mainly address the problem of churn and the effect that maintaining availability has on network bandwidth. We discuss two different redundancy techniques – replication and erasure coding – and consider their monitoring and repairing costs analytically. We calculate the bandwidth costs using basic cost equations and two different Markov reward models: one for a centralized monitoring system and the other for distributed monitoring. We show a comparison of the numerical results accordingly. Based on these results, we determine the best redundancy and maintenance strategy that corresponds to the peers' failure probability
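
The basic availability arithmetic behind such an analysis fits in a few lines: with peer online probability p, a replicated object is available unless all replicas are offline, while an (n, m) erasure code needs any m of n fragments. A Python sketch comparing the two at equal 4x storage overhead (parameters chosen only for illustration):

from math import comb

def replication_availability(p, r):
    # Available unless all r replicas are offline at once.
    return 1 - (1 - p) ** r

def erasure_availability(p, n, m):
    # (n, m) erasure code: any m of the n fragments reconstruct the object.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m, n + 1))

p = 0.5  # probability that any given peer is online
# Equal 4x storage overhead: 4 full replicas vs. 16 fragments, any 4 suffice.
print(f"replication, r=4:        {replication_availability(p, 4):.6f}")
print(f"erasure code, n=16, m=4: {erasure_availability(p, 16, 4):.6f}")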

[Go to top]

The bayesian traffic analysis of mix networks (PDF)
by Carmela Troncoso and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This work casts the traffic analysis of anonymity systems, and in particular mix networks, in the context of Bayesian inference. A generative probabilistic model of mix network architectures is presented, that incorporates a number of attack techniques in the traffic analysis literature. We use the model to build a Markov Chain Monte Carlo inference engine that calculates the probabilities of who is talking to whom given an observation of network traces. We provide a thorough evaluation of its correctness and performance, and confirm that mix networks with realistic parameters are secure. This approach enables us to apply established information theoretic anonymity metrics on complex mix networks, and extract information from anonymised traffic traces optimally
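
The inference step can be sketched with a toy Metropolis-Hastings sampler over sender-receiver matchings: given per-pair log-likelihoods (which the paper derives from a generative model of the mix network; here they are simply an input matrix), transposition proposals keep the chain on permutations and visit frequencies estimate posterior marginals. A minimal Python illustration, not the paper's engine:

import math, random

def mh_permutations(logL, iters=20000):
    """Metropolis-Hastings over sender->receiver matchings.

    logL[i][j] is the log-likelihood that sender i talks to receiver j.
    Transposition proposals are symmetric, so accepting with probability
    min(1, exp(delta)) targets the posterior over permutations; the visit
    counts estimate each pair's marginal probability.
    """
    n = len(logL)
    sigma = list(range(n))
    marginals = [[0] * n for _ in range(n)]
    for _ in range(iters):
        i, j = random.sample(range(n), 2)
        delta = (logL[i][sigma[j]] + logL[j][sigma[i]]
                 - logL[i][sigma[i]] - logL[j][sigma[j]])
        if delta >= 0 or random.random() < math.exp(delta):
            sigma[i], sigma[j] = sigma[j], sigma[i]
        for s in range(n):
            marginals[s][sigma[s]] += 1
    return [[c / iters for c in row] for row in marginals]

toy = [[math.log(p) for p in row] for row in
       [[0.7, 0.2, 0.1],
        [0.1, 0.8, 0.1],
        [0.2, 0.1, 0.7]]]
for row in mh_permutations(toy):
    print(" ".join(f"{p:.2f}" for p in row))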

[Go to top]

Provable Anonymity for Networks of Mixes (PDF)
by Marek Klonowski and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We analyze networks of mixes used for providing untraceable communication. We consider a network consisting of k mixes working in parallel and exchanging the outputs – which is the most natural architecture for composing mixes of a certain size into networks able to mix a larger number of inputs at once. We prove that after O(log k) rounds the network considered provides a fair level of privacy protection for any number of messages. No mathematical proof of this kind has been published before. We show that if at least one of the servers is corrupted we need substantially more rounds to meet the same requirements of privacy protection

[Go to top]

Provable Unlinkability Against Traffic Analysis (PDF)
by Ron Berman, Amos Fiat, and Amnon Ta-Shma.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the unlinkability of communication problem: given n users, each sending a message to some destination, encode and route the messages so that an adversary analyzing the traffic in the communication network cannot link the senders with the recipients. A solution should have a small communication overhead, that is, the number of additional messages should be kept low. David Chaum introduced the idea of mixes for solving this problem. His approach was developed further by Simon and Rackoff, and implemented later as the onion protocol. Even if the onion protocol is widely regarded as secure and used in practice, formal arguments supporting this claim are rare and far from being complete. On top of that, in certain scenarios very simple tricks suffice to break security without breaking the cryptographic primitives. It turns out that one source of difficulties in analyzing the onion protocol's security is the adversary model. In a recent work, Berman, Fiat and Ta-Shma develop a new and more realistic model in which only a constant fraction of communication lines can be accessed by an adversary, the number of messages does not need to be high and the preferences of the users are taken into account. For this model they prove that with high probability a good level of unlinkability is obtained after a certain number of steps of the onion protocol, where n is the number of messages sent. In this paper we improve these results: we show that the same level of unlinkability (expressed as variation distance between certain probability distributions) is obtained with high probability already after O(log n) steps of the onion protocol. Asymptotically, this is the best result possible, since obviously Ω(log n) steps are necessary. On top of that, our analysis is much simpler. It is based on the path coupling technique designed for showing rapid mixing of Markov chains

[Go to top]

Rapid Mixing and Security of Chaum's Visual Electronic Voting (PDF)
by Marcin Gomulkiewicz, Marek Klonowski, and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently, David Chaum proposed an electronic voting scheme that combines visual cryptography and digital processing. It was designed to meet not only mathematical security standards, but also to be accepted by voters that do not trust electronic devices. In this scheme mix-servers are used to guarantee anonymity of the votes in the counting process. The mix-servers are operated by different parties, so evidence of their correct operation is necessary. For this purpose the protocol uses randomized partial checking of Jakobsson et al., where some randomly selected connections between the (encoded) inputs and outputs of a mix-server are revealed. This leaks some information about the ballots, even if intuitively this information cannot be used for any efficient attack. We provide a rigorous stochastic analysis of how much information is revealed by randomized partial checking in Chaum's protocol. We estimate how many mix-servers are necessary for a fair security level. Namely, we consider the probability distribution of the permutations linking the encoded votes with the decoded votes given the information revealed by randomized partial checking. We show that the variation distance between this distribution and the uniform distribution is already negligible for a constant number of mix-servers (n is the number of voters). This means that a constant number of trustees in Chaum's protocol is enough to obtain provable security. The analysis also shows that certain details of Chaum's protocol can be simplified without lowering the security level

[Go to top]

Markov chains

SURF-2: A program for dependability evaluation of complex hardware and software systems
by C. Beounes, M. Aguera, J. Arlat, S. Bachmann, C. Bourdeau, J. -. Doucet, K. Kanoun, J. -. Laprie, S. Metge, J. Moreira de Souza, D. Powell, and P. Spiesser.
In the Proceedings of FTCS-23 The Twenty-Third International Symposium on Fault-Tolerant Computing, June 1993, pages 668-673. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SURF-2, a software tool for evaluating system dependability, is described. It is especially designed for an evaluation-based system design approach in which multiple design solutions need to be compared from the dependability viewpoint. System behavior may be modeled either by Markov chains or by generalized stochastic Petri nets. The tool supports the evaluation of different measures of dependability, including pointwise measures, asymptotic measures, mean sojourn times and, by superposing a reward structure on the behavior model, reward measures such as expected performance or cost
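
The simplest instance of the Markov-chain dependability models such tools evaluate is the two-state repairable component, whose steady-state availability follows directly from the balance equation π_up·λ = π_down·μ. A tiny Python example with made-up failure and repair rates:

# Two-state continuous-time Markov chain for a repairable component:
#   Up --(failure rate lam)--> Down --(repair rate mu)--> Up
lam = 1 / 1000.0   # one failure per 1000 hours (MTTF = 1000 h), made up
mu = 1 / 8.0       # eight-hour mean time to repair (MTTR = 8 h), made up

# Balance equation pi_up * lam = pi_down * mu, with pi_up + pi_down = 1.
availability = mu / (lam + mu)
print(f"steady-state availability: {availability:.6f}")
print(f"expected downtime per year: {(1 - availability) * 8760:.1f} hours")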

[Go to top]

Medical diagnostic imaging

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Millionaires' problem

Practical and Secure Solutions for Integer Comparison (PDF)
by Juan Garay, Berry Schoenmakers, and José Villegas.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Yao's classical millionaires' problem is about securely determining whether x > y, given two input values x,y, which are held as private inputs by two parties, respectively. The output x > y becomes known to both parties. In this paper, we consider a variant of Yao's problem in which the inputs x,y as well as the output bit x > y are encrypted. Referring to the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001, we develop solutions for integer comparison, which take as input two lists of encrypted bits representing x and y, respectively, and produce an encrypted bit indicating whether x > y as output. Secure integer comparison is an important building block for applications such as secure auctions. In this paper, our focus is on the two-party case, although most of our results extend to the multi-party case. We propose new logarithmic-round and constant-round protocols for this setting, which achieve simultaneously very low communication and computational complexities. We analyze the protocols in detail and show that our solutions compare favorably to other known solutions
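
The bitwise comparison such protocols evaluate can be written as an arithmetic circuit over 0/1 values, which is the kind of addition-and-multiplication computation a threshold homomorphic cryptosystem performs gate by gate on encrypted bits. The following Python sketch runs that circuit in the clear for illustration; it is not the paper's protocol, just the underlying comparison logic:

def gt_circuit(xbits, ybits):
    """Arithmetic circuit for [x > y] over bits given MSB first.

    Every operation below is an addition or multiplication of 0/1 values;
    here it runs on plaintext bits purely for illustration.
    """
    result, equal_so_far = 0, 1
    for xb, yb in zip(xbits, ybits):           # scan from most significant bit
        result += equal_so_far * xb * (1 - yb) # first position with x=1, y=0
        equal_so_far *= 1 - (xb - yb) ** 2     # still tied after this bit?
    return result

def bits(v, width):
    return [(v >> i) & 1 for i in reversed(range(width))]

for x, y in [(13, 9), (9, 13), (7, 7)]:
    assert gt_circuit(bits(x, 8), bits(y, 8)) == (1 if x > y else 0)
print("comparison circuit agrees with > on all test pairs")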

[Go to top]

ModelNet

ModelNet-TE: An emulation tool for the study of P2P and traffic engineering interaction dynamics (PDF)
by D. Rossi, P. Veglia, M. Sammarco, and F. Larroca.
In Peer-to-Peer Networking and Applications, 2012, pages 1-19. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Monte-Carlo Tree Search

Monte-Carlo Search Techniques in the Modern Board Game Thurn and Taxis (PDF)
by Frederik Christiaan Schadd.
Master Thesis, Maastricht University, December 2009. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Modern board games present a new and challenging field when researching search techniques in the field of Artificial Intelligence. These games differ from classic board games, such as chess, in that they can be non-deterministic, have imperfect information or more than two players. While tree-search approaches, such as alpha-beta pruning, have been quite successful in playing classic board games, for instance defeating the then reigning world champion Garry Kasparov in chess, these techniques are not as effective when applied to modern board games. This thesis investigates the effectiveness of Monte-Carlo Tree Search when applied to a modern board game, for which the board game Thurn and Taxis was used. This is a non-deterministic modern board game with imperfect information that can be played with more than 2 players, and is hence suitable for research. First, the state-space and game-tree complexities of this game are computed, from which the conclusion can be drawn that the two-player version of the game has a complexity similar to the game Shogi. Several techniques are investigated in order to improve the sampling process, for instance by adding domain knowledge. Given the results of the experiments, one can conclude that Monte-Carlo Tree Search gives a slight performance increase over standard Monte-Carlo search. In addition, the most effective improvements appeared to be the application of pseudo-random simulations and limiting simulation lengths, while other techniques have been shown to be less effective or even ineffective. Overall, when applying the best performing techniques, an AI with advanced playing strength has been created, such that further research is likely to push this performance to a strength of expert level

[Go to top]

Progressive Strategies for Monte-Carlo Tree Search (PDF)
by Guillaume M. J-B. Chaslot, Mark H. M. Winands, H. Jaap van den Herik, Jos W. H. M. Uiterwijk, and Bruno Bouzy.
In New Mathematics and Natural Computation 4, 2008, pages 343-357. (BibTeX entry) (Download bibtex record)
(direct link)

Monte-Carlo Tree Search (MCTS) is a new best-first search guided by the results of Monte-Carlo simulations. In this article, we introduce two progressive strategies for MCTS, called progressive bias and progressive unpruning. They enable the use of relatively time-expensive heuristic knowledge without speed reduction. Progressive bias directs the search according to heuristic knowledge. Progressive unpruning first reduces the branching factor, and then increases it gradually again. Experiments show that the two progressive strategies significantly improve the level of our Go program Mango. Moreover, we see that the combination of both strategies performs even better on larger board sizes

[Go to top]

Efficient selectivity and backup operators in Monte-Carlo tree search (PDF)
by Rémi Coulom.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides a fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9×9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament

[Go to top]

Morphmix

Design principles for low latency anonymous network systems secure against timing attacks (PDF)
by Rungrat Wiangsripanawan, Willy Susilo, and Rei Safavi-Naini.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Low latency anonymous network systems, such as Tor, were considered secure against timing attacks when the threat model does not include a global adversary. In this threat model the adversary can only see part of the links in the system. In a recent paper entitled Low-cost traffic analysis of Tor, it was shown that a variant of timing attack that does not require a global adversary can be applied to Tor. More importantly, authors claimed that their attack would work on any low latency anonymous network systems. The implication of the attack is that all low latency anonymous networks will be vulnerable to this attack even if there is no global adversary. In this paper, we investigate this claim against other low latency anonymous networks, including Tarzan and Morphmix. Our results show that in contrast to the claim of the aforementioned paper, the attack may not be applicable in all cases. Based on our analysis, we draw design principles for secure low latency anonymous network system (also secure against the above attack)

[Go to top]

Multi-dimensional range query

CISS: An efficient object clustering framework for DHT-based peer-to-peer applications
by Jinwon Lee, Hyonik Lee, Seungwoo Kang, Su Myeon Kim, and Junehwa Song.
In Comput. Netw 51(4), 2007, pages 1072-1094. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Multiparty Computation

Multi Party Distributed Private Matching, Set Disjointness and Cardinality of Set Intersection with Information Theoretic Security (PDF)
by G. Narayanan, T. Aishwarya, Anugrah Agrawal, Arpita Patra, Ashish Choudhary, and C Rangan.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we focus on the specific problems of Private Matching, Set Disjointness and Cardinality of Set Intersection in information theoretic settings. Specifically, we give perfectly secure protocols for the above problems in n-party settings, tolerating a computationally unbounded semi-honest adversary, who can passively corrupt at most t < n/2 parties. To the best of our knowledge, these are the first such information theoretically secure protocols in a multi-party setting for all the three problems. Previous solutions for Distributed Private Matching and Cardinality of Set Intersection were cryptographically secure and the previous Set Disjointness solution, though information theoretically secure, is in a two-party setting. We also propose a new model for Distributed Private Matching which is relevant in a multi-party setting

[Go to top]

Multiparty Computation for Interval, Equality, and Comparison Without Bit-Decomposition Protocol (PDF)
by Takashi Nishide and Kazuo Ohta.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Damgård et al. [11] showed a novel technique to convert a polynomial sharing of secret a into sharings of the bits of a in constant rounds, which is called the bit-decomposition protocol. The bit-decomposition protocol is a very powerful tool because it enables bit-oriented operations even if shared secrets are given as elements in the field. However, the bit-decomposition protocol is relatively expensive. In this paper, we present a simplified bit-decomposition protocol by analyzing the original protocol. Moreover, we construct more efficient protocols for comparison, interval test and equality test of shared secrets without relying on the bit-decomposition protocol, though it seems essential to such bit-oriented operations. The key idea is that we do computation on secret a with c and r where c = a + r, c is a revealed value, and r is a random bitwise-shared secret. The outputs of these protocols are also shared without being revealed. The realized protocols as well as the original protocol are constant-round and run with fewer communication rounds and less data communication than those of [11]. For example, the round complexities are reduced by a factor of approximately 3 to 10
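
The masking idea is worth seeing concretely: blinding a shared secret a with a jointly random r and opening c = a + r reveals nothing, since c is uniform, yet the public c together with the shared bits of r is what the comparison sub-protocols then operate on. A minimal Python sketch of just this mask-and-open step over additive shares (the subsequent bit-oriented sub-protocols are not shown):

import random

p = 2**31 - 1   # prime field Z_p

def share(secret, n=3):
    """Additive secret sharing: shares sum to the secret mod p."""
    shares = [random.randrange(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def open_value(shares):
    return sum(shares) % p

a = 123456                    # the secret; never opened directly
r = random.randrange(p)       # jointly generated random mask (bitwise-shared
                              # in the real protocol; plain here)
a_sh, r_sh = share(a), share(r)

# Each party adds its shares locally, then the masked value is opened.
c_sh = [(x + y) % p for x, y in zip(a_sh, r_sh)]
c = open_value(c_sh)          # uniform in Z_p, so it leaks nothing about a

assert (c - r) % p == a       # the relation the sub-protocols exploit
print("opened c =", c)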

[Go to top]

NAMECOIN

NSA's MORECOWBELL: Knell for DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Le programme MORECOWBELL de la NSA Sonne le glas du DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Ludovic Courtès.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Il programma MORECOWBELL della NSA: Campane a morto per il DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, Jacob Appelbaum, and Luca Saiu.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

El programa MORECOWBELL de la NSA: Doblan las campanas para el DNS (PDF)
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In unknown, January 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

NAT

NTALG–TCP NAT traversal with application-level gateways (PDF)
by M. Wander, S. Holzapfel, A. Wacker, and T. Weis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Consumer computers or home communication devices are usually connected to the Internet via a Network Address Translation (NAT) router. This imposes restrictions for networking applications that require inbound connections. Existing solutions for NAT traversal can remedy the restrictions, but a fraction of home users still lacks support for them, especially when it comes to TCP. We present a framework for traversing NAT routers by exploiting their built-in FTP and IRC application-level gateways (ALG) for arbitrary TCP-based applications. While this does not work in every scenario, it significantly improves the success chance without requiring any user interaction at all. To demonstrate the framework, we show a small test setup with laptop computers and home NAT routers

[Go to top]

Methods for Secure Decentralized Routing in Open Networks (PDF)
by Nathan S Evans.
Ph.D. thesis, Technische Universität München, August 2011. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The contribution of this thesis is the study and improvement of secure, decentralized, robust routing algorithms for open networks including ad-hoc networks and peer-to-peer (P2P) overlay networks. The main goals for our secure routing algorithm are openness, efficiency, scalability and resilience to various types of attacks. Common P2P routing algorithms trade-off decentralization for security; for instance by choosing whether or not to require a centralized authority to allow peers to join the network. Other algorithms trade scalability for security, for example employing random search or flooding to prevent certain types of attacks. Our design attempts to meet our security goals in an open system, while limiting the performance penalties incurred. The first step we took towards designing our routing algorithm was an analysis of the routing algorithm in Freenet. This algorithm is relevant because it achieves efficient (order O(log n)) routing in realistic network topologies in a fully decentralized open network. However, we demonstrate why their algorithm is not secure, as malicious participants are able to severely disrupt the operation of the network. The main difficulty with the Freenet routing algorithm is that for performance it relies on information received from untrusted peers. We also detail a range of proposed solutions, none of which we found to fully fix the problem. A related problem for efficient routing in sparsely connected networks is the difficulty in sufficiently populating routing tables. One way to improve connectivity in P2P overlay networks is by utilizing modern NAT traversal techniques. We employ a number of standard NAT traversal techniques in our approach, and also developed and experimented with a novel method for NAT traversal based on ICMP and UDP hole punching. Unlike other NAT traversal techniques ours does not require a trusted third party. Another technique we use in our implementation to help address the connectivity problem in sparse networks is the use of distance vector routing in a small local neighborhood. The distance vector variant used in our system employs onion routing to secure the resulting indirect connections. Materially to this design, we discovered a serious vulnerability in the Tor protocol which allowed us to use a DoS attack to reduce the anonymity of the users of this extant anonymizing P2P network. This vulnerability is based on allowing paths of unrestricted length for onion routes through the network. Analyzing Tor and implementing this attack gave us valuable knowledge which helped when designing the distance vector routing protocol for our system. Finally, we present the design of our new secure randomized routing algorithm that does not suffer from the various problems we discovered in previous designs. Goals for the algorithm include providing efficiency and robustness in the presence of malicious participants for an open, fully decentralized network without trusted authorities. We provide a mathematical analysis of the algorithm itself and have created and deployed an implementation of this algorithm in GNUnet. In this thesis we also provide a detailed overview of a distributed emulation framework capable of running a large number of nodes using our full code base as well as some of the challenges encountered in creating and using such a testing framework. 
We present extensive experimental results showing that our routing algorithm outperforms the dominant DHT design in target topologies, and performs comparably in other scenarios

[Go to top]

Autonomous NAT Traversal (PDF)
by Andreas Müller, Nathan S Evans, Christian Grothoff, and Samy Kamkar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for Autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice

[Go to top]

Peer-to-Peer Communication Across Network Address Translators (PDF)
by Pyda Srisuresh, Bryan Ford, and Dan Kegel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. Several NAT traversal techniques are known, but their documentation is slim, and data about their robustness or relative merits is slimmer. This paper documents and analyzes one of the simplest but most robust and practical NAT traversal techniques, commonly known as hole punching. Hole punching is moderately well-understood for UDP communication, but we show how it can be reliably used to set up peer-to-peer TCP streams as well. After gathering data on the reliability of this technique on a wide variety of deployed NATs, we find that about 82% of the NATs tested support hole punching for UDP, and about 64% support hole punching for TCP streams. As NAT vendors become increasingly conscious of the needs of important P2P applications such as Voice over IP and online gaming protocols, support for hole punching is likely to increase in the future
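
The client-side behavior of UDP hole punching fits in a short sketch. The following Python fragment assumes each peer has already learned the other's public address from a rendezvous server (the signalling step described in the paper, not shown here); both peers run it at roughly the same time so that each NAT sees the peer's packets as replies to an outbound flow:

import socket

def punch(local_port, peer_addr, payload=b"punch"):
    """Attempt to open a direct UDP path to a peer behind another NAT."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", local_port))
    s.settimeout(1.0)
    for _ in range(10):
        # Sending first installs an outbound mapping in our own NAT; once
        # both peers have sent, each NAT treats the other's packets as
        # replies to that flow and forwards them inward.
        s.sendto(payload, peer_addr)
        try:
            data, addr = s.recvfrom(1500)
            return s, addr      # hole punched: direct UDP path established
        except socket.timeout:
            continue
    return None, None           # fails e.g. with a symmetric NAT on one side

# Both peers call punch() simultaneously, each using the public ip:port
# of the other as exchanged via the rendezvous server.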

[Go to top]

Characterization and measurement of tcp traversal through nats and firewalls (PDF)
by Saikat Guha and Paul Francis.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, the standards community has developed techniques for traversing NAT/firewall boxes with UDP (that is, establishing UDP flows between hosts behind NATs). Because of the asymmetric nature of TCP connection establishment, however, NAT traversal of TCP is more difficult. Researchers have recently proposed a variety of promising approaches for TCP NAT traversal. The success of these approaches, however, depends on how NAT boxes respond to various sequences of TCP (and ICMP) packets. This paper presents the first broad study of NAT behavior for a comprehensive set of TCP NAT traversal techniques over a wide range of commercial NAT products. We developed a publicly available software test suite that measures the NAT's responses both to a variety of isolated probes and to complete TCP connection establishments. We test sixteen NAT products in the lab, and 93 home NATs in the wild. Using these results, as well as market data for NAT products, we estimate the likelihood of successful NAT traversal for home networks. The insights gained from this paper can be used to guide both design of TCP NAT traversal protocols and the standardization of NAT/firewall behavior, including the IPv4-IPv6 translating NATs critical for IPv6 transition

[Go to top]

NFA

Decentralized Evaluation of Regular Expressions for Capability Discovery in Peer-to-Peer Networks (PDF)
by Maximilian Szengel.
Masters, Technische Universität München, November 2012. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis presents a novel approach for decentralized evaluation of regular expressions for capability discovery in DHT-based overlays. The system provides support for announcing capabilities expressed as regular expressions and discovering participants offering adequate capabilities. The idea behind our approach is to convert regular expressions into finite automata and store the corresponding states and transitions in a DHT. We show how locally constructed DFA are merged in the DHT into an NFA without the knowledge of any NFA already present in the DHT and without the need for any central authority. Furthermore we present options for optimizing the DFA. There exist several possible applications for this general approach of decentralized regular expression evaluation. However, in this thesis we focus on the application of discovering users that are willing to provide network access using a specified protocol to a particular destination. We have implemented the system for our proposed approach and conducted a simulation. Moreover we present the results of an emulation of the implemented system in a cluster
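
The core idea can be illustrated with a hand-built DFA and a dictionary standing in for the DHT: each automaton state is stored as a record under a hashed key, and matching proceeds by lookups instead of local table walks. All names here (the dfa table, the dht dict, dht_key) are illustrative, not the thesis' actual data structures:

import hashlib

# Hand-built DFA for the regular expression "ab*c"; a real system would
# compile this from the regex. The `dht` dict stands in for the DHT.
dfa = {
    "q0": {"a": "q1"},
    "q1": {"b": "q1", "c": "q2"},
    "q2": {},
}
accepting = {"q2"}

def dht_key(state):
    # Each state is published under a hash-derived key.
    return hashlib.sha256(state.encode()).hexdigest()

dht = {dht_key(s): {"edges": t, "accept": s in accepting}
       for s, t in dfa.items()}

def dht_match(start, word):
    """Walk the automaton via DHT lookups instead of a local table."""
    state = start
    for ch in word:
        record = dht.get(dht_key(state))
        if record is None or ch not in record["edges"]:
            return False
        state = record["edges"][ch]
    return dht[dht_key(state)]["accept"]

assert dht_match("q0", "abbbc")
assert dht_match("q0", "ac")
assert not dht_match("q0", "abb")
print("DHT-hosted automaton matches ab*c")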

[Go to top]

NICE

Scalable Application-Layer Multicast Simulations with OverSim
by Stephan Krause and Christian Hübsch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Application-Layer Multicast has become a promising class of protocols since IP Multicast has not found wide area deployment in the Internet. Developing such protocols requires in-depth analysis of their properties even with large numbers of participants—a characteristic which is at best hard to achieve in real network experiments. Several well-known simulation frameworks have been developed and used in recent years, but none has proved to fit the requirements for analyzing large-scale application-layer networks. In this paper we propose the OverSim framework as a promising simulation environment for scalable Application-Layer Multicast research. We show that OverSim is able to manage even overlays with several thousand participants in short time while consuming comparably little memory. We compare the framework's runtime properties with the two exemplary Application-Layer Multicast protocols Scribe and NICE. The results show that both simulation time and memory consumption grow linearly with the number of nodes in highly feasible dimensions

[Go to top]

Name resolution

Toward secure name resolution on the internet
by Christian Grothoff, Matthias Wachs, Monika Ermert, and Jacob Appelbaum.
In Computers & Security, 2018. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) provides crucial name resolution functions for most Internet services. As a result, DNS traffic provides an important attack vector for mass surveillance, as demonstrated by the QUANTUMDNS and MORECOWBELL programs of the NSA. This article reviews how DNS works and describes security considerations for next generation name resolution systems. We then describe DNS variations and analyze their impact on security and privacy. We also consider Namecoin, the GNU Name System and RAINS, which are more radical re-designs of name systems in that they radically change the wire protocol and eliminate the existing global consensus on TLDs provided by ICANN. Finally, we assess how the different systems stack up with respect to the goal of improving security and privacy of name resolution for the future Internet

[Go to top]

Nearest neighbor searches

Privacy Preserving Nearest Neighbor Search (PDF)
by M. Shaneck, Yongdae Kim, and V. Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this work we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications. To this end, we present a novel algorithm based on secure multiparty computation primitives to compute the nearest neighbors of records in horizontally distributed data. We show how this algorithm can be used in three important data mining algorithms, namely LOF outlier detection, SNN clustering, and kNN classification

[Go to top]

Netflix

Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders (PDF)
by Frank McSherry and Ilya Mironov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty, i.e., noise, to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides
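
The aggregation/learning phase becomes differentially private by releasing only noisy versions of per-item statistics, with Laplace noise calibrated to each statistic's sensitivity. A toy sketch of that step (illustrative parameters and budget split, not the paper's exact calibration):

    import random

    def laplace(scale: float) -> float:
        # The difference of two exponentials is Laplace-distributed.
        return scale * (random.expovariate(1.0) - random.expovariate(1.0))

    def noisy_item_average(ratings: list[float], epsilon: float,
                           max_rating: float = 5.0) -> float:
        # One user changes the sum by at most max_rating and the count
        # by 1; the epsilon budget is split between the two releases.
        noisy_sum = sum(ratings) + laplace(max_rating / (epsilon / 2))
        noisy_count = len(ratings) + laplace(1.0 / (epsilon / 2))
        return noisy_sum / max(noisy_count, 1.0)

    print(noisy_item_average([4, 5, 3, 4], epsilon=1.0))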

[Go to top]

Network management

AutoNetkit: simplifying large scale, open-source network experimentation (PDF)
by Simon Knight, Askar Jaboldinov, Olaf Maennel, Iain Phillips, and Matthew Roughan.
In SIGCOMM Comput. Commun. Rev 42(4), 2012, pages 97-98. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A game-theoretic analysis of the implications of overlay network traffic on ISP peering (PDF)
by Jessie Hui Wang, Dah Ming Chiu, and John C. S. Lui.
In Computer Networks 52, October 2008, pages 2961-2974. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Inter-ISP traffic flow determines the settlement between ISPs and affects the perceived performance of ISP services. In today's Internet, the inter-ISP traffic flow patterns are controlled not only by ISPs' policy-based routing configuration and traffic engineering, but also by application layer routing. The goal of this paper is to study the economic implications of this shift in Internet traffic control assuming rational ISPs and subscribers. For this purpose, we build a general traffic model that predicts traffic patterns based on subscriber distribution and abstract traffic controls such as caching functions and performance sensitivity functions. We also build a game-theoretic model of subscribers picking ISPs, and ISPs making provisioning and peering decisions. In particular, we apply this to a local market where two ISPs compete for market share of subscribers under two traffic patterns, "Web" and "P2P overlay", which typify the transition the current Internet is going through. Our methodology can be used to quantitatively demonstrate that (1) while economy of scale is the predominant property of the competitive ISP market, P2P traffic may introduce unfair distribution of peering benefit (i.e. free-riding); (2) the large ISP can restore more fairness by reducing its private capacity (bandwidth throttling), which has the drawback of hurting business growth; and (3) ISPs can reduce the level of peering (e.g. by reducing peering bandwidth) to restore more fairness, but this has the side-effect of also reducing the ISPs' collective bargaining power towards subscribers

[Go to top]

Object clustering

CISS: An efficient object clustering framework for DHT-based peer-to-peer applications
by Jinwon Lee, Hyonik Lee, Seungwoo Kang, Su Myeon Kim, and Junehwa Song.
In Comput. Netw 51(4), 2007, pages 1072-1094. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

OneSwarm

Forensic investigation of the OneSwarm anonymous filesharing system (PDF)
by Swagatika Prusty, Brian Neil Levine, and Marc Liberatore.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

OneSwarm is a system for anonymous p2p file sharing in use by thousands of peers. It aims to provide Onion Routing-like privacy and BitTorrent-like performance. We demonstrate several flaws in OneSwarm's design and implementation through three different attacks available to forensic investigators. First, we prove that the current design is vulnerable to a novel timing attack that allows just two attackers attached to the same target to determine if it is the source of queried content. When attackers comprise 15% of OneSwarm peers, we expect over 90% of remaining peers will be attached to two attackers and therefore vulnerable. Thwarting the attack increases OneSwarm query response times, making them longer than the equivalent in Onion Routing. Second, we show that OneSwarm's vulnerability to traffic analysis by colluding attackers is much greater than was previously reported, and is much worse than Onion Routing. We show for this second attack that when investigators comprise 25% of peers, over 40% of the network can be investigated with 80% precision to find the sources of content. Our examination of the OneSwarm source code found differences with the technical paper that significantly reduce security. For the implementation in use by thousands of people, attackers that comprise 25% of the network can successfully use this second attack against 98% of remaining peers with 95% precision. Finally, we show that a novel application of a known TCP-based attack allows a single attacker to identify whether a neighbor is the source of data or a proxy for it. Users that turn off the default rate-limit setting are exposed. Each attack can be repeated as investigators leave and rejoin the network. All of our attacks are successful in a forensics context: Law enforcement can use them legally ahead of a warrant. Furthermore, private investigators, who have fewer restrictions on their behavior, can use them more easily in pursuit of evidence for such civil suits as copyright infringement

[Go to top]

Privacy-preserving P2P data sharing with OneSwarm (PDF)
by Tomas Isdal, Michael Piatek, Arvind Krishnamurthy, and Thomas Anderson.
In SIGCOMM Comput. Commun. Rev 40(4), 2010, pages 111-122. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

OptAPO

PC-DPOP: a new partial centralization algorithm for distributed optimization (PDF)
by Adrian Petcu, Boi Faltings, and Roger Mailler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Fully decentralized algorithms for distributed constraint optimization often require excessive amounts of communication when applied to complex problems. The OptAPO algorithm of [Mailler and Lesser, 2004] uses a strategy of partial centralization to mitigate this problem. We introduce PC-DPOP, a new partial centralization technique, based on the DPOP algorithm of [Petcu and Faltings, 2005]. PC-DPOP provides better control over what parts of the problem are centralized and allows this centralization to be optimal with respect to the chosen communication structure. Unlike OptAPO, PC-DPOP allows for a priori, exact predictions about privacy loss, communication, memory and computational requirements on all nodes and links in the network. Upper bounds on communication and memory requirements can be specified. We also report strong efficiency gains over OptAPO in experiments on three problem domains

[Go to top]

OverSim

Scalable Application-Layer Multicast Simulations with OverSim
by Stephan Krause and Christian Hübsch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Application-Layer Multicast has become a promising class of protocols since IP Multicast has not found wide area deployment in the Internet. Developing such protocols requires in-depth analysis of their properties even with large numbers of participants—a characteristic which is at best hard to achieve in real network experiments. Several well-known simulation frameworks have been developed and used in recent years, but none has proved to fit the requirements for analyzing large-scale application-layer networks. In this paper we propose the OverSim framework as a promising simulation environment for scalable Application-Layer Multicast research. We show that OverSim is able to manage even overlays with several thousand participants in a short time while consuming comparably little memory. We compare the framework's runtime properties with the two exemplary Application-Layer Multicast protocols Scribe and NICE. The results show that both simulation time and memory consumption grow linearly with the number of nodes in highly feasible dimensions

[Go to top]

P2P

A Secure and Resilient Communication Infrastructure for Decentralized Networking Applications (PDF)
by Matthias Wachs.
PhD, Technische Universität München, February 2015. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This thesis provides the design and implementation of a secure and resilient communication infrastructure for decentralized peer-to-peer networks. The proposed communication infrastructure tries to overcome limitations to unrestricted communication on today's Internet and has the goal of re-establishing unhindered communication between users. With the GNU name system, we present a fully decentralized, resilient, and privacy-preserving alternative to DNS and existing security infrastructures

[Go to top]

Developing Peer-to-Peer Web Applications (PDF)
by Toni Ruottu.
Master's Thesis, University of Helsinki, September 2010. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the virtual world grows more complex, finding a standard way for storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users, and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure names used for the content, and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data
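
In its simplest form, the "cryptographically verifiable reference" idea is content addressing: a data item's name is the hash of its bytes, so any replica, wherever it is stored, can be checked against the reference alone. A minimal sketch (not Peerscape's actual API):

    import hashlib

    def make_ref(data: bytes) -> str:
        # The reference *is* the hash, so it can be passed around freely.
        return hashlib.sha256(data).hexdigest()

    def verify(ref: str, data: bytes) -> bool:
        # Anyone holding a copy can check it against the reference.
        return hashlib.sha256(data).hexdigest() == ref

    photo = b"...album bytes..."
    ref = make_ref(photo)
    assert verify(ref, photo)          # genuine replica
    assert not verify(ref, b"forged")  # tampered copy is rejected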

[Go to top]

Reconnecting the internet with ariba: self-organizing provisioning of end-to-end connectivity in heterogeneous networks (PDF)
by Christian Hübsch, Christoph P. Mayer, Sebastian Mies, Roland Bless, Oliver Waldhorst, and Martina Zitterbart.
In SIGCOMM Comput. Commun. Rev 40(1), 2010, pages 131-132. (BibTeX entry) (Download bibtex record)
(direct link) (website)

End-to-End connectivity in today's Internet can no longer be taken for granted. Middleboxes, mobility, and protocol heterogeneity complicate application development and often result in application-specific solutions. In our demo we present ariba: an overlay-based approach to handle such network challenges and to provide consistent homogeneous network primitives in order to ease application and service development

[Go to top]

Providing basic security mechanisms in broker-less publish/subscribe systems (PDF)
by Muhammad Adnan Tariq, Boris Koldehofe, Ala Altaweel, and Kurt Rothermel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The provisioning of basic security mechanisms such as authentication and confidentiality is highly challenging in a content-based publish/subscribe system. Authentication of publishers and subscribers is difficult to achieve due to the loose coupling of publishers and subscribers. Similarly, confidentiality of events and subscriptions conflicts with content-based routing. In particular, content-based approaches in broker-less environments do not address confidentiality at all. This paper presents a novel approach to provide confidentiality and authentication in a broker-less content-based publish-subscribe system. The authentication of publishers and subscribers as well as confidentiality of events is ensured, by adapting the pairing-based cryptography mechanisms, to the needs of a publish/subscribe system. Furthermore, an algorithm to cluster subscribers according to their subscriptions preserves a weak notion of subscription confidentiality. Our approach provides fine grained key management and the cost for encryption, decryption and routing is in the order of subscribed attributes. Moreover, the simulation results verify that supporting security is affordable with respect to the cost for overlay construction and event dissemination latencies, thus preserving scalability of the system

[Go to top]

A Novel Testbed for P2P Networks (PDF)
by Pekka H. J. Perälä, Jori P. Paananen, Milton Mukhopadhyay, and Jukka-Pekka Laulajainen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Autonomous NAT Traversal (PDF)
by Andreas Müller, Nathan S Evans, Christian Grothoff, and Samy Kamkar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for Autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice
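
The core trick, as the paper describes it, is that the NATed peer periodically sends ICMP messages toward the outside peer; the NAT records this outgoing traffic and will then route a crafted "reply" from the outside peer back in, at which point the real connection can be established. A rough sketch of sending the initial ICMP echo request (raw sockets require root, and the destination is a placeholder):

    import socket
    import struct

    def icmp_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\0"
        s = sum(struct.unpack(f"!{len(data) // 2}H", data))
        s = (s >> 16) + (s & 0xFFFF)
        return ~(s + (s >> 16)) & 0xFFFF

    def send_echo_request(dst: str, ident: int = 0x4E54) -> None:
        # Type 8 (echo request), code 0. The NAT notes this outgoing
        # packet, so a later matching message from outside is let in.
        draft = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
        packet = struct.pack("!BBHHH", 8, 0, icmp_checksum(draft), ident, 1)
        with socket.socket(socket.AF_INET, socket.SOCK_RAW,
                           socket.IPPROTO_ICMP) as sock:
            sock.sendto(packet, (dst, 0))

    send_echo_request("192.0.2.1")  # placeholder peer, run as root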

[Go to top]

Scalable onion routing with Torsk (PDF)
by Jon McLachlan, Andrew Tran, Nicholas J. Hopper, and Yongdae Kim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce Torsk, a structured peer-to-peer low-latency anonymity protocol. Torsk is designed as an interoperable replacement for the relay selection and directory service of the popular Tor anonymity network that decreases the bandwidth cost of relay selection and maintenance from quadratic to quasilinear while introducing no new attacks on the anonymity provided by Tor, and no additional delay to connections made via Tor. The resulting bandwidth savings make a modest-sized Torsk network significantly cheaper to operate, and allows low-bandwidth clients to join the network. Unlike previous proposals for P2P anonymity schemes, Torsk does not require all users to relay traffic for others. Torsk utilizes a combination of two P2P lookup mechanisms with complementary strengths in order to avoid attacks on the confidentiality and integrity of lookups. We show by analysis that previously known attacks on P2P anonymity schemes do not apply to Torsk, and report on experiments conducted with a 336-node wide-area deployment of Torsk, demonstrating its efficiency and feasibility

[Go to top]

Hashing it out in public: Common failure modes of DHT-based anonymity schemes (PDF)
by Andrew Tran, Nicholas J. Hopper, and Yongdae Kim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We examine peer-to-peer anonymous communication systems that use Distributed Hash Table algorithms for relay selection. We show that common design flaws in these schemes lead to highly effective attacks against the anonymity provided by the schemes. These attacks stem from attacks on DHT routing, and are not mitigated by the well-known DHT security mechanisms due to a fundamental mismatch between the security requirements of DHT routing's put/get functionality and anonymous routing's relay selection functionality. Our attacks essentially allow an adversary that controls only a small fraction of the relays to function as a global active adversary. We apply these attacks in more detail to two schemes: Salsa and Cashmere. In the case of Salsa, we show that an attacker that controls 10% of the relays in a network of size 10,000 can compromise more than 80% of all completed circuits; and in the case of Cashmere, we show that an attacker that controls 20% of the relays in a network of size 64,000 can compromise 42% of the circuits

[Go to top]

PeerSim: A Scalable P2P Simulator (PDF)
by Alberto Montresor, Márk Jelasity, Gian Paolo Jesi, and Spyros Voulgaris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The key features of peer-to-peer (P2P) systems are scalability and dynamism. The evaluation of a P2P protocol in realistic environments is very expensive and difficult to reproduce, so simulation is crucial in P2P research. PeerSim is an extremely scalable simulation environment that supports dynamic scenarios such as churn and other failure models. Protocols need to be specifically implemented for the PeerSim Java API, but with a reasonable effort they can be evolved into a real implementation. Testing in specified parameter-spaces is supported as well. PeerSim started out as a tool for our own research

[Go to top]

Bootstrapping Peer-to-Peer Systems Using IRC
by Mirko Knoll, Matthias Helling, Arno Wacker, Sebastian Holzapfel, and Torben Weis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Research in the area of peer-to-peer systems is mainly focused on structuring the overlay network. Little attention is paid to the process of setting up and joining a peer-to-peer overlay network, i.e. the bootstrapping of peer-to-peer networks. The major challenge is to get hold of one peer that is already in the overlay. Otherwise, the first peer must be able to detect that the overlay is currently empty. Successful P2P applications either provide a centralized server for this task (Skype) or they simply put the burden on the user (eMule). We propose an automatic solution which does not require any user intervention and does not exhibit a single point of failure. Such decentralized bootstrapping protocols are especially important for open non-commercial peer-to-peer systems which cannot provide a server infrastructure for bootstrapping. The algorithm we are proposing builds on the Internet Relay Chat (IRC), a highly available, open, and distributed network of chat servers. Our algorithm is designed to put only a very minimal load on the IRC servers. In measurements we show that our bootstrapping protocol scales very well, handles flash crowds, and puts only a constant load on the IRC system regardless of the peer-to-peer overlay size
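
A minimal sketch of the idea, with placeholder server and channel names (the paper's actual protocol adds measures to keep the IRC load minimal): join a channel derived from the overlay's name, announce our own entry point, and treat any other announcement as a bootstrap candidate:

    import socket

    def irc_bootstrap(overlay: str, my_addr: str,
                      server: str = "irc.example.net",
                      port: int = 6667) -> str | None:
        sock = socket.create_connection((server, port), timeout=10)

        def send(line: str) -> None:
            sock.sendall((line + "\r\n").encode())

        send("NICK boot" + str(abs(hash(my_addr)) % 100000))
        send("USER boot 0 * :p2p bootstrap")
        send("JOIN #" + overlay)  # channel named after the overlay
        send("PRIVMSG #" + overlay + " :PEER " + my_addr)

        sock.settimeout(15)
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    return None  # server closed the connection
                for line in data.decode(errors="replace").splitlines():
                    if ":PEER " in line and my_addr not in line:
                        return line.rsplit(":PEER ", 1)[1]  # entry point
        except socket.timeout:
            return None  # nobody announced: we may be the first node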

[Go to top]

Using link-layer broadcast to improve scalable source routing (PDF)
by Pengfei Di and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a network layer routing protocol that provides services that are similar to those of structured peer-to-peer overlays. In this paper, we describe several improvements to the SSR protocol. They aim at providing nodes with more up-to-date routing information: 1. The use of link-layer broadcast enables all neighbors of a node to contribute to the forwarding process. 2. A light-weight and fast selection mechanism avoids packet duplication and optimizes the source route iteratively. 3. Nodes implicitly learn the network's topology from overheard broadcast messages. We present simulation results which show the performance gain of the proposed improvements: 1. The delivery ratio in settings with high mobility increases. 2. The required per-node state can be reduced as compared with the original SSR protocol. 3. The route stretch decreases. — These improvements are achieved without increasing the routing overhead

[Go to top]

Optimization of distributed services with UNISONO (PDF)
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed services are a special case of P2P networks where nodes have several distinctive tasks. Based on previous work, we show how UNISONO provides a way to optimize these services to increase performance, efficiency and user experience. UNISONO is a generic framework for host-based distributed network measurements. In this talk, we present UNISONO as an Enabler for self-organizing Service Delivery Platforms. We give a short overview of the UNISONO concept and show how distributed services benefit from its usage

[Go to top]

Membership-concealing overlay networks (PDF)
by Eugene Y. Vasserman, Rob Jansen, James Tyra, Nicholas J. Hopper, and Yongdae Kim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Information Leaks in Structured Peer-to-peer Anonymous Communication Systems (PDF)
by Prateek Mittal and Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We analyze information leaks in the lookup mechanisms of structured peer-to-peer anonymous communication systems and how these leaks can be used to compromise anonymity. We show that the techniques that are used to combat active attacks on the lookup mechanism dramatically increase information leaks and increase the efficacy of passive attacks. Thus there is a trade-off between robustness to active and passive attacks. We study this trade-off in two P2P anonymous systems, Salsa and AP3. In both cases, we find that, by combining both passive and active attacks, anonymity can be compromised much more effectively than previously thought, rendering these systems insecure for most proposed uses. Our results hold even if security parameters are changed or other improvements to the systems are considered. Our study therefore motivates the search for new approaches to P2P anonymous communication

[Go to top]

BitBlender: Light-Weight Anonymity for BitTorrent (PDF)
by Kevin Bauer, Damon McCoy, Dirk Grunwald, and Douglas Sicker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present BitBlender, an efficient protocol that provides an anonymity layer for BitTorrent traffic. BitBlender works by creating an ad-hoc multi-hop network consisting of special peers called "relay peers" that proxy requests and replies on behalf of other peers. To understand the effect of introducing relay peers into the BitTorrent system architecture, we provide an analysis of the expected path lengths as the ratio of relay peers to normal peers varies. A prototype is implemented and experiments are conducted on Planetlab to quantify the performance overhead associated with the protocol. We also propose protocol extensions to add confidentiality and access control mechanisms, countermeasures against traffic analysis attacks, and selective caching policies that simultaneously increase both anonymity and performance. We finally discuss the potential legal obstacles to deploying an anonymous file sharing protocol. This work is among the first to propose a privacy enhancing system that is designed specifically for a particular class of peer-to-peer traffic

[Go to top]

Why Share in Peer-to-Peer Networks? (PDF)
by Lian Jian and Jeffrey K. MacKie-Mason.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Prior theory and empirical work emphasize the enormous free-riding problem facing peer-to-peer (P2P) sharing networks. Nonetheless, many P2P networks thrive. We explore two possible explanations that do not rely on altruism or explicit mechanisms imposed on the network: direct and indirect private incentives for the provision of public goods. The direct incentive is a traffic redistribution effect that advantages the sharing peer. We find this incentive is likely insufficient to motivate equilibrium content sharing in large networks. We then approach P2P networks as a graph-theoretic problem and present sufficient conditions for sharing and free-riding to co-exist due to indirect incentives we call generalized reciprocity

[Go to top]

P4P: Provider Portal for Applications (PDF)
by Haiyong Xie, Y. Richard Yang, Arvind Krishnamurthy, Yanbin Grace Liu, and Abraham Silberschatz.
In SIGCOMM Computer Communication Review 38, August 2008, pages 351-362. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications

[Go to top]

Bootstrapping of Peer-to-Peer Networks (PDF)
by Chris GauthierDickey and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present the first heuristic for fully distributed bootstrapping of peer-to-peer networks. Our heuristic generates a stream of promising IP addresses to be probed as entry points. This stream is generated using statistical profiles of the IP ranges of start-of-authorities (SOAs) in the domain name system (DNS). We present experimental results demonstrating that with this approach it is efficient and practical to bootstrap Gnutella-sized peer-to-peer networks — without the need for centralized services or the public exposure of end-users' private IP addresses
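
Whatever generates the candidate stream, the probing side can stay trivial: attempt a TCP connection to the overlay's well-known port and take the first address that accepts. A sketch of that loop (placeholder addresses; port 6346 is Gnutella's customary default):

    import socket

    def is_entry_point(addr: str, port: int = 6346,
                       timeout: float = 1.0) -> bool:
        # A completed TCP connect is treated as a live peer here; a real
        # client would follow up with the overlay's handshake.
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError:
            return False

    def first_entry_point(candidates) -> str | None:
        for addr in candidates:  # the SOA-derived stream of IPs
            if is_entry_point(addr):
                return addr
        return None

    print(first_entry_point(["192.0.2.10", "192.0.2.20"]))  # placeholders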

[Go to top]

Reputation Systems for Anonymous Networks (PDF)
by Elli Androulaki, Seung Geol Choi, Steven M. Bellovin, and Tal Malkin.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a reputation scheme for a pseudonymous peer-to-peer (P2P) system in an anonymous network. Misbehavior is one of the biggest problems in pseudonymous P2P systems, where there is little incentive for proper behavior. In our scheme, using ecash for reputation points, the reputation of each user is closely related to his real identity rather than to his current pseudonym. Thus, our scheme allows an honest user to switch to a new pseudonym keeping his good reputation, while hindering a malicious user from erasing his trail of evil deeds with a new pseudonym

[Go to top]

Bridging and Fingerprinting: Epistemic Attacks on Route Selection (PDF)
by George Danezis and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Users building routes through an anonymization network must discover the nodes comprising the network. Yet, it is potentially costly, or even infeasible, for everyone to know the entire network. We introduce a novel attack, the route bridging attack, which makes use of what route creators do not know of the network. We also present new discussion and results concerning route fingerprinting attacks, which make use of what route creators do know of the network. We prove analytic bounds for both route fingerprinting and route bridging and describe the impact of these attacks on published anonymity-network designs. We also discuss implications for network scaling and client-server vs. peer-to-peer systems

[Go to top]

Improving User and ISP Experience through ISP-aided P2P Locality (PDF)
by Vinay Aggarwal, Obi Akonjang, and Anja Feldmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Despite recent improvements, P2P systems are still plagued by fundamental issues such as overlay/underlay topological and routing mismatch, which affects their performance and causes traffic strains on the ISPs. In this work, we aim to improve overall system performance for ISPs as well as P2P systems by means of traffic localization through improved collaboration between ISPs and P2P systems. More specifically, we study the effects of different ISP/P2P topologies as well as a broad range of influential user behavior characteristics, namely content availability, churn, and query patterns, on end-user and ISP experience. We show that ISP-aided P2P locality benefits both P2P users and ISPs, measured in terms of improved content download times, increased network locality of query responses and desired content, and overall reduction in P2P traffic

[Go to top]

Don't Clog the Queue: Circuit Clogging and Mitigation in P2P anonymity schemes (PDF)
by Jon McLachlan and Nicholas J. Hopper.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

At Oakland 2005, Murdoch and Danezis described an attack on the Tor anonymity service that recovers the nodes in a Tor circuit, but not the client. We observe that in a peer-to-peer anonymity scheme, the client is part of the circuit and thus the technique can be of greater significance in this setting. We experimentally validate this conclusion by showing that "circuit clogging" can identify client nodes using the MorphMix peer-to-peer anonymity protocol. We also propose and empirically validate the use of the Stochastic Fair Queueing discipline on outgoing connections as an efficient and low-cost mitigation technique

[Go to top]

Trust-Rated Authentication for Domain-Structured Distributed Systems (PDF)
by Ralph Holz, Heiko Niedermayer, Peter Hauck, and Georg Carle.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an authentication scheme and new protocol for domain-based scenarios with inter-domain authentication. Our protocol is primarily intended for domain-structured Peer-to-Peer systems but is applicable for any domain scenario where clients from different domains wish to authenticate to each other. To this end, we make use of Trusted Third Parties in the form of Domain Authentication Servers in each domain. These act on behalf of their clients, resulting in a four-party protocol. If there is a secure channel between the Domain Authentication Servers, our protocol can provide secure authentication. To address the case where domains do not have a secure channel between them, we extend our scheme with the concept of trust-rating. Domain Authentication Servers signal security-relevant information to their clients (pre-existing secure channel or not, trust, ...). The clients evaluate this information to decide if it fits the security requirements of their application

[Go to top]

Tahoe: the least-authority filesystem (PDF)
by Zooko Wilcox-O'Hearn and Brian Warner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source
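
Tahoe's fault tolerance rests on k-of-n erasure coding (Reed-Solomon, via the zfec library, in the real system). A toy single-parity code with n = k + 1 already illustrates the principle, recovering any one lost share:

    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(blocks: list[bytes]) -> list[bytes]:
        # k equal-sized data shares plus one XOR parity share.
        return blocks + [reduce(xor, blocks)]

    def recover(shares: list[bytes | None]) -> list[bytes]:
        missing = [i for i, s in enumerate(shares) if s is None]
        assert len(missing) <= 1, "the toy code tolerates one loss"
        if missing:  # XOR of all surviving shares rebuilds the lost one
            shares[missing[0]] = reduce(xor, [s for s in shares if s])
        return shares[:-1]  # drop the parity share

    shares = encode([b"AAAA", b"BBBB", b"CCCC"])
    shares[1] = None                      # lose one share...
    assert recover(shares)[1] == b"BBBB"  # ...and recover it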

[Go to top]

Shortest-path routing in randomized DHT-based Peer-to-Peer systems
by Chih-Chiang Wang and Khaled Harfoush.
In Comput. Netw 52(18), 2008, pages 3307-3317. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Randomized DHT-based Peer-to-Peer (P2P) systems grant nodes certain flexibility in selecting their overlay neighbors, leading to irregular overlay structures but to better overall performance in terms of path latency, static resilience and local convergence. However, routing in the presence of overlay irregularity is challenging. In this paper, we propose a novel routing protocol, RASTER, that approximates shortest overlay routes between nodes in randomized DHTs. Unlike previously proposed routing protocols, RASTER encodes and aggregates routing information. Its simple bitmap-encoding scheme together with the proposed RASTER routing algorithm enable a performance edge over current overlay routing protocols. RASTER provides a forwarding overhead of merely a small constant number of bitwise operations, a routing performance close to optimal, and a better resilience to churn. RASTER also provides nodes with the flexibility to adjust the size of the maintained routing information based on their storage/processing capabilities. The cost of storing and exchanging encoded routing information is manageable and grows logarithmically with the number of nodes in the system

[Go to top]

Providing KBR Service for Multiple Applications (PDF)
by Pengfei Di, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Key based routing (KBR) enables peer-to-peer applications to create and use distributed services. KBR is more flexible than distributed hash tables (DHT). However, the broader the application area, the more important performance issues become for a KBR service. In this paper, we present a novel approach to provide a generic KBR service. Its key idea is to use a predictable address assignment scheme. This scheme allows peers to calculate the overlay address of the node that is responsible for a given key and application ID. A public DHT service such as OpenDHT can then resolve this overlay address to the transport address of the respective peer. We compare our solution to alternative proposals such as ReDiR and Diminished Chord. We conclude that our solution has a better worst case complexity for some important KBR operations and the required state. In particular, unlike ReDiR, our solution can guarantee a low latency for KBR route operations
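
The heart of the scheme is that the overlay address responsible for an (application ID, key) pair is computable by anyone from those two values alone; only the final overlay-address-to-transport-address step needs a public DHT. A minimal sketch of the derivation (hypothetical encoding):

    import hashlib

    def overlay_address(app_id: str, key: str) -> int:
        # Deterministic: every peer computes the same ring position for
        # the same (application, key) pair without contacting anyone.
        digest = hashlib.sha1((app_id + "/" + key).encode()).digest()
        return int.from_bytes(digest, "big")  # point on a 160-bit ring

    addr = overlay_address("chat-service", "room:gnunet")
    # A public DHT (OpenDHT, in the paper) then resolves this overlay
    # address to the responsible peer's transport address.
    print(hex(addr))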

[Go to top]

IgorFs: A Distributed P2P File System (PDF)
by Bernhard Amann, Benedikt Elser, Yaser Houri, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

IgorFs is a distributed, decentralized peer-to-peer (P2P) file system that is completely transparent to the user. It is built on top of the Igor peer-to-peer overlay network, which is similar to Chord, but provides additional features like service orientation or proximity neighbor and route selection. IgorFs offers an efficient means to publish data files that are subject to frequent but minor modifications. In our demonstration we show two use cases for IgorFs: the first example is (static) software-distribution and the second example is (dynamic) file distribution

[Go to top]

Characterizing unstructured overlay topologies in modern P2P file-sharing systems (PDF)
by Daniel Stutzbach, Reza Rejaie, and Subhabrata Sen.
In IEEE/ACM Trans. Netw 16(2), 2008, pages 267-280. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, peer-to-peer (P2P) file-sharing systems have evolved to accommodate growing numbers of participating peers. In particular, new features have changed the properties of the unstructured overlay topologies formed by these peers. Little is known about the characteristics of these topologies and their dynamics in modern file-sharing applications, despite their importance. This paper presents a detailed characterization of P2P overlay topologies and their dynamics, focusing on the modern Gnutella network. We present Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and show how inaccuracy in snapshots can lead to erroneous conclusions–such as a power-law degree distribution. Leveraging recent overlay snapshots captured with Cruiser, we characterize the graph-related properties of individual overlay snapshots and overlay dynamics across slices of back-to-back snapshots. Our results reveal that while the Gnutella network has dramatically grown and changed in many ways, it still exhibits the clustering and short path lengths of a small world network. Furthermore, its overlay topology is highly resilient to random peer departure and even systematic attacks. More interestingly, overlay dynamics lead to an "onion-like" biased connectivity among peers where each peer is more likely connected to peers with higher uptime. Therefore, long-lived peers form a stable core that ensures reachability among peers despite overlay dynamics

[Go to top]

Estimating churn in structured P2P networks (PDF)
by Andreas Binzenhöfer and Kenji Leibnitz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In structured peer-to-peer (P2P) networks participating peers can join or leave the system at arbitrary times, a process which is known as churn. Many recent studies revealed that churn is one of the main problems faced by any Distributed Hash Table (DHT). In this paper we discuss different possibilities of how to estimate the current churn rate in the system. In particular, we show how to obtain a robust estimate which is independent of the implementation details of the DHT. We also investigate the trade-offs between accuracy, overhead, and responsiveness to changes
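
One way to make such an estimate independent of DHT implementation details is to use only what every peer can observe locally: how many of its neighbors depart per unit of observation time. A toy version of that bookkeeping (illustrative, not the paper's estimator):

    def churn_rate(observations: list[tuple[float, int, int]]) -> float:
        """Departures per neighbor-second from local snapshots.

        observations: (interval in seconds, neighbors watched,
        departures seen), e.g. collected once per stabilization round.
        """
        node_seconds = sum(dt * n for dt, n, _ in observations)
        departures = sum(d for _, _, d in observations)
        return departures / node_seconds if node_seconds else 0.0

    # Three 60-second rounds watching 20 neighbors each:
    print(churn_rate([(60.0, 20, 1), (60.0, 20, 2), (60.0, 20, 1)]))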

[Go to top]

Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches (PDF)
by Nazanin Magharei and Reza Rejaie.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Existing approaches to P2P streaming can be divided into two general classes: (i) tree-based approaches use push-based content delivery over multiple tree-shaped overlays, and (ii) mesh-based approaches use swarming content delivery over a randomly connected mesh. Previous studies have often focused on a particular P2P streaming mechanism and no comparison between these two classes has been conducted. In this paper, we compare and contrast the performance of representative protocols from each class using simulations. We identify the similarities and differences between these two approaches. Furthermore, we separately examine the behavior of content delivery and overlay construction mechanisms for both approaches in static and dynamic scenarios. Our results indicate that the mesh-based approach consistently exhibits a superior performance over the tree-based approach. We also show that the main factors contributing to the inferior performance of the tree-based approach are (i) the static mapping of content to a particular tree, and (ii) the placement of each peer as an internal node in one tree and as a leaf in all other trees

[Go to top]

Information Slicing: Anonymity Using Unreliable Overlays (PDF)
by Sachin Katti, Jeffery Cohen, and Dina Katabi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper proposes a new approach to anonymous communication called information slicing. Typically, anonymizers use onion routing, where a message is encrypted in layers with the public keys of the nodes along the path. Instead, our approach scrambles the message, divides it into pieces, and sends the pieces along disjoint paths. We show that information slicing addresses message confidentiality as well as source and destination anonymity. Surprisingly, it does not need any public key cryptography. Further, our approach naturally addresses the problem of node failures. These characteristics make it a good fit for use over dynamic peer-to-peer overlays. We evaluate the anonymity of information slicing via analysis and simulations. Our prototype implementation on PlanetLab shows that it achieves higher throughput than onion routing and effectively copes with node churn
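
The splitting step can be modeled as XOR-based secret sharing: n - 1 random slices plus a final slice that XORs them with the message, so an observer needs all n disjoint paths to learn anything. A stdlib sketch of just that step (the full system also slices keys and routing information):

    import secrets
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def slice_message(msg: bytes, n: int) -> list[bytes]:
        # n - 1 uniformly random slices; the last completes the XOR.
        random_slices = [secrets.token_bytes(len(msg)) for _ in range(n - 1)]
        return random_slices + [reduce(xor, random_slices, msg)]

    pieces = slice_message(b"meet at dawn", 4)  # sent over 4 disjoint paths
    assert reduce(xor, pieces) == b"meet at dawn"      # all slices: message
    assert reduce(xor, pieces[:3]) != b"meet at dawn"  # any subset: noise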

[Go to top]

Cooperative Data Backup for Mobile Devices (PDF)
by Ludovic Courtès.
Ph.D. thesis, March 2007. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile devices such as laptops, PDAs and cell phones are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. However, few mechanisms are available to reduce the risk of losing the data stored on these devices. In this dissertation, we try to address this concern by designing a cooperative backup service for mobile devices. The service leverages encounters and spontaneous interactions among participating devices, such that each device stores data on behalf of other devices. We first provide an analytical evaluation of the dependability gains of the proposed service. Distributed storage mechanisms are explored and evaluated. Security concerns arising from the cooperation among mutually suspicious principals are identified, and core mechanisms are proposed to allow them to be addressed. Finally, we present our prototype implementation of the cooperative backup service

[Go to top]

Local Production, Local Consumption: Peer-to-Peer Architecture for a Dependable and Sustainable Social Infrastructure (PDF)
by Kenji Saito, Eiichi Morino, Yoshihiko Suko, Takaaki Suzuki, and Jun Murai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) is a system of overlay networks such that participants can potentially take symmetrical roles. This translates into a design based on the philosophy of Local Production, Local Consumption (LPLC), originally an agricultural concept to promote sustainable local economy. This philosophy helps enhancing survivability of a society by providing a dependable economic infrastructure and promoting the power of individuals. This paper attempts to put existing works of P2P designs into the perspective of the five-layer architecture model to realize LPLC, and proposes future research directions toward integration of P2P studies for actualization of a dependable and sustainable social infrastructure

[Go to top]

Skype4Games (PDF)
by Tonio Triebel, Benjamin Guthier, and Wolfgang Effelsberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose to take advantage of the distributed multi-user Skype system for the implementation of an interactive online game. Skype combines efficient multi-peer support with the ability to get around firewalls and network address translation; in addition, speech is available to all game participants for free. We discuss the network requirements of interactive multi-player games, in particular concerning end-to-end delay and distributed state maintenance. We then introduce the multi-user support available in Skype and conclude that it should suffice for a game implementation. We explain how our multi-player game based on the Irrlicht graphics engine was implemented over Skype, and we present very promising results of an early performance evaluation

[Go to top]

Performance of Scalable Source Routing in Hybrid MANETs (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scalable source routing (SSR) is a novel routing approach for large unstructured networks such as mobile ad hoc networks, mesh networks, or sensor-actuator networks. It is especially suited for organically growing networks of many resource-limited mobile devices supported by a few fixed-wired nodes. SSR is a full-fledged network layer routing protocol that directly provides the semantics of a structured peer-to-peer network. Hence, it can serve as an efficient basis for fully decentralized applications on mobile devices. SSR combines source routing in the physical network with Chord-like routing in the virtual ring formed by the address space. Message forwarding greedily decreases the distance in the virtual ring while preferring physically short paths. Thereby, scalability is achieved without imposing artificial hierarchies or assigning location-dependent addresses
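
The forwarding rule itself is compact: among the known next hops, greedily pick the one that most decreases the distance to the destination in the virtual ring, and deliver locally once no neighbor is closer. A minimal sketch of that step (illustrative; it ignores SSR's preference for physically short paths):

    RING = 2 ** 16  # toy address space

    def ring_distance(a: int, b: int) -> int:
        # Distance in the virtual ring, wrapping around the address space.
        return min((a - b) % RING, (b - a) % RING)

    def next_hop(my_addr: int, neighbors: list[int],
                 dest: int) -> int | None:
        best = min(neighbors, key=lambda n: ring_distance(n, dest))
        # Forward only if this strictly decreases the ring distance;
        # otherwise we are the closest node and the message is delivered.
        if ring_distance(best, dest) < ring_distance(my_addr, dest):
            return best
        return None

    print(next_hop(100, [90, 4000, 60000], 61000))  # -> 60000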

[Go to top]

A Game Theoretic Model of a Protocol for Data Possession Verification (PDF)
by Nouha Oualha, Pietro Michiardi, and Yves Roudier.
In A World of Wireless, Mobile and Multimedia Networks, International Symposium on, 2007, pages 1-6. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper discusses how to model a protocol for the verification of data possession intended to secure a peer-to-peer storage application. The verification protocol is a primitive for storage assessment, and indirectly motivates nodes to behave cooperatively within the application. The capability of the protocol to enforce cooperation between a data holder and a data owner is proved theoretically by modeling the verification protocol as a Bayesian game, and demonstrating that the solution of the game is an equilibrium where both parties are cooperative
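
Stripped of the game-theoretic analysis, the verification primitive is a nonce-based challenge-response: the owner precomputes answers to random challenges before deleting its local copy, and a fresh nonce later forces the holder to hash the full data again. A minimal sketch (not the paper's exact construction):

    import hashlib
    import secrets

    def answer(nonce: bytes, data: bytes) -> bytes:
        # Binding a fresh nonce into the hash prevents the holder from
        # caching one answer and discarding the data.
        return hashlib.sha256(nonce + data).digest()

    # Owner: precompute a few challenges before deleting the local copy.
    data = b"backup payload"
    challenges = [(n := secrets.token_bytes(16), answer(n, data))
                  for _ in range(3)]

    # Later: the holder must respond to a nonce it has never seen.
    nonce, expected = challenges.pop()
    assert answer(nonce, data) == expected  # honest holder passes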

[Go to top]

Dynamic Multipath Onion Routing in Anonymous Peer-To-Peer Overlay Networks
by Olaf Landsiedel, Alexis Pimenidis, and Klaus Wehrle.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Although recent years provided many protocols for anonymous routing in overlay networks, they commonly rely on the same communication paradigm: Onion Routing. In Onion Routing a static tunnel through an overlay network is built via layered encryption. All traffic exchanged by its end points is relayed through this tunnel. In contrast, this paper introduces dynamic multipath Onion Routing to extend the static Onion Routing paradigm. This approach allows each packet exchanged between two end points to travel along a different path. To provide anonymity the first half of this path is selected by the sender and the second half by the receiver of the packet. The results are manifold: First, dynamic multipath Onion Routing increases the resilience against threats, especially pattern and timing based analysis attacks. Second, the dynamic paths reduce the impact of misbehaving and overloaded relays. Finally, inspired by Internet routing, the forwarding nodes do not need to maintain any state about ongoing flows and so reduce the complexity of the router. In this paper, we describe the design of our dynamic Multipath Onion RoutEr (MORE) for peer-to-peer overlay networks, and evaluate its performance. Furthermore, we integrate address virtualization to abstract from Internet addresses and provide transparent support for IP applications. Thus, no application-level gateways, proxies or modifications of applications are required to sanitize protocols from network level information. Acting as an IP-datagram service, our scheme provides a substrate for anonymous communication to a wide range of applications using TCP and UDP

[Go to top]

CFR: a peer-to-peer collaborative file repository system (PDF)
by Meng-Ru Lin, Ssu-Hsuan Lu, Tsung-Hsuan Ho, Peter Lin, and Yeh-Ching Chung.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Due to the high availability of the Internet, many large cross-organization collaboration projects, such as SourceForge, grid systems etc., have emerged. One of the fundamental requirements of these collaboration efforts is a storage system to store and exchange data. This storage system must be highly scalable and can efficiently aggregate the storage resources contributed by the participating organizations to deliver good performance for users. In this paper, we propose a storage system, Collaborative File Repository (CFR), for large scale collaboration projects. CFR uses peer-to-peer techniques to achieve scalability, efficiency, and ease of management. In CFR, storage nodes contributed by the participating organizations are partitioned according to geographical regions. Files stored in CFR are automatically replicated to all regions. Furthermore, popular files are duplicated to other storage nodes of the same region. By doing so, data transfers between users and storage nodes are confined within their regions and transfer efficiency is enhanced. Experiments show that our replication can achieve high efficiency with a small number of duplicates

[Go to top]

Salsa: A Structured Approach to Large-Scale Anonymity (PDF)
by Arjun Nambiar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Highly distributed anonymous communications systems have the promise to reduce the effectiveness of certain attacks and improve scalability over more centralized approaches. Existing approaches, however, face security and scalability issues. Requiring nodes to have full knowledge of the other nodes in the system, as in Tor and Tarzan, limits scalability and can lead to intersection attacks in peer-to-peer configurations. MorphMix avoids this requirement for complete system knowledge, but users must rely on untrusted peers to select the path. This can lead to the attacker controlling the entire path more often than is acceptable. To overcome these problems, we propose Salsa, a structured approach to organizing highly distributed anonymous communications systems for scalability and security. Salsa is designed to select nodes to be used in anonymous circuits randomly from the full set of nodes, even though each node has knowledge of only a subset of the network. It uses a distributed hash table based on hashes of the nodes' IP addresses to organize the system. With a virtual tree structure, limited knowledge of other nodes is enough to route node lookups throughout the system. We use redundancy and bounds checking when performing lookups to prevent malicious nodes from returning false information without detection. We show that our scheme prevents attackers from biasing path selection, while incurring moderate overheads, as long as the fraction of malicious nodes is less than 20%. Additionally, the system prevents attackers from obtaining a snapshot of the entire system until the number of attackers grows too large (e.g. 15% for 10,000 peers and 256 groups). The number of groups can be used as a tunable parameter in the system, depending on the number of peers, that can be used to balance performance and security
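
The "redundancy and bounds checking" defense can be pictured as majority voting over lookups issued along several disjoint paths: a few lying nodes then cannot flip the result without detection. A toy sketch of the voting step:

    from collections import Counter

    def redundant_lookup(answers: list[str]) -> str | None:
        # answers: the value returned for the same key along each of
        # several disjoint lookup paths; accept only a strict majority.
        value, votes = Counter(answers).most_common(1)[0]
        return value if votes > len(answers) // 2 else None

    # Two honest paths outvote one malicious path:
    print(redundant_lookup(["10.0.0.5", "10.0.0.5", "6.6.6.6"]))  # 10.0.0.5
    # Without a majority the lookup is rejected rather than trusted:
    print(redundant_lookup(["10.0.0.5", "6.6.6.6"]))              # None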

[Go to top]

2Fast: Collaborative Downloads in P2P Networks (PDF)
by Pawel Garbacki, Alexandru Iosup, Dick H. J. Epema, and Maarten van Steen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P systems that rely on the voluntary contribution of bandwidth by the individual peers may suffer from free riding. To address this problem, mechanisms enforcing fairness in bandwidth sharing have been designed, usually by limiting the download bandwidth to the available upload bandwidth. As in real environments the latter is much smaller than the former, these mechanisms severely affect the download performance of most peers. In this paper we propose a system called 2Fast, which solves this problem while preserving the fairness of bandwidth sharing. In 2Fast, we form groups of peers that collaborate in downloading a file on behalf of a single group member, which can thus use its full download bandwidth. A peer in our system can use its currently idle bandwidth to help other peers in their ongoing downloads, and get in return help during its own downloads. We assess the performance of 2Fast analytically and experimentally, the latter in both real and simulated environments. We find that in realistic bandwidth limit settings, 2Fast improves the download speed by up to a factor of 3.5 in comparison to state-of-the-art P2P download protocols

[Go to top]

Improving Sender Anonymity in a Structured Overlay with Imprecise Routing (PDF)
by Giuseppe Ciaccio.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In the framework of peer-to-peer distributed systems, the problem of anonymity in structured overlay networks remains a quite elusive one. It is especially unclear how to evaluate and improve sender anonymity, that is, untraceability of the peers who issue messages to other participants in the overlay. In a structured overlay organized as a chordal ring, we have found that a technique originally developed for recipient anonymity also improves sender anonymity. The technique is based on the use of imprecise entries in the routing tables of each participating peer. Simulations show that the sender anonymity, as measured in terms of average size of anonymity set, decreases slightly if the peers use imprecise routing; yet, the anonymity takes a better distribution, with good anonymity levels becoming more likely at the expense of very high and very low levels. A better quality of anonymity service is thus provided to participants

[Go to top]

Havelaar: A Robust and Efficient Reputation System for Active Peer-to-Peer Systems (PDF)
by Dominik Grolimund, Luzius Meisser, Stefan Schmid, and Roger Wattenhofer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (p2p) systems have the potential to harness huge amounts of resources. Unfortunately, however, it has been shown that most of today's p2p networks suffer from a large fraction of free-riders, which mostly consume resources without contributing much to the system themselves. This results in an overall performance degradation. One particularly interesting resource is bandwidth. Here, a service differentiation approach seems appropriate, where peers contributing higher upload bandwidth are rewarded with higher download bandwidth in return. Keeping track of the contribution of each peer in an open, decentralized environment, however, is not trivial; many systems which have been proposed are susceptible to false reports. Besides being prone to attacks, some solutions have a large communication and computation overhead, which can even be linear in the number of transactions, an unacceptable burden in practical and active systems. In this paper, we propose a reputation system which overcomes this scaling problem. Our analytical and simulation results are promising, indicating that the mechanism is accurate and efficient, especially when applied to systems where there are lots of transactions (e.g., due to erasure coding)

[Go to top]

Breaking the Collusion Detection Mechanism of MorphMix (PDF)
by Parisa Tabriz and Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

MorphMix is a peer-to-peer circuit-based mix network designed to provide low-latency anonymous communication. MorphMix nodes incrementally construct anonymous communication tunnels based on recommendations from other nodes in the system; this P2P approach allows it to scale to millions of users. However, by allowing unknown peers to aid in tunnel construction, MorphMix is vulnerable to colluding attackers that only offer other attacking nodes in their recommendations. To avoid building corrupt tunnels, MorphMix employs a collusion detection mechanism to identify this type of misbehavior. In this paper, we challenge the assumptions of the collusion detection mechanism and demonstrate that colluding adversaries can compromise a significant fraction of all anonymous tunnels, and in some cases, a majority of all tunnels built. Our results suggest that mechanisms based solely on a node's local knowledge of the network are not sufficient to solve the difficult problem of detecting colluding adversarial behavior in a P2P system and that more sophisticated schemes may be needed

[Go to top]

Defending the Sybil Attack in P2P Networks: Taxonomy, Challenges, and a Proposal for Self-Registration (PDF)
by Jochen Dinger and Hannes Hartenstein.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The robustness of Peer-to-Peer (P2P) networks, in particular of DHT-based overlay networks, suffers significantly when a Sybil attack is performed. We tackle the issue of Sybil attacks from two sides. First, we clarify, analyze, and classify the P2P identifier assignment process. By clearly separating network participants from network nodes, two challenges of P2P networks under a Sybil attack become obvious: i) stability over time, and ii) identity differentiation. Second, as a starting point for a quantitative analysis of time-stability of P2P networks under Sybil attacks and under some assumptions with respect to identity differentiation, we propose an identity registration procedure called self-registration that makes use of the inherent distribution mechanisms of a P2P network
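
The self-registration idea can be sketched roughly as follows; the hash choices, replica count, and per-address identity limit are assumptions for illustration, not the paper's exact procedure:

    import hashlib

    def h(s: str) -> int:
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)

    def node_id(ip: str, port: int) -> int:
        # the identifier is bound to the network address, not freely chosen
        return h(f"{ip}:{port}")

    def registration_keys(ip: str, replicas: int = 4):
        # the registration record for an address lives at several deterministic
        # DHT keys, so no single registrar can lie about it unchallenged
        return [h(f"register:{ip}:{i}") for i in range(replicas)]

    def may_register(ids_already_registered: int, limit: int = 8) -> bool:
        # identity differentiation: cap the number of node IDs per address
        return ids_already_registered < limit

Binding identifiers to addresses and replicating the registration record uses only the DHT's own distribution mechanisms, which is the point of the proposal: no central trusted party is needed to make Sybil identities expensive to accumulate.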

[Go to top]

Taxonomy of trust: Categorizing P2P reputation systems (PDF)
by Sergio Marti and Hector Garcia-Molina.
In Management in Peer-to-Peer Systems 50(4), March 2006, pages 472-484. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The field of peer-to-peer reputation systems has exploded in the last few years. Our goal is to organize existing ideas and work to facilitate system design. We present a taxonomy of reputation system components and their properties, and discuss how user behavior and technical constraints can conflict. In our discussion, we describe research that exemplifies compromises made to deliver a usable, implementable system

[Go to top]

On Object Maintenance in Peer-to-Peer Systems (PDF)
by Kiran Tati and Geoffrey M. Voelker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we revisit object maintenance in peer-to-peer systems, focusing on how temporary and permanent churn impact the overheads associated with object maintenance. We have a number of goals: to highlight how different environments exhibit different degrees of temporary and permanent churn; to provide further insight into how churn in different environments affects the tuning of object maintenance strategies; and to examine how object maintenance and churn interact with other constraints such as storage capacity. When possible, we highlight behavior independent of particular object maintenance strategies. When an issue depends on a particular strategy, though, we explore it in the context of a strategy in essence similar to TotalRecall, which uses erasure coding, lazy repair of data blocks, and random indirect placement (we also assume that repairs incorporate remaining blocks rather than regenerating redundancy from scratch)
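
A minimal sketch of the lazy-repair policy discussed above, assuming an (n, k) erasure code and an illustrative repair threshold (both values are examples, not the paper's tuning):

    def needs_repair(reachable: int, k: int, threshold: int) -> bool:
        # lazy repair: ride out temporary churn, act only near the safety margin
        assert threshold >= k
        return reachable <= threshold

    def repair_plan(reachable: set, n: int) -> list:
        # regenerate only the missing blocks, reusing the blocks that remain
        return [i for i in range(n) if i not in reachable]

    # e.g. n = 16 coded blocks, any k = 8 reconstruct; repair at 10 survivors
    print(needs_repair(10, k=8, threshold=10))                    # True
    print(repair_plan({0, 2, 3, 5, 7, 8, 9, 11, 12, 13}, 16))     # [1, 4, 6, ...]

The gap between the threshold and k is what absorbs temporary churn: blocks on briefly offline hosts come back on their own, so eager repair would waste bandwidth recreating them.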

[Go to top]

An Experimental Study of the Skype Peer-to-Peer VoIP System (PDF)
by Saikat Guha, Neil Daswani, and Ravi Jain.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Despite its popularity, relatively little is known about the traffic characteristics of the Skype VoIP system and how they differ from other P2P systems. We describe an experimental study of Skype VoIP traffic conducted over a one month period, where over 30 million datapoints were collected regarding the population of online clients, the number of supernodes, and their traffic characteristics. The results indicate that although the structure of the Skype system appears to be similar to other P2P systems, particularly KaZaA, there are several significant differences in traffic. The number of active clients shows diurnal and work-week behavior, correlating with normal working hours regardless of geography. The population of supernodes in the system tends to be relatively stable; thus node churn, a significant concern in other systems, seems less problematic in Skype. The typical bandwidth load on a supernode is relatively low, even if the supernode is relaying VoIP traffic. The paper aims to aid further understanding of a significant, successful P2P VoIP system, as well as provide experimental data that may be useful for design and modeling of such systems. These results also imply that the nature of a VoIP P2P system like Skype differs fundamentally from earlier P2P systems that are oriented toward file-sharing, and music and video download applications, and deserves more attention from the research community

[Go to top]

MyriadStore: A Peer-to-Peer Backup System (PDF)
by Birgir Stefansson, Antonios Thodis, Ali Ghodsi, and Seif Haridi.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional backup methods are error prone, cumbersome and expensive. Distributed backup applications have emerged as promising tools able to avoid these disadvantages, by exploiting unused disk space of remote computers. In this paper we propose MyriadStore, a distributed peer-to-peer backup system. MyriadStore makes use of a trading scheme that ensures that a user has as much available storage space in the system as he/she contributes to it. A mechanism for making challenges between the system's nodes ensures that this restriction is fulfilled. Furthermore, MyriadStore minimizes bandwidth requirements and migration costs by treating separately the storage of the system's meta-data and the storage of the backed up data. This approach also offers great flexibility on the placement of the backed up data, a property that facilitates the deployment of the trading scheme

[Go to top]

iDIBS: An Improved Distributed Backup System (PDF)
by Faruck Morcos, Thidapat Chantem, Philip Little, Tiago Gasiba, and Douglas Thain.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

iDIBS is a peer-to-peer backup system which optimizes the Distributed Internet Backup System (DIBS). iDIBS offers increased reliability by enhancing the robustness of the existing packet transmission mechanism. Reed-Solomon erasure codes are replaced with Luby Transform codes to improve computation speed and scalability for large files. Lists of peers are automatically stored on nodes to reduce recovery time. To realize these optimizations, an acceptable amount of data overhead and an increase in network utilization are imposed on the iDIBS system. Through a variety of experiments, we demonstrate that iDIBS significantly outperforms DIBS in the areas of data computational complexity, backup reliability, and overall performance
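
To make the Luby Transform substitution concrete, here is a toy LT encoder sketch; the degree list is a crude stand-in for the robust soliton distribution real LT codes use, and all names are illustrative:

    import random

    def lt_encode_symbol(blocks, rng):
        # each output symbol is the XOR of a random set of source blocks;
        # the toy degree list below approximates "mostly low degrees"
        degree = rng.choice([1, 2, 2, 3, 4])
        chosen = rng.sample(range(len(blocks)), degree)
        symbol = bytearray(len(blocks[0]))
        for i in chosen:
            for j, byte in enumerate(blocks[i]):
                symbol[j] ^= byte
        return chosen, bytes(symbol)   # the decoder needs the index set too

    blocks = [bytes([i]) * 64 for i in range(8)]          # 8 source blocks
    rng = random.Random(42)
    stream = [lt_encode_symbol(blocks, rng) for _ in range(12)]
    print(stream[0][0])   # indices XORed into the first output symbol

Unlike Reed-Solomon, encoding is just XORs over random block subsets, which is where the computation-speed advantage for large files comes from.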

[Go to top]

Experiences in building and operating ePOST, a reliable peer-to-peer application (PDF)
by Alan Mislove, Ansley Post, Andreas Haeberlen, and Peter Druschel.
In SIGOPS Oper. Syst. Rev 40(4), 2006, pages 147-159. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (p2p) technology can potentially be used to build highly reliable applications without a single point of failure. However, most of the existing applications, such as file sharing or web caching, have only moderate reliability demands. Without a challenging proving ground, it remains unclear whether the full potential of p2p systems can be realized. To provide such a proving ground, we have designed, deployed and operated a p2p-based email system. We chose email because users depend on it for their daily work and therefore place high demands on the availability and reliability of the service, as well as the durability, integrity, authenticity and privacy of their email. Our system, ePOST, has been actively used by a small group of participants for over two years. In this paper, we report the problems and pitfalls we encountered in this process. We were able to address some of them by applying known principles of system design, while others turned out to be novel and fundamental, requiring us to devise new solutions. Our findings can be used to guide the design of future reliable p2p systems and provide interesting new directions for future research

[Go to top]

Distributed Pattern Matching: A Key to Flexible and Efficient P2P Search
by R. Ahmed and R. Boutaba.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Flexibility and efficiency are the prime requirements for any P2P search mechanism. Existing P2P systems do not seem to provide satisfactory solutions for achieving these two conflicting goals. Unstructured search protocols (as adopted in Gnutella and FastTrack) provide search flexibility but exhibit poor performance characteristics. Structured search techniques (mostly distributed hash table (DHT)-based), on the other hand, can efficiently route queries to target peers but support exact-match queries only. In this paper we present a novel P2P system, called distributed pattern matching system (DPMS), for enabling flexible and efficient search. Distributed pattern matching can be used to solve problems like wildcard searching (for file-sharing P2P systems), partial service description matching (for service discovery systems), etc. DPMS uses a hierarchy of indexing peers for disseminating advertised patterns. Patterns are aggregated and replicated at each level along the hierarchy. Replication improves availability and resilience to peer failure, and aggregation reduces storage overhead. An advertised pattern can be discovered using any subset of its 1-bits; this allows inexact matching and queries in conjunctive normal form. Search complexity (i.e., the number of peers to be probed) in DPMS is O(log N + ζ log N/log N), where N is the total number of peers and ζ is proportional to the number of matches required in a search result. The impact of the churn problem is less severe in DPMS than in DHT-based systems. Moreover, DPMS provides a guarantee on search completeness for moderately stable networks. We demonstrate the effectiveness of DPMS using mathematical analysis and simulation results
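
The subset-of-1-bits matching at the heart of DPMS is easy to model with plain integer bit patterns; this toy omits the paper's hierarchy and replication and only shows matching and OR-aggregation:

    def matches(query_bits: int, advertised_bits: int) -> bool:
        # every 1-bit of the query must be present in the advertisement
        return query_bits & advertised_bits == query_bits

    def aggregate(patterns):
        out = 0
        for p in patterns:
            out |= p        # OR-aggregation never loses a 1-bit
        return out

    adverts = [0b101100, 0b010011]
    index = aggregate(adverts)
    print(matches(0b100100, index))        # True: some leaf below may match
    print(matches(0b100100, adverts[1]))   # False at this particular leaf

Aggregation by OR can only introduce false positives, never false negatives, which is why an indexing peer can safely forward a query down to exactly those children whose aggregates match.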

[Go to top]

On the fundamental communication abstraction supplied by P2P overlay networks
by Curt Cramer and Thomas Fuhrmann.
In Communication Networks, 2006. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The disruptive advent of peer-to-peer (P2P) file sharing in 2000 attracted significant interest. P2P networks have matured from their initial form, unstructured overlays, to structured overlays like distributed hash tables (DHTs), which are considered state-of-the-art. There are huge efforts to improve their performance. Various P2P applications like distributed storage and application-layer multicast were proposed. However, little effort was spent to understand the communication abstraction P2P overlays supply. Only when it is understood will the reach of P2P ideas significantly broaden. Furthermore, this clarification reveals novel approaches and highlights future directions. In this paper, we reconsider well-known P2P overlays, linking them to insights from distributed systems research. We conclude that the main communication abstraction is that of a virtual address space or application-specific naming. On this basis, P2P systems build a functional layer implementing, for example, lookup, indirection, and distributed processing. Our insights led us to identify interesting and unexplored points in the design space

[Go to top]

Churn Resistant de Bruijn Networks for Wireless on Demand Systems (PDF)
by Manuel Thiele, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless on demand systems typically need authentication, authorization and accounting (AAA) services. In a peer-to-peer (P2P) environment these AAA services need to be provided in a fully decentralized manner. This excludes many cryptographic approaches since they need and rely on a central trusted instance. One way to accomplish AAA in a P2P manner is de Bruijn networks, since there data can be routed over multiple non-overlapping paths, thereby hampering malicious nodes from manipulating that data. Originally, de Bruijn networks required a rather fixed network structure which made them unsuitable for wireless networks. In this paper we generalize de Bruijn networks to an arbitrary number of nodes while keeping all their desired properties. This is achieved by decoupling the link degree and character set of the native de Bruijn graph. Furthermore we describe how this makes the resulting network resistant against node churn
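
Routing in a de Bruijn graph is a per-hop symbol shift, which is what makes multiple non-overlapping paths easy to construct. A sketch over a fixed toy alphabet (the paper's contribution, decoupling link degree from the character set to survive churn, is not modelled here):

    def next_hop(current: str, target: str) -> str:
        # find the longest suffix of `current` already matching a prefix of `target`
        k = 0
        for m in range(len(target), 0, -1):
            if current.endswith(target[:m]):
                k = m
                break
        # de Bruijn step: drop the head symbol, append the next needed one
        return current[1:] + target[k] if k < len(target) else target

    node, dest = "0120", "2101"            # IDs over the toy alphabet {0,1,2}
    while node != dest:
        node = next_hop(node, dest)
        print(node)                        # 1202, 2021, 0210, 2101

Each hop follows a valid de Bruijn edge (the next node's prefix equals the current node's suffix), so a route of at most n symbol shifts always exists.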

[Go to top]

Tracking anonymous peer-to-peer VoIP calls on the internet (PDF)
by Xinyuan Wang, Shiping Chen, and Sushil Jajodia.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer VoIP calls are becoming increasingly popular due to their advantages in cost and convenience. When these calls are encrypted from end to end and anonymized by a low-latency anonymizing network, they are considered by many people to be both secure and anonymous. In this paper, we present a watermark technique that could be used for effectively identifying and correlating encrypted, peer-to-peer VoIP calls even if they are anonymized by low-latency anonymizing networks. This result is in contrast to many people's perception. The key idea is to embed a unique watermark into the encrypted VoIP flow by slightly adjusting the timing of selected packets. Our analysis shows that it takes only a few milliseconds of timing adjustment to make normal VoIP flows highly unique, and the embedded watermark can be preserved across the low-latency anonymizing network if appropriate redundancy is applied. Our analytical results are backed up by real-time experiments performed on a leading peer-to-peer VoIP client and on a commercially deployed anonymizing network. Our results demonstrate that (1) tracking anonymous peer-to-peer VoIP calls on the Internet is feasible and (2) low-latency anonymizing networks are susceptible to timing attacks
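
The timing-watermark idea can be imitated in a few lines: split the flow's inter-packet delays (IPDs) into two random groups and skew their means apart. Group choice, delay size, and redundancy here are simplified assumptions, not the paper's parameters:

    import random

    def embed_bit(ipds, group_a, group_b, bit, d=0.0015):
        # skew the two groups' mean inter-packet delays apart by about 2*d seconds
        out = list(ipds)
        hi, lo = (group_b, group_a) if bit else (group_a, group_b)
        for i in hi:
            out[i] += d
        for i in lo:
            out[i] = max(0.0, out[i] - d)
        return out

    def detect_bit(ipds, group_a, group_b):
        mean = lambda g: sum(ipds[i] for i in g) / len(g)
        return 1 if mean(group_b) > mean(group_a) else 0

    rng = random.Random(7)
    ipds = [0.020 + rng.gauss(0, 0.002) for _ in range(200)]   # ~20 ms VoIP flow
    marked = rng.sample(range(200), 60)
    a, b = marked[:30], marked[30:]
    print(detect_bit(embed_bit(ipds, a, b, 1), a, b))          # usually 1

Because the detector compares group averages rather than absolute times, the mark survives moderate jitter, which is the property that lets it pass through a low-latency anonymizing network.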

[Go to top]

Influences on cooperation in BitTorrent communities (PDF)
by Nazareno Andrade, Miranda Mowbray, Aliandro Lima, Gustavo Wagner, and Matei Ripeanu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We collect BitTorrent usage data across multiple file-sharing communities and analyze the factors that affect users' cooperative behavior. We find evidence that the design of the BitTorrent protocol results in increased cooperative behavior over other P2P protocols used to share similar content (e.g. Gnutella). We also investigate two additional community-specific mechanisms that foster even more cooperation

[Go to top]

Determining the Peer Resource Contributions in a P2P Contract (PDF)
by Behrooz Khorshadi, Xin Liu, and Dipak Ghosal.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we study a scheme called P2P contract which explicitly specifies the resource contributions that are required from the peers. In particular, we consider a P2P file sharing system in which when a peer downloads the file it is required to serve the file to up to N other peers within a maximum period of time T. We study the behavior of this contribution scheme in both centralized and decentralized P2P networks. In a centralized architecture, new requests are forwarded to a central server which hands out the contract along with a list of peers from where the file can be downloaded. We show that a simple fixed contract (i.e., fixed values of N and T) is sufficient to create the required server capacity which adapts to the load. Furthermore, we show that T, the time part of the contract, is a more important control parameter than N. In the case of a decentralized P2P architecture, each new request is broadcast to a certain neighborhood determined by the time-to-live (TTL) parameter. Each server receiving the request independently doles out a contract and the requesting peer chooses the one which is least constraining. If there are no servers in the neighborhood, the request fails. To achieve a good request success ratio, we propose an adaptive scheme to set the contracts without requiring global information. Through both analysis and simulation, we show that the proposed scheme adapts to the load and achieves low request failure rate with high server efficiency

[Go to top]

Peer-to-Peer Communication Across Network Address Translators (PDF)
by Pyda Srisuresh, Bryan Ford, and Dan Kegel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. Several NAT traversal techniques are known, but their documentation is slim, and data about their robustness or relative merits is slimmer. This paper documents and analyzes one of the simplest but most robust and practical NAT traversal techniques, commonly known as hole punching. Hole punching is moderately well-understood for UDP communication, but we show how it can be reliably used to set up peer-to-peer TCP streams as well. After gathering data on the reliability of this technique on a wide variety of deployed NATs, we find that about 82% of the NATs tested support hole punching for UDP, and about 64% support hole punching for TCP streams. As NAT vendors become increasingly conscious of the needs of important P2P applications such as Voice over IP and online gaming protocols, support for hole punching is likely to increase in the future
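
A bare-bones UDP hole-punching sketch, assuming both peers have already learned each other's public endpoint from a rendezvous server (not shown) and call this simultaneously; addresses are placeholders:

    import socket, time

    def punch(local_port, peer_addr, attempts=10):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", local_port))
        s.settimeout(1.0)
        for _ in range(attempts):
            s.sendto(b"punch", peer_addr)   # opens/refreshes our own NAT mapping
            try:
                data, addr = s.recvfrom(1024)
                if addr == peer_addr:
                    return s                # a datagram got through both NATs
            except socket.timeout:
                time.sleep(0.2)
        raise ConnectionError("hole punching failed")

    # sock = punch(40000, ("203.0.113.7", 40001))   # peer runs the mirror call

The outbound datagram is what creates the mapping in each peer's own NAT; once both mappings exist, the other side's packets are accepted as replies.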

[Go to top]

P2P Contracts: a Framework for Resource and Service Exchange (PDF)
by Dipak Ghosal, Benjamin K. Poon, and Keith Kong.
In Future Generation Computer Systems 21, March 2005, pages 333-347. (BibTeX entry) (Download bibtex record)
(direct link)

A crucial aspect of Peer-to-Peer (P2P) systems is that of providing incentives for users to contribute their resources to the system. Without such incentives, empirical data show that a majority of the participants act as free riders. As a result, a substantial amount of resource goes untapped, and, frequently, P2P systems devolve into client-server systems with attendant issues of performance under high load. We propose to address the free rider problem by introducing the notion of a P2P contract. In it, peers are made aware of the benefits they receive from the system as a function of their contributions. In this paper, we first describe a utility-based framework to determine the components of the contract and formulate the associated resource allocation problem. We consider the resource allocation problem for a flash crowd scenario and show how the contract mechanism implemented using a centralized server can be used to quickly create pseudoservers that can serve out the requests. We then study a decentralized implementation of the P2P contract scheme in which each node implements the contract based on local demand. We show that in such a system, other than contributing storage and bandwidth to serve out requests, it is also important that peer nodes function as application-level routers to connect pools of available pseudoservers. We study the performance of the distributed implementation with respect to the various parameters including the terms of the contract and the triggers to create pseudoservers and routers

[Go to top]

Towards Autonomic Networking using Overlay Routing Techniques (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With an ever-growing number of computers being embedded into our surroundings, the era of ubiquitous computing is approaching fast. However, as the number of networked devices increases, so does system complexity. Contrary to the goal of achieving an invisible computer, the required amount of management and human intervention increases more and more, both slowing down the growth rate and limiting the achievable size of ubiquitous systems. In this paper we present a novel routing approach that is capable of handling complex networks without any administrative intervention. Based on a combination of standard overlay routing techniques and source routes, this approach is capable of efficiently bootstrapping a routable network. Unlike other approaches that try to combine peer-to-peer ideas with ad-hoc networks, sensor networks, or ubiquitous systems, our approach is not based on a routing scheme. This makes the resulting system flexible and powerful with respect to application support as well as efficient with regard to routing overhead and system complexity

[Go to top]

A Taxonomy of Rational Attacks (PDF)
by Seth James Nielson and Scott A. Crosby.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

For peer-to-peer services to be effective, participating nodes must cooperate, but in most scenarios a node represents a self-interested party and cooperation can neither be expected nor enforced. A reasonable assumption is that a large fraction of p2p nodes are rational and will attempt to maximize their consumption of system resources while minimizing the use of their own. If such behavior violates system policy then it constitutes an attack. In this paper we identify and create a taxonomy for rational attacks and then identify corresponding solutions if they exist. The most effective solutions directly incentivize cooperative behavior, but when this is not feasible the common alternative is to incentivize evidence of cooperation instead

[Go to top]

Self-Stabilizing Ring Networks on Connected Graphs (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large networks require scalable routing. Traditionally, protocol overhead is reduced by introducing a hierarchy. This requires aggregation of nearby nodes under a common address prefix. In fixed networks, this is achieved administratively, whereas in wireless ad-hoc networks, dynamic assignments of nodes to aggregation units are required. As a result of the nodes commonly being assigned a random network address, the majority of proposed ad-hoc routing protocols discovers routes between end nodes by flooding, thus limiting the network size. Peer-to-peer (P2P) overlay networks offer scalable routing solutions by employing virtualized address spaces, yet assume an underlying routing protocol for end-to-end connectivity. We investigate a cross-layer approach to P2P routing, where the virtual address space is implemented with a network-layer routing protocol by itself. The Iterative Successor Pointer Rewiring Protocol (ISPRP) efficiently initializes a ring-structured network among nodes having but link-layer connectivity. It is fully self-organizing and issues only a small per-node number of messages by keeping interactions between nodes as local as possible. The main contribution of this paper is a proof that ISPRP is self-stabilizing, that is, starting from an arbitrary initial state, the protocol lets the network converge into a correct state within a bounded amount of time

[Go to top]

A Self-Organizing Routing Scheme for Random Networks (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most routing protocols employ address aggregation to achieve scalability with respect to routing table size. But often, as networks grow in size and complexity, address aggregation fails. Other networks, e.g. sensor-actuator networks or ad-hoc networks, that are characterized by organic growth might not at all follow the classical hierarchical structures that are required for aggregation. In this paper, we present a fully self-organizing routing scheme that is able to efficiently route messages in random networks with randomly assigned node addresses. The protocol combines peer-to-peer techniques with source routing and can be implemented to work with very limited resource demands. With the help of simulations we show that it nevertheless quickly converges into a globally consistent state and achieves a routing stretch of only 1.2 – 1.3 in a network with more than 10^5 randomly assigned nodes

[Go to top]

A Random Walk Based Anonymous Peer-to-Peer Protocol Design
by Jinsong Han, Yunhao Liu, Li Lu, Lei Hu, and Abhishek Patil.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity has been one of the most challenging issues in ad-hoc environments such as P2P systems. In this paper, we propose an anonymous protocol called Random Walk based Anonymous Protocol (RWAP) for decentralized P2P systems. We evaluate RWAP by comprehensive trace-driven simulations. Results show that RWAP significantly reduces traffic cost and encryption overhead compared with existing approaches

[Go to top]

Measuring Large Overlay Networks–The Overnet Example (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer overlay networks have grown significantly in size and sophistication over the last years. Meanwhile, distributed hash tables (DHT) provide efficient means to create global scale overlay networks on top of which various applications can be built. Although filesharing still is the most prominent example, other applications are well conceivable. In order to rationally design such applications, it is important to know (and understand) the properties of the overlay networks as seen from the respective application. This paper reports the results from a two week measurement of the entire Overnet network, the currently most widely deployed DHT-based overlay. We describe both the design choices that made the measurement feasible and the results from the measurement itself. Besides the basic determination of network size, node availability and node distribution, we found unexpected results for the overlay latency distribution

[Go to top]

Location Awareness in Unstructured Peer-to-Peer Systems
by Yunhao Liu, Li Xiao, Xiaomei Liu, Lionel M. Ni, and Xiaodong Zhang.
In IEEE Trans. Parallel Distrib. Syst 16(2), 2005, pages 163-174. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-Peer (P2P) computing has emerged as a popular model aiming at further utilizing Internet information and resources. However, the mechanism of peers randomly choosing logical neighbors without any knowledge about underlying physical topology can cause a serious topology mismatch between the P2P overlay network and the physical underlying network. The topology mismatch problem places great stress on the Internet infrastructure. It greatly limits the performance gain from various search or routing techniques. Meanwhile, due to the inefficient overlay topology, the flooding-based search mechanisms cause a large volume of unnecessary traffic. Aiming at alleviating the mismatching problem and reducing the unnecessary traffic, we propose a location-aware topology matching (LTM) technique. LTM builds an efficient overlay by disconnecting slow connections and choosing physically closer nodes as logical neighbors while still retaining the search scope and reducing response time for queries. LTM is scalable and completely distributed in the sense that it does not require any global knowledge of the whole overlay network. The effectiveness of LTM is demonstrated through simulation studies

[Go to top]

On lifetime-based node failure and stochastic resilience of decentralized peer-to-peer networks (PDF)
by Derek Leonard, Vivek Rai, and Dmitri Loguinov.
In SIGMETRICS Perform. Eval. Rev 33(1), 2005, pages 26-37. (BibTeX entry) (Download bibtex record)
(direct link) (website)

To understand how high rates of churn and random departure decisions of end-users affect connectivity of P2P networks, this paper investigates resilience of random graphs to lifetime-based node failure and derives the expected delay before a user is forcefully isolated from the graph and the probability that this occurs within his/her lifetime. Our results indicate that systems with heavy-tailed lifetime distributions are more resilient than those with light-tailed (e.g., exponential) distributions and that for a given average degree, k-regular graphs exhibit the highest resilience. As a practical illustration of our results, each user in a system with n = 100 billion peers, 30-minute average lifetime, and 1-minute node-replacement delay can stay connected to the graph with probability 1 - 1/n using only 9 neighbors. This is in contrast to 37 neighbors required under previous modeling efforts. We finish the paper by showing that many P2P networks are almost surely (i.e., with probability 1-o(1)) connected if they have no isolated nodes and derive a simple model for the probability that a P2P system partitions under churn
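
The paper's headline numbers (30-minute mean lifetime, 1-minute replacement delay, 9 neighbors) can be explored with a small Monte Carlo simulation; the event model below is a simplification for illustration, not the paper's derivation:

    import random

    def dead_intervals(session, r, draw, rng):
        # one neighbor slot: alternate a random lifetime with a repair delay r
        t, out = 0.0, []
        while t < session:
            t += draw(rng)
            if t >= session:
                break
            out.append((t, min(t + r, session)))
            t += r
        return out

    def isolation_probability(trials=2000, session=30.0, k=9, r=1.0,
                              draw=lambda rng: rng.expovariate(1 / 30.0), seed=1):
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            events = []
            for _ in range(k):
                for a, b in dead_intervals(session, r, draw, rng):
                    events += [(a, 1), (b, -1)]
            events.sort()
            down = 0
            for _, delta in events:
                down += delta
                if down == k:      # all k neighbor slots dead simultaneously
                    hits += 1
                    break
        return hits / trials

    print(isolation_probability())   # exponential lifetimes, mean 30 "minutes"
    print(isolation_probability(     # heavy-tailed lifetimes, same mean
        draw=lambda rng: (rng.paretovariate(2.0) - 1) * 30.0))

Swapping the `draw` function between exponential and Pareto lifetimes gives a feel for the paper's claim that heavy-tailed systems are the more resilient ones.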

[Go to top]

ISPRP: A Message-Efficient Protocol for Initializing Structured P2P Networks (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most research activities in the field of peer-to-peer (P2P) computing are concerned with routing in virtualized overlay networks. These overlays generally assume node connectivity to be provided by an underlying network-layer routing protocol. This duplication of functionality can give rise to severe inefficiencies. In contrast, we suggest a cross-layer approach where the P2P overlay network also provides the required network-layer routing functionality by itself. Especially in sensor networks, where special attention has to be paid to the nodes' limited capabilities, this can greatly help in reducing the message overhead. In this paper, we present a key building block for such a protocol, the iterative successor pointer rewiring protocol (ISPRP), which efficiently initializes a P2P routing network among a freshly deployed set of nodes having but link-layer connectivity. ISPRP works in a fully self-organizing way and issues only a small per-node number of messages by keeping interactions between nodes as local as possible
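
The rewiring rule itself fits in a few lines. This toy gossips successor pointers on a ring and typically converges; the actual protocol keeps interactions local and, per the companion paper above, provably self-stabilizes, neither of which this sketch models:

    import random

    RING = 2 ** 16

    def clockwise(a, b):
        return (b - a) % RING or RING          # distance from a clockwise to b

    def better(node, cur, cand):
        # rewire to the candidate if it lies between us and the current successor
        return cand if clockwise(node, cand) < clockwise(node, cur) else cur

    rng = random.Random(3)
    ids = sorted(rng.sample(range(RING), 50))
    succ = {u: rng.choice([v for v in ids if v != u]) for u in ids}

    for u in ids:                               # each node announces itself once
        w = rng.choice([v for v in ids if v != u])
        succ[w] = better(w, succ[w], u)

    for _ in range(100):                        # iterative rewiring rounds
        for u in ids:
            succ[u] = better(u, succ[u], succ[succ[u]])

    true_succ = {u: ids[(i + 1) % len(ids)] for i, u in enumerate(ids)}
    print(sum(succ[u] == true_succ[u] for u in ids), "of", len(ids), "correct")

Each rewiring step can only shrink a node's clockwise pointer distance, which is the intuition behind convergence to the correct ring.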

[Go to top]

The Hybrid Chord Protocol: A Peer-to-peer Lookup Service for Context-Aware Mobile Applications (PDF)
by Stefan Zöls, Rüdiger Schollmeier, Wolfgang Kellerer, and Anthony Tarlano.
In IEEE ICN, Reunion Island, April 2005. LNCS 3421, 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental problem in Peer-to-Peer (P2P) overlay networks is how to efficiently find a node that shares a requested object. The Chord protocol is a distributed lookup protocol addressing this problem using hash keys to identify the nodes in the network and also the shared objects. However, when a node joins or leaves the Chord ring, object references have to be rearranged in order to maintain the hash key mapping rules. This leads to a heavy traffic load, especially when nodes stay in the Chord ring only for a short time. In mobile scenarios storage capacity, transmission data rate and battery power are limited resources, so the heavy traffic load generated by the shifting of object references can lead to severe problems when using Chord in a mobile scenario. In this paper, we present the Hybrid Chord Protocol (HCP). HCP solves the problem of frequent joins and leaves of nodes. As a further improvement of an efficient search, HCP supports the grouping of shared objects in interest groups. Our concept of using information profiles to describe shared objects allows defining special interest groups (context spaces) and a shared object to be available in multiple context spaces

[Go to top]

First and Second Generation of Peer-to-Peer Systems
by Jörg Eberspächer and Rüdiger Schollmeier.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-Peer (P2P) networks appeared roughly around the year 2000 when a broadband Internet infrastructure (even at the network edge) became widely available. Unlike traditional networks, Peer-to-Peer networks do not rely on a specific infrastructure offering transport services. Instead they form overlay structures focusing on content allocation and distribution based on TCP or HTTP connections. Whereas in a standard Client-Server configuration content is stored and provided only via some central server(s), Peer-to-Peer networks are highly decentralized and locate a desired content at some participating peer and provide the corresponding IP address of that peer to the searching peer. The download of that content is then initiated using a separate connection, often using HTTP. Thus, the high load usually resulting for a central server and its surrounding network is avoided, leading to a more even distribution of load on the underlying physical network. On the other hand, such networks are typically subject to frequent changes because peers join and leave the network without any central control

[Go to top]

A survey of peer-to-peer content distribution technologies (PDF)
by Stephanos Androutsellis-Theotokis and Diomidis Spinellis.
In ACM Computing Surveys 36, December 2004, pages 335-371. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed computer architectures labeled "peer-to-peer" are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in—and affected by—the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes

[Go to top]

Reputation Management Framework and Its Use as Currency in Large-Scale Peer-to-Peer Networks (PDF)
by Rohit Gupta and Arun K. Somani.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a reputation management framework for large-scale peer-to-peer (P2P) networks, wherein all nodes are assumed to behave selfishly. The proposed framework has several advantages. It enables a form of virtual currency, such that the reputation of nodes is a measure of their wealth. The framework is scalable and provides protection against attacks by malicious nodes. The above features are achieved by developing trusted communities of nodes whose members trust each other and cooperate to deal with the problem of nodes' selfishness and possible maliciousness

[Go to top]

Dissecting BitTorrent: Five Months in a Torrent's Lifetime (PDF)
by Mikel Izal, Guillaume Urvoy-Keller, E W Biersack, Pascal Felber, Anwar Al Hamra, and L Garcés-Erice.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer-based replication systems, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month-long period that involved thousands of peers

[Go to top]

Designing Incentive mechanisms for peer-to-peer systems (PDF)
by John Chuang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

From file-sharing to mobile ad-hoc networks, community networking to application layer overlays, the peer-to-peer networking paradigm promises to revolutionize the way we design, build and use the communications network of tomorrow, transform the structure of the communications industry, and challenge our understanding of markets and democracies in a digital age. The fundamental premise of peer-to-peer systems is that individual peers voluntarily contribute resources to the system. We discuss some of the research opportunities and challenges in the design of incentive mechanisms for P2P systems

[Go to top]

An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol (PDF)
by Salman A. Baset and Henning G. Schulzrinne.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic

[Go to top]

Practical Anonymity for the Masses with MorphMix (PDF)
by Marc Rennhard and Bernhard Plattner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

MorphMix is a peer-to-peer circuit-based mix network to provide practical anonymous low-latency Internet access for millions of users. The basic ideas of MorphMix have been published before; this paper focuses on solving open problems and giving an analysis of the resistance to attacks and the performance it offers assuming realistic scenarios with very many users. We demonstrate that MorphMix scales very well and can support as many nodes as there are public IP addresses. In addition, we show that MorphMix is indeed practical because it provides good resistance from long-term profiling and offers acceptable performance despite the heterogeneity of the nodes and the fact that nodes can join or leave the system at any time

[Go to top]

A construction of locality-aware overlay network: mOverlay and its performance (PDF)
by Xin Yan Zhang, Qian Zhang, Zhensheng Zhang, Gang Song, and Wenwu Zhu.
In IEEE Journal on Selected Areas in Communications 22, January 2004, pages 18-28. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There are many research interests in peer-to-peer (P2P) overlay architectures. Most widely used unstructured P2P networks rely on central directory servers or massive message flooding, clearly not scalable. Structured overlay networks based on distributed hash tables (DHT) are expected to eliminate flooding and central servers, but can require many long-haul message deliveries. An important aspect of constructing an efficient overlay network is how to exploit network locality in the underlying network. We propose a novel mechanism, mOverlay, for constructing an overlay network that takes account of the locality of network hosts. The constructed overlay network can significantly decrease the communication cost between end hosts by ensuring that a message reaches its destination with small overhead and very efficient forwarding. To construct the locality-aware overlay network, dynamic landmark technology is introduced. We present an effective locating algorithm for a new host joining the overlay network. We then present a theoretical analysis and simulation results to evaluate the network performance. Our analysis shows that the overhead of our locating algorithm is O(log N), where N is the number of overlay network hosts. Our simulation results show that the average distance between a pair of hosts in the constructed overlay network is only about 11% of that in a traditional, randomly connected overlay network. Network design guidelines are also provided. Many large-scale network applications, such as media streaming, application-level multicasting, and media distribution, can leverage mOverlay to enhance their performance

[Go to top]

An Asymptotically Optimal Scheme for P2P File Sharing (PDF)
by Panayotis Antoniadis, Costas Courcoubetis, and Richard Weber.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The asymptotic analysis of certain public good models for p2p systems suggests that when the aim is to maximize social welfare a fixed contribution scheme in terms of the number of files shared can be asymptotically optimal as the number of participants grows to infinity. Such a simple scheme eliminates free riding, is incentive compatible and obtains a value of social welfare that is within o(n) of that obtained by the second-best policy of the corresponding mechanism design formulation of the problem. We extend our model to account for file popularity, and discuss properties of the resulting equilibria. The fact that a simple optimization problem can be used to closely approximate the solution of the exact model (which is in most cases practically intractable both analytically and computationally), is of great importance for studying several interesting aspects of the system. We consider the evolution of the system to equilibrium in its early life, when both peers and the system planner are still learning about system parameters. We also analyse the case of group formation when peers belong to different classes (such as DSL and dial-up users), and it may be to their advantage to form distinct groups instead of a larger single group, or form such a larger group but avoid disclosing their class. We finally discuss the game that occurs when peers know that a fixed fee will be used, but the distribution of their valuations is unknown to the system designer

[Go to top]

Vulnerabilities and Security Threats in Structured Overlay Networks: A Quantitative Analysis (PDF)
by Mudhakar Srivatsa and Ling Liu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A number of recent applications have been built on distributed hash tables (DHTs) based overlay networks. Almost all DHT-based schemes employ a tight deterministic data placement and ID mapping schemes. This feature on one hand provides assurance on location of data if it exists, within a bounded number of hops, and on the other hand, opens doors for malicious nodes to lodge attacks that can potentially thwart the functionality of the overlay network. This paper studies several serious security threats in DHT-based systems through two targeted attacks at the overlay network's protocol layer. The first attack explores the routing anomalies that can be caused by malicious nodes returning incorrect lookup routes. The second attack targets the ID mapping scheme. We disclose that the malicious nodes can target any specific data item in the system; and corrupt/modify the data item in its favor. For each of these attacks, we provide quantitative analysis to estimate the extent of damage that can be caused by the attack; followed by experimental validation and defenses to guard the overlay networks from such attacks

[Go to top]

Total Recall: System Support for Automated Availability Management (PDF)
by Ranjita Bhagwan, Kiran Tati, Yu-chung Cheng, Stefan Savage, and Geoffrey M. Voelker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Availability is a storage system property that is both highly desired and yet minimally engineered. While many systems provide mechanisms to improve availability – such as redundancy and failure recovery – how to best configure these mechanisms is typically left to the system manager. Unfortunately, few individuals have the skills to properly manage the trade-offs involved, let alone the time to adapt these decisions to changing conditions. Instead, most systems are configured statically and with only a cursory understanding of how the configuration will impact overall performance or availability. While this issue can be problematic even for individual storage arrays, it becomes increasingly important as systems are distributed – and absolutely critical for the wide-area peer-to-peer storage infrastructures being explored. This paper describes the motivation, architecture and implementation for a new peer-to-peer storage system, called TotalRecall, that automates the task of availability management. In particular, the TotalRecall system automatically measures and estimates the availability of its constituent host components, predicts their future availability based on past behavior, calculates the appropriate redundancy mechanisms and repair policies, and delivers user-specified availability while maximizing efficiency
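
The core calculation such a system automates is easy to state: pick enough redundancy to meet an availability target given measured host availability. A pure-replication sketch (TotalRecall also supports erasure coding; the numbers are examples):

    import math

    def replicas_needed(host_availability: float, target: float) -> int:
        # object unavailable only if all n replicas are down: 1-(1-a)^n >= target
        a = host_availability
        return max(1, math.ceil(math.log(1 - target) / math.log(1 - a)))

    print(replicas_needed(0.5, 0.999))   # 10 copies on 50%-available hosts

What the real system adds on top of this arithmetic is the measurement and prediction loop that keeps `host_availability` current as conditions change.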

[Go to top]

Simple efficient load balancing algorithms for peer-to-peer systems (PDF)
by David Karger and Matthias Ruhl.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give two new load-balancing protocols whose provable performance guarantees are within a constant factor of optimal. Our protocols refine the consistent hashing data structure that underlies the Chord (and Koorde) P2P network. Both preserve Chord's logarithmic query time and near-optimal data migration cost. Consistent hashing is an instance of the distributed hash table (DHT) paradigm for assigning items to nodes in a peer-to-peer system: items and nodes are mapped to a common address space, and nodes have to store all items residing close by in the address space. Our first protocol balances the distribution of the key address space to nodes, which yields a load-balanced system when the DHT maps items "randomly" into the address space. To our knowledge, this yields the first P2P scheme simultaneously achieving O(log n) degree, O(log n) look-up cost, and constant-factor load balance (previous schemes settled for any two of the three). Our second protocol aims to directly balance the distribution of items among the nodes. This is useful when the distribution of items in the address space cannot be randomized. We give a simple protocol that balances load by moving nodes to arbitrary locations "where they are needed." As an application, we use the last protocol to give an optimal implementation of a distributed data structure for range searches on ordered data
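
The imbalance the first protocol corrects can be seen with the classic virtual-nodes variant of consistent hashing, a simpler relative of the paper's protocols (which additionally achieve O(log n) degree with provable constant-factor balance):

    import hashlib, statistics

    def pos(label: str) -> float:
        # position on the unit ring
        return int(hashlib.sha1(label.encode()).hexdigest(), 16) / 2 ** 160

    def arc_shares(n_nodes: int, vnodes: int):
        ring = sorted((pos(f"node{i}-v{j}"), i)
                      for i in range(n_nodes) for j in range(vnodes))
        share = [0.0] * n_nodes
        for k, (p, owner) in enumerate(ring):
            nxt = ring[(k + 1) % len(ring)][0]
            share[owner] += (nxt - p) % 1.0   # arc owned by this virtual node
        return share

    for v in (1, 32):
        s = arc_shares(64, v)
        print(v, "vnode(s): max/mean load =", round(max(s) / statistics.mean(s), 2))

With a single position per node the heaviest node owns several times the average arc; spreading each node over many positions flattens the distribution, at the cost of extra routing state the paper's schemes avoid.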

[Go to top]

Probabilistic Model Checking of an Anonymity System (PDF)
by Vitaly Shmatikov.
In Journal of Computer Security 12(3-4), 2004, pages 355-377. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We use the probabilistic model checker PRISM to analyze the Crowds system for anonymous Web browsing. This case study demonstrates how probabilistic model checking techniques can be used to formally analyze security properties of a peer-to-peer group communication system based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and the desired security properties are expressed as PCTL formulas. The PRISM model checker is used to perform automated analysis of the system and verify anonymity guarantees it provides. Our main result is a demonstration of how certain forms of probabilistic anonymity degrade when group size increases or random routing paths are rebuilt, assuming that the corrupt group members are able to identify and/or correlate multiple routing paths originating from the same sender
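The Crowds behavior that PRISM analyzes can also be estimated by direct simulation; this sketch measures how often the first corrupt relay on a path sees the true sender as its predecessor, with forwarding probability pf (member counts and pf are illustrative):

    import random

    def first_corrupt_sees_sender(n, corrupt, pf, rng):
        sender = prev = 0                      # the honest initiator has ID 0
        node = rng.randrange(n)                # initiator picks a random member
        while True:
            if node >= n - corrupt:            # corrupt members hold the top IDs
                return prev == sender
            prev = node
            if rng.random() > pf:
                return False                   # delivered to the server unseen
            node = rng.randrange(n)

    rng = random.Random(0)
    for c in (5, 20):
        hits = sum(first_corrupt_sees_sender(100, c, 0.8, rng)
                   for _ in range(20000))
        print(c, "corrupt of 100:", round(hits / 20000, 3))

Raising the corrupt fraction visibly raises the chance that the adversary's immediate predecessor is the sender, the same degradation the model checker quantifies exactly.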

[Go to top]

Peer-to-Peer Overlays and Data Integration in a Life Science Grid (PDF)
by Curt Cramer, Andrea Schafferhans, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Databases and Grid computing are a good match. With the service orientation of Grid computing, the complexity of maintaining and integrating databases can be kept away from the actual users. Data access and integration is performed via services, which also allow to employ an access control. While it is our perception that many proposed Grid applications rely on a centralized and static infrastructure, Peer-to-Peer (P2P) technologies might help to dynamically scale and enhance Grid applications. The focus does not lie on publicly available P2P networks here, but on the self-organizing capabilities of P2P networks in general. A P2P overlay could, e.g., be used to improve the distribution of queries in a data Grid. For studying the combination of these three technologies, Grid computing, databases, and P2P, in this paper, we use an existing application from the life sciences, drug target validation, as an example. In its current form, this system has several drawbacks. We believe that they can be alleviated by using a combination of the service-based architecture of Grid computing and P2P technologies for implementing the services. The work presented in this paper is in progress. We mainly focus on the description of the current system state, its problems and the proposed new architecture. For a better understanding, we also outline the main topics related to the work presented here

[Go to top]

A Peer-to-Peer File Sharing System for Wireless Ad-Hoc Networks (PDF)
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

File sharing in wireless ad-hoc networks in a peer-to-peer manner imposes many challenges that make conventional peer-to-peer systems operating on wire-line networks inapplicable for this case. Information and workload distribution as well as routing are major problems for members of a wireless ad-hoc network, which are only aware of their neighborhood. In this paper we propose a system that solves the peer-to-peer file-sharing problem for wireless ad-hoc networks. Our system works according to peer-to-peer principles, without requiring a central server, and distributes information regarding the location of shared files among members of the network. By means of a hashline and forming a tree structure based on the topology of the network, the system is able to answer location queries, and also discover and maintain routing information that is used to transfer files from a source-peer to another peer

[Go to top]

PeerStore: Better Performance by Relaxing in Peer-to-Peer Backup (PDF)
by Martin Landers, Han Zhang, and Kian-Lee Tan.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Backup is cumbersome. To be effective, backups have to be made at regular intervals, forcing users to organize and store a growing collection of backup media. In this paper we propose a novel Peer-to-Peer backup system, PeerStore, that allows the user to store his backups on other people's computers instead. PeerStore is an adaptive, cost-effective system suitable for all types of networks ranging from LAN, WAN to large unstable networks like the Internet. The system consists of two layers: metadata layer and symmetric trading layer. Locating blocks and duplicate checking is accomplished by the metadata layer while the actual data distribution is done between pairs of peers after they have established a symmetric data trade. By decoupling the metadata management from data storage, the system offers a significant reduction of the maintenance cost and preserves fairness among peers. Results show that PeerStore has a reduced maintenance cost compared to pStore. PeerStore also realizes fairness because of the symmetric nature of the trades

[Go to top]

Multifaceted Simultaneous Load Balancing in DHT-based P2P systems: A new game with old balls and bins (PDF)
by Karl Aberer, Anwitaman Datta, and Manfred Hauswirth.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present and evaluate uncoordinated on-line algorithms for simultaneous storage and replication load-balancing in DHT-based peer-to-peer systems. We compare our approach with the classical balls into bins model, and point out the similarities but also the differences which call for new load-balancing mechanisms specifically targeted at P2P systems. Some of the peculiarities of P2P systems, which make our problem even more challenging are that both the network membership and the data indexed in the network is dynamic, there is neither global coordination nor global information to rely on, and the load-balancing mechanism ideally should not compromise the structural properties and thus the search efficiency of the DHT, while preserving the semantic information of the data (e.g., lexicographic ordering to enable range searches)

[Go to top]

Mercury: supporting scalable multi-attribute range queries (PDF)
by Ashwin R. Bharambe, Mukesh Agrawal, and Srinivasan Seshan.
In SIGCOMM Comput. Commun. Rev 34(4), 2004, pages 353-366. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design of Mercury, a scalable protocol for supporting multi-attribute range-based searches. Mercury differs from previous range-based query systems in that it supports multiple attributes as well as performs explicit load balancing. To guarantee efficient routing and load balancing, Mercury uses novel light-weight sampling mechanisms for uniformly sampling random nodes in a highly dynamic overlay network. Our evaluation shows that Mercury is able to achieve its goals of logarithmic-hop routing and near-uniform load balancing. We also show that Mercury can be used to solve a key problem for an important class of distributed applications: distributed state maintenance for distributed games. We show that the Mercury-based solution is easy to use, and that it reduces the game's messaging overhead significantly compared to a naïve approach

[Go to top]

Leopard: A locality-aware peer-to-peer system with no hot spot (PDF)
by Yinzhe Yu, Sanghwan Lee, and Zhi-li Zhang.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental challenge in Peer-To-Peer (P2P) systems is how to locate objects of interest, namely, the look-up service problem. A key break-through towards a scalable and distributed solution of this problem is the distributed hash table (DHT)

[Go to top]

Distributed Job Scheduling in a Peer-to-Peer Video Recording System (PDF)
by Curt Cramer, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the advent of Gnutella, Peer-to-Peer (P2P) protocols have matured towards a fundamental design element for large-scale, self-organising distributed systems. Many research efforts have been invested to improve various aspects of P2P systems, like their performance, scalability, and so on. However, little experience has been gathered from the actual deployment of such P2P systems apart from the typical file sharing applications. To bridge this gap and to gain more experience in making the transition from theory to practice, we started building advanced P2P applications whose explicit goal is to be deployed in the wild. In this paper, we describe a fully decentralised P2P video recording system. Every node in the system is a networked computer (desktop PC or set-top box) capable of receiving and recording DVB-S, i.e. digital satellite TV. Like a normal video recorder, users can program their machines to record certain programmes. With our system, they will be able to schedule multiple recordings in parallel. It is the task of the system to assign the recordings to different machines in the network. Moreover, users can record broadcasts in the past, i.e. the system serves as a short-term archival storage

[Go to top]

Data durability in peer to peer storage systems (PDF)
by Gil Utard and Antoine Vernois.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present a quantitative study of data survival in peer-to-peer storage systems. We first recall two main redundancy mechanisms: replication and erasure codes, which are used by most peer-to-peer storage systems like OceanStore, PAST or CFS to guarantee data durability. Second, we characterize peer-to-peer systems according to a volatility factor (a peer is free to leave the system at any time) and to an availability factor (a peer is not permanently connected to the system). Third, we model the behavior of a system as a Markov chain and analyse the average lifetime of data (MTTF) according to the volatility and availability factors. We also present the cost of the repair process based on these redundancy schemes to recover failed peers. The conclusion of this study is that when there is no high availability of peers, a simple replication scheme may be more efficient than sophisticated erasure codes
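
The replication half of such a Markov-chain analysis fits in a few lines; a toy birth-death model (illustrative failure/repair rates, not the authors' parameterization):

    # Sketch: MTTF of an object kept as r replicas, as a birth-death chain.
    # State k = number of live replicas; absorption at k = 0 is data loss.
    import numpy as np

    def mttf(r, lam, mu):
        # Expected absorption times T[1..r] solve, with T[0] = 0:
        # (k*lam + mu_k) * T[k] = 1 + k*lam * T[k-1] + mu_k * T[k+1]
        A = np.zeros((r, r))
        b = np.ones(r)
        for k in range(1, r + 1):
            mu_k = mu if k < r else 0.0  # no repair pending at full replication
            A[k - 1, k - 1] = k * lam + mu_k
            if k > 1:
                A[k - 1, k - 2] = -k * lam
            if k < r:
                A[k - 1, k] = -mu_k
        return np.linalg.solve(A, b)[r - 1]

    # Per-hour rates: a replica fails every ~1000 h, a repair takes ~10 h.
    print(mttf(r=3, lam=1 / 1000.0, mu=1 / 10.0))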

[Go to top]

Bootstrapping Locality-Aware P2P Networks (PDF)
by Curt Cramer, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Bootstrapping is a vital core functionality required by every peer-to-peer (P2P) overlay network. Nodes intending to participate in such an overlay network initially have to find at least one node that is already part of this network. While structured P2P networks (e.g. distributed hash tables, DHTs) define rules about how to proceed after this point, unstructured P2P networks continue using bootstrapping techniques until they are sufficiently connected. In this paper, we compare solutions applicable to the bootstrapping problem. Measurements of an existing system, the Gnutella web caches, highlight the inefficiency of this particular approach. Improved bootstrapping mechanisms could also incorporate locality-awareness into the process. We propose an advanced mechanism by which the overlay topology is, to some extent, matched with the underlying topology. Thereby, the performance of the overall system can be vastly improved

[Go to top]

Samsara: Honor Among Thieves in Peer-to-Peer Storage (PDF)
by Landon P. Cox and Brian D. Noble.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems. Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return—a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure

[Go to top]

On the Practical Use of LDPC Erasure Codes for Distributed Storage Applications (PDF)
by James S. Plank and Michael G. Thomason.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As peer-to-peer and widely distributed storage systems proliferate, the need to perform efficient erasure coding, instead of replication, is crucial to performance and efficiency. Low-Density Parity-Check (LDPC) codes have arisen as alternatives to standard erasure codes, such as Reed-Solomon codes, trading off vastly improved decoding performance for inefficiencies in the amount of data that must be acquired to perform decoding. The scores of papers written on LDPC codes typically analyze their collective and asymptotic behavior. Unfortunately, their practical application requires the generation and analysis of individual codes for finite systems. This paper attempts to illuminate the practical considerations of LDPC codes for peer-to-peer and distributed storage systems. The three main types of LDPC codes are detailed, and a huge variety of codes are generated, then analyzed using simulation. This analysis focuses on the performance of individual codes for finite systems, and addresses several important heretofore unanswered questions about employing LDPC codes in real-world systems
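
The peeling-style iterative decoding that gives such codes their speed is easy to demonstrate on a toy XOR code (not one of the paper's generated codes):

    # Sketch: peeling decoder for an XOR-based erasure code. Each check is
    # the XOR of a few data symbols; decoding repeatedly solves any check
    # with exactly one missing symbol until no progress is possible.
    def peel(checks, known):
        # checks: list of (set_of_symbol_ids, xor_value); known: id -> value
        progress = True
        while progress:
            progress = False
            for ids, val in checks:
                missing = [i for i in ids if i not in known]
                if len(missing) == 1:
                    v = val
                    for i in ids:
                        if i in known:
                            v ^= known[i]
                    known[missing[0]] = v
                    progress = True
        return known

    # Symbols s0..s3 with parity checks s0^s1, s1^s2, s2^s3; only s0 arrived.
    s = [0b1010, 0b0110, 0b1111, 0b0001]
    checks = [({0, 1}, s[0] ^ s[1]), ({1, 2}, s[1] ^ s[2]), ({2, 3}, s[2] ^ s[3])]
    print(peel(checks, known={0: s[0]}))  # recovers s1, s2, s3 by peeling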

[Go to top]

Identity Crisis: Anonymity vs. Reputation in P2P Systems (PDF)
by Sergio Marti and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

The effectiveness of reputation systems for peer-to-peer resource-sharing networks is largely dependent on the reliability of the identities used by peers in the network. Much debate has centered around how closely one's pseudo-identity in the network should be tied to one's real-world identity, and how that identity is protected from malicious spoofing. In this paper we investigate the cost in efficiency of two solutions to the identity problem for peer-to-peer reputation systems. Our results show that, using some simple mechanisms, reputation systems can provide a factor of 4 to 20 improvement in performance over no reputation system, depending on the identity model used

[Go to top]

A game theoretic framework for incentives in P2P systems (PDF)
by Chiranjeeb Buragohain, Divyakant Agrawal, and Subhash Suri.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (P2P) networks are self-organizing, distributed systems, with no centralized authority or infrastructure. Because of the voluntary participation, the availability of resources in a P2P system can be highly variable and unpredictable. We use ideas from game theory to study the interaction of strategic and rational peers, and propose a differential service-based incentive scheme to improve the system's performance

[Go to top]

Incentives for Cooperation in Peer-to-Peer Networks (PDF)
by Kevin Lai, Michal Feldman, Ion Stoica, and John Chuang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, our contributions are to generalize from the traditional symmetric EPD to the asymmetric transactions of P2P applications, map out the design space of EPD-based incentive techniques, and simulate a subset of these techniques. Our findings are as follows: incentive techniques relying on private history (where entities only use their private histories of other entities' actions) fail as the population size increases

[Go to top]

Defending Anonymous Communication Against Passive Logging Attacks (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study the threat that passive logging attacks pose to anonymous communications. Previous work analyzed these attacks under limiting assumptions. We first describe a possible defense that comes from breaking the assumption of uniformly random path selection. Our analysis shows that the defense improves anonymity in the static model, where nodes stay in the system, but fails in a dynamic model, in which nodes leave and join. Additionally, we use the dynamic model to show that the intersection attack creates a vulnerability in certain peer-to-peer systems for anonymous communications. We present simulation results that show that attack times are significantly lower in practice than the upper bounds given by previous work. To determine whether users' web traffic has the communication patterns required by the attacks, we collected and analyzed the web requests of users. We found that, for our study, frequent and repeated communication to the same web site is common

[Go to top]

Herbivore: A Scalable and Efficient Protocol for Anonymous Communication (PDF)
by Sharad Goel, Mark Robson, Milo Polte, and Emin Gün Sirer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is increasingly important for networked applications amidst concerns over censorship and privacy. In this paper, we describe Herbivore, a peer-to-peer, scalable, tamper-resilient communication system that provides provable anonymity and privacy. Building on dining cryptographer networks, Herbivore scales by partitioning the network into anonymizing cliques. Adversaries able to monitor all network traffic cannot deduce the identity of a sender or receiver beyond an anonymizing clique. In addition to strong anonymity, Herbivore simultaneously provides high efficiency and scalability, distinguishing it from other anonymous communication protocols. Performance measurements from a prototype implementation show that the system can achieve high bandwidths and low latencies when deployed over the Internet
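
The dining-cryptographers building block behind Herbivore is compact enough to sketch; a one-round, three-party DC-net (illustrative only, not Herbivore's actual clique protocol, and assuming a single sender per round):

    # Sketch: one round of a dining-cryptographers (DC) network. Each pair
    # of parties shares a random pad; every party announces the XOR of its
    # pads, and the sender additionally XORs in the message. Every pad
    # appears in exactly two announcements, so XORing all announcements
    # cancels the pads and reveals the message -- but not who sent it.
    import secrets

    def dc_net_round(n, sender, message):
        pads = {(i, j): secrets.randbits(32)
                for i in range(n) for j in range(i + 1, n)}
        announcements = []
        for p in range(n):
            a = 0
            for (i, j), pad in pads.items():
                if p in (i, j):
                    a ^= pad
            if p == sender:
                a ^= message
            announcements.append(a)
        combined = 0
        for a in announcements:
            combined ^= a
        return combined  # equals message; the sender stays hidden

    print(hex(dc_net_round(n=3, sender=1, message=0xCAFE)))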

[Go to top]

An Efficient Peer-to-Peer File Sharing Exploiting Hierarchy and Asymmetry (PDF)
by Gisik Kwon and Kyung D. Ryu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many Peer-to-Peer (P2P) file sharing systems have been proposed to take advantage of high scalability and abundant resources at end-user machines. Previous approaches adopted either simple flooding or routing with complex structures, such as Distributed Hash Tables (DHT). However, these approaches did not consider the heterogeneous nature of the machines and the hierarchy of networks on the Internet. This paper presents Peer-to-peer Asymmetric file Sharing System (PASS), a novel approach to P2P file sharing, which accounts for the different capabilities and network locations of the participating machines. Our system selects only a portion of high-capacity machines (supernodes) for routing support, and organizes the network by using location information. We show that our key-coverage based directory replication improves the file search performance to a small constant number of routing hops, regardless of the network size

[Go to top]

Usability and privacy: a study of Kazaa P2P file-sharing (PDF)
by Nathaniel S. Good and Aaron Krekelberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P file sharing systems such as Gnutella, Freenet, and KaZaA, while primarily intended for sharing multimedia files, frequently allow other types of information to be shared. This raises serious concerns about the extent to which users may unknowingly be sharing private or personal information. In this paper, we report on a cognitive walkthrough and a laboratory user study of the KaZaA file sharing user interface. The majority of the users in our study were unable to tell what files they were sharing, and sometimes incorrectly assumed they were not sharing any files when in fact they were sharing all files on their hard drive. An analysis of the KaZaA network suggested that a large number of users appeared to be unwittingly sharing personal and private files, and that some users were indeed taking advantage of this and downloading files containing ostensibly private information

[Go to top]

A Transport Layer Abstraction for Peer-to-Peer Networks (PDF)
by Ronaldo A. Ferreira, Christian Grothoff, and Paul Ruth.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The initially unrestricted host-to-host communication model provided by the Internet Protocol has deteriorated due to political and technical changes caused by Internet growth. While this is not a problem for most client-server applications, peer-to-peer networks frequently struggle with peers that are only partially reachable. We describe how a peer-to-peer framework can hide diversity and obstacles in the underlying Internet and provide peer-to-peer applications with abstractions that hide transport specific details. We present the details of an implementation of a transport service based on SMTP. Small-scale benchmarks are used to compare transport services over UDP, TCP, and SMTP
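
The kind of transport abstraction described here can be sketched as a small interface; the class and method names below are hypothetical, not the framework's actual API:

    # Sketch: a transport-agnostic peer-to-peer send interface (names are
    # illustrative). Applications call deliver() and never see whether the
    # bytes travel over UDP, TCP, or something more exotic such as SMTP.
    import socket
    from abc import ABC, abstractmethod

    class Transport(ABC):
        @abstractmethod
        def send(self, peer_addr, payload: bytes) -> None: ...

    class UdpTransport(Transport):
        def __init__(self):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        def send(self, peer_addr, payload: bytes) -> None:
            self.sock.sendto(payload, peer_addr)  # best-effort datagram

    class TcpTransport(Transport):
        def send(self, peer_addr, payload: bytes) -> None:
            with socket.create_connection(peer_addr) as s:
                s.sendall(payload)  # reliable stream, one connection per message

    def deliver(transport: Transport, peer_addr, payload: bytes) -> None:
        transport.send(peer_addr, payload)  # application code is transport-blind

    # An SMTP transport, as benchmarked in the paper, would wrap smtplib the
    # same way, tunneling payloads through mail servers to reach peers that
    # are only partially reachable.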

[Go to top]

A Special-Purpose Peer-to-Peer File Sharing System for Mobile Ad Hoc Networks (PDF)
by Alexander Klemm, Christoph Lindemann, and Oliver Waldhorst.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Establishing peer-to-peer (P2P) file sharing for mobile ad hoc networks (MANET) requires the construction of a search algorithm for transmitting queries and search results as well as the development of a transfer protocol for downloading files matching a query. In this paper, we present a special-purpose system for searching and file transfer tailored to both the characteristics of MANET and the requirements of peer-to-peer file sharing. Our approach is based on an application-layer overlay network. As an innovative feature, overlay routes are set up on demand by the search algorithm, closely matching network topology and transparently aggregating redundant transfer paths on a per-file basis. The transfer protocol guarantees high data rates and low transmission overhead by utilizing overlay routes. In a detailed ns-2 simulation study, we show that both the search algorithm and the transfer protocol outperform off-the-shelf approaches based on a P2P file sharing system for the wireline Internet, TCP, and a MANET routing protocol

[Go to top]

Reputation in P2P Anonymity Systems (PDF)
by Roger Dingledine, Nick Mathewson, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Decentralized anonymity systems tend to be unreliable, because users must choose nodes in the network without knowing the entire state of the network. Reputation systems promise to improve reliability by predicting network state. In this paper we focus on anonymous remailers and anonymous publishing, explain why the systems can benefit from reputation, and describe our experiences designing reputation systems for them while still ensuring anonymity. We find that in each example we first must redesign the underlying anonymity system to support verifiable transactions

[Go to top]

Range Queries over DHTs
by Sylvia Ratnasamy, Joseph M. Hellerstein, and S Shenker.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are scalable peer-to-peer systems that support exact match lookups. This paper describes the construction and use of a Prefix Hash Tree (PHT) – a distributed data structure that supports range queries over DHTs. PHTs use the hash-table interface of DHTs to construct a search tree that is efficient (insertions/lookups take O(log D) DHT lookups, where D is the data domain being indexed) and robust (the failure of any given node in the search tree does not affect the availability of data stored at other nodes in the PHT)
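
The prefix-decomposition idea can be illustrated with a toy fixed-depth variant (real PHTs split leaves adaptively; the dict below merely stands in for DHT put/get):

    # Sketch: a simplified prefix-based index over a DHT. Keys are bucketed
    # by their top DEPTH bits; a range query is answered by looking up the
    # prefixes that cover the range and filtering.
    DEPTH = 4   # keys are 8-bit integers bucketed by their top 4 bits
    dht = {}    # stands in for put/get on a real DHT

    def prefix(key):
        return key >> (8 - DEPTH)

    def insert(key):
        dht.setdefault(prefix(key), set()).add(key)

    def range_query(lo, hi):
        out = []
        for p in range(prefix(lo), prefix(hi) + 1):   # covering prefixes
            out.extend(k for k in dht.get(p, ()) if lo <= k <= hi)
        return sorted(out)

    for k in [3, 17, 42, 99, 200, 201]:
        insert(k)
    print(range_query(10, 120))   # -> [17, 42, 99]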

[Go to top]

P-Grid: A Self-organizing Structured P2P System (PDF)
by Karl Aberer, Philippe Cudre-Mauroux, Anwitaman Datta, Zoran Despotovic, Manfred Hauswirth, Magdalena Punceva, and Roman Schmidt.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)


[Go to top]

Peer-To-Peer Backup for Personal Area Networks (PDF)
by Boon Thau Loo, Anthony LaMarca, and Gaetano Borriello.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

FlashBack is a peer-to-peer backup algorithm designed for power-constrained devices running in a personal area network (PAN). Backups are performed transparently as local updates initiate the spread of backup data among a subset of the currently available peers. FlashBack limits power usage by avoiding flooding and keeping small neighbor sets. FlashBack has also been designed to utilize powered infrastructure when possible to further extend device lifetime. We propose our architecture and algorithms, and present initial experimental results that illustrate FlashBack's performance characteristics

[Go to top]

An Overlay-Network Approach for Distributed Access to SRS (PDF)
by Thomas Fuhrmann, Andrea Schafferhans, and Thure Etzold.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SRS is a widely used system for integrating biological databases. Currently, SRS relies only on locally provided copies of these databases. In this paper we propose a mechanism that also allows the seamless integration of remote databases. To this end, our proposed mechanism splits the existing SRS functionality into two components and adds a third component that enables us to employ peer-to-peer computing techniques to create optimized overlay networks within which database queries can efficiently be routed. As an additional benefit, this mechanism also reduces the administration effort that would be needed with a conventional approach using replicated databases

[Go to top]

Making gnutella-like P2P systems scalable (PDF)
by Yatin Chawathe, Lee Breslau, Nick Lanham, and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed

[Go to top]

A Lightweight Currency Paradigm for the P2P Resource Market (PDF)
by David A. Turner and Keith W. Ross.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A P2P resource market is a market in which peers trade resources (including storage, bandwidth and CPU cycles) and services with each other. We propose a specific paradigm for a P2P resource market. This paradigm has five key components: (i) pairwise trading market, with peers setting their own prices for offered resources; (ii) multiple currency economy, in which any peer can issue its own currency; (iii) no legal recourse, thereby limiting the transaction costs in trades; (iv) a simple, secure application-layer protocol; and (v) entity identification based on the entity's unique public key. We argue that the paradigm can lead to a flourishing P2P resource market, allowing applications to tap into the huge pool of surplus peer resources. We illustrate the paradigm and its corresponding Lightweight Currency Protocol (LCP) with several application examples

[Go to top]

Kelips: Building an efficient and stable P2P DHT through increased memory and background overhead (PDF)
by Indranil Gupta, Kenneth P. Birman, Prakash Linga, Alan Demers, and Robbert Van Renesse.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A peer-to-peer (p2p) distributed hash table (DHT) system allows hosts to join and fail silently (or leave), as well as to insert and retrieve files (objects). This paper explores a new point in design space in which increased memory usage and constant background communication overheads are tolerated to reduce file lookup times and increase stability to failures and churn. Our system, called Kelips, uses peer-to-peer gossip to partially replicate file index information. In Kelips, (a) under normal conditions, file lookups are resolved with O(1) time and complexity (i.e., independent of system size), and (b) membership changes (e.g., even when a large number of nodes fail) are detected and disseminated to the system quickly. Per-node memory requirements are small in medium-sized systems. When there are failures, lookup success is ensured through query rerouting. Kelips achieves load balancing comparable to existing systems. Locality is supported by using topologically aware gossip mechanisms. Initial results of an ongoing experimental study are also discussed
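
Kelips's one-hop lookup structure can be caricatured in a few lines; a sketch assuming sqrt(n) affinity groups and a fully replicated per-group file index (gossip, heartbeats and churn handling omitted):

    # Sketch: Kelips-style affinity groups with an O(1), one-hop lookup.
    import hashlib, math

    def h(s, mod):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16) % mod

    class Kelips:
        def __init__(self, nodes):
            self.k = max(1, math.isqrt(len(nodes)))   # ~sqrt(n) groups
            self.groups = {g: [] for g in range(self.k)}
            for n in nodes:
                self.groups[h(n, self.k)].append(n)
            self.index = {g: {} for g in range(self.k)}  # filename -> home node

        def insert(self, filename, home_node):
            g = h(filename, self.k)
            self.index[g][filename] = home_node   # gossiped across group g

        def lookup(self, filename):
            g = h(filename, self.k)   # one contact in group g suffices
            return self.index[g].get(filename)

    net = Kelips([f"node{i}" for i in range(100)])
    net.insert("movie.avi", "node7")
    print(net.lookup("movie.avi"))   # -> node7, independent of system size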

[Go to top]

HIERAS: A DHT Based Hierarchical P2P Routing Algorithm
by Zhiyong Xu, Rui Min, and Yiming Hu.
In Parallel Processing, International Conference on, 2003, pages 0-187. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The routing algorithm has great influence on overall system performance in Peer-to-Peer (P2P) applications. In current DHT-based routing algorithms, routing tasks are distributed across all system peers. However, a routing hop can happen between two widely separated peers with high network link latency, which greatly increases system routing overheads. In this paper, we propose a new P2P routing algorithm, HIERAS, to relieve this problem; it keeps the scalability property of current DHT algorithms and improves system routing performance by introducing a hierarchical structure. In HIERAS, we create several lower-level P2P rings besides the highest-level P2P ring. A P2P ring is a subset of the overall P2P overlay network. We create P2P rings in such a strategy that the average link latency between two peers in lower-level rings is much smaller than in higher-level rings. Routing tasks are first executed in lower-level rings before they go up to higher-level rings; a large portion of routing hops previously executed in the global P2P ring are now replaced by hops in lower-level rings, thus reducing routing overheads. The simulation results show the HIERAS routing algorithm can significantly improve P2P system routing performance

[Go to top]

A cooperative internet backup scheme (PDF)
by Mark Lillibridge, Sameh Elnikety, Andrew D. Birrell, Mike Burrows, and Michael Isard.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a novel peer-to-peer backup technique that allows computers connected to the Internet to back up their data cooperatively: Each computer has a set of partner computers, which collectively hold its backup data. In return, it holds a part of each partner's backup data. By adding redundancy and distributing the backup data across many partners, a highly-reliable backup can be obtained in spite of the low reliability of the average Internet machine. Because our scheme requires cooperation, it is potentially vulnerable to several novel attacks involving free riding (e.g., holding a partner's data is costly, which tempts cheating) or disruption. We defend against these attacks using a number of new methods, including the use of periodic random challenges to ensure partners continue to hold data and the use of disk-space wasting to make cheating unprofitable. Results from an initial prototype show that our technique is feasible and very inexpensive: it appears to be one to two orders of magnitude cheaper than existing Internet backup services
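
The periodic-random-challenge defense is straightforward to sketch: because the owner still holds the original data, it can verify a partner's answer locally (function names are hypothetical):

    # Sketch: spot-checking that a backup partner still holds a block. A
    # fresh random nonce defeats precomputed answers; a partner that
    # discarded the block cannot produce the right digest.
    import hashlib, os

    def challenge_response(block: bytes, nonce: bytes) -> bytes:
        return hashlib.sha256(nonce + block).digest()

    def owner_checks_partner(owner_block: bytes, partner_answer) -> bool:
        nonce = os.urandom(16)
        return partner_answer(nonce) == challenge_response(owner_block, nonce)

    block = b"backup block contents"
    honest = lambda nonce: challenge_response(block, nonce)   # kept the data
    cheater = lambda nonce: hashlib.sha256(nonce).digest()    # discarded it
    print(owner_checks_partner(block, honest))    # True
    print(owner_checks_partner(block, cheater))   # False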

[Go to top]

Asymptotically Efficient Approaches to Fault-Tolerance in Peer-to-Peer (PDF)
by Kirsten Hildrum and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we show that two peer-to-peer systems, Pastry [13] and Tapestry [17], can be made tolerant to certain classes of failures and a limited class of attacks. These systems are said to operate properly if they can find the closest node matching a requested ID. The system must also be able to dynamically construct the necessary routing information when new nodes enter or the network changes. We show that with an additional factor of storage overhead and communication overhead, they can continue to achieve both of these goals in the presence of a constant fraction of nodes that do not obey the protocol. Our techniques are similar in spirit to those of Saia et al. [14] and Naor and Wieder [10]. Some simple simulations show that these techniques are useful even with constant overhead

[Go to top]

Tarzan: A Peer-to-Peer Anonymizing Network Layer (PDF)
by Michael J. Freedman and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tarzan is a peer-to-peer anonymous IP network overlay. Because it provides IP service, Tarzan is general-purpose and transparent to applications. Organized as a decentralized peer-to-peer overlay, Tarzan is fault-tolerant, highly scalable, and easy to manage. Tarzan achieves its anonymity with layered encryption and multi-hop routing, much like a Chaumian mix. A message initiator chooses a path of peers pseudo-randomly through a restricted topology in a way that adversaries cannot easily influence. Cover traffic prevents a global observer from using traffic analysis to identify an initiator. Protocols toward unbiased peer-selection offer new directions for distributing trust among untrusted entities. Tarzan provides anonymity to either clients or servers, without requiring that both participate. In both cases, Tarzan uses a network address translator (NAT) to bridge between Tarzan hosts and oblivious Internet hosts. Measurements show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route

[Go to top]

Introducing MorphMix: Peer-to-Peer based Anonymous Internet Usage with Collusion Detection (PDF)
by Marc Rennhard and Bernhard Plattner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional mix-based systems are composed of a small set of static, well known, and highly reliable mixes. To resist traffic analysis attacks at a mix, cover traffic must be used, which results in significant bandwidth overhead. End-to-end traffic analysis attacks are even more difficult to counter because there are only a few entry- and exit-points in the system. Static mix networks also suffer from scalability problems, and in several countries, institutions operating a mix could be targeted by legal attacks. In this paper, we introduce MorphMix, a system for peer-to-peer based anonymous Internet usage. Each MorphMix node is a mix and anyone can easily join the system. We believe that MorphMix overcomes or reduces several drawbacks of static mix networks. In particular, we argue that our approach offers good protection from traffic analysis attacks without employing cover traffic. But MorphMix also introduces new challenges. One is that an adversary can easily operate several malicious nodes in the system and try to break the anonymity of legitimate users by getting full control over their anonymous paths. To counter this attack, we have developed a collusion detection mechanism, which allows compromised paths to be identified with high probability before they are used

[Go to top]

Reliable MIX Cascade Networks through Reputation (PDF)
by Roger Dingledine and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a MIX cascade protocol and a reputation system that together increase the reliability of a network of MIX cascades. In our protocol, MIX nodes periodically generate a communally random seed that, along with their reputations, determines cascade configuration. Nodes send test messages to monitor their cascades. Senders can also demonstrate message decryptions to convince honest cascade members that a cascade is misbehaving. By allowing any node to declare the failure of its own cascade, we eliminate the need for global trusted witnesses

[Go to top]

Kademlia: A Peer-to-peer Information System Based on the XOR Metric (PDF)
by Petar Maymounkov and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users
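
The XOR metric at the heart of Kademlia takes only a few lines to state; a minimal sketch of distance, bucket placement, and closest-node selection:

    # Sketch: Kademlia's XOR distance and k-bucket index.
    def xor_distance(a: int, b: int) -> int:
        return a ^ b   # d(a,b) = 0 iff a == b; symmetric; satisfies the
                       # triangle inequality, since a^c = (a^b)^(b^c)

    def bucket_index(self_id: int, other_id: int) -> int:
        # Bucket i holds contacts at distance in [2^i, 2^(i+1)).
        return xor_distance(self_id, other_id).bit_length() - 1

    def k_closest(target: int, node_ids, k=3):
        return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

    ids = [0b0001, 0b0110, 0b1011, 0b1100]
    print([bin(n) for n in k_closest(0b0111, ids)])
    print(bucket_index(0b0000, 0b1000))   # -> 3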

[Go to top]

Anonymizing censorship resistant systems (PDF)
by Andrei Serjantov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a new Peer-to-Peer architecture for a censorship resistant system with user, server and active-server document anonymity as well as efficient document retrieval. The retrieval service is layered on top of an existing Peer-to-Peer infrastructure, which should facilitate its implementation. The key idea is to separate the role of document storers from the machines visible to the users, which makes each individual part of the system less prone to attacks, and therefore to censorship. Indeed, if one server has been pressured into removal, the other server administrators may simply follow the precedent and remove the offending content themselves

[Go to top]

Viceroy: a scalable and dynamic emulation of the butterfly (PDF)
by Dahlia Malkhi, Moni Naor, and David Ratajczak.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a family of constant-degree routing networks of logarithmic diameter, with the additional property that the addition or removal of a node to the network requires no global coordination, only a constant number of linkage changes in expectation, and a logarithmic number with high probability. Our randomized construction improves upon existing solutions, such as balanced search trees, by ensuring that the congestion of the network is always within a logarithmic factor of the optimum with high probability. Our construction derives from recent advances in the study of peer-to-peer lookup networks, where rapid changes require efficient and distributed maintenance, and where the lookup efficiency is impacted both by the lengths of paths to requested data and the presence or elimination of bottlenecks in the network

[Go to top]

A Survey of Peer-to-Peer Security Issues (PDF)
by Dan S. Wallach.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications. We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems

[Go to top]

Security Considerations for Peer-to-Peer Distributed Hash Tables (PDF)
by Emil Sit and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recent peer-to-peer research has focused on providing efficient hash lookup systems that can be used to build more complex systems. These systems have good properties when their algorithms are executed correctly but have not generally considered how to handle misbehaving nodes. This paper looks at what sorts of security problems are inherent in large peer-to-peer systems based on distributed hash lookup systems. We examine the types of problems that such systems might face, drawing examples from existing systems, and propose some design principles for detecting and preventing these problems

[Go to top]

Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures (PDF)
by Chris Karlof and David Wagner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider routing security in wireless sensor networks. Many sensor network routing protocols have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in sensor networks, show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks, introduce two classes of novel attacks against sensor networks — sinkholes and HELLO floods, and analyze the security of all the major sensor network routing protocols. We describe crippling attacks against all of them and suggest countermeasures and design considerations. This is the first such analysis of secure routing in sensor networks

[Go to top]

Secure routing for structured peer-to-peer overlay networks (PDF)
by Miguel Castro, Peter Druschel, Ayalvadi Ganesh, Antony Rowstron, and Dan S. Wallach.
In SIGOPS Oper. Syst. Rev 36(SI), 2002, pages 299-314. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without preexisting trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes

[Go to top]

Scalable application layer multicast (PDF)
by Suman Banerjee, Bobby Bhattacharjee, and Christopher Kommareddy.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties. We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic. Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1% as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100

[Go to top]

Query-flood DoS attacks in gnutella (PDF)
by Neil Daswani and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a simple but effective traffic model that can be used to understand the effects of denial-of-service (DoS) attacks based on query floods in Gnutella networks. We run simulations based on the model to analyze how different choices of network topology and application level load balancing policies can minimize the effect of these types of DoS attacks. In addition, we also study how damage caused by query floods is distributed throughout the network, and how application-level policies can localize the damage

[Go to top]

Pastiche: Making Backup Cheap and Easy (PDF)
by Landon P. Cox, Christopher D. Murray, and Brian D. Noble.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Backup is cumbersome and expensive. Individual users almost never back up their data, and backup is a significant cost in large organizations. This paper presents Pastiche, a simple and inexpensive backup system. Pastiche exploits excess disk capacity to perform peer-to-peer backup with no administrative costs. Each node minimizes storage overhead by selecting peers that share a significant amount of data. It is easy for common installations to find suitable peers, and peers with high overlap can be identified with only hundreds of bytes. Pastiche provides mechanisms for confidentiality, integrity, and detection of failed or malicious peers. A Pastiche prototype suffers only 7.4% overhead for a modified Andrew Benchmark, and restore performance is comparable to cross-machine copy

[Go to top]

Ivy: A Read/Write Peer-to-Peer File System (PDF)
by Athicha Muthitacharoen, Robert Morris, Thomer M. Gil, and Benjie Chen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Ivy is a multi-user read/write peer-to-peer file system. Ivy has no centralized or dedicated components, and it provides useful integrity properties without requiring users to fully trust either the underlying peer-to-peer storage system or the other users of the file system

[Go to top]

Improving Data Availability through Dynamic Model-Driven Replication in Large Peer-to-Peer Communities (PDF)
by Kavitha Ranganathan, Adriana Iamnitchi, and Ian Foster.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Efficient data sharing in global peer-to-peer systems is complicated by erratic node failure, unreliable network connectivity and limited bandwidth. Replicating data on multiple nodes can improve availability and response time. Yet determining when and where to replicate data in order to meet performance goals in large-scale systems with many users and files, dynamic network characteristics, and changing user behavior is difficult. We propose an approach in which peers create replicas automatically in a decentralized fashion, as required to meet availability goals. The aim of our framework is to maintain a threshold level of availability at all times. We identify a set of factors that hinder data availability and propose a model that decides when more replication is necessary. We evaluate the accuracy and performance of the proposed model using simulations. Our preliminary results show that the model is effective in predicting the required number of replicas in the system

[Go to top]

Exploiting network proximity in distributed hash tables (PDF)
by Miguel Castro, Peter Druschel, and Y. Charlie Hu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Self-organizing peer-to-peer (p2p) overlay networks like CAN, Chord, Pastry and Tapestry (also called distributed hash tables or DHTs) offer a novel platform for a variety of scalable and decentralized distributed applications. These systems provide efficient and fault-tolerant routing, object location, and load balancing within a self-organizing overlay network. One important aspect of these systems is how they exploit network proximity in the underlying Internet. Three basic approaches have been proposed to exploit network proximity in DHTs: geographic layout, proximity routing and proximity neighbor selection. In this position paper, we briefly discuss the three approaches, contrast their strengths and shortcomings, and consider their applicability in the different DHT routing protocols. We conclude that proximity neighbor selection, when used in DHTs with prefix-based routing like Pastry and Tapestry, is highly effective and appears to dominate the other approaches

[Go to top]

Erasure Coding Vs. Replication: A Quantitative Comparison (PDF)
by Hakim Weatherspoon and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer systems are positioned to take advantage of gains in network bandwidth, storage capacity, and computational resources to provide long-term durable storage infrastructures. In this paper, we quantitatively compare building a distributed storage infrastructure that is self-repairing and resilient to faults using either a replicated system or an erasure-resilient system. We show that systems employing erasure codes have mean time to failures many orders of magnitude higher than replicated systems with similar storage and bandwidth requirements. More importantly, erasure-resilient systems use an order of magnitude less bandwidth and storage to provide similar system durability as replicated systems
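
The paper's headline comparison is easy to reproduce numerically; a sketch comparing object availability at the same 4x storage overhead, under the simplifying assumption of independent node availability p:

    # Sketch: n-way replication vs an (n, k) erasure code at equal overhead.
    from math import comb

    def replication_avail(p, n):
        return 1 - (1 - p) ** n   # any single replica suffices

    def erasure_avail(p, n, k):
        # any k of the n fragments suffice to reconstruct the object
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.9
    print(replication_avail(p, n=4))      # 4 full replicas: 4x overhead
    print(erasure_avail(p, n=32, k=8))    # 32 fragments, each 1/8 size: 4x
    # Spreading the same redundancy over more machines yields far more
    # "nines" under this independence model, mirroring the paper's result.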

[Go to top]

Wide-area cooperative storage with CFS (PDF)
by Frank Dabek, Frans M. Kaashoek, David Karger, Robert Morris, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail

[Go to top]

pStore: A Secure Peer-to-Peer Backup System (PDF)
by Christopher Batten, Kenneth Barr, Arvind Saraf, and Stanley Trepetin.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In an effort to combine research in peer-to-peer systems with techniques for incremental backup systems, we propose pStore: a secure distributed backup system based on an adaptive peer-to-peer network. pStore exploits unused personal hard drive space attached to the Internet to provide the distributed redundancy needed for reliable and effective data backup. Experiments on a 30 node network show that 95% of the files in a 13 MB dataset can be retrieved even when 7 of the nodes have failed. On top of this reliability, pStore includes support for file encryption, versioning, and secure sharing. Its custom versioning system permits arbitrary version retrieval similar to CVS. pStore provides this functionality at less than 10% of the network bandwidth and requires 85% less storage capacity than simpler local tape backup schemes for a representative workload

[Go to top]

Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems (PDF)
by Antony Rowstron and Peter Druschel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties

[Go to top]

Freenet: A Distributed Anonymous Information Storage and Retrieval System (PDF)
by Ian Clarke, Oskar Sandberg, Brandon Wiley, and Theodore W. Hong.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node

[Go to top]

P2P emulation

ModelNet-TE: An emulation tool for the study of P2P and traffic engineering interaction dynamics (PDF)
by D. Rossi, P. Veglia, M. Sammarco, and F. Larroca.
In Peer-to-Peer Networking and Applications, 2012, pages 1-19. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

PARC

Distributed Constraint Optimization as a Formal Model of Partially Adversarial Cooperation (PDF)
by Makoto Yokoo and Edmund H. Durfee.
In unknown(CSE-TR-101-9), 1991. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we argue that partially adversarial and partially cooperative (PARC) problems in distributed artificial intelligence can be mapped into a formalism called distributed constraint optimization problems (DCOPs), which generalize distributed constraint satisfaction problems [Yokoo, et al. 90] by introducing weak constraints (preferences). We discuss several solution criteria for DCOP and clarify the relation between these criteria and different levels of agent rationality [Rosenschein and Genesereth 85], and show the algorithms for solving DCOPs in which agents incrementally exchange only necessary information to converge on a mutually satisfiable solution

[Go to top]

PGP

Receiver Anonymity via Incomparable Public Keys (PDF)
by Brent Waters, Edward W. Felten, and Amit Sahai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a new method for protecting the anonymity of message receivers in an untrusted network. Surprisingly, existing methods fail to provide the required level of anonymity for receivers (although those methods do protect sender anonymity). Our method relies on the use of multicast, along with a novel cryptographic primitive that we call an Incomparable Public Key cryptosystem, which allows a receiver to efficiently create many anonymous "identities" for itself without divulging that these separate "identities" actually refer to the same receiver, and without increasing the receiver's workload as the number of identities increases. We describe the details of our method, along with a prototype implementation

[Go to top]

Self-Organized Public-Key Management for Mobile Ad Hoc Networks (PDF)
by Srdjan Capkun, Levente Buttyán, and J-P Hubaux.
In IEEE Transactions on Mobile Computing 2(1), 2003, pages 52-64. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In contrast with conventional networks, mobile ad hoc networks usually do not provide online access to trusted authorities or to centralized servers, and they exhibit frequent partitioning due to link and node failures and to node mobility. For these reasons, traditional security solutions that require online trusted authorities or certificate repositories are not well-suited for securing ad hoc networks. In this paper, we propose a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services. Furthermore, our approach does not require any trusted authority, not even in the system initialization phase

[Go to top]

Small Worlds in Security Systems: an Analysis of the PGP Certificate Graph (PDF)
by Srdan Capkun, Levente Buttyán, and Jean-Pierre Hubaux.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a new approach to securing self-organized mobile ad hoc networks. In this approach, security is achieved in a fully self-organized manner; by this we mean that the security system does not require any kind of certification authority or centralized server, even for the initialization phase. In our work, we were inspired by PGP [15] because its operation relies solely on the acquaintances between users. We show that the small-world phenomenon naturally emerges in the PGP system as a consequence of the self-organization of users. We show this by studying the PGP certificate graph properties and by quantifying its small-world characteristics. We argue that the certificate graphs of self-organized security systems will exhibit a similar small-world phenomenon, and we provide a way to model self-organized certificate graphs. The results of the PGP certificate graph analysis and graph modelling can be used to build new self-organized security systems and to test the performance of the existing proposals. In this work, we refer to such an example

[Go to top]

PIER

Querying the internet with PIER (PDF)
by Ryan Huebsch, Joseph M. Hellerstein, Nick Lanham, Boon Thau Loo, S Shenker, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

PIR

Tools for privacy preserving distributed data mining (PDF)
by Chris Clifton, Murat Kantarcioglu, Jaideep Vaidya, Xiaodong Lin, and Michael Y. Zhu.
In SIGKDD Explorations Newsletter 4(2), December 2002, pages 28-34. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems
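
One classic component in such toolkits, and one the paper presents, is secure sum; a minimal honest-but-curious sketch (mask and modulus choices are illustrative):

    # Sketch: ring-based secure sum. The initiator seeds the ring with a
    # random mask, each site adds its private value mod M, and the
    # initiator unmasks the total at the end. No single site sees another
    # site's input (though colluding neighbors can still isolate a value).
    import secrets

    def secure_sum(private_values, M=2**32):
        mask = secrets.randbelow(M)
        running = mask                     # initiator starts the ring
        for v in private_values:
            running = (running + v) % M    # each site adds its own value
        return (running - mask) % M        # initiator removes the mask

    print(secure_sum([13, 7, 22, 100]))    # -> 142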

[Go to top]

PIR-Tor

PIR-Tor: Scalable Anonymous Communication Using Private Information Retrieval (PDF)
by Prateek Mittal, Femi Olumofin, Carmela Troncoso, Nikita Borisov, and Ian Goldberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing anonymous communication systems like Tor do not scale well as they require all users to maintain up-to-date information about all available Tor relays in the system. Current proposals for scaling anonymous communication advocate a peer-to-peer (P2P) approach. While the P2P paradigm scales to millions of nodes, it provides new opportunities to compromise anonymity. In this paper, we step away from the P2P paradigm and advocate a client-server approach to scalable anonymity. We propose PIR-Tor, an architecture for the Tor network in which users obtain information about only a few onion routers using private information retrieval techniques. Obtaining information about only a few onion routers is the key to the scalability of our approach, while the use of private information retrieval techniques helps preserve client anonymity. The security of our architecture depends on the security of PIR schemes, which are well understood and relatively easy to analyze, as opposed to peer-to-peer designs that require analyzing extremely complex and dynamic systems. In particular, we demonstrate that reasonable parameters of our architecture provide equivalent security to that of the Tor network. Moreover, our experimental results show that the overhead of PIR-Tor is manageable even when the Tor network scales by two orders of magnitude
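
The underlying primitive is easy to illustrate in its simplest multi-server form; a textbook two-server XOR-based PIR sketch (illustrative, not PIR-Tor's exact scheme):

    # Sketch: two-server XOR-based PIR. Each non-colluding server sees only
    # a uniformly random index set, yet the client recovers block i.
    import secrets

    def xor_blocks(db, indices):
        out = bytes(len(db[0]))
        for idx in indices:
            out = bytes(a ^ b for a, b in zip(out, db[idx]))
        return out

    def pir_fetch(db, i):
        n = len(db)
        s1 = {j for j in range(n) if secrets.randbits(1)}   # random subset
        s2 = s1 ^ {i}                   # the same subset with i toggled
        a1 = xor_blocks(db, s1)         # computed by server 1
        a2 = xor_blocks(db, s2)         # computed by server 2
        return bytes(a ^ b for a, b in zip(a1, a2))   # XORs cancel to db[i]

    db = [b"relayA..", b"relayB..", b"relayC..", b"relayD.."]
    print(pir_fetch(db, 2))   # -> b"relayC.."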

[Go to top]

PKI

A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System (PDF)
by Matthias Wachs, Martin Schanzenbach, and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names
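
The secure delegation of authority that GNS borrows from SDSI can be illustrated with a toy resolver: each zone maps labels either to a record value or to another zone, and a name is resolved by walking delegations label by label. The data model below is an invented simplification for illustration, not the GNS record format:

    zones = {
        "alice-zone": {"bob": ("ZONE", "bob-zone")},
        "bob-zone": {"www": ("VALUE", "203.0.113.7")},
    }

    def resolve(name, zone="alice-zone"):
        # Labels are processed right to left, starting in the local zone.
        labels = name.split(".")
        while labels:
            kind, target = zones[zone][labels.pop()]
            if kind == "ZONE":
                zone = target          # follow the delegation
            elif labels:
                raise KeyError("cannot descend below a value record")
            else:
                return target          # terminal record
        return zone                    # name resolved to a zone itself

    print(resolve("www.bob"))  # -> 203.0.113.7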

[Go to top]

Trust-Rated Authentication for Domain-Structured Distributed Systems (PDF)
by Ralph Holz, Heiko Niedermayer, Peter Hauck, and Georg Carle.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an authentication scheme and new protocol for domain-based scenarios with inter-domain authentication. Our protocol is primarily intended for domain-structured Peer-to-Peer systems but is applicable for any domain scenario where clients from different domains wish to authenticate to each other. To this end, we make use of Trusted Third Parties in the form of Domain Authentication Servers in each domain. These act on behalf of their clients, resulting in a four-party protocol. If there is a secure channel between the Domain Authentication Servers, our protocol can provide secure authentication. To address the case where domains do not have a secure channel between them, we extend our scheme with the concept of trust-rating. Domain Authentication Servers signal security-relevant information to their clients (pre-existing secure channel or not, trust, ...). The clients evaluate this information to decide if it fits the security requirements of their application
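
The four-party structure can be sketched schematically as follows; all names and fields here are invented for illustration and are not the paper's protocol messages:

    def inter_domain_auth(client_a, das_a, das_b, client_b, das_channel_secure):
        # 1. client_a asks its own Domain Authentication Server (das_a)
        #    to authenticate it towards client_b in the other domain.
        # 2. das_a and das_b perform the server-to-server step; whether a
        #    pre-existing secure channel links them determines the rating.
        rating = "secure-channel" if das_channel_secure else "unrated-channel"
        # 3. Both servers signal the rating to their clients, which decide
        #    whether it meets the application's security requirements.
        return {"peer": client_b, "via": (das_a, das_b), "trust": rating}

    result = inter_domain_auth("alice@A", "das.A", "das.B", "bob@B", False)
    assert result["trust"] == "unrated-channel"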

[Go to top]

POSIX

Cryogenic: Enabling Power-Aware Applications on Linux (PDF)
by Alejandra Morales.
Master's Thesis, Technische Universität München, February 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As a means of reducing power consumption, hardware devices can enter sleep states that have low power consumption. Waking up from those states in order to return to work is typically a rather energy-intensive activity. Some existing applications have non-urgent tasks that currently force hardware to wake up needlessly or prevent it from going to sleep. It would be better if such non-urgent activities could be scheduled to execute when the respective devices are already active, to maximize the duration of sleep states. This requires cooperation between applications and the kernel in order to determine when the execution of a task will not be expensive in terms of power consumption. This work presents the design and implementation of Cryogenic, a POSIX-compatible API that enables clustering tasks based on the hardware activity state. Specifically, Cryogenic's API allows applications to defer their execution until other tasks use the device they want to use. As a result, two actions that contribute to reducing the device's energy consumption are achieved: reducing the number of hardware wake-ups and maximizing the idle periods. The energy measurements enacted at the end of this thesis demonstrate that, for the specific setup and conditions present during our experimentation, Cryogenic is capable of achieving savings between 1% and 10% for a USB WiFi device. Although we ideally target mobile platforms, Cryogenic has been developed by means of a new Linux module that integrates with the existing POSIX event loop system calls. This allows Cryogenic to be used on many different platforms as long as they use a GNU/Linux distribution as the main operating system. Evidence of this can be found in this thesis, where we demonstrate the power savings on a single-board computer
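
Since the thesis describes Cryogenic as integrating with the POSIX event-loop system calls, an application might block on a file descriptor that only becomes ready once the target device is active anyway. The sketch below imagines such usage; the device path and readiness semantics are assumptions for illustration, not Cryogenic's documented interface:

    import os
    import select

    CRYO_PATH = "/dev/cryogenic/wlan0"   # hypothetical device node

    def run_when_device_active(task, timeout_s):
        fd = os.open(CRYO_PATH, os.O_RDONLY)
        try:
            # select() returns either when the device wakes up for other
            # traffic or when the task's own deadline expires.
            ready, _, _ = select.select([fd], [], [], timeout_s)
            task(deferred=bool(ready))
        finally:
            os.close(fd)

    # Example: piggyback a non-urgent sync on the next WiFi wake-up,
    # but run it after 30 seconds at the latest.
    run_when_device_active(lambda deferred: print("sync, deferred =", deferred), 30.0)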

[Go to top]

Panic

An Approach for Home Routers to Securely Erase Sensitive Data (PDF)
by Nicolas Beneš.
Bachelor Thesis, Technische Universität München, October 2014. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Home routers are always-on low power embedded systems and part of the Internet infrastructure. In addition to the basic router functionality, they can be used to operate sensitive personal services, such as private web and email servers, secure peer-to-peer networking services like GNUnet and Tor, and encrypted network file system services. These services naturally involve cryptographic operations with the cleartext keys being stored in RAM. This makes router devices possible targets of physical attacks by home intruders. Attacks include interception of unprotected data on bus wires, alteration of firmware through exposed JTAG headers, or recovery of cryptographic keys through the cold boot attack. This thesis presents Panic!, a combination of open hardware design and free software to detect physical integrity attacks and to react by securely erasing cryptographic keys and other sensitive data from memory. To improve auditability and to allow cheap reproduction, the components of Panic! are kept simple in terms of conceptual design and lines of code. First, the motivation to use home routers for services besides routing and the need to protect their physical integrity is discussed. Second, the idea and functionality of the Panic! system is introduced and the high-level interactions between its components explained. Third, the software components to be run on the router are described. Fourth, the requirements of the measurement circuit are declared and a prototype is presented. Fifth, some characteristics of pressurized environments are discussed and the difficulties of finding adequate containments are explained. Finally, an outlook on tasks left for the future is given
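
The reaction step, erasing key material from RAM on a tamper signal, can be illustrated as follows. This is a sketch of the general technique only, not Panic!'s code; the delivery of the tamper event as a POSIX signal is an assumption for illustration:

    import ctypes
    import signal

    key = bytearray(b"\x13\x37" * 16)    # stand-in for a cleartext key in RAM

    def erase(buf: bytearray):
        # Overwrite the buffer in place with memset, so the bytes are
        # really cleared rather than left for garbage collection.
        addr = ctypes.addressof((ctypes.c_char * len(buf)).from_buffer(buf))
        ctypes.memset(addr, 0, len(buf))

    def on_tamper(signum, frame):
        erase(key)                       # sensitive data goes first
        raise SystemExit("tamper detected, keys erased")

    # A measurement circuit could deliver the event, e.g. GPIO -> SIGUSR1.
    signal.signal(signal.SIGUSR1, on_tamper)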

[Go to top]

Paris metro pricing

Internet pricing with a game theoretical approach: concepts and examples (