We are living in a connected world, surrounded by interwoven technical systems. Since these pervade ever more aspects of our everyday lives, a thorough understanding of their structure and dynamics is becoming increasingly important. However, rather than being blueprinted and constructed at the drawing board, many technical infrastructures - for example the Internet's global router network, the World Wide Web, large-scale Peer-to-Peer systems or the power grid - evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming ever more difficult. A number of failures have shown that complex structures can emerge unintentionally which resemble those observed in biological, physical and social systems.

In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies from the field of random and complex networks which are used in the study of natural, social and technical systems and deliver insights into their structure and dynamics. A particularly interesting finding is that the efficiency, dependability and adaptivity of natural systems can be traced back to rather simple local interactions between a large number of elements. We review a number of findings about the formation of complex structures and collective dynamics and investigate how these are applicable to the design and operation of large-scale networked computing systems. A particular focus of this dissertation is the application of principles and methods from the study of complex networks to distributed computing systems that are based on overlay networks.
Here we argue that the fact that the (virtual) connectivity in such systems is alterable and largely independent of physical limitations facilitates a design based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - as they occur frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, adapting the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. The proposed scheme thus makes it possible to make sound statements about the relation between the local behavior of individual nodes and large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities.

Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model of coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
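The membership protocol and the monitoring mechanism themselves are specified later in the thesis. As a rough, self-contained illustration of the two underlying ideas - not of the thesis's actual protocols - the sketch below builds a random graph with an adjustable power-law degree exponent via the standard configuration model, and then recovers the exponent from the degree samples with the classical maximum-likelihood (Hill-type) estimator; all names and parameters are illustrative:

```python
import math
import random

def configuration_model(degrees, rng):
    """Wire a degree sequence by pairing half-edges ("stubs") uniformly at
    random; self-loops and duplicate edges are simply dropped."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return {(min(u, v), max(u, v))
            for u, v in zip(stubs[::2], stubs[1::2]) if u != v}

def ml_exponent(samples, k_min):
    """Maximum-likelihood (Hill) estimate of a continuous power-law exponent:
    gamma_hat = 1 + n / sum(ln(k_i / k_min)) over samples k_i >= k_min."""
    ks = [k for k in samples if k >= k_min]
    return 1.0 + len(ks) / sum(math.log(k / k_min) for k in ks)

rng = random.Random(42)
gamma, k_min = 2.5, 2.0

# Continuous power-law samples via inverse-transform sampling;
# their integer parts serve as the target degrees of the overlay.
raw = [k_min * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
       for _ in range(20000)]
est = ml_exponent(raw, k_min)      # recovers gamma up to sampling noise

degs = [int(x) for x in raw]
if sum(degs) % 2:                  # stub pairing needs an even total
    degs[0] += 1
edges = configuration_model(degs, rng)
```

The configuration model is only a centralized stand-in for the distributed membership protocol, and the Hill estimator for the distributed, probabilistic monitoring scheme; both serve merely to make the notion of an "adjustable and observable degree distribution exponent" concrete.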
We further argue that - based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators - the proposed protocol makes it possible to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
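The heartbeat protocol discussed above is message-based; the continuous Kuramoto dynamics it builds on can be illustrated with a minimal, generic Euler-integration sketch (this is the textbook all-to-all model, not the thesis's protocol; all parameter values are illustrative):

```python
import cmath
import math
import random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the all-to-all Kuramoto model:
       dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    return [(theta[i] + dt * (omega[i] + K / n *
             sum(math.sin(theta[j] - theta[i]) for j in range(n))))
            % (2 * math.pi)
            for i in range(n)]

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]:
    r ~ 0 for incoherent phases, r ~ 1 for global synchrony."""
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))

rng = random.Random(0)
n = 50
theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]   # initial phases
omega = [rng.uniform(-0.5, 0.5) for _ in range(n)]        # natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=5.0, dt=0.05)
r = order_parameter(theta)
```

With the coupling strength K well above the critical value for this frequency spread, the order parameter r climbs towards 1, i.e. the oscillators phase-lock - the same qualitative transition the heartbeat protocol exploits.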

This work addresses the algorithmic tractability of hard combinatorial problems. Basically, we consider \NP-hard problems, for which no polynomial-time algorithm is known. Several algorithmic approaches already exist which deal with this dilemma. Among them we find (randomized) approximation algorithms and heuristics. Even though in practice they often work in reasonable time, they usually do not return an optimal solution. If we insist on optimality, there are only two methods which suffice for this purpose: exponential-time algorithms and parameterized algorithms.

In the first approach we seek to design algorithms consuming exponentially many steps that are more clever than some trivial algorithm (which simply enumerates all solution candidates). Typically, the naive enumerative approach yields an algorithm with run time $\Oh^*(2^n)$. So, the general task is to construct algorithms obeying a run time of the form $\Oh^*(c^n)$ where $c<2$.

The second approach considers an additional parameter $k$ besides the input size $n$. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization is to view $k$ as an upper (lower, resp.) bound on the solution size in the case of a minimization (maximization, resp.) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$, where $\beta$ is a constant and $f$ is independent of $n$. In principle this method aims to confine the combinatorial difficulty of the problem to the parameter $k$ (if possible). The basic hypothesis is that $k$ is small with respect to the overall input size.

In both fields a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way.
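A textbook instance of the branching technique in the parameterized setting - shown here purely to illustrate the general scheme, not as an algorithm from this thesis - is the $\Oh^*(2^k)$ algorithm for Vertex Cover: pick any uncovered edge and branch on which of its two endpoints joins the cover.

```python
def vertex_cover(edges, k):
    """Return True iff the graph given by `edges` (a set of 2-tuples)
    has a vertex cover of size <= k.

    Branching rule: pick an uncovered edge (u, v); any cover must contain
    u or v, so try both choices with the remaining budget k - 1.  Each
    branch decreases k, so the search tree has at most 2^k leaves."""
    if not edges:
        return True        # nothing left to cover
    if k == 0:
        return False       # budget exhausted but edges remain
    u, v = next(iter(edges))
    for w in (u, v):
        rest = {e for e in edges if w not in e}   # edges covered by w vanish
        if vertex_cover(rest, k - 1):
            return True
    return False

# Triangle a-b-c plus a pendant edge c-d: minimum cover {a, c} has size 2.
g = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
```

Here $k$ is exactly the standard parameterization mentioned above: an upper bound on the solution size, to which the entire exponential blow-up is confined.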
They repeatedly select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution, and another where it is excluded from it. In both cases, fixing this entity may fix further entities. If so, the number of solutions traversed is smaller than the whole solution space. The visited solutions can be arranged like a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of these search trees.

In the field of exponential-time algorithms, a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in this field, and points out what changes are necessary to apply it successfully. Further, exponential-time algorithms for hard problems to which Measure&Conquer is applied are presented. Another contribution is a formalization (and generalization) of the notion of a search tree; it is shown that for certain problems such a formalization is extremely useful.
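Measure&Conquer refines the run-time analysis of exactly this kind of branching algorithm: instead of counting vertices, one assigns degree-dependent weights and tracks how much total weight each branch destroys. The measure-based analysis itself happens on paper; the sketch below merely shows the shape of algorithm it is applied to - a generic branching algorithm for Maximum Independent Set (illustrative, not taken from this thesis):

```python
def delete(adj, vertices):
    """Adjacency dict with the given vertices and their incident edges removed."""
    return {u: nbrs - vertices for u, nbrs in adj.items() if u not in vertices}

def max_independent_set(adj):
    """Size of a maximum independent set of the graph `adj` (dict: vertex -> set
    of neighbours).

    Branch on a maximum-degree vertex v: either v is excluded (delete v),
    or v is included (delete v together with all its neighbours N(v)).
    A Measure&Conquer analysis would bound the resulting search tree by a
    weighted vertex count rather than by |V| itself."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))
    if not adj[v]:                       # isolated vertex: always take it
        return 1 + max_independent_set(delete(adj, {v}))
    exclude = max_independent_set(delete(adj, {v}))
    include = 1 + max_independent_set(delete(adj, {v} | adj[v]))
    return max(exclude, include)

# 5-cycle: a maximum independent set has size 2.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

The "include" branch destroys v plus its whole neighbourhood - the larger v's degree, the more progress it makes - which is precisely the asymmetry that a cleverly chosen measure exploits to prove run-time bounds below the trivial $\Oh^*(2^n)$.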