Filter
Year of publication
Document type
- Dissertation (26) (remove)
Full text available
- yes (26) (remove)
Keywords
- Algorithmus (3)
- Middleware (3)
- Abfrageverarbeitung (2)
- Ad-hoc-Netz (2)
- Mobile Computing (2)
- OBDD (2)
- OBDDs (2)
- Peer-to-Peer-Netz (2)
- Selbstorganisation (2)
- mobile computing (2)
- (general) boustrophedon (returning) finite automata (1)
- (general) jumping finite automata (1)
- (regular : regular) array grammars (1)
- (regulär : regulär) Array-Grammatiken (1)
- Algorithmische Lerntheorie (1)
- Amortisierte Laufzeitanalyse (1)
- Amortized run time analysis (1)
- Analyse (1)
- Ankunftsprozess (1)
- Applikationsunterstützung (1)
- Automata Theory (1)
- Automatentheorie (1)
- BMAP (1)
- Bilddatenverarbeitung (1)
- Client-server-Konzept (1)
- Cluster (1)
- Consistency (1)
- DICOM-image (1)
- Datenspeicherung (1)
- Datenstruktur (1)
- Datenverdichtung (1)
- Directed Graphs (1)
- Disambiguierung von Personennamen (1)
- Distributed Systems (1)
- Dokumentverarbeitung (1)
- E-Learning (1)
- Elektronische Bibliothek (1)
- Erweiterung (1)
- Eulerian trails (1)
- Exact Algorithms (1)
- Exakte Algorithmen (1)
- Exponental time algorithms (1)
- Exponentialzeit Algorithmen (1)
- Fallbasiertes Schließen (1)
- Flussdiagramm (1)
- Formal Verification (1)
- Formal languages (1)
- Gerichteter Graph (1)
- Graph Minors (1)
- Graph Rewriting (1)
- Graph Visualization (1)
- Graphen mit Eulerschen Pfaden (1)
- Graphvisualisierung (1)
- Hyperlink-Management (1)
- Hyperlink-Proposals (1)
- Immersion <Virtuelle Realität> (1)
- Information Visualization (1)
- Informationssystem (1)
- Informationsvisualisierung (1)
- Integrated Circuits (1)
- Internet (1)
- Internetdienst (1)
- Kollaboration <Informatik> (1)
- Kom (1)
- Kombinatorische Optimierung (1)
- Komplexe Netzwerke (1)
- Komplexe Systeme (1)
- Komplexität (1)
- Komponente <Software> (1)
- Konsistenz (1)
- Logischer Entwurf (1)
- MMVE (1)
- Markov Jump Process (1)
- Markov-Kette (1)
- Markov-Prozess (1)
- Mathematische Lerntheorie (1)
- Measure & Conquer (1)
- Metadaten (1)
- Minor <Graphentheorie> (1)
- Mobile Networks (1)
- Mobile Telekommunikation (1)
- Modellierung (1)
- Multicast Communication (1)
- Multicastingverfahren (1)
- NP-hartes Problem (1)
- Netzwerk (1)
- Netzwerksimulation (1)
- Nichtlineare Dynamik (1)
- Näherungsverfahren (1)
- Operations Research (1)
- Ordered Binary Decision Diagrams (1)
- Overlay Network (1)
- Overlay-Netz (1)
- Parameterisierte Algorithmen (1)
- Parameterized Algorithms (1)
- Parametrisierte Approximation (1)
- Peer-to-Peer Network (1)
- Periodic Queues (1)
- Personenname (1)
- Pipeline (1)
- Prozessor (1)
- Pufferspeicher (1)
- Queues (1)
- RPC (1)
- Radiologie (1)
- Rechnernetz (1)
- Reduktionssystem (1)
- Regular Expressions (1)
- Regulärer Ausdruck (1)
- Request-Prediction (1)
- Robustheit (1)
- Server (1)
- Software Visualization (1)
- Softwarevisualisierung (1)
- Speicherdirektzugriff (1)
- Standard ML (1)
- Statistical Mechanics of complex networks (1)
- Statistische Mechanik komplexer Netze (1)
- Store-And-Forward Network (1)
- Store-And-Forward Netzwerk (1)
- Streaming <Kommunikationstechnik> (1)
- Synchronisierung (1)
- Synergie (1)
- Syntaktische Analyse (1)
- Telekommunikationsnetz (1)
- Theoretische Informatik (1)
- Transitionssystem (1)
- Verifikation (1)
- Verteiltes System (1)
- Virtual Environment (1)
- Virtuelle Realität (1)
- Virtuelle Umgebung (1)
- Visualisierung (1)
- Visualization (1)
- Warteschlangentheorie (1)
- Wartesystem (1)
- Web-Applications (1)
- Website-Management (1)
- Weitverkehrsnetz (1)
- Workflow-Programm (1)
- XML (1)
- XOR Parity (1)
- Zufallsgraph (1)
- ad-hoc network (1)
- algorithm analysis (1)
- application support (1)
- archiving (1)
- cache behavior (1)
- clustering (1)
- complex networks (1)
- complex systems (1)
- complexity (1)
- compression (1)
- computational geometry (1)
- controlled queueing system (1)
- convex hull (1)
- digital library (1)
- divide and conquer (1)
- endliche Boustrophedon-Automaten (1)
- formal verification (1)
- grammatical inference (1)
- historical metadata (1)
- home pages (1)
- incremental algorithm (1)
- information retrieval (1)
- jumping endliche Automaten (1)
- memory distance (1)
- middleware (1)
- mobile Telekommunikation (1)
- mobile ad-hoc network (1)
- multicore (1)
- multihop Netzwerk (1)
- multihop network (1)
- network simulation (1)
- numerical analysis (1)
- optimal control (1)
- parameterised approximation (1)
- partitioning (1)
- person name disambiguation (1)
- processing (1)
- radiology (1)
- reordering (1)
- search engine (1)
- threshold (1)
- time complexity (1)
- web-based services (1)
- zurückkehrende (RFA) (1)
Institute
- Informatik (26) (remove)
In this thesis we present a systematic approach to the multithreaded implementation of divide-and-conquer and incremental algorithms. We introduce a framework for each of the two categories of algorithms. Both frameworks are intended to ease the handling of threads, to speed up the implementation of parallel algorithms, and to reduce computation time. Using the frameworks, we parallelize numerous example algorithms, in particular from the field of computational geometry, among them sorting algorithms, convex hull algorithms, and triangulations. The code accompanying this thesis is implemented in C++ using templated classes and is based on the LEDA library.
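The thesis code itself is C++ on top of LEDA and is not reproduced here; as an illustration of the divide-and-conquer threading pattern such a framework automates, here is a minimal Python sketch. The depth budget that limits thread creation is an assumption of this sketch, not taken from the thesis:

```python
import threading

def parallel_mergesort(data, depth=2):
    """Divide-and-conquer mergesort; the two halves are sorted in
    parallel threads until the recursion depth budget is used up
    (the budget is a choice of this sketch, not of the thesis)."""
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    if depth > 0:
        result = {}
        def run(key, part):
            result[key] = parallel_mergesort(part, depth - 1)
        left = threading.Thread(target=run, args=("l", data[:mid]))
        right = threading.Thread(target=run, args=("r", data[mid:]))
        left.start(); right.start()
        left.join(); right.join()
        a, b = result["l"], result["r"]
    else:
        a = parallel_mergesort(data[:mid], 0)
        b = parallel_mergesort(data[mid:], 0)
    # conquer step: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    return merged + a[i:] + b[j:]
```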
This thesis presents a middleware model for applications in mobile ad-hoc networks. The primary focus is on networks formed from mobile end-user devices such as PDAs, Pocket PCs, or smartphones that communicate via a wireless technology such as WLAN or Bluetooth. To identify the requirements for a middleware in these networks, applications from a university environment were examined. Since communication in wireless networks cannot always be as transparent as in wired infrastructure networks, the middleware contains a communication framework that enables the adaptation and also the combination of communication mechanisms. In addition to the direct delivery of messages, mechanisms for the successive forwarding of messages, so-called store-and-forward communication, are offered. Here the forwarding of information in passing (en passant) is of particular interest. The components of the middleware and the applications built on it are modeled as interacting mobile objects. Owing to the dynamics of mobile ad-hoc networks, these objects interact in an event-oriented fashion. The interaction as well as the adaptation and choice of communication mechanisms are based on information about the immediate surroundings, application knowledge about the network, and experience from the past. This information, which objects can disseminate and collect, is referred to as traces. Since giving up the transparency of the middleware components entails a higher development effort, the object interaction is realized in object representatives. Object representatives enable communication with remote objects, the local management of information about these objects, and finally the representation of objects without a real object instance.
Objects without an object instance are used to group devices with certain properties, for example objects at a specific geographic location. Knowledge about geographic locations and their identification is therefore an important aspect of the model as well. Based on the application studies, a middleware prototype evolved step by step; it is integrated into a simulation environment for mobile ad-hoc networks. Besides the pure simulation of mobile ad-hoc networks and the application under study, the environment also allows the integration of real devices and the execution of the applications on real devices. Thus, in addition to a simulative study, a practical evaluation of the applications could be carried out in field trials.
Even though time is in most cases a good metric for the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature a built-in memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly lower than for algorithms whose operations mostly have to load data from main memory. The topic of this thesis is a new metric for the cost of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) in order to improve running times. Further, we show the relation between worst-case time, memory distance, and space, and sketch how to define "the usual" memory distance complexity classes.
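The thesis's memory distance is its own metric and is not reproduced here; the closely related, classic notion of reuse distance (the number of distinct addresses touched between two consecutive accesses to the same address) can be sketched as follows, purely as an illustrative stand-in:

```python
def reuse_distances(accesses):
    """For each access, count the distinct addresses touched since the
    previous access to the same address (None on first use).  Small
    distances indicate a cache-friendly access pattern."""
    last_seen = {}
    distances = []
    for i, addr in enumerate(accesses):
        if addr in last_seen:
            between = set(accesses[last_seen[addr] + 1:i])
            distances.append(len(between))
        else:
            distances.append(None)   # first access: no reuse yet
        last_seen[addr] = i
    return distances
```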
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects that end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, since the problem remains NP-hard even when k is restricted to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with any constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances in which the pairwise distance on the objects satisfies the triangle inequality. For this restriction, to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in Euclidean metric space.
Considering the computational hardness caused by high dimensionality of the input for other related problems (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, NP-hardness remains even under restriction to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
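As an illustration of the problem setup only (not of the thesis's approximation algorithms), the following sketch evaluates one hypothetical variant of (||.||,f)-k-cluster, taking f as the sum of pairwise distances within a set and ||.|| as the sum over sets:

```python
from itertools import combinations

def kcluster_cost(partition, dist, k):
    """Evaluate a candidate k-cluster: every set of the partition must
    contain at least k objects.  The cost f of one set is taken as the
    sum of its pairwise distances, and ||.|| aggregates sets by summing
    (one hypothetical choice among the nine variants).  Returns None
    when the cardinality bound is violated."""
    if any(len(part) < k for part in partition):
        return None
    return sum(dist(a, b)
               for part in partition
               for a, b in combinations(part, 2))
```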
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data which would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata is digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many studies focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases these studies concentrate on the most recent metadata of a collection: for example, they look for errors in the collection as it exists on day X. This is a reasonable approach for many applications. However, to answer questions such as when an error was added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library.
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases, which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches to error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that the properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time and show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant number of error corrections. Based on these data, we can draw conclusions on why users report a defective entry in DBLP.
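A minimal sketch of the change-identification step such a system starts from; the grouping into semantically related blocks and the defect classification are beyond this illustration, and the field names and values are hypothetical:

```python
def record_changes(old, new):
    """Compare two versions of a metadata record (field -> value) and
    group the differences into added, removed, and modified fields."""
    changes = {"added": {}, "removed": {}, "modified": {}}
    for field in set(old) | set(new):
        if field not in old:
            changes["added"][field] = new[field]
        elif field not in new:
            changes["removed"][field] = old[field]
        elif old[field] != new[field]:
            changes["modified"][field] = (old[field], new[field])
    return changes
```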
Hardware bugs can be extremely expensive in financial terms. Because microprocessors and integrated circuits have become omnipresent in our daily lives, and because of their continuously growing complexity, research is driven towards methods and tools that provide higher reliability of hardware designs and their implementations. Over the last decade, Ordered Binary Decision Diagrams (OBDDs) have proven to serve well as a data structure for the representation of combinational and sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. However, due to Shannon's counting argument, OBDDs cannot always guarantee a concise representation of a given design. In this thesis, Parity Ordered Binary Decision Diagrams (Parity-OBDDs) are presented, a proper extension of OBDDs: in addition to the regular branching nodes of an OBDD, functional nodes representing a parity operation are integrated into the data structure. Parity-OBDDs are more powerful than OBDDs, but they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis of the factors that influence the size of the Parity-OBDD representation paves the way for the development of heuristic algorithms for their minimization. The results of these analyses as well as the efficiency of the data structure are also supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p Decision Diagrams (Mod-p-DDs) for the representation of functions that are defined over an arbitrary finite domain.
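Parity-OBDDs themselves are not sketched here, but the plain reduced OBDD they extend can be: the sketch below merges structurally identical subfunctions (the unique-table rule), drops tests whose two cofactors are equal (the elimination rule), and counts the nodes of the result. On the three-variable parity function it yields the expected linear-size diagram:

```python
def obdd_size(truth_table):
    """Number of nodes (including the two terminals) of the reduced
    OBDD of the boolean function given by truth_table (a tuple of
    2**n bits; the variable order is fixed, x1 splitting at the top)."""
    branch_nodes = set()   # distinct subfunctions that need a test node
    terminals = set()
    def walk(tt):
        if len(tt) == 1:
            terminals.add(tt)
            return
        half = len(tt) // 2
        lo, hi = tt[:half], tt[half:]
        if lo == hi:
            walk(lo)        # elimination rule: variable irrelevant here
        elif tt not in branch_nodes:
            branch_nodes.add(tt)   # unique-table rule: merge duplicates
            walk(lo)
            walk(hi)
    walk(tuple(truth_table))
    return len(branch_nodes) + len(terminals)
```

For the parity function x1 xor x2 xor x3 the diagram has 2n + 1 = 7 nodes, matching the known linear-size OBDD representation of parity.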
Today, complex circuit designs are widely used in computers, multimedia applications, and communication devices, and their use is still increasing. At the same time, due to Moore's Law, we do not expect to see an end to the growth of the complexity of digital circuits. The decreasing ability of common validation techniques -- like simulation -- to assure the correctness of a circuit design enlarges the need for formal verification techniques. Formal verification delivers a mathematical proof that a given implementation of a design fulfills its specification. One of the basic and, in recent years, most widely used data structures in formal verification is the Ordered Binary Decision Diagram (OBDD), introduced by R. Bryant in 1986. The topic of this thesis is the integration of structural high-level information into the OBDD-based formal verification of sequential systems. The work consists of three major parts, covering different layers of formal verification applications. At the application layer, an assertion checking methodology is presented which is integrated into the verification flow of the high-level design and verification tool Protocol Compiler. At the algorithmic layer, new approaches for partitioning the transition relations of complex finite state machines are introduced which significantly improve the performance of OBDD-based sequential verification. Finally, at the data structure level, dynamic variable reordering techniques are described that drastically reduce the time required for reordering without a trade-off in OBDD size. Overall, this work demonstrates how a tighter integration of applications by using structural information can significantly improve the efficiency of formal verification applications in an industrial setting.
We are living in a connected world, surrounded by interwoven technical systems. Since they pervade more and more aspects of our everyday lives, a thorough understanding of the structure and dynamics of these systems is becoming increasingly important. However, rather than being blueprinted and constructed at the drawing board, many technical infrastructures - for example the Internet's global router network, the World Wide Web, large-scale Peer-to-Peer systems, or the power grid - evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming increasingly complicated. A number of failures have shown that complex structures can emerge unintentionally which resemble those observed in biological, physical, and social systems. In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies stemming from the field of random and complex networks which are being used for the study of natural, social, and technical systems, thus delivering insights into their structure and dynamics. A particularly interesting finding is the fact that the efficiency, dependability, and adaptivity of natural systems can be traced back to rather simple local interactions between a large number of elements. We review a number of interesting findings about the formation of complex structures and collective dynamics and investigate how these are applicable to the design and operation of large-scale networked computing systems. A particular focus of this dissertation is the application of principles and methods stemming from the study of complex networks to distributed computing systems that are based on overlay networks.
Here we argue that, because the (virtual) connectivity in such systems is alterable and largely independent of physical limitations, a design becomes possible that is based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - which occur frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, the adaptation of the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. As such, the proposed scheme makes it possible to make sound statements about the relation between the local behavior of individual nodes and large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities. Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model of coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
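The membership protocol with its adjustable exponent is specific to the dissertation and not reproduced here; as a hedged baseline illustration, the classic preferential-attachment growth process that yields a scale-free network can be sketched as:

```python
import random

def scale_free_overlay(n, m, seed=0):
    """Grow a network to n nodes: start from a clique of m + 1 nodes,
    then let each joining node attach to m distinct existing nodes
    chosen with probability proportional to their degree (classic
    preferential attachment; the exponent is fixed here, unlike in the
    dissertation's protocol).  Returns the edge set as (small, large)
    node-id pairs."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)}
    bag = [v for e in edges for v in e]   # node repeated once per degree
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:            # degree-proportional sampling
            chosen.add(rng.choice(bag))
        for tgt in chosen:
            edges.add((tgt, new))
            bag += [tgt, new]
    return edges
```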
We further argue that, based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators, the proposed protocol makes it possible to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
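The heartbeat protocol itself is message-based; purely as an illustration of the underlying Kuramoto dynamics, here is a direct Euler integration on a fully connected topology (all parameter values are arbitrary choices of this sketch):

```python
import cmath
import math
import random

def kuramoto(n=20, coupling=2.0, steps=2000, dt=0.01, spread=0.0, seed=1):
    """Euler-integrate the Kuramoto model
        dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    on a complete graph and return the final order parameter
    r = |mean(exp(i*theta))|; r close to 1 means global synchronization."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0, spread) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            drive = sum(math.sin(theta[j] - theta[i]) for j in range(n))
            new.append(theta[i] + dt * (omega[i] + coupling / n * drive))
        theta = new
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)
```

With identical natural frequencies (spread = 0) and positive coupling, the oscillators lock and the order parameter approaches 1.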
This thesis centers on formal tree languages and on their learnability by algorithmic methods in abstractions of several learning settings. After a general introduction, we present a survey of relevant definitions for the formal tree concept as well as special cases (strings) and refinements (multi-dimensional trees) thereof. In Chapter 3 we discuss the theoretical foundations of algorithmic learning in a type of setting of particular interest in the area of Grammatical Inference, where the task consists in deriving a correct formal description for an unknown target language from various information sources (queries and/or finite samples) in a polynomial number of steps. We develop a parameterized meta-algorithm that incorporates several prominent learning algorithms from the literature in order to highlight the basic routines which, regardless of the nature of the information sources, have to be run through by all those algorithms alike. In this framework, the intended target descriptions are deterministic finite-state tree automata. We discuss the limited transferability of this approach to another class of descriptions, residual finite-state tree automata, for which we propose several learning algorithms as well. The class learnable by these techniques corresponds to the class of regular tree languages. In Chapter 4 we outline a recent range of attempts in Grammatical Inference to extend the learnable language classes beyond regularity and even beyond context-freeness by techniques based on syntactic observations, which can be subsumed under the term 'distributional learning', and we describe learning algorithms in several settings for the tree case taking this approach. We conclude with some general reflections on the notion of learning from structural information.
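As an illustration of the deterministic finite-state tree automata that serve as target descriptions, a minimal bottom-up run; the example automaton (and/or trees over 0/1 literals, with the reached state equal to the truth value) is hypothetical:

```python
def run_dfta(tree, leaf_states, transitions):
    """Run a deterministic bottom-up finite tree automaton: a leaf
    symbol gets its state from leaf_states, and an inner node labelled
    f whose children reached states (q1, ..., qk) moves to
    transitions[(f, (q1, ..., qk))]."""
    if isinstance(tree, str):                  # leaf
        return leaf_states[tree]
    label, children = tree
    qs = tuple(run_dfta(c, leaf_states, transitions) for c in children)
    return transitions[(label, qs)]

# hypothetical example: boolean expression trees over and/or
leaf_states = {"0": "F", "1": "T"}
transitions = {("and", (a, b)): "T" if (a, b) == ("T", "T") else "F"
               for a in "TF" for b in "TF"}
transitions.update({("or", (a, b)): "T" if "T" in (a, b) else "F"
                    for a in "TF" for b in "TF"})
```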
This thesis presents a unified workbench for the development of mobile applications in multihop ad-hoc networks. The workbench consists of three building blocks: a simulator for such networks, a hybrid emulation environment for mobile multihop ad-hoc networks, and an execution platform for applications on mobile devices. Applications can be fully implemented and evaluated within the simulator, and the resulting code can be deployed unchanged both in the emulation environment and on the execution platform for mobile devices. In the accompanying three-stage development process, the application is implemented and tested in the simulator before it is further evaluated and optimized, together with a graphical user interface, in the hybrid emulation environment. Finally, it is tried out on mobile devices in field trials. By restricting the simulation to the topological aspects of the ad-hoc network and by using a special mobility model abstraction that allows a particularly efficient computation of the movement and connectivity data of the devices, the workbench can efficiently simulate several thousand to ten thousand mobile devices. These data can also be precomputed and reused. Simulations can be visualized in detail in real time, with the type of visualization and the output format being user-definable. The workbench and the software development process for applications in mobile multihop ad-hoc networks are put to the test in a case study, which reports on the experience gained in implementing a middleware for ad-hoc applications and in developing a self-organizing auction application.
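The thesis does not specify its topological abstraction in this summary; as a hedged sketch, a unit-disk style connectivity computation of the kind such a simulator might derive from precomputed device positions:

```python
import math

def connectivity(positions, radio_range):
    """Topological abstraction of an ad-hoc network: two devices are
    connected iff their Euclidean distance is at most the radio range
    (a unit-disk assumption of this sketch).  Returns the adjacency as
    a dict mapping each device to the set of its neighbours."""
    nodes = list(positions)
    adj = {v: set() for v in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            (xa, ya), (xb, yb) = positions[a], positions[b]
            if math.hypot(xa - xb, ya - yb) <= radio_range:
                adj[a].add(b)
                adj[b].add(a)
    return adj
```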