Virtual environments have enjoyed great popularity in recent years and show continuously growing user numbers, so the already large user community is expected to keep growing. This growth, however, poses a challenge for the developers and operators of virtual environments. Most virtual environments that are successful today are based on a client-server infrastructure in which the server is responsible for managing the entire environment. Under simultaneous use by many users, however, server-based environments suffer from massive scalability and load-distribution problems. These problems mean, for example, that users must accept waiting times when logging in, or, even after a successful login, may not enter "overcrowded" virtual regions. This thesis therefore investigates whether the scalability and load distribution of virtual environments can be improved by using alternative infrastructure approaches; in addition, measures for further relieving the basic infrastructure are developed. The improvement is to be achieved, on the one hand, by replacing the single server with a federation of servers (a peer-to-peer system) acting as a kind of management layer in which each node manages only a specific part of the virtual environment. The desired load distribution is achieved by splitting the areas of responsibility among the nodes of the management layer. At the same time, however, constructing and maintaining such a management layer incurs additional overhead. This thesis therefore examines the advantages and disadvantages of switching to decentralized infrastructure approaches. On the other hand, a noticeable relief of the management layer, and thus an improvement in scalability and load distribution, is sought by transferring some management tasks to the clients, namely group communication and consistency handling. To this end, this thesis investigates to what extent clients located within the same virtual region can themselves handle group communication over a multicast infrastructure, and to what extent they can be involved in maintaining the consistency of a shared state and of shared objects. Client-based consistency handling must pay particular attention to the strict interactivity and consistency requirements that users place on virtual environments. For this purpose, a new consistency model is defined which enforces compliance with the strict interactivity requirements and can guarantee consistency with a certain probability.
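To illustrate the splitting of responsibility areas, the following minimal C++ sketch (all names are hypothetical and not taken from the thesis) maps region identifiers to nodes of the management layer by hashing, so that each peer manages only the regions that hash onto it:

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical sketch: each node of the peer-to-peer management layer
    // is responsible only for the regions whose IDs hash onto it.
    struct ManagementNode { std::string address; };

    class ManagementLayer {
        std::vector<ManagementNode> nodes;
    public:
        explicit ManagementLayer(std::vector<ManagementNode> n) : nodes(std::move(n)) {}
        const ManagementNode& responsibleFor(std::uint64_t regionId) const {
            return nodes[std::hash<std::uint64_t>{}(regionId) % nodes.size()];
        }
    };

    int main() {
        ManagementLayer layer({{"peer-a:4000"}, {"peer-b:4000"}, {"peer-c:4000"}});
        for (std::uint64_t region = 1; region <= 4; ++region)
            std::cout << "region " << region << " -> "
                      << layer.responsibleFor(region).address << '\n';
    }

A real system would use consistent hashing or a similar scheme so that regions move minimally when peers join or leave; the modulo mapping is only the simplest possible illustration of the idea.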
In this thesis we present a systematic approach to the multithreaded implementation of divide-and-conquer and incremental algorithms. For each of these two categories of algorithms we introduce a framework. Both frameworks are intended to simplify the handling of threads, to speed up the implementation of parallel algorithms, and to reduce computation time. Using the frameworks, we parallelize numerous algorithms by way of example, in particular from the field of computational geometry, among them sorting algorithms, convex-hull algorithms, and triangulations. The program code for this thesis is implemented in C++ using templated classes and is based on the LEDA library.
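The thesis's frameworks themselves build on LEDA and are not reproduced here; the following self-contained C++ sketch only illustrates the underlying divide-and-conquer threading pattern on the example of sorting, with a cutoff so that small subproblems are handled sequentially:

    #include <algorithm>
    #include <future>
    #include <iostream>
    #include <vector>

    // Illustrative divide-and-conquer parallelization (not the thesis
    // framework): sort the left half in a separate thread, the right half
    // in the current one, then merge the two sorted halves.
    template <typename It>
    void parallel_sort(It first, It last, std::size_t cutoff = 1 << 14) {
        auto n = static_cast<std::size_t>(last - first);
        if (n <= cutoff) { std::sort(first, last); return; }   // conquer directly
        It mid = first + n / 2;                                // divide
        auto left = std::async(std::launch::async,
                               parallel_sort<It>, first, mid, cutoff);
        parallel_sort(mid, last, cutoff);
        left.get();                                            // join the helper thread
        std::inplace_merge(first, mid, last);                  // combine
    }

    int main() {
        std::vector<int> v = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        parallel_sort(v.begin(), v.end(), 4);   // tiny cutoff to force threading
        for (int x : v) std::cout << x << ' ';
        std::cout << '\n';
    }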
Automata theory is the study of abstract machines. It is a branch of theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis introduces several types of automata and studies the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider several formal languages associated with graphs that have Eulerian trails, and we place within the Chomsky hierarchy several languages that combine further properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature; we also characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (i.e., pictures) in a boustrophedon mode, as well as returning finite automata, which also read the input line after line but, unlike boustrophedon finite automata, do not alternate the reading direction, i.e., always read from left to right. We establish close relationships with the well-established class of regular matrix (array) languages and sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata, which read with different scanning strategies. We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture-processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, as well as variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we provide further directions of research with respect to the studies carried out in each of the chapters.
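As a concrete illustration of the jumping mode (a naive sketch, not a construction from the thesis), the following C++ fragment decides acceptance by trying, in each step, every distinct unconsumed symbol; this corresponds to the head jumping to an arbitrary position of the remaining input, so acceptance is independent of the order of the input word:

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    // Naive acceptance test for a jumping finite automaton: we search over
    // pairs (current state, multiset of unconsumed symbols). Exponential
    // in general; for illustration only.
    struct JFA {
        std::map<std::pair<int, char>, std::vector<int>> delta; // transitions
        int start = 0;
        std::set<int> accepting;

        bool run(int state, std::multiset<char>& rest) const {
            if (rest.empty()) return accepting.count(state) > 0;
            for (auto it = rest.begin(); it != rest.end(); it = rest.upper_bound(*it)) {
                char c = *it;                        // jump to some occurrence of c
                auto tr = delta.find({state, c});
                if (tr == delta.end()) continue;
                std::multiset<char> next(rest);
                next.erase(next.find(c));            // consume the symbol
                for (int q : tr->second)
                    if (run(q, next)) return true;
            }
            return false;
        }
        bool accepts(const std::string& w) const {
            std::multiset<char> rest(w.begin(), w.end());
            return run(start, rest);
        }
    };

    int main() {
        // Accepts exactly the words with equally many a's and b's, in any order.
        JFA m{{{{0, 'a'}, {1}}, {{1, 'b'}, {0}}}, 0, {0}};
        std::cout << m.accepts("abba") << ' ' << m.accepts("aab") << '\n'; // 1 0
    }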
Digital libraries have become a central aspect of our lives. They provide immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata, the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases these studies concentrate on the most recent metadata of a collection, for example, looking for errors in the collection as it exists on day X. This is a reasonable approach for many applications. However, to answer questions such as when errors were introduced into the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information we develop a taxonomy to describe available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata that corrected a defect in the collection. These corrections represent the accumulated effort to ensure the data quality of a digital library. In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases, which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches to error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person-name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that the properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples. First, we describe the development of the DBLP collection over a period of 15 years; specifically, we study how the coverage of different computer-science subfields changed over time. We show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant number of error corrections. Based on these data we can draw conclusions about why users report a defective entry in DBLP.
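As a minimal sketch of the first step of such a system (with a hypothetical field layout, not DBLP's actual schema), the following C++ fragment computes the field-level changes between two versions of a metadata record; grouping such raw changes into semantically related blocks would build on output of this kind:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // A metadata record as a field -> value map (hypothetical layout).
    using Record = std::map<std::string, std::string>;

    struct FieldChange { std::string field, oldValue, newValue; };

    // Field-level diff of two record versions: removed, modified, added.
    std::vector<FieldChange> diff(const Record& before, const Record& after) {
        std::vector<FieldChange> changes;
        for (const auto& [field, oldVal] : before) {
            auto it = after.find(field);
            if (it == after.end())
                changes.push_back({field, oldVal, ""});          // field removed
            else if (it->second != oldVal)
                changes.push_back({field, oldVal, it->second});  // field modified
        }
        for (const auto& [field, newVal] : after)
            if (!before.count(field))
                changes.push_back({field, "", newVal});          // field added
        return changes;
    }

    int main() {
        Record v1 = {{"author", "J. Smith"}, {"title", "On Metadata"}};
        Record v2 = {{"author", "John Smith"}, {"title", "On Metadata"},
                     {"year", "2015"}};
        for (const auto& c : diff(v1, v2))
            std::cout << c.field << ": '" << c.oldValue << "' -> '"
                      << c.newValue << "'\n";
    }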
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster that minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost, where the cost is given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the concept of parameterised approximation in mind, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, as the problem remains NP-hard even when k is restricted to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with a constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction, which we informally refer to as metric instances, constant-factor approximation algorithms are presented for eight of the nine variants of (||.||,f)-k-cluster. While two of these algorithms achieve the provably best approximation ratio (assuming P != NP), the others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in the Euclidean metric space. Considering that for other related problems computational hardness is caused by high dimensionality of the input (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster; remaining NP-hardness for the restriction to small constant dimensionality, however, refutes this hypothesis. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without the restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster efficient and reasonable solutions are also possible for non-metric instances.
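In LaTeX notation, the general problem can be sketched as follows; the concrete definitions of f and ||.|| are among the three choices each studied in the thesis, and the two examples for f given below are plausible illustrations rather than the thesis's exact definitions:

    \min_{\substack{S_1 \cup \dots \cup S_m = V \\ |S_i| \ge k \text{ for all } i}}
        \bigl\lVert \bigl( f(S_1), \dots, f(S_m) \bigr) \bigr\rVert

where the minimum ranges over all partitions of the object set $V$, $d(u,v)$ denotes the given pairwise difference, $f$ could for instance be $f(S) = \sum_{\{u,v\} \subseteq S} d(u,v)$ (sum of intra-cluster differences) or $f(S) = \max_{u,v \in S} d(u,v)$ (cluster diameter), and $\lVert \cdot \rVert$ aggregates the per-cluster costs, e.g. as a sum or a maximum over the clusters.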
Even though time is in most cases a good metric for measuring the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from main memory. The topic of this thesis is a new metric for the cost of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) in order to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory-distance complexity classes.
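The effect that memory distance abstracts can be reproduced with a few lines of C++ (an illustration of the cache effect described above, not an implementation of the memory-distance metric itself): both loops below perform the same n*n additions, yet the sequential, cache-friendly traversal is typically several times faster than the strided one:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 4096;
        std::vector<int> a(n * n, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)        // row-major: walks memory
            for (std::size_t j = 0; j < n; ++j)    // sequentially, stays in cache
                sum += a[i * n + j];
        auto t1 = std::chrono::steady_clock::now();
        for (std::size_t j = 0; j < n; ++j)        // column-major: strides through
            for (std::size_t i = 0; i < n; ++i)    // memory, causes cache misses
                sum += a[i * n + j];
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("row-major    %.1f ms\ncolumn-major %.1f ms\n(sum=%lld)\n",
                    ms(t1 - t0).count(), ms(t2 - t1).count(), sum);
    }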