This work addresses the algorithmic tractability of hard combinatorial problems; specifically, we consider \NP-hard problems, for which no polynomial-time algorithm is known. Several algorithmic approaches deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice they often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, essentially two methods remain: exponential time algorithms and parameterized algorithms. In the first approach we seek to design algorithms that, while still taking exponentially many steps, are more clever than the trivial algorithm which simply enumerates all solution candidates. Typically, the naive enumerative approach yields an algorithm with run time $\Oh^*(2^n)$, so the general task is to construct algorithms obeying a run time of the form $\Oh^*(c^n)$ with $c<2$. The second approach considers, besides the input size $n$, an additional parameter $k$. This parameter should provide more information about the problem and capture a typical characteristic of it. The standard parameterization takes $k$ as an upper (resp. lower) bound on the solution size of a minimization (resp. maximization) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$, where $\beta$ is a constant and $f$ is independent of $n$. In principle this method aims to confine the combinatorial difficulty of the problem to the parameter $k$ (if possible); the basic hypothesis is that $k$ is small with respect to the overall input size. In both fields a frequent standard technique is the design of branching algorithms, which solve the problem by traversing the solution space in a clever way.
They frequently select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution, and another where it is excluded from it. In either case, fixing this entity may force other entities to be fixed as well; if so, the number of solutions traversed is smaller than the whole solution space. The visited solutions can be arranged like a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of these search trees. In the field of exponential time algorithms a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where this technique can be used in the parameterized setting and points out which adaptations are necessary to apply it successfully. Further, exponential time algorithms for hard problems to which Measure&Conquer is applied are presented. Another aspect is that a formalization (and generalization) of the notion of a search tree is given; it is shown that for certain problems such a formalization is extremely useful.
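The branching scheme described above can be illustrated with the textbook example of Vertex Cover: for any uncovered edge (u, v), every cover must contain u or v, which yields two subproblems and a search tree of size at most 2^k. The following Python sketch is purely illustrative and not code from the thesis:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by its edge list has a vertex
    cover of size <= k.  Branching: pick an uncovered edge (u, v);
    either u or v must be in any cover, giving two subproblems."""
    if not edges:
        return True           # all edges covered
    if k == 0:
        return False          # edges remain but budget exhausted
    u, v = edges[0]
    # Branch 1: put u into the cover; all edges incident to u vanish.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v into the cover.
    rest_v = [e for e in edges if v not in e]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

Fixing one endpoint removes several edges at once, so the traversed part of the solution space is usually much smaller than the full enumeration; Measure&Conquer refines exactly this kind of analysis by measuring such side effects.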
Spatial Queues
(2000)
In the present thesis, a theoretical framework for the analysis of spatial queues is developed. Spatial queues are a generalization of the classical concept of queues in that they provide the possibility of assigning properties to the users. These properties may influence the queueing process, but may also be of interest in themselves. As a field of application, mobile communication networks are modeled by spatial queues in order to demonstrate the advantage of including user properties in the queueing model. In this application, the property of main interest is the user's position in the network. After a short introduction, the second chapter contains an examination of the class of Markov-additive jump processes, including expressions for the transition probabilities and the expectation as well as laws of large numbers. Chapter 3 contains the definition and analysis of the central concept of spatial Markovian arrival processes (SMAPs for short) as a special case of Markov-additive jump processes, but also as a natural generalization of the well-known concept of BMAPs. In chapters 4 and 5, SMAPs serve as arrival streams for the analyzed periodic SMAP/M/c/c and SMAP/G/infinity queues, respectively. These types of queues find application as models or planning tools for mobile communication networks. The analysis of these queues involves new methods, such that new results are obtained even for the special case of BMAP inputs (i.e., non-spatial queues). In chapter 6, a procedure for statistical parameter estimation is proposed along with its numerical results. The thesis is concluded by an appendix which collects necessary results from the theories of Markov jump processes and stochastic point fields; for special classes of Markov jump processes, new results have been obtained as well.
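The SMAP/M/c/c queue generalizes the classical Erlang loss system. For its non-spatial M/M/c/c special case the blocking probability is given by the Erlang B formula, which the following sketch computes via the standard stable recursion (illustrative only; the thesis itself treats the far more general spatial case):

```python
def erlang_b(c, a):
    """Blocking probability of an M/M/c/c loss system with c servers
    and offered load a = lambda/mu, via the stable Erlang B recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b
```

For example, with one server and offered load 1, half of all arrivals are blocked; adding a second server reduces the blocking probability to 0.2.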
The visualization of relational data is at the heart of information visualization. The prevalence of visual representations for this kind of data is based on many real-world examples spread over many application domains: protein-protein interaction networks in the field of bioinformatics, hyperlinked documents in the World Wide Web, call graphs in software systems, or co-author networks are just four instances of a rich source of relational datasets. The most common visual metaphor for this kind of data is definitely the node-link approach, which typically suffers from visual clutter caused by many edge crossings. Many sophisticated algorithms have been developed to lay out a graph efficiently and with respect to a list of aesthetic graph drawing criteria. Relations between objects normally change over time; visualizing these dynamics poses an additional challenge for graph visualization researchers. Applying the layout algorithms for static graphs to the intermediate states of dynamic graphs is one strategy to compute layouts for an animated graph sequence that shows the dynamics. The major drawback of this approach is the high cognitive effort required of a viewer of the animation to preserve the mental map. To tackle this problem, a sophisticated layout algorithm has to inspect the whole graph sequence and compute a layout with as few changes as possible between subsequent graphs. The main contribution and ultimate goal of this thesis is the visualization of dynamic compound weighted multi-directed graphs as a static image, aiming at visual clutter reduction and mental map preservation. To achieve this goal, we use a radial space-filling visual metaphor to represent the dynamics in relational data. As a side effect, the obtained pictures are aesthetically appealing. In this thesis we first describe static graph visualizations for rule sets obtained by extracting knowledge from software archives under version control.
In a different work we apply animated node-link diagrams to code-developer relationships to show the dynamics in software systems. An underestimated visualization paradigm is the radial representation of data. Though this kind of representation has a long history reaching back to centuries-old statistical graphics, little effort has been made to fully explore the benefits of this paradigm. We evaluated a Cartesian and a radial counterpart of a visualization technique for visually encoding transaction sequences and dynamic compound digraphs with both an eye-tracking study and an online study. We found some interesting phenomena, apart from the fact that even laymen in graph theory can understand the novel approach in a short time and apply it to datasets. The thesis is concluded by an aesthetic dimensions framework for dynamic graph drawing, future work, and currently open issues.
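As a minimal illustration of the radial metaphor, the angle around the center can encode node identity, leaving the radius free to encode another dimension such as time. The following sketch is not the thesis's actual space-filling technique; it merely places nodes evenly on a circle:

```python
import math

def radial_positions(labels):
    """Place n labeled nodes evenly on the unit circle: the angle
    encodes identity, while the radius (fixed to 1 here) could be
    used for a second dimension such as time."""
    n = len(labels)
    return {lab: (math.cos(2 * math.pi * i / n),
                  math.sin(2 * math.pi * i / n))
            for i, lab in enumerate(labels)}
```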
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise distance between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the concept of parameterised approximation in mind, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, since the problem remains NP-hard even when k is fixed to the constant 3. The reductions presented to show this hardness yield an even stronger result: polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P!=NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in the Euclidean metric space.
Considering the computational hardness caused by high dimensionality of the input for other related problems (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, the problem remains NP-hard even when restricted to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster efficient and reasonable solutions are also possible for non-metric instances.
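To make the problem statement concrete, the following sketch (not from the thesis; the function and parameter names are invented for illustration) evaluates a candidate k-cluster: every block must have cardinality at least k, an inner aggregation stands in for f over the pairwise distances within a block, and an outer aggregation stands in for ||.|| combining the block costs:

```python
def k_cluster_cost(partition, dist, k, inner=max, outer=sum):
    """Cost of a candidate k-cluster.  'partition' is a list of
    blocks (lists of objects); 'dist' maps ordered object pairs to
    distances; 'inner' aggregates pairwise distances inside a block
    (a stand-in for f); 'outer' combines the block costs (a stand-in
    for ||.||)."""
    if any(len(block) < k for block in partition):
        raise ValueError("not a k-cluster: block smaller than k")

    def block_cost(block):
        pairs = [dist[a][b] for i, a in enumerate(block)
                 for b in block[i + 1:]]
        return inner(pairs) if pairs else 0

    return outer(block_cost(b) for b in partition)
```

Swapping `inner` and `outer` between `max` and `sum` yields several of the variants obtained from different choices of f and ||.||.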
Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines as well as structural military systems, can be represented by means of queueing models. Looking at queueing models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput, or diminishing the overhead, whereas in the absence of a controller the system behavior may get quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes. In this thesis, the essential work consists in designing controlled queueing models and in investigating their optimal control properties for applications in the area of modern telecommunication systems, which should satisfy the growing demands for quality of service (QoS). For two types of optimization criteria (the model without penalties and the model with set-up costs), a class of controlled queueing systems is defined. The general queue forming this class is characterized by a Markov Additive Arrival Process and heterogeneous Phase-Type service time distributions. We show that for these queueing systems the structural properties of optimal control policies, e.g. monotonicity properties and threshold structure, are preserved. Moreover, we show that these systems possess specific properties, e.g. the dependence of optimal policies on the arrival and service statistics. In order to use controlled stochastic models in practice, a quick and effective method to find optimal policies is necessary. We present an iteration algorithm which can be successfully used to find an optimal solution in the case of a large state space.
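The threshold structure mentioned above can be made tangible on a deliberately tiny example: for an M/M/1 queue with admission control, one can evaluate every pure threshold policy ("admit while the queue length is below t") and pick the best. This is a hypothetical illustration of the concept only; the thesis works with far more general Markov Additive arrivals and Phase-Type services:

```python
def best_threshold(lam, mu, reward, hold, n_max):
    """Average-reward comparison of threshold admission policies for
    an M/M/1 queue: 'reward' is gained per admitted customer, 'hold'
    is the holding cost per customer per unit time.  Returns the best
    threshold t in {0..n_max}."""
    best_t, best_g = 0, float("-inf")
    for t in range(n_max + 1):
        rho = lam / mu
        # Stationary distribution of the birth-death chain on {0..t}.
        w = [rho ** i for i in range(t + 1)]
        z = sum(w)
        p = [x / z for x in w]
        admit_rate = lam * sum(p[:t])            # arrivals in states < t
        mean_q = sum(i * p[i] for i in range(t + 1))
        g = reward * admit_rate - hold * mean_q  # average reward rate
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

With lam = mu = 1, reward 2 and holding cost 1, the optimal policy admits only into an empty system (threshold 1), matching the intuition that a controller should reject work once congestion costs outweigh the gain.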
Even though in most cases time is a good metric to measure the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from main memory. The topic of this thesis is a new metric to measure the cost of algorithms, called memory distance, which can be seen as an abstraction of this aspect. We will show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
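The notion of memory distance is closely related to the classical reuse distance of a memory access trace; whether the thesis defines it exactly this way is not stated here, so the following is a hedged sketch of one common instantiation:

```python
def reuse_distances(trace):
    """For each access in the trace, return the number of *distinct*
    addresses touched since the previous access to the same address
    (inf for a first access).  Small distances indicate
    cache-friendly behaviour; large ones indicate likely misses."""
    last_seen = {}
    dists = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            window = trace[last_seen[addr] + 1 : i]
            dists.append(len(set(window)))
        else:
            dists.append(float("inf"))
        last_seen[addr] = i
    return dists
```

For the trace a b c a b, the two repeated accesses each have distance 2, because two distinct other addresses intervene; in an LRU cache of fewer than three lines both would miss.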
This work is concerned with two kinds of objects: regular expressions and finite automata. These formalisms describe regular languages, i.e., sets of strings that share a comparatively simple structure. Such languages - and, in turn, expressions and automata - are used in the description of textual patterns, workflow and dependence modeling, or formal verification. Testing words for membership in any given such language can be implemented using a fixed - i.e., finite - amount of memory, which is conveyed by the phrasing finite automaton. In this aspect they differ from more general classes, which require potentially unbounded memory, but have the potential to model less regular, i.e., more involved, objects. Other than expressions and automata, there are several further formalisms to describe regular languages. These formalisms are all equivalent, and conversions among them are well-known. However, expressions and automata are arguably the notions which are used most frequently: regular expressions come natural to humans in order to express patterns, while finite automata translate immediately to efficient data structures. This raises the interest in methods to translate among the two notions efficiently. In particular, the direction from expressions to automata, or from human input to machine representation, is of great practical relevance. Probably the most frequent application that involves regular expressions and finite automata is pattern matching in static text and streaming data. Common tools to locate instances of a pattern in a text are the grep application or its (many) derivatives, as well as awk, sed and lex. Notice that these programs accept slightly more general patterns, namely ''POSIX expressions''. Concerning streaming data, regular expressions are nowadays used to specify filter rules in routing hardware. These applications have in common that an input pattern is specified in the form of a regular expression while the execution applies a finite automaton.
As it turns out, the effort that is necessary to describe a regular language, i.e., the size of the descriptor, varies with the chosen representation. For example, in the case of regular expressions and finite automata, it is rather easy to see that any regular expression can be converted to a finite automaton whose size is linear in that of the expression. For the converse direction, however, it is known that there are regular languages for which the size of the smallest describing expression is exponential in the size of the smallest describing automaton. This brings us to the subject at the core of the present work: we investigate conversions between expressions and automata and take a closer look at the properties that exert an influence on the relative sizes of these objects. We refer to the aspects involved with these considerations under the titular term of Relative Descriptional Complexity.
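The linear expression-to-automaton bound mentioned above can be checked by counting the states of Thompson's construction, here in the variant that joins sub-automata by ε-transitions instead of merging states (an illustrative sketch; the thesis studies such conversions in much greater depth):

```python
def thompson_states(ast):
    """Number of NFA states produced by Thompson's construction
    (epsilon-transition variant): each literal contributes 2 states,
    each union or star adds 2 fresh states, and concatenation links
    the parts by an epsilon-edge, adding none.  The regular
    expression is given as nested tuples, e.g. ('|', 'a', 'b')."""
    if isinstance(ast, str):          # literal symbol
        return 2
    op = ast[0]
    if op == '.':                     # concatenation
        return thompson_states(ast[1]) + thompson_states(ast[2])
    if op == '|':                     # union: new start + new accept
        return 2 + thompson_states(ast[1]) + thompson_states(ast[2])
    if op == '*':                     # star: new start + new accept
        return 2 + thompson_states(ast[1])
    raise ValueError(op)
```

The count grows by a constant per operator and per literal, so the resulting automaton is linear in the size of the expression, in line with the bound stated above; the converse direction admits no such bound.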
This thesis presents a middleware model for applications in mobile ad-hoc networks. The primary focus is on networks formed from mobile end-user devices such as PDAs, PocketPCs or smartphones that communicate via a wireless technology such as WLAN or Bluetooth. To identify the requirements for a middleware in such networks, applications from a university setting were examined. Since communication in wireless networks cannot always be as transparent as in wired infrastructure networks, the middleware contains a communication framework that allows communication mechanisms to be adapted and also combined. Besides the direct delivery of messages, mechanisms for the successive relaying of messages, so-called store-and-forward communication, are offered; here, passing information along in passing (en passant) is of particular interest. The components of the middleware, and the applications built on top of it, are modeled as interacting mobile objects. Owing to the dynamics of mobile ad-hoc networks, these objects interact in an event-oriented fashion. The interaction, as well as the adaptation and choice of communication mechanisms, is based on information about the immediate surroundings, application knowledge about the network, and experience from the past. This information, which objects can spread and collect, is referred to as traces. Since giving up the transparency of the middleware components implies a higher development effort, a realization of the object interaction in object representatives is proposed. Object representatives enable communication with remote objects, local management of information about these objects and, finally, the representation of objects without a real object instance.
Objects without an object instance are used to group devices with certain properties, for example objects at a specific geographic location. Hence, knowledge about geographic locations and their identification is also an important aspect of the model. Based on the application studies, a middleware prototype evolved step by step; it is integrated into a simulation environment for mobile ad-hoc networks. Besides the pure simulation of mobile ad-hoc networks and of the application under study, the environment also allows real devices to be integrated and the applications to be executed on real devices. Thus, in addition to a simulative examination, a practical evaluation of the applications could be carried out in field trials.
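The en-passant store-and-forward idea can be sketched as a minimal epidemic relaying simulation, where a node carrying a message hands a copy to every node it meets. This is an invented illustration of the communication pattern only, not the middleware's actual mechanism:

```python
def store_and_forward(contacts, source):
    """Epidemic store-and-forward delivery: 'contacts' is the
    time-ordered list of pairwise meetings between nodes; whenever a
    carrier of the message meets another node, that node becomes a
    carrier too.  Returns the set of nodes holding the message."""
    carriers = {source}
    for a, b in contacts:
        if a in carriers or b in carriers:
            carriers |= {a, b}
    return carriers
```

Note that the order of meetings matters: a node that met a future carrier *before* the message reached it stays empty-handed, which is exactly why such middleware must reason about mobility and past encounters.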
Due to the breathtaking growth of the World Wide Web (WWW), the need for fast and efficient web applications becomes more and more urgent. In this doctoral thesis, the emphasis is on two concrete tasks for improving Internet applications. On the one hand, a major problem of many of today's Internet applications is the performance of client/server communication: servers often take a long time to respond to a client's request. There are several strategies to overcome this problem of high user-perceived latency; one of them is to predict future user requests. This way, time-consuming calculations on the server's side can be performed even before the corresponding request is made. Furthermore, in certain situations, the pre-fetching or pre-sending of data might also be appropriate. These ideas are discussed in detail in the second part of this work. On the other hand, a focus is placed on the problem of proposing hyperlinks to improve the quality of rapidly written texts, at first glance an entirely different problem from predicting client requests. Ultra-modern online authoring systems that provide possibilities to check link consistency and administer link management should also propose links in order to improve the usefulness of the produced HTML documents. In the third part of this work, we describe a possibility to build a hyperlink-proposal module based on statistical information retrieval from hypertexts. These two problem categories do not seem to have much in common. It is one aim of this work to show that there are similar solution strategies for both problems. A closer comparison and an abstraction of both methodologies lead to interesting synergetic effects. For example, advanced strategies to foresee future user requests by modeling time and document aging can be used to improve the quality of hyperlink proposals too.
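The request-prediction strategy discussed in the second part can be illustrated by a minimal first-order Markov predictor over past sessions. This is a sketch under the assumption that prediction is based on observed request transitions; the thesis's actual models, e.g. incorporating time and document aging, are more elaborate:

```python
from collections import Counter, defaultdict

def train_predictor(sessions):
    """First-order Markov model over page requests: count the
    transitions observed in past sessions and predict, for each
    page, the page that most frequently followed it."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return {page: c.most_common(1)[0][0] for page, c in counts.items()}
```

A server could use such a table to pre-compute or pre-send the predicted next page while the client is still reading the current one.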
For companies, their own web presence has become an indispensable instrument, a flexible interface to the customer. It offers a globally reachable point of contact: exploiting the opportunities of online technology, the web presence gives the company a medium for "classical" advertising, beyond that a direct-marketing and dialogue medium, and furthermore its own market-research instrument. Its development, and especially its operation, pose a serious challenge for a company in several respects. From an organizational point of view, the goal is the (not easily realized) integration of the web presence into the company: maintaining the web presence is not the sole responsibility of an isolated, technology-enamored employee, the "webmaster"; instead, the staff working in the company's specialist departments share this task according to their responsibilities. From a technical point of view, a modern web presence is an extensive and, given high dynamics, correspondingly complex product whose realization requires considerable effort, in particular if the underlying technology is to meet the organizational requirements within the company. Owing to the high rate of innovation surrounding Internet technology, the requirements placed on the operators of a web presence are in constant flux, making it harder for companies to enter this medium. Especially the proprietary solutions still in use in many companies require, from a technical point of view, a high integration effort if the information they hold is to be automatically reused for the web presence. By additionally using the company's web presence, or rather
the system with which it is created and managed, for internal work, i.e. for the intranet, the effort incurred by the company can be put into perspective. Embedded in the picture of the life cycle of a modern web presence, the focus of this work is the conception and development of an online editorial system that can be flexibly adapted to the boundary conditions found in corporate web presence management. A central part of its conception is the idea of jointly building and maintaining a complex overall document composed of a great many separate elements, such as a corporate web presence, through a growing number of employees, directly from their respective workplaces. The practical deployment of the system, tailored in its final form to the specific needs of the companies, motivates the consistent use of Internet technologies as well as an implementation based on open standards, enabling flexible, platform-independent installation and easy integration. Following an introductory chapter, this work describes, starting from the problem statement and the technical foundations, the conception, implementation and deployment of the web-based online editorial system, before a classification and finally a summary conclude the work.