004 Data processing; Computer science
This work addresses the algorithmic tractability of hard combinatorial problems, specifically \NP-hard problems, for which no polynomial time algorithm is known. Several algorithmic approaches deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice these often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain: exponential time algorithms and parameterized algorithms. The first approach seeks to design algorithms that, although they take exponentially many steps, are more clever than the trivial algorithm, which simply enumerates all solution candidates. Typically, the naive enumerative approach yields an algorithm with run time $\Oh^*(2^n)$, so the general task is to construct algorithms with a run time of the form $\Oh^*(c^n)$ for some $c<2$. The second approach considers an additional parameter $k$ besides the input size $n$. This parameter should provide more information about the problem and capture one of its typical characteristics. The standard parameterization views $k$ as an upper (lower, resp.) bound on the solution size in case of a minimization (maximization, resp.) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$, where $\beta$ is a constant and $f$ is independent of $n$. In principle, this method aims to confine the combinatorial difficulty of the problem to the parameter $k$ (if possible), the underlying hypothesis being that $k$ is small with respect to the overall input size.

In both fields, a standard technique is the design of branching algorithms, which solve a problem by traversing the solution space in a clever way. Such an algorithm repeatedly selects an entity of the input and creates two new subproblems: one where this entity is considered part of the future solution and one where it is excluded from it. In both cases, fixing this entity may force other entities to be fixed as well; if so, the number of candidate solutions traversed is smaller than the whole solution space. The visited solutions can be arranged in a search tree. To estimate the run time of such algorithms, one needs a method for obtaining tight upper bounds on the size of these search trees. In the field of exponential time algorithms, a powerful technique called Measure & Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure & Conquer is almost unknown.

This thesis presents examples where the technique can be used in the parameterized setting and points out which adjustments are necessary to apply it successfully there. Further, exponential time algorithms for hard problems to which Measure & Conquer is applied are presented. Finally, a formalization (and generalization) of the notion of a search tree is given, and it is shown that for certain problems such a formalization is extremely useful.
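As a concrete illustration of the branching paradigm described above (an example of ours, not taken from the thesis), the following Python sketch implements the classic branching algorithm for Vertex Cover under the standard parameterization: any vertex cover must contain at least one endpoint of every edge, so branching on an arbitrary edge yields a search tree with at most $2^k$ leaves, i.e., a run time of $\Oh^*(2^k)$. All names are illustrative.

```python
# Illustrative branching algorithm for Vertex Cover, parameterized
# by the solution size k.  At every step an uncovered edge {u, v} is
# picked; any vertex cover must contain u or v, so two subproblems
# are created, one per choice.  Each branch decreases k by one, so
# the search tree has at most 2^k leaves.

def vertex_cover(edges, k):
    """Return True iff the graph given by `edges` (a set of
    frozensets of size 2) has a vertex cover of size at most k."""
    if not edges:
        return True                  # all edges covered
    if k == 0:
        return False                 # budget exhausted, edges remain
    u, v = next(iter(edges))         # branch on an arbitrary edge
    # Branch 1: take u into the cover, delete all edges incident to u.
    rest_u = {e for e in edges if u not in e}
    # Branch 2: take v into the cover, delete all edges incident to v.
    rest_v = {e for e in edges if v not in e}
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)


if __name__ == "__main__":
    # A 4-cycle has a vertex cover of size 2 but not of size 1.
    cycle = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]}
    print(vertex_cover(cycle, 2))    # True
    print(vertex_cover(cycle, 1))    # False
```

Measure & Conquer refines the analysis of such search trees by replacing $n$ or $k$ with a problem-specific weighted measure $\mu$. If a branching rule decreases $\mu$ by $a$ in one branch and by $b$ in the other, the tree size is bounded by $\Oh^*(c^{\mu})$, where $c$ is the unique positive root of $x^{-a}+x^{-b}=1$; for the branching vector $(1,2)$, for instance, this gives $x^2=x+1$ and hence $c=(1+\sqrt{5})/2\approx 1.6181$.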
Hardware bugs can be extremely expensive, financially. Because microprocessors and integrated circuits have become omnipresent in our daily life, and because of their continuously growing complexity, research is driven towards methods and tools that provide higher reliability of hardware designs and their implementations. Over the last decade, Ordered Binary Decision Diagrams (OBDDs) have proven to be a well-suited data structure for the representation of combinational or sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. However, by Shannon's counting argument, OBDDs cannot always guarantee a concise representation of a given design.

In this thesis, Parity Ordered Binary Decision Diagrams (Parity-OBDDs) are presented, a proper extension of OBDDs: in addition to the regular branching nodes of an OBDD, functional nodes representing a parity operation are integrated into the data structure. Parity-OBDDs are more powerful than OBDDs, but they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis of the factors that influence the Parity-OBDD representation size paves the way for the development of heuristic algorithms for their minimization. The results of these analyses, as well as the efficiency of the data structure, are supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p-Decision Diagrams (Mod-p-DDs) for the representation of functions defined over an arbitrary finite domain.
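To make the data structure concrete, here is a minimal, hypothetical sketch (not the thesis's implementation; all names are ours) of the two node types of a Parity-OBDD: ordinary branching nodes that test a variable, and functional parity nodes whose value is the exclusive or of their children.

```python
# Hypothetical sketch of the node types in a Parity-OBDD: regular
# OBDD branching nodes plus functional parity (XOR) nodes.

from dataclasses import dataclass
from typing import Union

Node = Union["Leaf", "Branch", "Parity"]

@dataclass(frozen=True)
class Leaf:
    value: bool         # terminal 0/1 node

@dataclass(frozen=True)
class Branch:
    var: int            # variable index (respects the fixed order)
    low: Node           # successor for var = 0
    high: Node          # successor for var = 1

@dataclass(frozen=True)
class Parity:
    left: Node          # the node computes left XOR right
    right: Node

def evaluate(node: Node, assignment: dict) -> bool:
    """Evaluate the function represented by `node` under `assignment`."""
    if isinstance(node, Leaf):
        return node.value
    if isinstance(node, Branch):
        child = node.high if assignment[node.var] else node.low
        return evaluate(child, assignment)
    # Parity node: combine both sub-functions with exclusive or.
    return evaluate(node.left, assignment) ^ evaluate(node.right, assignment)

# x1 XOR x2, built with a single parity node over two branching nodes.
f = Parity(Branch(1, Leaf(False), Leaf(True)),
           Branch(2, Leaf(False), Leaf(True)))
print(evaluate(f, {1: True, 2: False}))   # True
print(evaluate(f, {1: True, 2: True}))    # False
```

Evaluation remains a single top-down pass, but because structurally different diagrams can represent the same function, canonicity is lost, exactly as stated in the abstract.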