Today, complex circuit designs are widespread in computers, multimedia applications and communication devices, and their use is still increasing. At the same time, following Moore's Law, we do not expect the growth in the complexity of digital circuits to end. The decreasing ability of common validation techniques, like simulation, to assure the correctness of a circuit design increases the need for formal verification techniques. Formal verification delivers a mathematical proof that a given implementation of a design fulfills its specification. One of the basic and, in recent years, most widely used data structures in formal verification is the Ordered Binary Decision Diagram (OBDD), introduced by R. Bryant in 1986. The topic of this thesis is the integration of structural high-level information into the OBDD-based formal verification of sequential systems. This work consists of three major parts, covering different layers of formal verification applications: At the application layer, an assertion checking methodology, integrated into the verification flow of the high-level design and verification tool Protocol Compiler, is presented. At the algorithmic layer, new approaches for partitioning the transition relations of complex finite state machines are introduced that significantly improve the performance of OBDD-based sequential verification. Finally, at the data structure level, dynamic variable reordering techniques are described that drastically reduce the time required for reordering without a trade-off in OBDD size. Overall, this work demonstrates how a tighter integration of applications by means of structural information can significantly improve the efficiency of formal verification applications in an industrial setting.
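As an illustration of the data structure this thesis builds on, here is a toy reduced-OBDD construction in Python. It is a sketch of the node-sharing idea only, with hypothetical names; Bryant's apply/ite algorithms and the dynamic reordering techniques discussed in the thesis are not shown.

```python
# A toy reduced OBDD builder with a fixed variable order x0 < x1 < ... < x(n-1).
# Nodes are interned in a "unique table" so isomorphic subgraphs are shared.

class OBDD:
    def __init__(self):
        self.unique = {}   # (var, low, high) -> node id; enforces sharing
        self.next_id = 2   # ids 0 and 1 are reserved for the terminal nodes

    def mk(self, var, low, high):
        if low == high:                 # elimination rule: skip a redundant test
            return low
        key = (var, low, high)
        if key not in self.unique:      # sharing rule: one node per triple
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, f, n, assign=()):
        """Shannon expansion of f: tuple of n bools -> bool, over x0..x(n-1)."""
        if len(assign) == n:
            return 1 if f(assign) else 0
        low = self.build(f, n, assign + (False,))
        high = self.build(f, n, assign + (True,))
        return self.mk(len(assign), low, high)

mgr = OBDD()
parity = lambda xs: xs[0] ^ xs[1] ^ xs[2]
root = mgr.build(parity, 3)
print(len(mgr.unique))  # 5 internal nodes for 3-variable parity
```

Thanks to the unique table, the two parity sub-diagrams under the root are shared, which is exactly the property that makes OBDDs concise for many (though, by Shannon's counting argument, not all) functions.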
In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence than the partial sums of the Taylor series. Regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions is the use of recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for computing recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered. From the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion in order to determine an explicit form of the solution. In this case the arising eigenfunctions depend directly on the geometry of the considered domain. For certain domains with special geometry, the eigenfunctions are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
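The Taylor (Kummer) series that serves above as the baseline for comparison can be sketched in a few lines; this is a minimal illustration of the partial sums, not the Hadamard-product expansion developed in the thesis.

```python
import math

def kummer_taylor(a, b, z, tol=1e-14, max_terms=200):
    """Partial sums of the Taylor (Kummer) series for 1F1(a; b; z).
    Term recursion: t_{k+1} = t_k * (a + k) / (b + k) * z / (k + 1)."""
    term, total = 1.0, 1.0
    for k in range(max_terms):
        term *= (a + k) / (b + k) * z / (k + 1)
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

# 1F1(1; 2; z) = (e^z - 1)/z gives a convenient closed form to check against
print(kummer_taylor(1.0, 2.0, 1.0), math.expm1(1.0))
```

For small arguments the partial sums converge quickly; for large |z| the terms first grow before decaying, which is where cancellation errors arise and where the thesis's alternative expansions pay off.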
In this thesis, we study the convergence behavior of an efficient optimization method used for the identification of parameters of underdetermined systems. The research is motivated by optimization problems arising from the estimation of parameters in neural networks as well as in option pricing models. In the first application, we are concerned with neural networks used for forecasting stock market indices. Since neural networks are able to describe extremely complex nonlinear structures, they are used to improve the modelling of the nonlinear dependencies occurring in financial markets. Applying neural networks to the forecasting of economic indicators, we are confronted with a nonlinear least squares problem of large dimension. Furthermore, in this application the number of parameters of the neural network to be determined is usually much larger than the number of patterns available for the determination of the unknowns. Hence, the residual function of our least squares problem is underdetermined. In option pricing, an important but usually unknown parameter is the volatility of the underlying asset of the option. Assuming that the underlying asset follows a one-factor continuous diffusion model with nonconstant drift and volatility terms, the value of a European call option satisfies a parabolic initial value problem with the volatility function appearing in one of the coefficients of the parabolic differential equation. Using this system equation, the estimation of the volatility function is described by a nonlinear least squares problem. Since the adaptation of the volatility function is based on only a small number of observed market data, these problems are naturally ill-posed. For the solution of these large-scale underdetermined nonlinear least squares problems we use a fully iterative inexact Gauss-Newton algorithm.
We show how the structure of a neural network as well as that of the European call price model can be exploited using iterative methods. Moreover, we present theoretical statements on the convergence of the inexact Gauss-Newton algorithm applied to the less examined case of underdetermined nonlinear least squares problems. Finally, we present numerical results for the application of neural networks to the forecasting of stock market indices as well as for the construction of the volatility function in European option pricing models. In the case of the latter application, we discretize the parabolic differential equation using a finite difference scheme and elucidate convergence problems of the discrete scheme when the initial condition is not everywhere differentiable.
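A plain Gauss-Newton iteration for an underdetermined residual can be sketched as below. This is an exact (not inexact) variant taking the minimum-norm step at each iterate, and the toy problem and all names are hypothetical stand-ins, not the thesis's neural-network or volatility calibration models.

```python
import numpy as np

def gauss_newton_underdetermined(r, J, x0, iters=50, tol=1e-12):
    """Gauss-Newton for min ||r(x)||^2 with fewer residuals than unknowns.
    Each step is the minimum-norm solution of the linearized system
    J(x) dx = -r(x), which np.linalg.lstsq returns for wide Jacobians."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        dx, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# toy underdetermined system: 2 residuals, 3 unknowns (consistent, so the
# residual can be driven to zero along a whole solution manifold)
r = lambda x: np.array([x[0] * x[1] - 1.0, x[0] + x[2] - 2.0])
J = lambda x: np.array([[x[1], x[0], 0.0], [1.0, 0.0, 1.0]])
x = gauss_newton_underdetermined(r, J, [1.0, 2.0, 0.5])
print(np.linalg.norm(r(x)))  # residual driven (near) zero
```

In the underdetermined case the solution found depends on the starting point, which mirrors the ill-posedness discussed above: many parameter vectors explain the same small set of observations.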
This work is concerned with arbitrage bounds for the prices of contingent claims under transaction costs, but in the absence of other conceivable market frictions. Assumptions on the underlying market are kept as weak as is convenient for the deduction of meaningful results that make good economic sense. In discrete time we also allow for underlying price processes with uncountable state space. In continuous time the underlying price process is modeled by a semimartingale; for the most part we could avoid any stronger assumptions. The main problems dealt with in this work are the modelling of (proportional) transaction costs, Fundamental Theorems of Asset Pricing under transaction costs, dual characterizations of arbitrage bounds under transaction costs, quantile hedging under transaction costs, and alternatives to the Black-Scholes model in continuous time under transaction costs. The results apply to stock and currency markets.
Hardware bugs can be extremely expensive financially. Because microprocessors and integrated circuits have become omnipresent in our daily lives, and because of their continuously growing complexity, research is driven towards methods and tools that provide higher reliability of hardware designs and their implementations. Over the last decade, Ordered Binary Decision Diagrams (OBDDs) have proven to serve well as a data structure for the representation of combinational and sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. But, due to Shannon's counting argument, OBDDs cannot always guarantee a concise representation of a given design. In this thesis, Parity Ordered Binary Decision Diagrams (Parity-OBDDs) are presented, a strict extension of OBDDs: in addition to the regular branching nodes of an OBDD, functional nodes representing a parity operation are integrated into the data structure. Parity-OBDDs are strictly more powerful than OBDDs, but they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis of the factors that influence Parity-OBDD representation size paves the way for the development of heuristic algorithms for their minimization. The results of these analyses, as well as the efficiency of the data structure, are also supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p Decision Diagrams (Mod-p-DDs) for the representation of functions that are defined over an arbitrary finite domain.
Mobile computing poses different requirements on middleware than more traditional desktop systems interconnected by fixed networks. Not only do the characteristics of mobile network technologies, such as lower bandwidth and unreliability, demand customized support; the devices employed in mobile settings are also usually less powerful than their desktop counterparts. Slow processors, a fairly limited amount of memory, and smaller displays are typical properties of mobile equipment, again requiring special treatment. Furthermore, user mobility results in additional requirements on appropriate middleware support. As opposed to the quite static environments dominating the world of desktop computing, dynamic aspects gain more importance. Suitable strategies and techniques for exploring the environment, e.g. in order to discover locally available services, are only one example; managing resources in a fault-tolerant manner and reducing the impact ill-behaved clients have on system stability define yet another exemplary prerequisite. Most state-of-the-art middleware has been designed for use in the realm of static, resource-rich environments and hence is not immediately applicable in the mobile settings set forth above. The work described throughout this thesis investigates the suitability of different middleware technologies with regard to application design, development, and deployment in the context of mobile networks. Mostly based upon prototypes, shortcomings of those technologies are identified, and possible solutions are proposed and evaluated where appropriate. Besides tailoring middleware to specific communication and device characteristics, the cellular structure of current mobile networks may, and shall, be exploited in favor of more scalable and robust systems. Hence, an additional topic considered within this thesis is to point out and investigate suitable approaches permitting systems to benefit from such cellular infrastructures.
In particular, a system architecture for the development of applications in the context of mobile networks is proposed. An evaluation of this architecture employing mobile agents as flexible, network-side representatives of mobile terminals is performed, again based upon a prototype application. In summary, this thesis provides several complementary approaches to middleware support tailored for mobile, cellular networks, a field considered to be of rising importance in a world where mobile communication, and particularly data services, are emerging rapidly, augmenting the globally interconnected, wired Internet.
The study at hand deals with madness as it is represented in English Canadian fiction. The topic is interesting and fruitful for analysis because the ways madness has been defined, understood, described, judged and handled differ quite profoundly from society to society and from era to era; since the language, ideas and associations surrounding insanity are both strongly culture-relative and shifting, madness as a theme of myth and literature has always been an excellent vehicle for mirroring the assumptions and arguments, the aspirations and nostalgia, the beliefs and values, the hopes and fears of its age and society. Thus, while the overall intent of this study is to elucidate some discernible patterns of structure and style which accompany the use of madness in Canadian literature, to investigate the varying sorts of portrayal and the conventions of presentation, to interpret the use of madness as a literary device and to highlight the different statements which are made, the continuity, variation, and changes in the theme of madness provide an informing principle in terms of certain Canadian experiences and perceptions. By examining madness as it presents itself in Canadian literature and considering the respective explorations of the deranged mind within their historical context, I hope to demonstrate that literary interpretations of madness both reflect and question the cultural, political, religious and psychological assumptions of their times, and that certain symptoms or usages are characteristic of certain periods. Such an approach, it is hoped, might not only contribute towards an assessment of the wealth of associations which surround madness and the ambivalence with which it is viewed, but also shed some light on the Canadian imagination. As such, this study can be considered not only a history of literary madness, but a history of Canadian society and the Canadian mind.
XML (Extensible Markup Language) is a sequential format for storing and transmitting structured data. Although it was originally developed for document processing, XML is now used in almost all areas of data processing, and especially on the Internet. Every piece of XML document-processing software is based on an XML parser. The parser reads a document in XML syntax and makes it available to the actual application as a document tree. Document processing is then essentially the manipulation of trees. Modern functional programming languages such as SML and Haskell support trees as basic data types and are therefore particularly well suited to the implementation of document-processing systems. It is all the more astonishing that this area is largely dominated by Java software. This is due not least to the fact that no complete implementation of the XML syntax as a parser in a functional programming language has been available so far. One of the most important tasks in document processing is querying, i.e. locating subdocuments that satisfy a given structural condition and occur in a specified context. The tree-like view of documents in XML allows querying to be realized with techniques from the theory of tree languages and tree automata. However, these techniques must be adapted to the special requirements of XML. One of these requirements is that extremely large documents must also be processed. The querying algorithm should therefore be executable in a single pass over the document, without having to build the document tree explicitly in memory. This thesis consists of two parts. The first part describes the XML parser fxp, which is programmed entirely in SML. In particular, the experiences with SML gained during the implementation of fxp are discussed.
This is followed by an analysis of the runtime behaviour of fxp and a comparison with other XML parsers developed in imperative or object-oriented programming languages. In the second part, we describe an algorithm for querying XML documents that is founded on the theory of forest automata. It finds all matches of a query in at most two passes over the document. For an important subclass of queries, querying can even be realized in a single pass. Furthermore, the implementation of the algorithm in SML with the help of fxp is presented.
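The single-pass querying idea can be illustrated with a small event-based matcher: the parser emits start/end events and a stack of open elements acts as the automaton state. This is a sketch using Python's xml.sax, not fxp and not the forest-automata algorithm of the thesis.

```python
import xml.sax

class PathMatcher(xml.sax.ContentHandler):
    """One-pass query: report elements whose innermost ancestors match `path`.
    The element stack is the only state; no document tree is built."""
    def __init__(self, path):
        self.path, self.stack, self.hits = path, [], []
    def startElement(self, name, attrs):
        self.stack.append(name)
        if self.stack[-len(self.path):] == self.path:
            self.hits.append("/".join(self.stack))
    def endElement(self, name):
        self.stack.pop()

handler = PathMatcher(["book", "title"])
xml.sax.parseString(
    b"<lib><book><title/></book><paper><title/></paper>"
    b"<book><title/></book></lib>",
    handler)
print(handler.hits)  # ['lib/book/title', 'lib/book/title']
```

Such suffix-of-ancestors patterns can be answered in one pass; queries that constrain what comes *after* a candidate match are the reason the general algorithm above may need a second pass.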
In November 1997, we began to focus on the population ecology of two sympatric Sinonatrix snakes in the Chutzuhu swamp, northern Taiwan. At the same time we also examined specimens from the Senckenberg Natural History Museum, Frankfurt am Main, and accumulated field data from observations made on S. percarinata suriki from the Fushan botanical garden, Sanping and Gaoshu, Taiwan. According to the specimens examined, we suspect that S. percarinata suriki may descend from two ancestral lineages, the northeast Taiwan population being closest to Fujian or Zhejiang and the southwest population closest to Guangdong or Vietnam. This pattern is also represented in some molecular phylogenetic studies of freshwater fish in Taiwan. A total of 22,462 trap-nights were logged in the Chutzuhu swamp during the period November 1999 to September 2001, and 361 snakes were collected, comprising five species and 617 capture events. Population sizes, estimated with the Lincoln-Petersen index, were 988 ± 326 for S. annularis and 129 ± 78 for S. percarinata suriki. Movement and home range data showed that S. annularis is a water snake of restricted activity, whereas S. percarinata suriki possesses great spatial mobility, although its movement ability seems to be influenced by the size of the aquatic environment. S. annularis is live-bearing, producing on average 8.19 neonates, principally in September; S. percarinata suriki lays 6-24 eggs, but owing to insufficient observations no conclusions can be drawn, except that oviposition was also noted in September. The reproductive modes may reflect differences in the thermal requirements of the two sympatric snakes. S. annularis tended to be a fish eater (98% of the diet), while the diet of S. percarinata suriki comprised 50% fish and 50% frogs. Marshland with middle to high ground cover appears to be the favorite microhabitat of S. annularis, whereas S. percarinata suriki seems to prefer open creeks and ditches. The population of S. annularis in the Chutzuhu swamp seems to be deteriorating rapidly, a trend also reflected in declining body condition indices (BCI), a low proportion of snakes with stomach contents, and diseases of S. annularis. Water seems to be the major influencing factor and is central to any conservation strategy; conservation proposals for S. annularis in the Chutzuhu swamp are formulated. During this study we also developed an efficient technique, adapted to practical field studies, for accumulating snake morphological data and building an image database with the aid of a notebook PC and a scanner. Finally, we propose a component system for the establishment of a fundamental population database for snakes (FPDS) for long-term snake ecological studies and monitoring.
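The Lincoln-Petersen mark-recapture estimator used for the population sizes above is simple to state; the numbers in the example below are hypothetical, not the study's data.

```python
def lincoln_petersen(marked, caught, recaptured):
    """Classic Lincoln-Petersen estimate of population size N from
    mark-recapture data: marked/N ~= recaptured/caught, so
    N ~= marked * caught / recaptured."""
    return marked * caught / recaptured

# hypothetical example: 100 snakes marked in the first sample, 60 caught in
# the second sample, of which 15 carried marks
print(lincoln_petersen(100, 60, 15))  # 400.0
```

The estimator assumes a closed population, equal catchability, and no mark loss between samples, assumptions that field studies like the one above must weigh against observed conditions.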
Since the end of the British Empire, which had provided white Australians with points of view, attitudes and stereotypes of the world (including perceptions of their own role in it), rediscovering an international identity has been an Australian quest. Many turned to European roots, others to the Aboriginal landscape; Blanche d'Alpuget and Christopher J. Koch are two who have ventured into Asia for the culturally and spiritually regenerative materials necessary to redefine Australia in the post-colonial world. They have taken Eastern concepts of "self" and "soul" and forged them with the Australian obsession of fear of, and desire for, contact with the "other" in a looking-glass of hybrid, Austral-Asian myth to reveal the true soul of Australian identity. Along with a brief historical and literary background to the triangular relationship between white Australia, Asia, and the West, this study's goal is to identify some of the Southeast Asian symbols, myths and literary structures which Koch and d'Alpuget integrate into the Western tradition. Central elements include: dichotomies of personality, righteousness, and virtue; the "Otherworld", where one may approach enlightenment, but at the risk of falling into self-delusion; archetypes of the Hindu divine feminine; the Eastern roots of Koch's themes of the "double man"; concepts of the forces of "light" and "dark"; the semiotics of time and meaning; and the central Eastern metaphor of the mirror, by which Australia creates interdependent images of itself and of Asia.
To this day, the effects of many chlorinated hydrocarbons (e.g. DDT, PCBs) on particular organisms remain a subject of controversial discussion. This is also the case for potential endocrine effects on spermatogenesis and the associated possible changes in a population's vitality. To clarify this situation, three questions were at the centre of attention: 1) Do the chemicals cause a specific harmful effect on the male reproductive tract? 2) Can certain chemical mixtures bind to and activate the human estrogen receptor (hER)? 3) Are particular life stages of an organism especially sensitive to the effects of chemicals, so that they could be established as a screening test system? The combined effects of DDT and Aroclor 1254 (A54), as single substances and in a 1:1 mixture, were therefore investigated with respect to their estrogenic effectiveness on zebrafish (Brachydanio rerio). The concentrations of the pesticides and their mixture ranged between 0.05 µg/l and 500 µg/l, each separated by a factor of 10. It turned out that the test concentration of 500 µg/l was too toxic to zebrafish in all cases. The experiment was continued with four concentrations of DDT, A54 and their 1:1 mixture, again each separated by a factor of 10 and ranging between 0.05 µg/l and 50 µg/l. A bioaccumulation test over 8 days showed that the zebrafish accumulated the chemicals but reached no equilibrium, and the concentration of 0.05 µg/l was established as the No Observed Effect Concentration (NOEC). Building on these analyses, the investigation of the life cycle (LC), starting with fertilized eggs, demonstrated a reduction in the hatching rate, in reproduction and in the length of the emerging fish. These reductions extended the duration of the life cycle stages (LCS), which consequently lasted longer than expected. Exposure time and concentration of the tested chemicals accelerated the occurrence of these effects, which were more pronounced when the chemical mixtures were used.
To establish whether the parameters assessed were correlated with the male reproductive tract, the quality, quantity and life span of sperm were assessed using the methods of Leong (1988) and Shapiro et al. (1994). The sperm degeneration observed led us to investigate spermatogenesis and the ultrastructure of the testes. This last experiment showed a significant reduction of the late stages of spermatogenesis and of the heterophagic vacuoles, which play an important role in spermatid maturation. It can therefore be concluded that DDT and A54 may act synergistically, causing disorders of the reproductive tract of male zebrafish and also influencing their growth.
ASEAN and ASEAN Plus Three: Manifestations of Collective Identities in Southeast and East Asia?
(2003)
East Asia is a region undergoing vast structural changes. As the region moved closer together economically and politically following the breakdown of the bipolar world order and the ensuing expansion of intra-regional interdependencies, the states of the region faced the challenge of having to actively recast their mutual relations. At the same time, throughout the 1990s, the West became increasingly interested in trans- and inter-regional dialogue and cooperation with the emerging economies of East Asia. These developments gave rise to a "new regionalism", which eventually also triggered debates on Asian identities and the region's potential to integrate. Against this backdrop, this thesis analyzes to what extent both the Association of Southeast Asian Nations (ASEAN), which has been operative since 1967 and thus embodies the "old regionalism" of Southeast Asia, and the ASEAN Plus Three forum (APT: the ASEAN states plus China, Japan and South Korea), which came into existence in the aftermath of the Asian economic crisis of 1997, can be said to represent intergovernmental manifestations of specific collective identities in Southeast and East Asia, respectively. Based on profiles of the respective discursive, behavioral and motivational patterns as well as the integrative potential of ASEAN and APT, this study establishes to what extent the member states adhere to sustainable collective patterns of interaction, expectations and objectives, and assesses to what extent they can be said to form specific 'ingroups'. Four studies on collective norms, readiness to pool sovereignty, solidarity and attitudes vis-à-vis relevant third states show that ASEAN has evolved a certain degree of collective identity, though the Association's political relevance and coherence are frequently thwarted by changes in its external environment. A study on the cooperative and integrative potential of APT yields no manifest evidence of an ongoing or incipient pan-East Asian identity formation process.
The vision of a future information and communication society has prompted leading politicians in the United States, the European Union and Japan to influence or even lead the economic and social transition by means of an active technology policy. The technological development of society, however, is a product of a complex interplay of technological, economic and socio-political constraints. These constraints limit political decision-making and implementation abilities. Moreover, facts and information change continuously during a paradigmatic technological, economic and social shift, which further limits political decision-making abilities. This study compares political decision-making to promote computer-mediated communications in the Triad since the beginning of the 1980s on four levels: the development of a political vision, long-term aims and strategies, technology policy (e.g. the promotion of technological development and competition policy) and regulatory policy (e.g. universal access, protection of privacy and intellectual property). While technology policy tends to be uncontroversial, regulatory policy during a paradigmatic shift is difficult and lengthy. Nevertheless, the inclusion of interest groups which arise during this paradigmatic shift and which are close to the technologies and their societal consequences helps decision-making processes. In this context, politics in the United States has been more successful than in the European Union and especially Japan. Although this study predates the rise of eCommerce over the Internet, it addresses many of the themes underlying it. Many of these themes remain politically unsettled on national, supranational and especially international levels. For example, for encryption and secure payments, which are necessary for eCommerce, no international standards yet exist. The issue of taxation has hardly been opened for discussion.
In sum, this study not only offers a historical overview of the development of the Internet, but also discusses issues of continuing present concern.
The main purpose of this dissertation is to answer the following question: How will the emergence of the Euro influence the currency composition of the NICs' monetary reserves? Taiwan and Thailand are chosen as the subjects of investigation. There are two sorts of motives for central banks' reserve holdings: intervention-related motives and portfolio-related motives. The need for reserve holdings resulting from intervention-related motives is justified by the costs resulting from exchange rate instability. On the other hand, we use the Tobin-Markowitz model to justify the need for monetary reserves held for portfolio-related motives. The operational implication of this distinction is the separation of monetary reserves into two tranches corresponding to the different objectives. An analysis of a central bank's transaction balance is an analysis of money quality; it has to do with transaction costs and non-pecuniary rates of return. The facts indicate that the Euro's emergence will not change the fact that the USD will continue to be the major currency of the transaction balances of the central banks in Taiwan and Thailand. In order to answer the question about the diversification of monetary reserves held as idle balances in the two NICs, we carry out a portfolio analysis based on the basic ideas of the Tobin-Markowitz model. This analysis shows that neither Taiwan nor Thailand can reduce risk at a given rate of return, or increase the rate of return at a given risk, by diversifying their monetary reserves held as idle balances from the USD to the Euro.
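The two-asset mean-variance trade-off underlying the Tobin-Markowitz analysis can be sketched as follows. The variances and covariance below are hypothetical, and whether diversification reduces risk (as it does in this toy example) or not (as the study above concludes for the Euro) depends entirely on such inputs.

```python
def min_variance_weight(var_a, var_b, cov_ab):
    """Weight on asset A in the two-asset minimum-variance portfolio
    (first-order condition of the Markowitz variance minimization)."""
    return (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)

def portfolio_variance(w, var_a, var_b, cov_ab):
    """Variance of the portfolio holding weight w in A and 1-w in B."""
    return w * w * var_a + (1 - w) * (1 - w) * var_b + 2 * w * (1 - w) * cov_ab

# hypothetical USD- and Euro-denominated reserve assets
var_usd, var_eur, cov = 0.04, 0.09, 0.01
w = min_variance_weight(var_usd, var_eur, cov)
print(round(w, 4), round(portfolio_variance(w, var_usd, var_eur, cov), 4))
```

With these hypothetical numbers the mixed portfolio has lower variance than holding USD alone; with sufficiently high correlation between the two currencies that benefit disappears, which is the kind of outcome the dissertation reports.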
Due to the breathtaking growth of the World Wide Web (WWW), the need for fast and efficient web applications is becoming more and more urgent. In this doctoral thesis, the emphasis is on two concrete tasks for improving Internet applications. On the one hand, a major problem of many of today's Internet applications is the performance of client/server communication: servers often take a long time to respond to a client's request. There are several strategies to overcome this problem of high user-perceived latency; one of them is to predict future user requests. This way, time-consuming calculations on the server's side can be performed even before the corresponding request is made. Furthermore, in certain situations, prefetching or presending of data may be appropriate. These ideas are discussed in detail in the second part of this work. On the other hand, a focus is placed on the problem of proposing hyperlinks to improve the quality of rapidly written texts, at first glance an entirely different problem from predicting client requests. Ultra-modern online authoring systems that check link consistency and administer link management should also propose links in order to improve the usefulness of the produced HTML documents. In the third part of this work, we describe a way to build a hyperlink-proposal module based on statistical information retrieval from hypertexts. These two problem categories do not seem to have much in common. One aim of this work is to show that similar solution strategies address both problems. A closer comparison and an abstraction of both methodologies lead to interesting synergetic effects. For example, advanced strategies for anticipating future user requests by modeling time and document aging can also be used to improve the quality of hyperlink proposals.
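One simple way to predict future user requests, as motivated above, is a first-order Markov model over observed request sequences; the most likely successor of the current page becomes a prefetching candidate. This is a sketch with hypothetical page names, not the prediction model developed in the thesis.

```python
from collections import Counter, defaultdict

def train_markov(sessions):
    """First-order Markov model over page requests: counts of P(next | current)."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(model, page):
    """Most frequently observed successor of `page` -- a prefetching candidate."""
    return model[page].most_common(1)[0][0] if model[page] else None

sessions = [["home", "news", "sports"], ["home", "news", "weather"],
            ["home", "shop"], ["news", "sports"]]
model = train_markov(sessions)
print(predict_next(model, "news"))  # 'sports'
```

The same counting machinery can be reweighted by recency, which corresponds to the time and document-aging refinements mentioned above.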
My dissertation is concerned with contemporary (Anglo-)Canadian immigrant fiction and proposes an analytic grid with which it may be appreciated and compared more adequately. The starting point is the general observation that the works of many Canadian immigrant writers are characterised by a focus on their respective home cultures as well as on their Canadian host culture. Following the ground-breaking work of Northrop Frye, Margaret Atwood and David Staines, the categories of "there" and "here" are suggested in order to reflect this double encoding of Canadian immigrant literature. However, "here" and "there" are more than spatial configurations in that they represent a concern with issues of multiculturalism and postcolonialism. Both are informed by an emphasis on difference and identity, and difference and identity are also what the narratives of M.G. Vassanji, Neil Bissoondath and Rohinton Mistry are preoccupied with. My study sets out to show two things: On the one hand, it attempts to exemplify the complexity and interrelatedness of "there" and "here" in a representative fashion; in their treatments of difference, M.G. Vassanji, Neil Bissoondath and Rohinton Mistry arrive at comparable identity constructions "here" and "there" respectively. On the other hand, special attention is paid to the strategies by which Vassanji, Bissoondath and Mistry construct difference and corroborate their respective understandings of identity.
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as state equation, which exhibits exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, a reduction of the number of state variables can be achieved by employing a reduced order modelling technique. Using the snapshot proper orthogonal decomposition (POD) method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation. In fact, a small number of POD basis functions often suffices to obtain a satisfactory level of accuracy in the reduced order solution. However, the small number of degrees of freedom in a POD based reduced order model also constitutes its main weakness for optimal control purposes. Since a single reduced order model is based on the solution of the Navier-Stokes equations for one specified control, it may be an inadequate model when the control (and consequently also the corresponding flow behaviour) is altered, implying that the range of validity of a reduced order model is, in general, limited. Thus, one is likely to encounter unreliable reduced order solutions when a control problem is solved on the basis of one single reduced order model. To escape this dilemma, we propose to use a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD based reduced order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced order models during the optimization process, enabling them to represent the flow dynamics as altered by the control.
In fact, a rigorous convergence theory for the TRPOD method is obtained, which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves a considerable amount of computational work during the control problem solution, since the original state equation only has to be solved when the model function in the trust-region framework is to be updated. The optimization process itself is based entirely on reduced order information.
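The core of the snapshot POD technique, compressing a set of state snapshots into a few basis vectors via the singular value decomposition, can be sketched as follows; the dimensions, the energy criterion and the synthetic rank-3 snapshot data are illustrative choices, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 30                          # state dimension, number of snapshots
modes = rng.standard_normal((n, 3))     # hidden low-dimensional structure
coeffs = rng.standard_normal((3, m))
Y = modes @ coeffs                      # snapshot matrix; columns are states

# POD basis = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(Y, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # smallest r capturing 99.9% energy
Phi = U[:, :r]                                # POD basis, n x r with r << n

# Reduced order representation: project snapshots onto the POD subspace
Y_red = Phi @ (Phi.T @ Y)
err = np.linalg.norm(Y - Y_red) / np.linalg.norm(Y)
print(r, err)   # here r = 3 and err is at machine-precision level
```

In an optimal control setting, the reduced model built from `Phi` is only trustworthy near the control that generated the snapshots, which is exactly the weakness the trust-region mechanism above addresses by recomputing snapshots when the model is updated.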
Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines as well as structural military systems, can be represented by means of queueing models. In queueing models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput or diminishing the overhead, whereas in the absence of a controller the system behavior may become quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes. In this thesis, the essential work consists in designing controlled queueing models and investigating their optimal control properties for applications in modern telecommunication systems, which must satisfy growing demands for quality of service (QoS). For two types of optimization criteria (a model without penalties and one with set-up costs), a class of controlled queueing systems is defined. The general queue forming this class is characterized by a Markov Additive Arrival Process and heterogeneous Phase-Type service time distributions. We show that for these queueing systems the structural properties of optimal control policies, e.g. monotonicity properties and threshold structure, are preserved. Moreover, we show that these systems possess specific properties, e.g. the dependence of optimal policies on the arrival and service statistics. In order to use controlled stochastic models in practice, a quick and effective method to find optimal policies is needed. We present an iteration algorithm which can be successfully used to find an optimal solution in the case of a large state space.
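A minimal sketch of such an iteration algorithm, here plain value iteration for a uniformized M/M/1 admission-control queue, shows the threshold structure of the optimal policy. All rates, costs and the truncation level are illustrative, and this toy model is far simpler than the MAP/PH queues of the thesis:

```python
import numpy as np

lam, mu = 0.6, 0.8           # arrival and service rate (illustrative)
Lam = lam + mu               # uniformization constant
beta = 0.99                  # discount factor per uniformized step
R, c = 5.0, 1.0              # admission reward, holding-cost rate
N = 30                       # truncation of the queue length

V = np.zeros(N + 1)
for _ in range(3000):        # value iteration until (numerical) convergence
    Vn = np.empty_like(V)
    for x in range(N + 1):
        admit = R + V[x + 1] if x < N else V[x]   # accept arrival (if room)
        arrival = max(admit, V[x])                # controller: admit vs. reject
        service = V[max(x - 1, 0)]                # fictitious service at x = 0
        Vn[x] = -c * x / Lam + beta * (lam * arrival + mu * service) / Lam
    V = Vn

# Optimal policy: admit an arrival in state x iff R + V[x+1] >= V[x].
policy = [x < N and R + V[x + 1] >= V[x] for x in range(N + 1)]
print("admit while queue length <", policy.index(False))
```

The resulting policy is monotone: customers are admitted exactly below a single threshold, mirroring the threshold-structure results the thesis establishes for far more general arrival and service processes.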
Hydrodynamic processes play a fundamental role in the distribution of salt within mangrove-fringed estuaries and mangrove forests. In this thesis, two hydrodynamic processes and their ecological implications were examined. (1) Passive irrigation and functional morphology of crustacean burrows in Rhizophora forests: The mangrove Rhizophora excludes more than 90% of the seawater salt during water uptake at the roots. By means of conductivity methods and resin casting, it was found that crustacean burrows play a key role in the removal of excess salt from the root zone. Salt diffuses from the roots into the burrows and is efficiently flushed from the burrows by rainwater infiltration and tidal irrigation. The burrows thus contribute significantly to favourable conditions for the growth of Rhizophora trees. (2) Trapping of mangrove propagules due to density-driven secondary circulation in tropical estuaries: In northeast Australian estuaries, mangrove propagules are carried upstream by density-driven axial surface convergences. Propagules accumulate in hydrodynamic traps upstream of suitable habitat, where they remain trapped for at least the entire tropical dry season. Axial convergences may thus provide an efficient barrier to propagule exchange across estuaries. In such estuaries, mangrove populations can be regarded as floristically isolated, not unlike island communities, even though the populations lie on a continuous coastline. This effect may contribute to the disjunct distribution observed in some mangrove species. The outcomes of this work contribute to the understanding of the importance of salt as a growth- and habitat-restricting factor in the mangrove environment.
This dissertation develops a rationale for how to use fossil data in solving biogeographical and ecological problems. It is argued that large amounts of high-quality fossil data can be used to document the evolutionary processes (the origin, development, formation and dynamics) of Arealsystems, which can be divided into six stages in North America: the Refugium Stage (before 15,000 years ago: > 15 ka), the Dispersal Stage (from 8,000 to 15,000 years ago: 8.0 - 15 ka), the Developing Stage (from 3,000 to 8,000 years ago: 3.0 - 8.0 ka), the Transitional Stage (from 1,000 to 3,000 years ago: 1 - 3 ka), the Primitive Stage (from 500 to 1,000 years ago: 0.5 - 1 ka) and the Human Disturbance Stage (during the last 500 years: < 0.5 ka). The division into these six stages is based on a geostatistical analysis of the FAUNMAP database, which contains 43,851 fossil records collected from 1860 to 1994 in North America. Fossil data are among the best materials with which to test the glacial refugia theory. Glacial refugia are areas where flora and fauna were preserved during the glacial period; at present they are characterized by a richness in species and in endemic species. This implies that these (endemic) species should have been distributed solely or primarily in these areas during the glacial period, so the refugia can be identified from fossil records of that period. If this is not the case, the richness in (endemic) species may not be the result of glacial refugia. By exploring where mammals lived during the Refugium Stage (> 15 ka), seven refugia in North America can be identified: the California Refugium, the Mexico Refugium, the Florida Refugium, the Appalachia Refugium, the Great Basin Refugium, the Rocky Mountain Refugium and the Great Lake Refugium. The first five refugia coincide well with the dispersal centers of De Lattin, which were recognized by biogeographical methods using data on modern distributions. The individuals of a species are not evenly distributed over its Arealsystem.
Brown's Hot Spots Model shows that in most cases there is an enormous variation in abundance within the range of a species: In a census, zero or only a very few individuals occur at most sample locations, but tens or hundreds are found at a few sample sites. Locations where only a few individuals can be sampled in a survey are called "cool spots", and sites where tens or hundreds of individuals can be observed in a survey are called "hot spots". Many areas within the range are uninhabited; these are called "holes". This model has direct implications for analyzing fossil data: Hot spots have a much higher local population density than cool spots, so the chances of discovering fossil individuals of a species are much higher in sediments located in a hot spot area than in a cool spot area. Therefore much higher MNIs (Minimum Numbers of Individuals) of the species should be found at fossil localities located in hot spot areas than in cool spot areas. Since there are only a few hot spots but many cool spots within the range of a single hypothetical species, only a few fossil sites can yield very high MNIs, whereas most other sites can yield only very low MNIs. This prediction was confirmed by an analysis of the 70 species in FAUNMAP with more than 100 fossil records. The temporal and spatial variation in abundance can be reconstructed from the temporospatial distribution of the MNIs of a species over its Arealsystem. Areas with no fossil records from the last few thousand years may be holes, sites with much higher MNIs may be hot spots, and locations with low MNIs may be cool spots. Although the hot spots of many species have remained unchanged in an area over thousands of years, our study shows that a large shift of hot spots occurred mainly around 1,500-1,000 years ago.
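As an illustration of this implication, one might classify fossil sites by their MNI counts roughly as follows; the site names, counts and the threshold are entirely hypothetical, intended only to show the hot spot/cool spot/hole trichotomy:

```python
# Hypothetical MNI counts per fossil site (not real FAUNMAP data)
sites = {"A": 0, "B": 2, "C": 1, "D": 148, "E": 3, "F": 0, "G": 95}

def classify(mni, hot_threshold=10):
    """Label a site by its MNI count (threshold is an arbitrary choice)."""
    if mni == 0:
        return "hole"
    return "hot spot" if mni >= hot_threshold else "cool spot"

labels = {site: classify(mni) for site, mni in sites.items()}
hot = [site for site, label in labels.items() if label == "hot spot"]
print(hot)   # few hot spots, many cool spots and holes
```

Consistent with the model's prediction, only a small fraction of sites carry very high counts, while most sites are cool spots or holes.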
There are three directions of movement: from the west side to the east side of the Rockies, from the east of the USA to the east side of the Rockies, and from the west side of the Rockies to the southwest of the USA. The first two directions of shift are called the Lewis and Clark pattern, which can be verified with the observations made by Lewis and Clark during their expedition in 1805-1806. The historical process behind this pattern may well explain the 200-year-old puzzle, noted by modern ecologists and biogeographers, of why big game that was then abundant on the east side of the Rocky Mountains was rare on the west side. The third direction of shift is called the Bayham pattern. This pattern can be tested against the model of Late Holocene resource intensification first described by Frank E. Bayham; the historical process creating the Bayham pattern challenges the classic explanation of Late Holocene resource intensification. An environmental change model is proposed to account for the shift of hot spots. Implications of glacial refugia and hot spot areas for wildlife management and effective conservation are discussed, and suggestions are given for how paleontologists and zooarchaeologists can provide more valuable information for other disciplines in their future excavations and research.
Spatial Queues
(2000)
In the present thesis, a theoretical framework for the analysis of spatial queues is developed. Spatial queues are a generalization of the classical concept of queues in that they provide the possibility of assigning properties to the users. These properties may influence the queueing process, but may also be of interest in themselves. As a field of application, mobile communication networks are modeled by spatial queues in order to demonstrate the advantage of including user properties in the queueing model. In this application, the property of main interest is the user's position in the network. After a short introduction, the second chapter contains an examination of the class of Markov-additive jump processes, including expressions for the transition probabilities and the expectation as well as laws of large numbers. Chapter 3 contains the definition and analysis of the central concept of spatial Markovian arrival processes (SMAPs for short) as a special case of Markov-additive jump processes, but also as a natural generalization of the well-known concept of BMAPs. In chapters 4 and 5, SMAPs serve as arrival streams for the analyzed periodic SMAP/M/c/c and SMAP/G/infinity queues, respectively. These types of queues find application as models and planning tools for mobile communication networks. The analysis of these queues involves new methods, so that new results are obtained even for the special case of BMAP inputs (i.e. non-spatial queues). In chapter 6, a procedure for statistical parameter estimation is proposed along with numerical results. The thesis is concluded by an appendix which collects the necessary results from the theories of Markov jump processes and stochastic point fields; for special classes of Markov jump processes, new results have been obtained there as well.
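As a non-spatial point of reference, the classical M/M/c/c loss queue underlying the SMAP/M/c/c model admits the well-known Erlang-B blocking probability, computable by a standard recursion. The parameters below are illustrative; the spatial models of the thesis go far beyond this Poisson-input special case:

```python
def erlang_b(c, a):
    """Erlang-B blocking probability of an M/M/c/c loss system with
    offered load a = lambda/mu, via the recursion
    B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# e.g. 10 channels, offered load 6 Erlang:
print(round(erlang_b(10, 6.0), 4))   # approx 0.0431
```

In a mobile network sized by such a loss model, this is the probability that an arriving call finds all c channels busy and is blocked.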
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The problems considered are discretized optimal control problems governed by discretized partial differential equations. Of special interest in this work are inaccurate function evaluations and their numerical treatment within an optimization algorithm; methods for noisy functions are appropriate for the practical application considered. In addition, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear system. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out jointly with TGU GmbH, a company of consulting engineers for ground water and water resources.
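The idea of block preconditioning a KKT system can be sketched as follows: for a saddle-point matrix with SPD (1,1) block H and constraint Jacobian A, the block-diagonal preconditioner diag(H, S) with Schur complement S = A H^-1 A^T yields a preconditioned matrix with only three distinct eigenvalues, so MINRES converges in a few steps. The dimensions and matrices below are illustrative, not the thesis's ground water systems:

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(1)
n, m = 50, 10
H = np.diag(rng.uniform(1.0, 3.0, n))     # SPD Hessian block (diagonal here)
A = rng.standard_normal((m, n))           # full-rank constraint Jacobian
K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])     # symmetric indefinite KKT matrix
b = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(H, S) with exact Schur complement S.
S = A @ np.linalg.solve(H, A.T)
def apply_prec(r):
    return np.concatenate([np.linalg.solve(H, r[:n]),
                           np.linalg.solve(S, r[n:])])
M = LinearOperator((n + m, n + m), matvec=apply_prec)

x, info = minres(K, b, M=M)               # Krylov solve of the KKT system
print(info, np.linalg.norm(K @ x - b))    # info == 0 signals convergence
```

In practice (and in the thesis's setting) the exact Schur complement is too expensive, so S is replaced by a cheap approximation; this sketch only shows the mechanism being approximated.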
The goal of this thesis is to transfer the logarithmic barrier approach, which in recent years has led to very efficient interior-point methods for convex optimization problems, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, owing to the max-term involved, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension makes it possible to deal with the inexact objective values and inexact subgradient information that occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those for the logarithmic barrier approach in finite optimization. Later in the thesis, the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences that converge to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier approach only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as applications in finance and digital filter design.
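The flavour of the approach can be conveyed by a toy convex semi-infinite program: minimize x subject to x >= sin(t) for all t in [0, pi], with optimal value x* = 1. The infinite constraint family is replaced by a max-term, evaluated inexactly on a finite grid, and the resulting problem is handled with a logarithmic barrier. The grid size, barrier schedule and damped Newton iteration below are illustrative choices, far simpler than the bundle method used in the thesis:

```python
import numpy as np

t_grid = np.linspace(0.0, np.pi, 2001)

def gmax(x):
    """Inexact max-term: max_t sin(t) - x, the max taken over a finite grid."""
    return float(np.max(np.sin(t_grid))) - x

x = 3.0                               # strictly feasible start: gmax(3) < 0
for mu in [1.0, 0.1, 0.01, 0.001]:    # decreasing barrier parameters
    for _ in range(50):               # damped Newton on x - mu*log(-gmax(x))
        d = -gmax(x)                  # distance to the constraint boundary
        grad, hess = 1.0 - mu / d, mu / d**2
        step = grad / hess
        while gmax(x - step) >= 0:    # damping keeps the iterate feasible
            step *= 0.5
        x -= step

print(x)   # approx 1 + final mu = 1.001; x -> x* = 1 as mu -> 0
```

Each barrier subproblem has its minimizer at x = max_t sin(t) + mu, so the iterates approach the semi-infinite optimum from the interior as mu is driven to zero, which is exactly the path-following behaviour the barrier convergence theory describes.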