Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists. It consists of three main elements: 1. the cost functional J, which models the purpose of the control on the system; 2. the definition of a control function u, which represents the influence of the environment on the system; 3. the set of differential equations e(y,u)=0 modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\nabla\cdot(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known. In a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences, and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG).
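The optimality system mentioned above takes a well-known form for this model problem. A sketch, introducing the adjoint state p and omitting the control constraint u \in U_{ad} for simplicity (with the constraint, the last equation becomes a variational inequality), reads:

```latex
% First-order optimality (KKT) system for the model problem, sketched
% without control constraints; with u in U_ad, the gradient equation
% becomes the variational inequality (p + \alpha u, v - u) >= 0 for all v in U_ad.
\begin{align*}
  -\nabla\cdot(A\nabla y) + c\,y &= f + u \quad \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega && \text{(state)}\\
  -\nabla\cdot(A\nabla p) + c\,p &= y - y_d \quad \text{in } \Omega, \qquad p = 0 \ \text{on } \partial\Omega && \text{(adjoint)}\\
  \alpha u + p &= 0 \quad \text{in } \Omega && \text{(gradient)}
\end{align*}
```

It is this coupled system that is discretized by finite differences and handed to the multigrid solver.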
In general, there are several optimization methods for solving optimal control problems: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient of the reduced functional is computed via the adjoint approach. Other possibilities are Quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, and (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity: the computational work scales linearly with the number of unknowns, and the rate of convergence is independent of the grid size. Originally, multigrid methods were developed as a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to the investigation of the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which enable them to outperform the CPU in terms of computational power and memory bandwidth. Here they are considered as a prototype for prospective multi-core computers with several hundred cores. When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is achieved only when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis.
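The optimal-complexity claim can be made concrete with a toy geometric multigrid solver. The following V-cycle for the 1D Poisson equation is a minimal illustrative sketch (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation); it is not the CSMG of the thesis, and all parameter choices are assumptions:

```python
import numpy as np

def residual(u, f, h):
    # r = f - A u  for A = -d^2/dx^2 with homogeneous Dirichlet BCs
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2
    return r

def jacobi(u, f, h, nu, omega=2/3):
    # nu sweeps of weighted-Jacobi smoothing
    for _ in range(nu):
        u[1:-1] += omega * (h**2 / 2) * residual(u, f, h)[1:-1]
    return u

def v_cycle(u, f, h):
    n = len(u) - 1
    if n <= 2:                        # coarsest grid: solve directly
        u[1] = f[1] * h**2 / 2
        return u
    u = jacobi(u, f, h, nu=2)         # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)         # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)
    e = np.zeros_like(u)              # linear-interpolation prolongation
    e[2:-1:2] = ec[1:-1]
    e[1::2] = 0.5*(e[:-1:2] + e[2::2])
    u += e                            # coarse-grid correction
    return jacobi(u, f, h, nu=2)      # post-smoothing
```

Each V-cycle costs O(n) work and reduces the algebraic error by a grid-independent factor, which is the essence of the optimal-complexity argument.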
To this end, a nonoverlapping domain decomposition method is introduced that allows the computational power of many GPUs or CPUs to be exploited in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method using a discrete approximation of the Steklov-Poincaré operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the single-domain case and of the two approaches for the multi-domain case on a GPU and a CPU for different variants.
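The Schur complement idea for two subdomains can be sketched on a generic block system. The following numpy snippet is an illustrative dense version: the analytic inversion of the capacitance matrix used in the thesis is replaced here by a direct solve, and all matrix names are hypothetical:

```python
import numpy as np

def schur_solve(A11, A22, A1g, A2g, Ag1, Ag2, Agg, f1, f2, fg):
    """Solve the two-subdomain block system
        [A11  0   A1g] [u1]   [f1]
        [ 0  A22  A2g] [u2] = [f2]
        [Ag1 Ag2  Agg] [ug]   [fg]
    via the Schur complement on the interface unknowns ug."""
    # local subdomain solves
    X1 = np.linalg.solve(A11, A1g); y1 = np.linalg.solve(A11, f1)
    X2 = np.linalg.solve(A22, A2g); y2 = np.linalg.solve(A22, f2)
    # capacitance (Schur complement) matrix and reduced right-hand side
    S = Agg - Ag1 @ X1 - Ag2 @ X2
    g = fg - Ag1 @ y1 - Ag2 @ y2
    ug = np.linalg.solve(S, g)        # interface unknowns
    u1 = y1 - X1 @ ug                 # back-substitution into subdomains
    u2 = y2 - X2 @ ug
    return u1, u2, ug
```

The subdomain solves are independent of each other, which is what makes the approach attractive for distributing work across several GPUs or CPUs.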
Krylov subspace methods are often used to solve large-scale linear systems arising from optimization problems involving partial differential equations (PDEs). Appropriate preconditioning is vital for designing efficient iterative solvers of this type. This research consists of two parts. In the first part, we compare, both theoretically and numerically, two different kinds of preconditioners for a conjugate gradient (CG) solver applied to a partial integro-differential equation (PIDE) in finance. An analysis of the mesh independence and the rate of convergence of the CG solver is included. The knowledge gained from preconditioning the PIDE is applied to a related optimization problem. The second part aims at developing a new preconditioning technique by embedding reduced-order models of nonlinear PDEs, generated by proper orthogonal decomposition (POD), into deflated Krylov subspace algorithms for solving the corresponding optimization problems. Numerical results are reported for a series of test problems.
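To illustrate why preconditioning matters for CG-type solvers, here is a minimal preconditioned conjugate gradient routine with a Jacobi (diagonal) preconditioner. This is a generic textbook sketch, not one of the preconditioners studied in this work:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for a symmetric positive
    definite matrix A. M_inv(r) applies the preconditioner inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1           # converged: solution, iteration count
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

On a badly scaled system, even this simple diagonal preconditioner typically cuts the iteration count substantially compared to plain CG.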
Religion, churches and religious communities have growing importance in the Law of the European Union. For a long time, a distinct law on religion of the European Union has been developing. This collection of those norms of European Union Law that directly concern religion mirrors the current status of this dynamic process.
In this thesis, in order to shed light on the biological function of the membrane-bound Glucocorticoid Receptor (mGR), proteomic changes induced by 15 min of in vivo acute stress and by short in vitro activation of the mGR were analyzed in T-lymphocytes. The numerous overlaps between the two datasets suggest that the mGR mediates physiologically relevant actions and participates in the early stress response, triggering rapid early priming events that pave the way for the slower genomic GC activities. In addition, a new commercially available method with suitable sensitivity to detect the human mGR is reported and the transcriptional origin of this protein is investigated. Our results indicate that specific GR-transcripts, containing exon 1C and 1D, are associated with the expression of this membrane isoform.
The contribution of three genes (C15orf53, OXTR and MLC1) to the etiology of chromosome-15-linked schizophrenia (SCZD10), bipolar disorder (BD) and autism spectrum disorder (ASD) was studied. At first, the uncharacterized gene C15orf53 was comprehensively analyzed. Previous genome-wide association studies (GWAS) in bipolar disorder samples have identified an association signal in close vicinity to C15orf53 on chromosome 15q14. This gene is located in exactly the genomic region that is segregating in our SCZD10 families. An association study with bipolar disorder (BD) and SCZD10 samples did not reveal any association of single nucleotide polymorphisms (SNPs) in C15orf53. Mutational analysis of C15orf53 in SCZD10-affected individuals from seven multiplex families did not show any mutations in the 5'-untranslated region, the coding region or the intron-exon boundaries. Gene expression analysis revealed that C15orf53 is expressed in a subpopulation of leukocytes, but not in human post-mortem limbic brain tissue. In summary, C15orf53 is unlikely to be a strong candidate gene for the etiology of BD or SCZD10. The second investigated gene was the human oxytocin receptor gene (OXTR). Five well-described SNPs located in the OXTR gene were used for a transmission-disequilibrium test (TDT) in parent-child trios with ASD-affected children. No association was found, either in the complete sample or in a subgroup of children with an intelligence quotient (IQ) above 70, independently of whether Haploview or UNPHASED was used for the analysis. The third gene, MLC1, was investigated with regard to its implication in the etiology of SCZD10. Mutations in the MLC1 gene lead to megalencephalic leukoencephalopathy with subcortical cysts (MLC), and one variant coding for the amino acid methionine (Met) instead of leucine (Leu) at position 309 was identified to segregate in a family affected with SCZD10.
For further investigation of MLC1 and its possible implication in the etiology of SCZD10, a constitutive Mlc1 knockout mouse model was to be created. Mouse embryonic stem cells (mES) were electroporated with a knockout vector construct and analyzed with respect to homologous recombination of the knockout construct with the genomic DNA (gDNA) of the mES. Polymerase chain reaction (PCR) on the available stem cell clones did not reveal any homologously recombined ES cells. Additionally, we conducted experiments to knock down MLC1 using microRNAs. The 3'-untranslated region of the MLC1 gene was analyzed with the bioinformatics tool TargetScan to screen for potential microRNA target sites, and a potential binding site for miR-137 was identified. The gene expression levels of genes that had been linked to psychiatric disorders and carried a predicted miR-137 binding site have been shown to be immediately responsive to miR-137. Thus, there is new evidence that MLC1 is a candidate gene for the etiology of SCZD10.
The stress hormone cortisol, as the end-product of the hypothalamic-pituitary-adrenal (HPA) axis, has been found to play a crucial role in the elicitation of aggressive behavior (Kruk et al., 2004; Böhnke et al., 2010). In order to further explore potential mechanisms underlying the relationship between stress and aggression, such as changes in (social) information processing, we conducted two experimental studies that are presented in this thesis. In both studies, acute stress was induced by means of the Socially Evaluated Cold Pressor Test (SECP) designed by Schwabe et al. (2008). Stressed participants were classified as either cortisol responders or nonresponders depending on their rise in cortisol following the stressor. Moreover, basal HPA axis activity was measured prior to the experimental sessions, and EEG was recorded throughout the experiments. The first study dealt with the influence of acute stress on cognitive control processes. 41 healthy male participants were assigned to either the stress condition or the non-stressful control procedure of the SECP. Before as well as after the stress induction, all participants performed a cued task-switching paradigm in order to measure cognitive control processes. Results revealed a significant influence of acute and basal cortisol levels, respectively, on the motor preparation of the upcoming behavioral response, which was reflected in changes in the magnitude of the terminal Contingent Negative Variation (CNV). In the second study, the effect of acute stress and subsequent social provocation on approach-avoidance motivation was examined. 72 healthy students (36 males, 36 females) took part in the study. They performed an approach-avoidance task, using emotional facial expressions as stimuli, before as well as after the experimental manipulation of acute stress (again via the SECP) and social provocation realized by means of the Taylor Aggression Paradigm (Taylor, 1967).
In addition to salivary cortisol, testosterone samples were collected at several points in time during the experimental session. Results indicated a positive relationship between acute testosterone levels and the motivation to approach social threat stimuli in highly provoked cortisol responders. Similar results were found when the testosterone-to-cortisol ratio at baseline was taken into account instead of acute testosterone levels. Moreover, brain activity during the approach-avoidance task was significantly influenced by acute stress and social provocation, as reflected in reductions of early (P2) as well as later (P3) ERP components in highly provoked cortisol responders. This may indicate a less accurate, rapid processing of socially relevant stimuli due to an acute increase in cortisol and subsequent social provocation. In conclusion, the two studies presented in this thesis provide evidence for significant changes in information processing due to acute stress, basal cortisol levels and social provocation, suggesting an enhanced preparation for a rapid behavioral response in the sense of a fight-or-flight reaction. These results confirm the model of Kruk et al. (2004), which proposes a mediating role of changed information processing in the stress-aggression link.
Time series archives of remotely sensed data offer many possibilities to observe and analyse dynamic environmental processes at the Earth's surface. Based on these hypertemporal archives, which offer continuous observations of vegetation indices, typically at repetition rates of one to two weeks, sets of phenological parameters or metrics can be derived. Examples of such parameters are the beginning and end of the annual growing period, as well as its length. Even though these parameters do not correspond exactly to conventional observations of phenological events, they nevertheless provide indications of the dynamic processes occurring in the biosphere. The development of robust algorithms for the derivation of phenological metrics can be challenging. Currently, such algorithms are most commonly based on digital filters or on Fourier analysis of the time series. Polynomial spline models offer a useful alternative to existing methods. The possibilities of using spline models in the analytical description of time series are numerous, and their specific mathematical properties may help to avoid known problems of the more common methods for deriving phenological metrics. Based on a selection of different polynomial spline models suitable for the analysis of remotely sensed time series of vegetation indices, a method to derive various phenological parameters from such time series was developed and implemented in this work. Using an example data set from an intensively used agricultural area showing highly dynamic variations in vegetation phenology, the newly developed method was verified by comparing the results of the spline-based approach to those of two alternative, well-established methods.
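A toy version of deriving phenological metrics from a smoothed vegetation-index series might look as follows. This pure-numpy sketch fits a single polynomial per season and applies a fixed-threshold rule, whereas the thesis employs genuine spline models; the function, parameters and threshold are purely illustrative:

```python
import numpy as np

def season_metrics(doy, vi, threshold=0.5, degree=6):
    """Fit a polynomial to a one-season vegetation-index (VI) series
    sampled at days of year `doy`, and derive start/end of season as
    the first/last crossings of a fixed VI threshold."""
    coef = np.polyfit(doy, vi, degree)
    grid = np.linspace(doy.min(), doy.max(), 1000)
    smooth = np.polyval(coef, grid)
    above = smooth >= threshold
    if not above.any():
        return None                    # no growing season detected
    idx = np.flatnonzero(above)
    sos, eos = grid[idx[0]], grid[idx[-1]]   # season start / end
    return {"sos": sos, "eos": eos, "length": eos - sos,
            "peak": grid[np.argmax(smooth)]}
```

Because the fitted model is an analytical function, metrics such as peak timing or season length follow directly from it, which is one of the advantages the thesis attributes to spline-based descriptions.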
Arctic and Antarctic polynya systems are of high research interest since extensive new ice formation takes place in these regions. The monitoring of polynyas and of the ice production is crucial with respect to the changing sea-ice regime. The thin-ice thickness (TIT) distribution within polynyas controls the amount of heat that is released to the atmosphere and therefore has an impact on the ice-production rates. This thesis presents an improved method to retrieve thermal-infrared thin-ice thickness distributions within polynyas. TIT with a spatial resolution of 1 km × 1 km is calculated using the MODIS ice-surface temperature and atmospheric model variables within the Laptev Sea polynya for the winter periods 2007/08 and 2008/09. The improvement of the algorithm focuses on the surface-energy flux parameterizations. Furthermore, a thorough sensitivity analysis is applied to quantify the uncertainty in the thin-ice thickness results. An absolute mean uncertainty of ±4.7 cm for ice below 20 cm thickness is calculated. Furthermore, advantages and drawbacks of using different atmospheric data sets are investigated. Daily MODIS TIT composites are computed to fill the data gaps arising from clouds and shortwave radiation. The resulting maps cover on average 70 % of the Laptev Sea polynya. An intercomparison of MODIS and AMSR-E polynya data indicates that the spatial resolution is essential for accurately deriving polynya characteristics. Monthly fast-ice masks are generated using the daily TIT composites. These fast-ice masks are implemented into the coupled sea-ice/ocean model FESOM. An evaluation of FESOM sea-ice concentrations shows that a prescribed high-resolution fast-ice mask is necessary for an accurate polynya location. However, for a more realistic simulation of other small-scale sea-ice features, further model improvements are required.
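The core idea of a thermal thin-ice-thickness retrieval can be sketched in a strongly simplified form: the linear conductive flux through the ice is balanced against a prescribed net atmospheric heat loss. The real algorithm's flux parameterizations are far more detailed, and the constants below are only illustrative:

```python
K_ICE = 2.03      # thermal conductivity of sea ice [W m-1 K-1] (illustrative)
T_FREEZE = -1.86  # freezing temperature of sea water [deg C] (illustrative)

def thin_ice_thickness(t_surface, q_net):
    """Ice thickness [m] from the ice-surface temperature [deg C] and the
    net upward atmospheric heat flux [W m-2], assuming a linear temperature
    profile in the ice so that the conductive flux equals the heat loss:
        q_net = K_ICE * (T_FREEZE - t_surface) / h
    """
    if q_net <= 0 or t_surface >= T_FREEZE:
        return None   # no net heat loss: the retrieval is not applicable
    return K_ICE * (T_FREEZE - t_surface) / q_net
```

This also makes the sensitivity structure visible: the retrieved thickness reacts directly to errors in the surface temperature and in the flux parameterizations, which is exactly what the thesis's uncertainty analysis quantifies.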
The retrieval of daily high-resolution MODIS TIT composites is an important step towards a more precise monitoring of thin sea ice and sea-ice production. Future work will address a combined remote sensing and model assimilation method to simulate fully covered thin-ice thickness maps that enable the retrieval of accurate ice-production values.
We are living in a connected world, surrounded by interwoven technical systems. Since they pervade more and more aspects of our everyday lives, a thorough understanding of the structure and dynamics of these systems is becoming increasingly important. However, rather than being blueprinted and constructed at the drawing board, many technical infrastructures - such as the Internet's global router network, the World Wide Web, large-scale Peer-to-Peer systems or the power grid - evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Hence, due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming increasingly complicated. A number of failures have shown that complex structures can emerge unintentionally which resemble those observed in biological, physical and social systems. In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies stemming from the field of random and complex networks, which are being used for the study of natural, social and technical systems and thus deliver insights into their structure and dynamics. A particularly interesting finding is the fact that the efficiency, dependability and adaptivity of natural systems can be related to rather simple local interactions between a large number of elements. We review a number of interesting findings about the formation of complex structures and collective dynamics and investigate how these are applicable to the design and operation of large-scale networked computing systems. A particular focus of this dissertation is the application of principles and methods stemming from the study of complex networks in distributed computing systems that are based on overlay networks.
Here we argue how the fact that the (virtual) connectivity in such systems is alterable and widely independent of physical limitations facilitates a design that is based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - as occurring frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, the adaptation of the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. As such, the proposed scheme allows sound statements to be made about the relation between the local behavior of individual nodes and the large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities. Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model for coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
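The Kuramoto dynamics underlying such a heartbeat-synchronization protocol can be illustrated with a minimal simulation of the classic model on an arbitrary undirected topology. The Euler integration and all parameters below are illustrative assumptions, not those of the protocol itself:

```python
import numpy as np

def kuramoto(adj, omega, K, theta0, dt=0.01, steps=2000):
    """Euler integration of the Kuramoto model on a graph:
        d(theta_i)/dt = omega_i + (K / deg_i) * sum_j A_ij sin(theta_j - theta_i)
    adj: symmetric 0/1 adjacency matrix; omega: natural frequencies."""
    theta = theta0.copy()
    deg = adj.sum(axis=1)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        coupling = (adj * np.sin(diff)).sum(axis=1) / deg
        theta += dt * (omega + K * coupling)
    return theta

def order_parameter(theta):
    # r in [0, 1]: r close to 1 means the phases are synchronized
    return np.abs(np.exp(1j * theta).mean())
```

The order parameter r makes the transition from incoherence to global synchronization measurable, and its dependence on the topology is what links the protocol's behavior to spectral network properties.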
We further argue that - based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators - the proposed protocol makes it possible to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $N$ and a random element $X$, an $N$-optimal codebook is an $N$-subset of the underlying Banach space $E$ which gives a best approximation to $X$ in an average sense. We focus on two types of geometric properties: the global growth behaviour (growing in $N$) of a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on $\mathbb{R}^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove, for many interesting distributions, classical conjectures on the asymptotic behaviour of these properties. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes such as fractional Brownian motions.
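In the finite-dimensional setting, locally optimal $N$-codebooks can also be computed numerically; a standard tool is Lloyd's algorithm, sketched here for an empirical sample of a one-dimensional distribution (purely illustrative; the thesis treats the asymptotics analytically):

```python
import numpy as np

def lloyd(sample, N, iters=100, seed=0):
    """Lloyd iteration for an N-point quantizer of an empirical sample:
    alternately assign each point to its nearest code point and recenter
    each code point at the mean of its Voronoi cell."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(sample, size=N, replace=False)
    for _ in range(iters):
        cells = np.argmin(np.abs(sample[:, None] - codebook[None, :]), axis=1)
        for i in range(N):
            pts = sample[cells == i]
            if len(pts):
                codebook[i] = pts.mean()   # recenter nonempty cells
    return np.sort(codebook)

def quantization_error(sample, codebook):
    # mean squared distance to the nearest code point
    return np.min((sample[:, None] - codebook[None, :])**2, axis=1).mean()
```

For a standard Gaussian and $N=2$, the optimal codebook is known to be $\pm\sqrt{2/\pi}\approx\pm0.798$, which the iteration recovers approximately from a sample.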
This dissertation focuses on the link between labour market institutions and precautionary savings. It is evaluated whether private households react to changes in social insurance provision such as the income replacement in case of unemployment by increased savings for precautionary reasons. The dissertation consists of three self-contained chapters, each focusing on slightly different aspects of the topic. The first chapter titled "Precautionary saving and the (in)stability of subjective earnings uncertainty" empirically looks at the influence of future income uncertainty on household saving behavior. Numerous cross-section studies on precautionary saving use subjective expectations regarding the income variance one year ahead as a proxy for income uncertainty. Using such proxies observed only at one point in time, however, may give rise to biased estimates for precautionary wealth if expectations are not stable over time. Survey data from the Dutch DNB Household Survey suggest that subjective future income distributions are not stable over the mid-term. Moreover, in this study I contrast estimates of precautionary wealth using the variation coefficient observed at one point in time with those using a simple mid-term average. Estimates of precautionary wealth based on the average are about 40% to 80% higher than the estimates using the variation coefficient observed only once. In addition to that, wealth accumulation for precautionary reasons is estimated for different parts of the income distribution. The share of precautionary wealth is highest for households at the center of the income distribution. By linking saving behaviour with unemployment insurance, the following chapters then shed some light on an issue that has largely been neglected in the literature on labour market institutions so far. 
Whereas the third chapter models the relevance of unemployment insurance for income uncertainty and intertemporal decision making during institutional reform processes, chapter 4 seeks to establish empirically a relationship between saving behavior and unemployment insurance. Social insurance, especially unemployment insurance, provides agents with income insurance against non-marketable income risks. Since the early 1990s, reform measures such as the more activating policies suggested by the OECD Jobs Study in 1994 have been observed in Europe. In the third chapter it is argued that such changes in unemployment insurance reduce public insurance and increase income uncertainty. Moreover, a simple three-period model is discussed which shows a link between a welfare state reform and agents' saving decisions as one possible reaction of agents to self-insure against income risk. Two sources of uncertainty seem to be important in this context: (1) uncertain results of the reform process concerning the replacement rate, and (2) uncertainty regarding the timing of information about the content of the reform. It can be shown that the precautionary motive for saving explains an increased accumulation of capital in times of reform activity. In addition, early information about the expected replacement rate increases agents' utility and reduces under- and oversaving. Following the argument of the previous chapters that an important feature of labour market institutions in modern welfare states is to provide cash transfers as income replacement in case of unemployment, it is hypothesised that unemployment benefits reduce the motive to save for precautionary reasons. Based on consumer sentiment data from the European Commission's Consumer Survey, chapter 4 finally provides some evidence that aggregate saving intentions are significantly influenced by unemployment benefits. It can be shown that higher benefits lower the intention to save.
The main objective of the present thesis was to investigate whether antibody effects observed in earlier in vitro studies can translate into protection against chemical carcinogenesis in vivo, as the basis of an immunoprophylactic approach against carcinogens. As a model for chemical carcinogenesis, we selected B[a]P, the prototypical polycyclic aromatic hydrocarbon (PAH), an environmental pollutant emanating from both natural and anthropogenic sources. Many in vivo models conveniently use high doses of carcinogens, mostly given as a single bolus, which provides simple surrogate readouts but poorly reflects chronic exposure to the low concentrations found in the environment. In addition, these concentrations cannot be matched with equimolar antibody concentrations obtained by immunisation. However, low B[a]P concentrations do not permit direct measurement of chemical carcinogenesis. Therefore, in the present thesis, the pharmacokinetics, metabolism and B[a]P-mediated immunotoxicity were chosen as experimental read-outs. B[a]P conjugate vaccines based on ovalbumin, tetanus toxoid and diphtheria toxoid (DT) as carrier proteins were developed to actively immunise mice against B[a]P. The B[a]P-DT conjugate induced the most robust immune response. The antibodies reacted not only with B[a]P but also with the proximate carcinogen 7,8-diol-B[a]P. Antibodies modulated the bioavailability of B[a]P and its metabolic activation in a dose-dependent manner by sequestration in the blood. In order to further improve the vaccination, we replaced the protein carrier by promiscuous T-helper cell epitopes to induce higher antibody titers with increased specificity for the B[a]P hapten. We hypothesised that a reduction of B cell binding sites on the carrier, compared to the whole protein carrier, should favour the activation of B cells recognising the hapten instead of the carrier protein.
Internal processing of the carrier, cleavage of the B[a]P-BA and subsequent presentation of the carrier peptide by MHC II molecules to the T cell receptor should induce a B cell dependent immune response by activating B cells capable of recognising B[a]P. We demonstrated that a vaccination against B[a]P using promiscuous T-helper cell epitopes as a carrier is feasible, and some of the tested peptide conjugates were more immunogenic than whole-protein conjugates, with increased specificity. We showed that vaccination against B[a]P reduces immunotoxicity. B[a]P suppressed the proliferative response of both T and B cells after sub-acute administration, an effect that was completely reversed by vaccination. In immunized mice, the immunotoxic effect of B[a]P on IFN-γ, IL-12 and TNF-α production and on B cell activation was reversed. In addition, specific antibodies inhibited the induction by B[a]P of Cyp1a1 in lymphocytes and of Cyp1b1 in the liver, enzymes that are known to convert the procarcinogen B[a]P to the ultimate DNA-adduct-forming metabolite, a major risk factor of chemical carcinogenesis. In order to replace Freund's adjuvant and to improve the immunisation strategy in terms of antibody quantity and quality, several adjuvants that are potentially compatible with use in humans were tested. In combination with Freund's adjuvant, the conjugate vaccine induced high levels of B[a]P-specific antibodies. We showed that all adjuvants tested induced specific antibodies against B[a]P and its carcinogenic metabolite 7,8-diol-B[a]P. The highest antibody levels were obtained with Quil A, MF-59 and Alum. Biological activity in terms of enhanced retention of B[a]P was confirmed in mice immunised with Quil A, Montanide, Alum and MF-59. Our findings demonstrate that a vaccination against B[a]P is feasible in combination with adjuvants licensed for use in humans.
Based on these results and on the current understanding of the mechanisms of chemical carcinogenesis of the ubiquitous carcinogen B[a]P and of the effects of specific antibodies, an immunoprophylactic approach against chemical carcinogenesis is absolutely warranted. Nevertheless, the direct effects of B[a]P-specific antibodies on the different stages of carcinogenesis (e.g. adduct formation), and whether these effects may translate into a long-term protective effect against tumourigenesis, need to be proven in further experiments.
This dissertation focuses on the effective elements of e-marketing strategy in the tourism industry. As case studies, the research focuses on airlines, tour operators and chain hotels in Iran and Germany. It aims to show various possibilities to enhance a company's e-marketing strategy and to perform e-marketing strategies successfully by recognising the effective elements and their importance during the strategy design and implementation process. Given the nature of the research (explanatory, exploratory and applied), the Delphi technique was chosen after study and consultation. As results, we obtain a set of effective elements and their importance according to the Delphi and AHP methods. For example, among the elements, "Tourists' Needs, Experience and Expectations", with an importance coefficient of %204, is the most remarkable element, and the "customer satisfaction" group of elements, with an average value of 5.54, is, according to the research results, more important than the other groups.
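The AHP weighting mentioned above can be sketched as follows. This is a minimal illustration, not code or data from the study: the pairwise comparison matrix is a made-up example, and the geometric-mean (row) method is one common way to derive AHP priority weights.

```python
import math

def ahp_weights(matrix):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (row) method."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical 3x3 comparison of criteria on Saaty's 1-9 scale
# (reciprocal entries below the diagonal).
matrix = [[1.0, 3.0, 5.0],
          [1/3, 1.0, 2.0],
          [1/5, 1/2, 1.0]]
weights = ahp_weights(matrix)  # normalised: the weights sum to 1
```

The resulting weight vector ranks the first criterion highest, mirroring how the study ranks element groups by importance coefficient.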
On the Influence of Ignored Stimuli: Generalization and Application of Distractor-Response Binding.
(2011)
In selection tasks where target stimuli are accompanied by distractors, the response, the target stimulus and the distractor stimuli can be encoded together as one episode in memory. Subsequent repetition of any aspect of such an episode can lead to the retrieval of the whole episode, including the response. Thus, repeating a distractor can retrieve responses given to previous targets; this mechanism was labeled distractor-response binding and has been evidenced in several visual setups. Three experiments of the present thesis implemented a priming paradigm with an identification task to generalize this mechanism to auditory and tactile stimuli as well as to stimulus concepts. In four further experiments the possible effect of distractor-response binding on drivers' reactions was investigated. The same paradigm was implemented using more complex stimuli, foot responses, go/no-go responses, and a dual-task setup with head-up and head-down displays. The results indicate that distractor-response binding effects occur with auditory and tactile stimuli and that the process is mediated by a conceptual representation of the distractor stimuli. Distractor-response binding effects were also revealed for stimuli, responses, and framework conditions likely to occur in a driving situation. It can be concluded that the effect of distractor-response binding needs to be taken into account in the design of local danger warnings in driver assistance systems.
Religion, churches and religious communities have growing importance in the law of the European Union. For a long time now, a distinct law on religion of the European Union has been developing. This collection of those norms of European Union law that directly concern religion mirrors the current status of this dynamic process.
This thesis centers on formal tree languages and on their learnability by algorithmic methods in abstractions of several learning settings. After a general introduction, we present a survey of relevant definitions for the formal tree concept as well as special cases (strings) and refinements (multi-dimensional trees) thereof. In Chapter 3 we discuss the theoretical foundations of algorithmic learning in a specific type of setting of particular interest in the area of Grammatical Inference, where the task consists in deriving a correct formal description for an unknown target language from various information sources (queries and/or finite samples) in a polynomial number of steps. We develop a parameterized meta-algorithm that incorporates several prominent learning algorithms from the literature in order to highlight the basic routines which, regardless of the nature of the information sources, have to be run through by all those algorithms alike. In this framework, the intended target descriptions are deterministic finite-state tree automata. We discuss the limited transferability of this approach to another class of descriptions, residual finite-state tree automata, for which we propose several learning algorithms as well. The class learnable by these techniques corresponds to the class of regular tree languages. In Chapter 4 we outline a recent range of attempts in Grammatical Inference to extend the learnable language classes beyond regularity and even beyond context-freeness by techniques based on syntactic observations which can be subsumed under the term 'distributional learning', and we describe learning algorithms in several settings for the tree case taking this approach. We conclude with some general reflections on the notion of learning from structural information.
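The deterministic finite-state tree automata that serve as target descriptions above can be illustrated with a minimal bottom-up acceptance check. This is a sketch with a made-up example automaton, not code from the thesis: a run assigns a state to every node from the leaves upward and accepts if the root's state is final.

```python
def accepts(transitions, final_states, tree):
    """Run a deterministic bottom-up tree automaton on a tree.

    transitions maps (symbol, tuple_of_child_states) -> state;
    a tree is (symbol, [subtrees]), with leaves as (symbol, []).
    """
    def run(node):
        symbol, children = node
        child_states = tuple(run(child) for child in children)
        return transitions[(symbol, child_states)]
    try:
        return run(tree) in final_states
    except KeyError:  # undefined transition: the tree is rejected
        return False

# Example automaton: boolean expression trees over {0, 1, and, or}
# are accepted exactly when they evaluate to true.
transitions = {("0", ()): "F", ("1", ()): "T"}
for a in "TF":
    for b in "TF":
        transitions[("and", (a, b))] = "T" if a == b == "T" else "F"
        transitions[("or", (a, b))] = "T" if "T" in (a, b) else "F"
final_states = {"T"}

tree = ("or", [("0", []), ("and", [("1", []), ("1", [])])])
```

A learning algorithm in this framework would infer such a transition table from queries or samples rather than being handed it directly.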
The 23rd Annual Congress of the European Consortium for Church and State Research took place in Oxford, United Kingdom from 29 September to 2 October 2011. Founded in 1989, the Consortium unites experts in law and religion from Member States of the European Union. The Oxford conference took as its theme Religion and Discrimination Law focusing on the manner in which State governments had sought to implement the non-discrimination policy of the EU by legislation and through courts and tribunals. The proceedings comprise three introductory papers considering the historical, cultural and social background; the prohibition on discrimination, and the exemptions to the general prohibition. This is followed by national reports from twenty-three countries describing the reach of discrimination law in the field of religion. These are supplemented by further papers analysing the jurisprudence of the Strasbourg Court and the background to EU Directive 2000/78/EC and by some concluding reflections. The proceedings begin with the text of a public lecture given at the opening of the Congress by Sir Nicolas Bratza, President of the European Court of Human Rights on the subject of freedom of religion under Article 9 of the Convention.
Psychiatric/behavioral disorders and traits are usually polygenic in nature, where a particular phenotype is the manifestation of multiple genes. However, the existence of large families with numerous members affected by these disorders/traits points towards a Mendelian (or monogenic) possibility, where the phenotype is caused by a single gene. In order to better understand the genetic architecture of psychiatric/behavioral disorders/traits in general, this thesis investigates large pedigrees that display a Mendelian pattern for attention-deficit/hyperactivity disorder, schizophrenia and bipolar disorder. Numerous challenges in the field of psychiatric and behavioral sciences have impeded the genetic investigation of such disorders/traits. Examples include frequent cross-disorders, genetic heterogeneity across subjects, as well as the use of diagnostic tools that can be subjective at times. To overcome these challenges, this thesis investigates large multi-generational pedigrees which comprise a significant number of members who exhibit specific psychiatric/behavioral phenotypes. These pedigrees provide high-resolution experimental setups that can dissect the genetic complexities of psychiatric/behavioral disorders/traits. This thesis adopts a classical two-stage genetic approach to investigate the various psychiatric/behavioral disorders/traits in large pedigrees. The classical two-stage genetic approach is commonly used by human geneticists to study a wide spectrum of human physiological disorders but has only recently been applied to the field of psychiatric and behavioral genetics. Through the study of large pedigrees, this thesis identifies genomic regions within the vast genome that may play a causative role in the expression of certain psychiatric/behavioral disorders/traits.
In addition to the well-recognised effects of both genes and adult environment, it is now broadly accepted that adverse conditions during pregnancy contribute to the development of mental and somatic disorders in the offspring, such as cardiovascular disorders, endocrinological disorders, metabolic disorders, schizophrenia, anxious and depressive behaviour and attention deficit hyperactivity disorder (ADHD). Early life events may have a long-lasting impact on tissue structure and function, and these effects appear to underlie the developmental origins of vulnerability to chronic diseases. The assumption that prenatal adversity, such as maternal emotional states during pregnancy, may have adverse effects on the developing infant is not new. Corresponding references can be found in an ancient Indian text (ca. 1050 BC), in biblical texts and in documents originating during the Middle Ages. Even Hippocrates noted possible effects of maternal emotional states on the developing fetus. Since the mid-1950s, research examining the effects of maternal psychosocial stress during pregnancy has appeared in the literature, and extensive research in this field has been conducted since the early 1990s. Thus, the relationship between early life events and long-term health outcomes was already postulated over 20 years ago. David Barker and colleagues demonstrated that children of lower birth weight - which represents a crude marker of an adverse intrauterine environment - were at increased risk of high blood pressure, cardiovascular disorders, and type-2 diabetes later in life. These provocative findings led to a large amount of subsequent research, initially focussing on the role of undernutrition in determining fetal outcomes. The phenomenon of prenatal influences that determine in part the risk of suffering from chronic disease later in life has been named the "fetal origins of health and disease" paradigm.
The concept of "prenatal programming" has now been extended to many other domains, such as the effects of prenatal maternal stress, prenatal tobacco exposure, alcohol intake, medication, toxins, as well as maternal infection and diseases. During the process of prenatal programming, environmental agents are transmitted across the placenta and act on specific fetal tissues during sensitive periods of development. Thus, developmental trajectories are changed and the organisation and function of tissue structure and organ systems are altered. The biological purpose of such "early life programming" may consist in evolutionary advantages. The offspring adapts its development to the expected extrauterine environment, which is forecast by the cues available during fetal life. If the fetus receives signals of a challenging environment, e.g. due to maternal stress hormones or maternal undernutrition, its survival may be promoted by developmental adaptation processes. However, if the expected environment does not match the real environment, maladaptation and later disease risk may result. For example, a possible indicator of a "response ready" trait, such as hyperactivity/inattention, may have been advantageous in an adverse ancient environment. However, it is of disadvantage when the postnatal environment demands the opposite skills, such as attention and concentration, e.g. in the classroom or at school, to achieve academic success. Borderline personality disorder (BPD) is a prevalent psychiatric disorder, characterized by impulsivity, affective instability, dysfunctional interpersonal relationships and identity disturbance. Although many studies report different risk factors, the exact etiologic mechanisms are not yet understood. In addition to the well-recognised effects of genetic components and adverse childhood experiences, BPD may potentially be co-determined by further environmental influences acting very early in life: during the pre- and perinatal period.
There are several hints that may suggest possible prenatal programming processes in BPD. For example, patients with BPD are characterized by elevated stress sensitivity and reactivity and by dysfunctions of the neuroendocrine stress system, such as the hypothalamic-pituitary-adrenal (HPA) axis. Furthermore, patients with BPD show a broad range of somatic comorbidities, especially those disorders for which prenatal programming processes have been described. During infancy and childhood, BPD patients already show behavioural and emotional abnormalities as well as pronounced temperamental traits, such as impulsivity, emotional dysregulation and inattention, that may potentially be co-determined by prenatal programming processes. Such temperamental traits - similar to those seen in patients with ADHD - have been described to be associated with low birth weight, which indicates a suboptimal intrauterine environment. Moreover, the functional and structural alterations in the central nervous system (CNS) in patients with BPD might also be mediated in part by prenatal agents, such as prenatal tobacco exposure. Prenatal adversity may thus constitute a further, additional component in the multifactorial genesis of BPD. The association between BPD and prenatal risk factors has not yet been studied in such detail. We are not aware of any other study that assessed pre- and perinatal risk factors, such as maternal psychosocial stress, smoking, alcohol intake, obstetric complications and lack of breastfeeding, in patients with BPD.
Variational inequality problems constitute a common basis for investigating the theory and algorithms of many problems in mathematical physics, in economics as well as in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory and economic equilibrium problems, but also in fluid mechanics, the physics of solid bodies and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems which have better properties (like, e.g., a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods are a focus of current research, in which we take part with this thesis. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its exploration and convergence analysis is one of the main results of this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: the logarithmic-quadratic distance function constitutes an interior-point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered.
This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed, using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates applying the Newton method to the solution of the auxiliary problems. In the numerical part of the thesis the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (the BrPAP method). For differentiable, convex optimization problems we describe the implementation of the Newton method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in the literature. Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems we apply the well-known bundle technique. The implementation is described in detail and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results.
Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior-point method. Such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
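The classical proximal-point idea that the LQPAP method builds on can be sketched for a one-dimensional unconstrained convex problem. This toy sketch uses a plain quadratic regularization term (not the logarithmic-quadratic distance of the thesis) and solves each auxiliary problem inexactly by inner gradient steps, echoing the inexact-solution idea above.

```python
def proximal_point(grad_f, x0, c=1.0, outer=50, inner=100, step=0.1):
    """Proximal-point iteration: x_{k+1} approximately minimises the
    well-posed auxiliary problem f(x) + (c/2) * (x - x_k)**2,
    here via inner gradient descent (an inexact subproblem solve)."""
    x = x0
    for _ in range(outer):
        y = x  # warm-start the auxiliary problem at the current iterate
        for _ in range(inner):
            y -= step * (grad_f(y) + c * (y - x))  # gradient of the regularised problem
        x = y
    return x

# Toy problem: f(x) = (x - 3)^2 with gradient 2*(x - 3); minimiser x* = 3.
x_star = proximal_point(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Each auxiliary problem is strongly convex even if f is only convex, which is exactly the regularization benefit the abstract describes; the LQPAP method replaces the quadratic distance here with a logarithmic-quadratic one to add the interior-point effect.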