Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, greenhouse gas emissions, and soil erosion and degradation. Some of these externalities can potentially be ameliorated by a careful allocation of crops and cropping practices, while at the same time the state of these crops has to be monitored in order to assess food security. Modern satellite-based earth observation is an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data, and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time in which fully operational satellites produce freely available imagery at high spatial resolution with revisit times of often less than a month. At the same time, classification models have constantly evolved from distribution-based statistical algorithms through machine learning to the now ubiquitous deep learning.
In this environment, we used an exploratory approach to advance the state of the art of crop classification. We conducted regional case studies, focusing on the study region of the Eifelkreis Bitburg-Prüm, with the aim of developing validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenological characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study region by drawing polygons based on high-resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high-resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As an end result, we were able to show that, in spite of the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high-quality maps could be produced for both years, and the regional development of maize cultivation could be quantified.
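As an illustration of the kind of pixel-based workflow described above, the following sketch combines a random forest classifier with Gaussian smoothing of the predicted class probabilities. It assumes the RapidEye bands have been stacked into an array and the reference polygons rasterized into a label image; all names, the smoothing parameter, and the 0.5 threshold are placeholders rather than the settings used in the thesis.

```python
# Sketch: pixel-based maize classification with a random forest and a
# Gaussian smoothing of the predicted class probabilities.
# Assumes `bands` is a (height, width, n_bands) RapidEye array and
# `labels` is a (height, width) array with 1 = maize, 0 = other,
# -1 = unlabeled; all names here are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def classify_maize(bands, labels, sigma=1.0):
    h, w, n_bands = bands.shape
    X = bands.reshape(-1, n_bands)
    y = labels.reshape(-1)

    train = y >= 0                      # pixels covered by reference polygons
    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    rf.fit(X[train], y[train])

    # Probability of the "maize" class for every pixel, reshaped to the image.
    proba = rf.predict_proba(X)[:, list(rf.classes_).index(1)].reshape(h, w)

    # Spatial smoothing (the "Gaussian blur" step) before thresholding
    # suppresses isolated, salt-and-pepper misclassifications.
    smoothed = gaussian_filter(proba, sigma=sigma)
    return (smoothed >= 0.5).astype(np.uint8)
```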
In the second case study, we used these spatially explicit datasets to link the expansion of biogas-producing units with the extended maize cultivation in the area. In the next step, we overlaid the maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction
and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type
classification in environmental protection, by producing maps of potential soil hazards, which can
be used by local stakeholders to reallocate certain crop types to locations with less associated
risk.
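A raster overlay of the kind described above can be as simple as combining boolean masks on a common grid. The following sketch flags maize pixels on steep, erosion-prone soils; the threshold values and input names are purely illustrative and not taken from the study.

```python
# Sketch: flag maize pixels on erosion-prone terrain by overlaying the
# classified maize map with slope and soil rasters on the same grid.
# The threshold values are purely illustrative.
import numpy as np

def erosion_risk(maize_map, slope_deg, soil_erodibility,
                 slope_threshold=9.0, erodibility_threshold=0.4):
    """All inputs are 2-D arrays on the same grid; returns a binary risk map."""
    steep = slope_deg >= slope_threshold
    fragile = soil_erodibility >= erodibility_threshold
    return (maize_map.astype(bool) & steep & fragile).astype(np.uint8)
```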
In our third case study, we used Sentinel-1 data as input imagery and official statistical records as maize reference data, and we were able to produce consistent modeling input data for four consecutive years. Using these datasets, we could train and validate different models on spatially and temporally independent random subsets, with the goal of assessing model transferability. We were able to show that state-of-the-art deep learning models such as U-Net performed significantly better than conventional models such as random forests when the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for model robustness and the potential usefulness of deep learning models in building fully operational global crop classification models.
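The transferability assessment rests on splitting the data along whole years and whole regional subsets rather than along random pixels. The following sketch illustrates this idea with placeholder sample records; it is a generic pattern, not the exact sampling scheme used in the thesis.

```python
# Sketch: building spatially and temporally independent train/test splits
# to assess model transferability, as opposed to random pixel-level splits.
# `samples` is assumed to be a list of dicts with keys "year", "region",
# "features" and "label"; all names are placeholders.
def split_by_year(samples, test_year):
    train = [s for s in samples if s["year"] != test_year]
    test = [s for s in samples if s["year"] == test_year]
    return train, test

def split_by_region(samples, test_region):
    train = [s for s in samples if s["region"] != test_region]
    test = [s for s in samples if s["region"] == test_region]
    return train, test

# A model that looks good under random splits but degrades under these
# held-out-year / held-out-region splits does not transfer well.
```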
We were able to conclude that the first major barrier to global classification models is reference data. Since most research in this area is still conducted with local field surveys, and only a few countries have access to official agricultural records, more global cooperation is necessary to build harmonized and regionally stratified datasets. The second major barrier is the classification algorithm. While a lot of progress has been made in this area, the current wave of newly emerging deep learning models shows great promise but has not yet consolidated. Considerable research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, so that they can be applied effortlessly by local and global stakeholders.
This thesis is concerned with two classes of optimization problems which stem
mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes
of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used
to find clusters within a set of data points. The problem is also known as
the $k$-means problem, since the most prominent heuristic to compute a feasible
point of this optimization problem is the $k$-means method. In many modern
applications, however, the clustering suffers from uncertain input data due to,
e.g., unstructured measurement errors. The reason for this is that the clustering
result then represents a clustering of the erroneous measurements instead of
retrieving the true underlying clustering structure. We address this issue by
applying robust optimization techniques: we derive the strictly and $\Gamma$-robust
counterparts of the MSSC problem, which are as challenging to solve as the
original model. Moreover, we develop alternating direction methods to quickly
compute feasible points of good quality. Our experiments reveal that the more
conservative strictly robust model consistently provides better clustering solutions
than the nominal and the less conservative $\Gamma$-robust models.
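For reference, the $k$-means method mentioned above as the most prominent MSSC heuristic is just Lloyd's alternation between assignment and centroid updates. The sketch below shows this nominal heuristic only; it does not implement the strictly or $\Gamma$-robust counterparts or the alternating direction methods developed in the thesis.

```python
# Sketch: Lloyd's k-means iteration, the classical heuristic for computing
# a feasible point of the minimum sum-of-squares clustering (MSSC) problem.
# This is the nominal problem only, not the robust counterparts.
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        new_centers = np.array([
            points[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, assign
```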
In the context of clustering problems, however, using only a heuristic solution
comes with severe disadvantages regarding the interpretation of the clustering.
This motivates us to study globally optimal algorithms for the MSSC problem.
We note that although some algorithms have already been proposed for this
problem, it is still far from being “practically solved”. Therefore, we propose
mixed-integer programming techniques, which are mainly based on geometric
ideas and which can be incorporated into a branch-and-cut-based algorithm tailored
to the MSSC problem. Our numerical experiments show that these techniques
significantly improve the solution process of a
state-of-the-art MINLP solver
when applied to the problem.
We then turn to the study of cardinality-constrained optimization problems.
We consider two famous problem instances of this class: sparse portfolio optimization and sparse regression problems. In many modern applications, it is common
to consider problems with thousands of variables. Therefore, globally optimal
algorithms are not always computationally viable and the study of sophisticated
heuristics is very desirable. Since these problems have a discrete-continuous
structure, decomposition methods are particularly well suited. We then apply a penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
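To make the discrete-continuous structure concrete, the following sketch shows a generic alternating heuristic for cardinality-constrained least squares: it alternates between choosing a support of size $k$ (the discrete block) and refitting the coefficients on that support (the continuous block). This is only an illustration of the decomposition idea, not the penalty alternating direction method studied in the thesis.

```python
# Sketch: a generic alternating heuristic for cardinality-constrained least
# squares, min ||Ax - b||^2 s.t. ||x||_0 <= k. It alternates between picking
# a support (discrete block) and refitting the coefficients on that support
# (continuous block). Illustration only, not the thesis's penalty method.
import numpy as np

def sparse_least_squares(A, b, k, n_iter=50):
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # dense warm start
    for _ in range(n_iter):
        support = np.argsort(np.abs(x))[-k:]        # discrete step: pick k indices
        x_new = np.zeros(n)
        x_new[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
        if np.allclose(x_new, x):
            break
        x = x_new
    return x
```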
Even though time is in most cases a good metric for measuring the cost of algorithms, there are cases where theoretical worst-case time and experimentally measured running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly faster than that of an algorithm where most operations have to load the data from memory. The topic of this thesis is a new metric for measuring the cost of algorithms called memory distance, which can be seen as an abstraction of the aspect just mentioned. We will show that there are simple algorithms which show a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance, and space, and sketch how to define "the usual" memory distance complexity classes.
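The locality effect that motivates the memory distance metric can be made tangible with a small timing experiment: two traversals with the same operation count but different access patterns. The sketch below uses a row-major NumPy array; it illustrates the cache effect discussed above and is not the thesis's metric or benchmark.

```python
# Sketch: two traversals with identical operation counts but different
# memory-access patterns. On a row-major (C-order) array, summing row by
# row walks memory contiguously, while summing column by column jumps
# across cache lines, so the measured time differs although the classical
# operation count is the same.
import time
import numpy as np

a = np.random.rand(4000, 4000)          # row-major (C-order) layout

def sum_rowwise(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()          # contiguous reads
    return total

def sum_colwise(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()          # strided reads
    return total

for f in (sum_rowwise, sum_colwise):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, round(time.perf_counter() - t0, 3), "s")
```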
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of an L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction consists of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as the ‘target-ing’ construction) or a to-infinitival complement (the ‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often choose a complement type different from that of native speakers (e.g. Gries & Wulff 2009; Martinez-Garcia & Wulff 2012), as illustrated in (3), and because it is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing them with multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of the verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. the target-to and target-ing constructions) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency not only leads to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which increasingly resembles the form-meaning mappings of native speakers.
Taken together, these insights highlight how important it is for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, which can be found at different levels of abstraction and complexity.
This document presents some reflections on the role of training in the cross-border labour market. To this end, it draws both on the various issues of the Hefte der Großregion and on the discussions prompted by the online panel discussion organized on 1 December 2020 on the topic „Mismatches, Kompetenzen, Ausbildung… Welche Passverhältnisse für den grenzüberschreitenden Arbeitsmarkt?“ (Mismatches, skills, training… what fit for the cross-border labour market?). More concretely, this contribution intends to answer the following question: how can training, and its various practical measures in vocational as well as school and university settings, help to mitigate the imbalances observed in the labour market of the Greater Region? In this respect, the document offers some food for thought for cross-border cooperation in the field of training.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers enables market equilibria to be computed efficiently.
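To give a flavor of the method class referred to here, the following sketch shows a minimal scalar consensus ADMM in which each agent's subproblem can be solved in parallel. It is a toy illustration under simple quadratic objectives, not the tailored market-equilibrium algorithm from the dissertation.

```python
# Sketch: a minimal scalar consensus ADMM. Each "agent" i minimizes
# 0.5 * (x - a[i])^2 subject to consensus x_i = z, so the consensus
# solution is the mean of a. The per-agent x-update is embarrassingly
# parallel, which is what makes distributed variants attractive.
import numpy as np

def consensus_admm(a, rho=1.0, n_iter=200):
    a = np.asarray(a, dtype=float)
    x = np.zeros(len(a))    # local copies (one per agent)
    u = np.zeros(len(a))    # scaled dual variables
    z = 0.0                 # consensus variable
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local step, parallelizable
        z = np.mean(x + u)                      # gather / consensus step
        u = u + x - z                           # dual update
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges to 3.0, the mean
```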
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, the existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates the existence of an equilibrium and that computes an equilibrium if one exists. Based on well-known instances from the literature on the gas and electricity sectors, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In contrast, the integralities introduced by the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many equilibria.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
The forward testing effect is an indirect benefit of retrieval practice. It refers to the finding that retrieval practice of previously studied information enhances learning and retention of subsequently studied other information in episodic memory tasks. Here, two experiments were conducted that investigated whether retrieval practice influences participants’ performance in other types of task, i.e., arithmetic tasks. Participants studied three lists of words in anticipation of a final recall test. In the testing condition, participants were immediately tested on lists 1 and 2 after study of each list, whereas in the restudy condition, they restudied lists 1 and 2 after initial study. Before and after study of list 3, participants did an arithmetic task. Finally, participants were tested on list 3, list 2, and list 1. Different arithmetic tasks were used in the two experiments: participants did a modular arithmetic task in Experiment 1a and a single-digit multiplication task in Experiment 1b. The results of both experiments showed a forward testing effect, with interim testing of lists 1 and 2 enhancing list 3 recall, but no effect of recall testing of lists 1 and 2 on participants’ performance in the arithmetic tasks. The findings are discussed with respect to cognitive load theory and current theories of the forward testing effect.
Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to the quality of gaze data and other factors potentially threatening the validity of this signal. In this study, we evaluated the impact of accuracy and areas of interest (AOIs) size on the classification of simulated gaze (fixation) data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying accuracy. As hypothesized, we found that accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed in falsely classified gaze inside AOIs (Type I errors; false alarms) and falsely classified gaze outside the predefined AOIs (Type II errors; misses). Our results indicate that smaller AOIs generally minimize false classifications as long as accuracy is good enough. For studies with lower accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of more probable Type I errors. Proper estimation of accuracy is therefore essential for making informed decisions regarding the size of AOIs in eye tracking research.
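The LRVT classification rule itself is straightforward: a gaze sample is assigned to the nearest AOI centre if and only if it lies within the chosen radius. The sketch below illustrates this rule with placeholder landmark coordinates and radius; it is not the simulation code used in the study.

```python
# Sketch: classifying a gaze sample with a Limited-Radius Voronoi
# Tessellation (LRVT): the sample is assigned to the nearest AOI centre
# (e.g., a facial landmark) if and only if it lies within the radius,
# otherwise it counts as outside all AOIs. Coordinates and the radius
# below are placeholders.
import numpy as np

def classify_gaze(gaze_xy, aoi_centres, radius):
    """gaze_xy: (x, y) point; aoi_centres: dict name -> (x, y); radius in px."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    name, dist = None, np.inf
    for label, centre in aoi_centres.items():
        d = np.linalg.norm(gaze_xy - np.asarray(centre, dtype=float))
        if d < dist:
            name, dist = label, d
    return name if dist <= radius else None  # None = gaze outside every AOI

aois = {"left_eye": (310, 220), "right_eye": (370, 220), "mouth": (340, 310)}
print(classify_gaze((355, 235), aois, radius=40))   # -> "right_eye"
```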
The temporal stability of psychological test scores is one prerequisite for their practical usability. This is especially true for intelligence test scores. In educational contexts, high stakes decisions with long-term consequences, such as placement in special education programs, are often based on intelligence test results. There are four different types of temporal stability: mean-level change, individual-level change, differential continuity, and ipsative continuity. We present statistical methods for investigating each type of stability. Where necessary, the methods were adapted for the specific challenges posed by intelligence research (e.g., controlling for general intelligence in lower order test scores). We provide step-by-step guidance for the application of the statistical methods and apply them to a real data set of 114 gifted students tested twice with a test-retest interval of 6 months.
• Four different types of stability need to be investigated for a full picture of temporal stability in psychological research
• Selection and adaptation of the methods for use in intelligence research
• Complete protocol of the implementation
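As a minimal illustration of two of the four stability types, the following sketch computes mean-level change with a paired t-test and differential continuity with the test-retest correlation on placeholder scores; it is not the complete protocol provided in the article.

```python
# Sketch: two of the four stability types for a test-retest design.
# Mean-level change is examined with a paired t-test, differential
# continuity with the test-retest correlation. Scores are placeholders.
import numpy as np
from scipy import stats

t1 = np.array([118., 124., 131., 109., 127.])   # scores at first testing
t2 = np.array([121., 122., 135., 112., 125.])   # scores six months later

t_stat, p_value = stats.ttest_rel(t1, t2)        # mean-level change
r, r_p = stats.pearsonr(t1, t2)                  # differential continuity

print(f"mean-level change: t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"differential continuity: r = {r:.2f}")
```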
We examined the long-term relationship of psychosocial risk and health behaviors with clinical events in patients awaiting heart transplantation (HTx). Psychosocial characteristics (e.g., depression), health behaviors (e.g., dietary habits, smoking), medical factors (e.g., creatinine), and demographics (e.g., age, sex) were collected at the time of listing in 318 patients (82% male, mean age = 53 years) enrolled in the Waiting for a New Heart Study. Clinical events were death/delisting due to deterioration, high-urgency status transplantation (HU-HTx), elective transplantation, and delisting due to clinical improvement. Within 7 years of follow-up, 92 patients died or were delisted due to deterioration, 121 received HU-HTx, 43 received elective transplantation, and 39 were delisted due to improvement. Adjusting for demographic and medical characteristics, the results indicated that frequent consumption of healthy foods (i.e., foods high in unsaturated fats) and being physically active increased the likelihood of delisting due to improvement, while smoking and depressive symptoms were related to death/delisting due to clinical deterioration while awaiting HTx. In conclusion, psychosocial and behavioral characteristics are clearly associated with clinical outcomes in this population. Interventions that target psychosocial risk, smoking, dietary habits, and physical activity may be beneficial for patients with advanced heart failure waiting for a cardiac transplant.