000 Computer science, information science, and general works
Building on Social Virtual Reality to Support Flexible Collaboration and Enrich Therapy Sessions
(2025)
Social virtual environments allow their users to meet and collaborate in a shared three-dimensional space, even when far apart from each other in the real world. Within these spaces, the appearance and interaction capabilities of both users and environments can be adapted and changed in a myriad of ways. To enable virtual environments to fulfill their potential of supporting a wide variety of collaboration use-cases, both the impacts of basic interaction design decisions and the individual needs of specific usage areas need to be explored further.
This thesis approaches this topic in two ways. First, it explores the basic building blocks of collaboration in social virtual environments by asking: "How can social virtual spaces that allow interaction beyond real-world constraints utilize the potential of mutual assistance and shared workflows between multiple users?" Second, going into further detail for a serious use-case in which direct collaborative interactions and their effect on the involved users are especially important, it explores the potential of collaborative virtual spaces in the therapy domain by asking: "How can the potential of social virtual spaces be utilized to support and improve therapy encounters?"
With regard to the first research question, the thesis presents two theoretical frameworks detailing different aspects of supporting smooth and varied collaboration processes. In addition, several user studies on collaborative virtual interaction are described, focusing on the roles that different users can play during shared interaction and the effects that this distribution of roles and responsibilities has on both the performance and the experience of the involved user pairs.
The results presented for this first research question show that social virtual spaces have the potential to provide dedicated support for collaborative workflows. To enable users to adapt their working mode individually and as a team, interaction techniques should complement a team's natural interaction and communication. When presenting novel interactions to users, providing them with a way to support each other can ease their adaptation to these interactions. In these cases, the inclusion of all interested collaborators as active participants should be prioritized in order to let all users benefit from being immersed in a virtual environment.
Addressing the combination of social virtual spaces with therapy in relation to the second research question, this thesis presents the result of a series of interviews with practicing physio- and psychotherapists. Motivated by the recorded expert feedback, it also reports on two more detailed explorations of specific areas of interest. The work presented for the second research question demonstrated the promise of using virtual environments in both exercise- and conversation-based therapy practice. Investigating the potential of shared interactions, the exploration of virtual recordings and the adaptation of virtual appearances, the presented work uncovered several topic areas that could be further explored regarding their possible use in the treatment of patients.
Taken together, the six research articles presented in this thesis show both the value of supporting and understanding shared interactions in virtual spaces and their potential place in serious use-cases like the therapy domain. When introducing shared virtual environments to new user groups, the opportunity for mutual support through shared interaction techniques could be a crucial building block towards making virtual spaces both accessible and attractive to a variety of users.
This dissertation addresses the measurement and evaluation of the energy and resource efficiency of software systems. Studies show that the environmental impact of Information and Communications Technologies (ICT) is steadily increasing and is already estimated to be responsible for 3% of total greenhouse gas (GHG) emissions. Although it is the hardware that consumes natural resources and energy through its production, use, and disposal, software controls the hardware and therefore has a considerable influence on the capacities used. Accordingly, it should also be attributed a share of the environmental impact. To address this software-induced impact, the focus is on the continued development of a measurement and assessment model for energy- and resource-efficient software. Furthermore, measurement and assessment methods from international research and practitioner communities were compared in order to develop a generic reference model for software resource and energy measurements. The next step was to derive a methodology and to define and operationalize criteria for evaluating and improving the environmental impact of software products. In addition, a key objective is to transfer the developed methodology and models to software systems that cause high consumption or offer optimization potential through economies of scale. These include, e.g., Cyber-Physical Systems (CPS) and mobile apps, as well as applications with high demands on computing power or data volumes, such as distributed systems and especially Artificial Intelligence (AI) systems.
In particular, factors influencing the consumption of software along its life cycle are considered. These factors include the location (cloud, edge, embedded) where the computing and storage services are provided, the roles of the stakeholders, application scenarios, the configuration of the systems, the data used, its representation and transmission, and the design of the software architecture. Based on existing literature and previous experiments, distinct use cases were selected that address these factors. Comparative use cases include the implementation of a scenario in different programming languages, using varying algorithms, libraries, data structures, protocols, model topologies, hardware and software setups, etc. From this selection, experimental scenarios were devised for the use cases to compare the methods under analysis. During their execution, the energy and resource consumption was measured and the results were assessed. Subtracting baseline measurements of the hardware setup without the software running from the scenario measurements makes the software-induced consumption measurable and thus transparent. Comparing the scenario measurements with each other allows the identification of the more energy-efficient setup for the use case and, in turn, the improvement and optimization of the system as a whole. The calculated metrics were then also structured as indicators in a criteria catalog. These indicators represent empirically determinable variables that provide information about a matter that cannot be measured directly, such as the environmental impact of the software. Together with verification criteria that must be complied with and confirmed by the producers of the software, this creates a model with which the comparability of software systems can be established.
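The baseline-subtraction step described above can be sketched in a few lines. The function name, the sampling interval, and all power readings below are illustrative assumptions, not the thesis's actual measurement tooling:

```python
# Minimal sketch of baseline subtraction: the software-induced energy is
# the scenario measurement minus the idle (baseline) measurement of the
# same hardware setup, integrated over time. All numbers are invented.

def software_induced_energy(scenario_watts, baseline_watts, interval_s=1.0):
    """Integrate (scenario - baseline) power samples into joules."""
    if len(scenario_watts) != len(baseline_watts):
        raise ValueError("sample series must be aligned")
    return sum((s - b) * interval_s
               for s, b in zip(scenario_watts, baseline_watts))

# Example: 1 Hz power samples for the idle system and two scenario runs.
baseline  = [40.0, 41.0, 40.5, 40.0]   # hardware without the software
variant_a = [55.0, 57.0, 56.0, 54.0]   # e.g. implementation A
variant_b = [48.0, 49.0, 48.5, 47.5]   # e.g. implementation B

print(software_induced_energy(variant_a, baseline))  # 60.5 J attributable to A
print(software_induced_energy(variant_b, baseline))  # 31.5 J attributable to B
```

Comparing the two results identifies variant B as the more energy-efficient setup for this (hypothetical) use case, mirroring the comparison step described in the text.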
The knowledge gained from the experiments and assessments can then be used to forecast and optimize the energy and resource efficiency of software products. This enables developers, but also students, scientists, and all other stakeholders involved in the life cycle of software, to continuously monitor and optimize the impact of their software on energy and resource consumption. The developed models, methods, and criteria were evaluated and validated by the scientific community at conferences and workshops. The central outcomes of this thesis, including a measurement reference model and the criteria catalog, were disseminated in academic journals. Furthermore, the transfer to society has been driven forward, e.g., through the publication of two book chapters, the development and presentation of exemplary best practices at developer conferences, collaboration with industry, and the establishment of the eco-label “Blue Angel” for resource- and energy-efficient software products. In the long term, the objective is to effect a change in societal attitudes and ultimately to achieve significant resource savings through economies of scale by applying the methods in the development of software in general and AI systems in particular.
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged through concurrent programming, which is challenging for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
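The clustering step of the debugging approach can be illustrated with a minimal sketch: threads are grouped by one runtime metric and the within-cluster spread is compared for several values of k. This is a generic, deterministic k-means stand-in with invented numbers, not the thesis's actual clustering strategy or its four cluster validation indices:

```python
def kmeans_1d(points, k, iters=50):
    """Tiny deterministic k-means over one runtime metric per thread.
    Quantile initialization keeps the sketch reproducible; this is a
    generic stand-in, not the thesis's clustering algorithm."""
    pts = sorted(points)
    if k == 1:
        return [pts]
    centers = [pts[int(i * (len(pts) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            j = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def spread(clusters):
    """Total within-cluster sum of squared deviations: a crude stand-in
    for a proper cluster validation index."""
    return sum(sum((p - sum(c) / len(c)) ** 2 for p in c)
               for c in clusters if c)

# Illustrative per-thread runtime shares of one artifact: three groups
# of threads (idle, moderately busy, dominant).
thread_times = [0.02, 0.03, 0.02, 0.45, 0.48, 0.44, 0.95, 0.97]
for k in (1, 2, 3, 4):
    print(k, round(spread(kmeans_1d(thread_times, k)), 4))
```

For this toy metric the spread drops sharply up to k = 3 and barely improves afterwards, echoing the thesis's observation that the optimal k rarely exceeds three.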
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently execute portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
Increasing digitalization of processes is being called for both nationally and internationally. The heterogeneity and complexity of the resulting systems make participation difficult for regular user groups, for example those without programming expertise or a background in information technology. Smart contracts are a case in point: their programming is complex, and any errors translate directly into monetary loss because of the direct link to the underlying cryptocurrency. This thesis presents an alternative protocol for cyber-physical contracts that is particularly well suited to human interaction and can also be understood by regular user groups. The focus is on the transparency of the agreements, and neither a blockchain nor a digital currency based on one is used. The thesis's contract model can accordingly be understood as a traceable link between two parties that securely connects the different systems and thereby promotes self-organization. This link can either run automatically with computer support or be carried out manually. In contrast to smart contracts, processes can thus be digitalized step by step. The agreements themselves can be used for communication, but also for legally binding contracts. The thesis situates the new concept among related strands such as Ricardian and smart contracts and defines goals for the protocol, which are realized in a reference implementation. Both the protocol and the implementation are described in detail and complemented by an extension of the application that enables users in regions without a direct Internet connection to take part in those very contracts.
Furthermore, the evaluation considers the legal framework, the transfer of the protocol to smart contracts, and the performance of the implementation.
Computer simulation has become established in a two-fold way: as a tool for planning, analyzing, and optimizing complex systems, but also as a method for the scientific investigation of theories and thus for the generation of knowledge. Generated results often serve as a basis for investment decisions, e.g., road construction and factory planning, or provide evidence for scientific theory-building processes. To ensure the generation of credible and reproducible results, it is indispensable to conduct systematic and methodologically sound simulation studies. A variety of procedure models exist that structure and predetermine the process of a study. As a result, experimenters are often required to repetitively but thoroughly carry out a large number of experiments. Moreover, the process is not sufficiently specified and many important design decisions still have to be made by the experimenter, which might result in an unintentional bias of the results.
To facilitate the conduct of simulation studies and to improve both the replicability and reproducibility of the generated results, this thesis proposes a procedure model for carrying out Hypothesis-Driven Simulation Studies, an approach that assists the experimenter during the design, execution, and analysis of simulation experiments. In contrast to existing approaches, a formally specified hypothesis becomes the key element of the study so that each step of the study can be adapted and executed to directly contribute to the verification of the hypothesis. To this end, the FITS language is presented, which enables the specification of hypotheses as assumptions regarding the influence specific input values have on the observable behavior of the model. The proposed procedure model systematically designs relevant simulation experiments, runs, and iterations that must be executed to provide evidence for the verification of the hypothesis. Generated outputs are then aggregated for each defined performance measure to allow for the application of statistical hypothesis testing approaches. Hence, the proposed assistance only requires the experimenter to provide an executable simulation model and a corresponding hypothesis to conduct a sound simulation study. With respect to the implementation of the proposed assistance system, this thesis presents an abstract architecture and provides formal specifications of all required services.
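As a rough illustration of the pipeline described above, the following sketch runs replications of a toy model for two input configurations, aggregates a performance measure, and applies a statistical test. The toy model, the seeds, and the 2.0 threshold for Welch's t statistic are all assumptions for illustration; this is not the FITS language or the thesis's assistance system:

```python
import random
import statistics

def toy_model(order_delay, seed):
    """Stand-in for an executable simulation model: one replication
    returns one observation of the performance measure (here, order
    variance in a toy supply chain; purely illustrative)."""
    rng = random.Random(seed)
    demand = [10 + rng.gauss(0, 1) for _ in range(100)]
    # Toy amplification: longer order delays inflate order variance.
    orders = [d + order_delay * rng.gauss(0, 1) for d in demand]
    return statistics.pvariance(orders)

def welch_t(a, b):
    """Welch's t statistic for the one-sided hypothesis mean(b) > mean(a)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (mb - ma) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothesis: increasing the order delay increases order variance.
short = [toy_model(0.0, s) for s in range(30)]        # 30 replications
long_ = [toy_model(3.0, s + 100) for s in range(30)]  # 30 replications
t = welch_t(short, long_)
print("evidence for the hypothesis" if t > 2.0 else "inconclusive")
```

The experimenter supplies only the model and the hypothesis; the number of replications and the aggregation per performance measure are exactly the parts that a procedure model can systematize.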
To evaluate the concept of Hypothesis-Driven Simulation Studies, two case studies are presented from the manufacturing domain. The introduced approach is applied to a NetLogo simulation model of a four-tiered supply chain. Two scenarios as well as corresponding assumptions about the model behavior are presented to investigate conditions for the occurrence of the bullwhip effect. Starting from the formal specification of the hypothesis, each step of a Hypothesis-Driven Simulation Study is presented in detail, with specific design decisions outlined, and generated intermediate data as well as final results illustrated. With respect to the comparability of the results, a conventional simulation study is conducted which serves as reference data. The approach proposed in this thesis is beneficial for both practitioners and scientists. The presented assistance system allows for a simpler and less effortful execution of simulation experiments while ensuring the efficient generation of credible results.
Even though time is in most cases a good metric for the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load data from main memory. The topic of this thesis is a new metric for the cost of algorithms called memory distance, which can be seen as an abstraction of the aspect just mentioned. We will show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
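One common way to make this locality aspect concrete is the LRU stack (reuse) distance of a memory access trace: for each access, the number of distinct addresses touched since the previous access to the same address. The sketch below illustrates this related, well-known measure; it is an assumption that it matches the thesis's exact definition of memory distance only in spirit:

```python
def reuse_distances(trace):
    """LRU stack distances for an address trace: for each access, the
    number of distinct addresses touched since the previous access to
    the same address (infinity for first-time, "cold" accesses)."""
    stack = []  # most recently used address at the end
    dists = []
    for addr in trace:
        if addr in stack:
            i = stack.index(addr)
            dists.append(len(stack) - 1 - i)  # distinct addresses above it
            stack.pop(i)
        else:
            dists.append(float("inf"))
        stack.append(addr)
    return dists

# A small working set revisited repeatedly yields short distances after
# the initial cold accesses: a cache of size > 2 keeps every hit.
print(reuse_distances([1, 2, 3, 1, 2, 3]))  # [inf, inf, inf, 2, 2, 2]
```

An input that keeps reuse distances below the cache size runs mostly out of cache, which is exactly the kind of black-box input optimization the abstract alludes to.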
Similarity-based retrieval of semantic graphs is a core task of Process-Oriented Case-Based Reasoning (POCBR) with applications in real-world scenarios, e.g., in smart manufacturing. The involved similarity computation is usually complex and time-consuming, as it requires some kind of inexact graph matching. To tackle these problems, we present an approach to modeling similarity measures based on embedding semantic graphs via Graph Neural Networks (GNNs). To this end, we first examine how arbitrary semantic graphs, including node and edge types and their knowledge-rich semantic annotations, can be encoded in a numeric format usable by GNNs. Building on this, the architectures of two generic graph embedding models from the literature are adapted to enable their use as similarity measures for similarity-based retrieval. One of the two models is optimized towards fast similarity prediction, while the other is optimized towards knowledge-intensive, more expressive predictions. The evaluation examines the quality and performance of these models in preselecting retrieval candidates and in approximating the ground-truth similarities of a graph-matching-based similarity measure for two semantic graph domains. The results show the great potential of the approach for use in a retrieval scenario, either as a preselection model or as an approximation of a graph similarity measure.
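The preselection role of graph embeddings can be sketched independently of the GNN itself: once each case graph has been embedded into a vector, candidates are ranked by a cheap vector similarity, and only the top-k survivors are passed on to the expensive graph-matching measure. All case names and embedding vectors below are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def preselect(query_emb, case_base, k):
    """Rank the case base by embedding similarity to the query; the
    top-k candidates would then go to the graph-matching measure."""
    ranked = sorted(case_base.items(),
                    key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical precomputed embeddings of three workflow graphs.
cases = {
    "wf_a": [0.9, 0.1, 0.0],
    "wf_b": [0.0, 1.0, 0.2],
    "wf_c": [0.8, 0.2, 0.1],
}
print(preselect([1.0, 0.1, 0.0], cases, 2))  # ['wf_a', 'wf_c']
```

The expensive inexact matching then only needs to score two candidates instead of the whole case base, which is the performance argument made in the abstract.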
Designing a Randomized Trial with an Age Simulation Suit—Representing People with Health Impairments
(2020)
Due to demographic change, there is an increasing demand for professional care services, a demand that cannot be met by the available caregivers. To enable adequate care by relieving informal and formal caregivers, the independence of people with chronic diseases has to be preserved for as long as possible. Assistance approaches can be used that promote physical activity, a main predictor of independence. One challenge is to design and test such approaches without affecting the people in focus. In this paper, we propose a design for a randomized trial that enables the use of an age simulation suit to generate reference data on people with health impairments with young and healthy participants. In doing so, we focus on situations of increased physical activity.
This publication, aimed primarily at researchers in the humanities, offers a short, practice-oriented introduction to research data management. It is designed as a planning instrument for a research project and provides guidance for developing a digital research concept and drawing up a data management plan. Starting from an analysis of selected work situations (project planning and grant applications, source processing, publication and archiving) and how they change in an increasingly digitally organized research practice, it addresses the connections between the research process and the data management process. A checklist in the form of a questionnaire and an annotated template for a data management plan assist with project planning and grant applications.
The lecture outlines the spectrum within which women's travel became established as a category of its own in literary and media studies. Without the focus on the confrontation with the Oriental world, and on the exclusion of men from certain spheres of life, this form of travel writing could not have established itself.
This essay centers on the feature film "Lisbon Story" (Germany/Portugal 1994/1995) by the film director Wim Wenders. Earlier works such as "Alice in den Städten" (FR Germany 1973/1974) and "Im Lauf der Zeit" (FR Germany 1975/1976) are also drawn upon, since children play a significant role in Wenders's feature films.
This review discusses Matthias Steinle's extensive book, which analyzes how the Federal Republic of Germany and the German Democratic Republic portrayed each other in documentary films. The material covers more than 60 films; the notion of documentary film is broadly construed and also includes cinema newsreels.
"Triumph der Bilder"
(2017)
Radio manuscript, broadcast in the series "Reisen damals" by Norddeutscher Rundfunk. Louise Mühlbach's account of her journey to Egypt, published in 1871, can now be read online at https://babel.hathitrust.org/cgi/pt?id=njp.32101012484679;view=1up;seq=140. The speaker of the program was Evelyn Hamann.
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are added and observed in turn. The procedure continues until no more units meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. For design-based estimation, three estimators were proposed under three specific design settings. The first was a stratified strip ACS design suitable for aerial or ship surveys, studied in a case study estimating population totals of African elephants. In this case, units (quadrats) were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness, as well as on real data on African elephants obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates and is hence a possible extension of this study; the computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was when there exists an auxiliary variable that is negatively correlated with the study variable. Here, Murthy's estimator (Murthy, 1964) was modified.
Situations in which the modified estimator is preferable were characterized both in theory and in simulations using simulated and two real data sets. The study variable for the real data sets was the distribution and counts of oryx and wildebeest, obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables; the temperature data was obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). It was also more efficient than the modified HH and the modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable is considered; a fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and of sampling without replacement of clusters (Dryver and Thompson, 2007). Each of these two methods had been proposed to reduce the sampling cost, though the use of the stopping rule results in biased estimates. Despite this bias, the new estimator resulted in higher efficiency gains in comparison to the without-replacement-of-clusters design. It was also more efficient than the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data; the real data was the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically.
This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, however, deserve further investigation, as they may help in understanding the effect of increasing the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data obtained using the stratified strip ACS design (as described in sub-section (3.1)). This extended the model of Rapley and Welsh (2008) by modeling data obtained from a different design, introducing an auxiliary variable, and using the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, as the ACS design is; in this case, the procedure used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and auxiliary variables for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya; this can be achieved in an economical and reliable way by using the theory of SAE. Chapter five compared the different proposed strategies using the elephant data. Again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present).
One general area of the ACS design that still lags behind is its implementation in the field, especially for animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of survey cost, achieved by reducing the number of units in the final sample (through the stopping rule, stratification, and the truncation of networks at stratum boundaries) and by ensuring that units are observed only once (through the without-replacement-of-clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to a non-adaptive design is achieved (Thompson and Collins, 2002). This is not the case in aerial surveys, however, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978); the cost of surveying an edge unit is therefore the same as the cost of surveying a unit that meets the condition of interest. In such surveys, the without-replacement-of-clusters technique plays the greater role in reducing sampling cost. Other key points motivating the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in their specific settings. Although the dissertation focused on animal populations, the methods can equally be applied to any rare and clustered population, such as those studied in forestry, botany, pollution monitoring, or mineral exploration.
At the end of the 19th century, Karl May became a bestselling popular author with his Orient cycle "Durch die Wüste". In contrast, this lecture presents several women, unknown today, whose travel accounts read as adventurously as Karl May's novels - with the crucial difference that their journeys actually took place, and not merely in the imagination. - The essay reproduces a lecture given by the author at the Hochschule Darmstadt on 28 April 1989.
This report is based on a university-wide online survey on the status quo of research data management at Trier University. It is a first step toward identifying the current and future demand for central services. New fields of action are to be recognized early, also in order to give direction to strategy development. The respondents generally welcome the initiative to develop central IT and consulting services. They are willing to make their own research data available to others for reuse, provided that suitable instruments supporting such a way of working are available. However, the uncommented provision of raw data is viewed rather critically, and the documentation effort required for public data provision is seen as having an unfavorable cost-benefit ratio. It is striking that data archiving is largely carried out in proprietary formats.
The lecture deals with a little-researched subgenre of film: the amateur film. It asks for definitions that distinguish the genre from industrial film production and the established cinema business. Criteria are identified that determine the field of activity of leisure filmmakers and the social uses of their films. In a historical longitudinal survey, the format question imposed by the capital-rich film industry is discussed. The family film is characterized as the quantitatively most significant part of amateur film production.
The essay is a light-hearted homage to early cinema. It lets the contemporary sources speak, documenting how the phenomenon of "cinema" was perceived in the pioneering years of film, before the First World War. It thus recalls a chapter of film history that has been unjustly forgotten, or leads a shadowy existence, measured against the subsequent paradigm of feature-length films and civilized movie theaters.
The contribution of three genes (C15orf53, OXTR and MLC1) to the etiology of chromosome 15-bound schizophrenia (SCZD10), bipolar disorder (BD) and autism spectrum disorder (ASD) was studied. First, the uncharacterized gene C15orf53 was comprehensively analyzed. Previous genome-wide association studies (GWAS) in bipolar disorder samples had identified an association signal in close vicinity to C15orf53 on chromosome 15q14; this gene lies in exactly the genomic region that segregates in our SCZD10 families. An association study with BD and SCZD10 individual samples did not reveal any association of single nucleotide polymorphisms (SNPs) in C15orf53. Mutational analysis of C15orf53 in SCZD10-affected individuals from seven multiplex families did not show any mutations in the 5'-untranslated region, the coding region or the intron-exon boundaries. Gene expression analysis revealed that C15orf53 is expressed in a subpopulation of leukocytes, but not in human post-mortem limbic brain tissue. In summary, C15orf53 is unlikely to be a strong candidate gene for the etiology of BD or SCZD10. The second gene investigated was the human oxytocin receptor gene (OXTR). Five well-described SNPs located in the OXTR gene were used in a transmission-disequilibrium test (TDT) in parent-child trios with ASD-affected children. No association was found, either in the complete sample or in the subgroup of children with an intelligence quotient (IQ) above 70, irrespective of whether Haploview or UNPHASED was used for the analysis. The third gene, MLC1, was investigated with regard to its implication in the etiology of SCZD10. Mutations in the MLC1 gene lead to megalencephalic leukoencephalopathy with subcortical cysts (MLC), and one variant coding for methionine (Met) instead of leucine (Leu) at position 309 was found to segregate in a family affected with SCZD10.
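The TDT mentioned above compares, across heterozygous parents, how often a given allele is transmitted to the affected child versus not transmitted. As a minimal sketch (not from the study; the counts below are hypothetical), the test statistic is a McNemar-type chi-square on these two counts:

```python
def tdt_statistic(b, c):
    """Transmission-disequilibrium test statistic for one SNP.

    b -- number of heterozygous parents transmitting the tested allele
    c -- number of heterozygous parents transmitting the other allele
    Under the null hypothesis of no linkage/association, the statistic
    (b - c)^2 / (b + c) is approximately chi-square with 1 df.
    """
    if b + c == 0:
        raise ValueError("no informative (heterozygous) transmissions")
    return (b - c) ** 2 / (b + c)

# Hypothetical counts: 60 transmissions vs. 40 non-transmissions.
print(tdt_statistic(60, 40))  # (20**2) / 100 = 4.0
```

Tools such as Haploview and UNPHASED compute this (and haplotype-based extensions) directly from trio genotype data; the sketch only shows the core statistic.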
For further investigation of MLC1 and its possible implication in the etiology of SCZD10, a constitutive Mlc1 knockout mouse model was to be created. Mouse embryonic stem cells (mES) were electroporated with a knockout vector construct and analyzed for homologous recombination of the knockout construct with the genomic DNA (gDNA) of the mES. Polymerase chain reaction (PCR) on the available stem cell clones did not reveal any homologously recombined ES clones. Additionally, experiments were conducted to knock down MLC1 using microRNAs. The 3'-untranslated region of the MLC1 gene was analyzed with the bioinformatics tool TargetScan to screen for potential microRNA target sites, and a potential binding site for miR-137 was identified. The expression levels of genes that have been linked to psychiatric disorders and carry a predicted miR-137 binding site have been shown to respond directly to miR-137. Thus, there is new evidence that MLC1 is a candidate gene for the etiology of SCZD10.
In the present work, potential candidate genes for periodic catatonia and schizophrenia were investigated. A structural and functional promoter analysis was carried out on the megalencephalic leukoencephalopathy with subcortical cysts 1 (Mlc1/MLC1) gene, which plays a role in the development of megalencephalic leukoencephalopathy and is also discussed as a factor in the etiogenesis of periodic catatonia. The in silico promoter analysis showed that binding sites for important transcription factors and common promoter elements such as TATA and GC boxes were absent. Likewise, no promoter activity could be demonstrated in vitro, suggesting that an as yet unidentified enhancer element or a co-factor is required for activation of the Mlc1 promoter. As a further candidate gene for periodic catatonia, the gene for the mitotic checkpoint kinase BUB1B was examined for a possible role in the etiology of the disorder. Owing to the absence of causative mutations, BUB1B could be excluded as a candidate gene for periodic catatonia. A further part of this work comprised a study of the genes for the nicotinic acetylcholine receptor (CHRNA7), the D-amino acid oxidase activator (DAOA) and the bromodomain containing protein 1 (BRD1) for association with schizophrenia. An association of BRD1 with schizophrenia was confirmed.