Browsing by Subject "coherence"
Now showing 1 - 3 of 3
Item Open Access
EU foreign policy and ‘perceived coherence’: the case of Kosovo (Routledge, 2018-10-11)
Mutluer, D.; Tsarouhas, Dimitri
To what extent has the European Union’s (EU) foreign policy been coherent in the Western Balkans? Moreover, is EU policy behaviour seen as coherent by local stakeholders? Such questions are of high significance for the role of the EU as an external actor, and for the Western Balkans in particular. This article assesses EU policy coherence in the case of Kosovo, focusing on the latter’s EU accession prospects and the EU rule of law mission EULEX. Introducing the novel concept of ‘perceived coherence’, the paper argues that EU policies and actors are not perceived as coherent by either local elites or civil society organizations. As a result, the effectiveness of the implementation of the Union’s foreign policy in Kosovo remains low.

Item Open Access
Signal representation and recovery under measurement constraints (2012)
Özçelikkale Hünerli, Ayça
We are concerned with a family of signal representation and recovery problems under various measurement restrictions. We focus on finding performance bounds for these problems, where the aim is to reconstruct a signal from its direct or indirect measurements. One of our main goals is to understand the effect of different forms of finiteness in the sampling process, such as a finite number of samples or finite amplitude accuracy, on the recovery performance. In the first part of the thesis, we use a measurement device model in which each device has a cost that depends on its amplitude accuracy: the cost of a measurement device is primarily determined by the number of amplitude levels that the device can reliably distinguish, and devices with higher numbers of distinguishable levels have higher costs. We also assume that there is a limited cost budget, so that it is not possible to make a high-amplitude-resolution measurement at every point. We investigate the optimal allocation of the cost budget to the measurement devices so as to minimize the estimation error. In contrast to common practice, which often treats sampling and quantization separately, we explicitly focus on the interplay between limited spatial resolution and limited amplitude accuracy. We show that in certain cases, sampling at rates different from the Nyquist rate is more efficient. We find the optimal sampling rates and the resulting optimal error-cost trade-off curves.
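The cost-budget idea described above lends itself to a small numerical illustration. The sketch below is not the thesis's formulation; it assumes, purely for illustration, that a device distinguishing 2**b amplitude levels is charged b bits, that quantizing a sample with b bits leaves roughly var / 4**b of its error variance (a uniform-quantization rule of thumb), and that a greedy allocator spends a fixed bit budget where it reduces the error most. The function greedy_bit_allocation and all numerical values are hypothetical.

    # Illustrative toy model (not the thesis's actual cost or error model):
    # a device with 2**b distinguishable levels costs b bits, quantizing a
    # sample with b bits leaves roughly var / 4**b of its error variance,
    # and a greedy allocator spends a fixed bit budget where it helps most.
    import numpy as np

    def greedy_bit_allocation(sample_vars, total_bits):
        """Assign integer bit counts to samples to minimize the summed residual error."""
        bits = np.zeros(len(sample_vars), dtype=int)
        residual = np.array(sample_vars, dtype=float)
        for _ in range(total_bits):
            gains = residual - residual / 4.0       # error drop from one extra bit
            i = int(np.argmax(gains))
            bits[i] += 1
            residual[i] /= 4.0
        return bits, residual.sum()

    # Hypothetical example: 8 measurement locations with unequal prior variances
    # and a total budget of 16 bits to split among them.
    rng = np.random.default_rng(0)
    sample_vars = np.sort(rng.uniform(0.1, 1.0, size=8))[::-1]
    bits, err = greedy_bit_allocation(sample_vars, total_bits=16)
    print("bits per device:", bits, "  total residual error: %.3f" % err)

Under this toy rule the budget naturally concentrates on the high-variance samples; the thesis studies the richer question of jointly choosing how many points to measure and how accurately to measure each, which is what yields the error-cost trade-off curves mentioned above.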
In the second part of the thesis, we formulate a set of measurement problems with the aim of reaching a better understanding of the relationship between the geometry of statistical dependence in the measurement space and the total uncertainty of the signal. These problems are investigated in a mean-square error setting under the assumption of Gaussian signals. An important aspect of our formulation is its focus on the linear unitary transformation that relates the canonical signal domain and the measurement domain. We consider measurement set-ups in which a random or a fixed subset of the signal components in the measurement space are erased. We investigate the error performance, both on average and in terms of guarantees that hold with high probability, as a function of the system parameters. Our investigation also reveals a possible relationship, through the fractional Fourier transform, between the concept of coherence of random fields as defined in optics and the concept of coherence of bases as defined in compressive sensing. We also extend our discussion to stationary Gaussian sources: we find explicit expressions for the mean-square error under equidistant sampling, and comment on the decay of the error introduced by using finite-length representations instead of infinite-length representations.
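The erasure set-up of the second part can likewise be sketched in a few lines. The example below is an assumption-laden illustration rather than the thesis's formulation: it uses an exponentially decaying covariance for the Gaussian source, the normalized DFT as the unitary transform relating the two domains, a noiseless erasure of half of the transformed components, and the standard linear MMSE estimator; it also prints the mutual coherence of the transform in the compressive-sensing sense, scaled to lie between 1 and sqrt(n).

    # Illustrative sketch of the erasure set-up (assumptions: AR(1)-style covariance,
    # normalized DFT as the unitary transform, noiseless erasures, linear MMSE recovery).
    import numpy as np

    n = 16
    rng = np.random.default_rng(1)
    Kx = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])  # source covariance
    U = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary measurement transform
    keep = rng.choice(n, size=n // 2, replace=False)  # components that survive the erasure

    H = U[keep, :]                                    # observed rows of the transform
    Ky = H @ Kx @ H.conj().T                          # covariance of the observations
    W = (Kx @ H.conj().T) @ np.linalg.inv(Ky)         # linear MMSE estimator: x_hat = W y
    mse = np.real(np.trace(Kx - W @ H @ Kx)) / n      # per-component estimation error

    # Mutual coherence of U with the canonical basis, scaled to lie in [1, sqrt(n)];
    # the DFT attains the minimum value 1 (maximally incoherent).
    coherence = np.sqrt(n) * np.max(np.abs(U))
    print("per-component MSE: %.4f   coherence of U: %.2f" % (mse, coherence))

Loosely speaking, replacing the ordinary DFT with a fractional Fourier transform of varying order is the knob through which the thesis relates the optical and compressive-sensing notions of coherence; the sketch above fixes only the ordinary DFT case.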
Item Open Access
The use of formulaic language by English as a foreign language (EFL) learners in writing proficiency exams (2015)
Kılıç, Sultan Zarif
This study investigates how EFL learners use the formulaic language taught in their curriculum through course books when taking writing proficiency exams, and whether there is a relationship between their formulaic language use and their scores for coherence, total writing and overall proficiency. The study was carried out with 150 EFL learners at the same exit level of proficiency at the School of Foreign Languages, Yıldız Technical University. To explore how formulaic language was used by the participants, a content analysis of the course books was carried out to determine the target formulaic language and its frequency of occurrence in the books. A content analysis of the participants’ writing proficiency exam papers was then conducted to identify their formulaic language use, and the results of the two analyses were compared. To examine a possible relationship between the students’ formulaic language use and their scores for coherence, total writing and overall proficiency, the scores the students received for coherence and total writing in the final writing proficiency exam and their overall proficiency score at the end of the academic year were taken into consideration. The content analyses, based on counting the formulaic expressions presented in the course books and those used by the students in the writing proficiency exam, revealed that the expressions the students used accurately were mostly those represented more frequently in the course books, while the expressions they used inaccurately were represented less frequently. The analysis of the relationship between the students’ formulaic language use and their coherence, total writing and overall proficiency scores revealed no statistically significant relationship between the variables, implying that the concepts are not directly interconnected.
These findings suggest that the students use the formulaic language taught in their curriculum through course books; however, their formulaic language use is not related to their scores for coherence, total writing and overall proficiency. In light of the findings, the study provides insights into future teaching practices with regard to formulaic language. It also offers implications for stakeholders such as administrators, language instructors, and curriculum and material developers, so that they can design curricula, develop materials, and conduct classes accordingly.