The Sun as a laboratory of particle physics

Student: Núria Vinyoles Vergés

The main goal of this thesis is to use solar models to study the impact of different types of weakly interacting particles on the solar structure. Then, based on the structural changes they produce, the goal is to set the most restrictive bounds on the properties of these particles using solar data from helioseismology and neutrinos. For that purpose, a new statistical analysis that combines helioseismology and solar neutrino observations is presented and used to place upper limits on the properties of non-standard weakly interacting particles, in particular axions, hidden photons and minicharged particles. The results are the most restrictive solar bounds to date, approximately a factor of two better than previous ones. Moreover, the bounds obtained for hidden photons and minicharged particles are globally the most restrictive.
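At heart, a combined helioseismology-plus-neutrino analysis of this kind is a joint goodness-of-fit comparison between a perturbed solar model and the data. The sketch below is deliberately schematic: the observable names, values and uncertainties are invented for illustration and are not the thesis' data.

```python
import numpy as np

def combined_chi2(model_obs, data, sigma):
    """Joint chi-square over helioseismic and neutrino observables.

    model_obs, data, sigma: dicts mapping observable names (e.g. the
    convective-zone radius 'Rcz', surface helium 'Ysurf', or a neutrino
    flux 'Phi_B8' -- all names illustrative) to model values, measured
    values and 1-sigma uncertainties.
    """
    return sum(((model_obs[k] - data[k]) / sigma[k]) ** 2 for k in data)

# Toy example: a solar model perturbed by an exotic energy sink,
# off by one sigma in each observable (numbers are invented)
data  = {"Rcz": 0.713, "Ysurf": 0.2485, "Phi_B8": 5.00}
sigma = {"Rcz": 0.001, "Ysurf": 0.0035, "Phi_B8": 0.15}
model = {"Rcz": 0.714, "Ysurf": 0.2450, "Phi_B8": 5.15}
chi2 = combined_chi2(model, data, sigma)
```

A bound on a particle coupling then follows by increasing the coupling in the model until this joint chi-square grows past the chosen confidence threshold.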

Cosmology with galaxy surveys

Student: Pujol, Arnau

One of the fundamental goals of Cosmology is to understand the matter and energy content of the Universe, and the way its contents determine the growth of fluctuations and the evolution of the observed structure. In order to tackle these questions, we need to study the overall structure and evolution of the universe and its constituents. We thus need to sample volumes large enough to be meaningful and representative. We also need to probe different epochs to learn about its evolution. Observationally, these requirements are only met by big surveys that probe wide areas and a large fraction of the local Universe and that are deep enough to sample structures at high redshift. In the last few years, the number of surveys has been impressive and impossible to review here in detail. The most useful of them in terms of cosmological and large-scale structure applications have been the detailed redshift surveys of the local universe. The Two-Degree Field Galaxy Redshift Survey (2dFGRS; Colless et al 2001) and the Sloan Digital Sky Survey (SDSS; York et al 2000) have measured many hundreds of thousands of galaxy spectra. This has resulted in a major improvement in our knowledge of the galaxy population through, for example, the galaxy luminosity function (e.g., Norberg et al 2002; Blanton et al 2003), the galaxy power spectrum (e.g., Cole et al 2005; Tegmark et al 2004a), galaxy clustering (e.g., Norberg et al 2001; Gaztañaga 2002; Zehavi et al 2004; Cabre & Gaztanaga 2008), clusters of galaxies (e.g., de Propis et al 2002; Bahcall et al 2003), the cosmic star formation history (e.g., Baldry et al 2002; Glazebrook et al 2003), galaxy biasing (e.g., Verde et al 2002; Tegmark et al 2004a; Gaztañaga et al 2005), weak lensing (e.g., Fischer et al 2000; Gaztañaga 2003; Hirata et al 2004) and strong lensing (e.g., Inada et al 2003). In this project we want to focus on biasing, i.e., how light from galaxies traces the underlying dark matter (DM) distribution.
The goal is to study how to best measure (or model) biasing evolution for current and upcoming galaxy surveys, such as SDSS, DES and PAU. We need to characterize biasing as a function of cosmic time and galaxy properties such as spectral type, color, morphology, magnitude or spatial environment. The biasing parameters can be linked to models of galaxy formation and can also be used to provide new clues on the nature of DM and dark energy (DE), i.e., through new standard rulers or the evolution of the growth factor. The study can be done by measuring redshift space distortions (due to the peculiar velocities of galaxies), the higher-order clustering of galaxies, and also by comparing the galaxy distribution to weak lensing mass reconstructions. We plan to use the N-body simulations run in our group (see MICE: www.ice.cat/mice), which include both the dark matter distribution and mock galaxy distributions from a variety of prescriptions and models of galaxy formation.

Galaxy surveys are an important tool for cosmology. The distribution of galaxies allows us to study the formation of structures and their evolution, which are necessary ingredients for studying the evolution and content of the Universe. However, according to the standard model of cosmology, the so-called ΛCDM model, most of the matter is made of dark matter, which gravitates but does not interact with light. Hence, the galaxies that we observe with our telescopes represent only a small fraction of the total mass of the Universe. Because of this, we need to understand the connection between galaxies and dark matter in order to infer the total mass distribution of the Universe from galaxy surveys.

At large scales, galaxies trace the matter distribution. In particular, the galaxy density fluctuations at large scales are proportional to the underlying matter fluctuations by a factor that is called galaxy bias. This factor allows us to infer the total matter distribution from the distribution of galaxies, and hence knowledge of galaxy bias has a very important impact on our cosmological studies. This PhD thesis is focused on the study of galaxy and halo bias at large scales.

There are several techniques to study galaxy bias; in this thesis we focus on two of them. The first uses the fact that galaxy bias can be modelled from a galaxy formation model. One of the most common models is the Halo Occupation Distribution (HOD) model, which assumes that galaxies populate dark matter haloes depending only on halo mass. With this hypothesis, and assuming a halo bias model, we can relate galaxy clustering to matter clustering and halo occupation. However, this hypothesis is not always accurate enough. We use the Millennium Simulation to study galaxy and halo bias, the halo mass dependence of halo bias, and its effects on galaxy bias predictions. We also study the local density dependence of halo bias, and we show that local density constrains halo bias much more strongly than mass does.
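The proportionality between galaxy and matter fluctuations described above, δ_g ≈ b δ_m, suggests a simple cross-correlation estimator of the linear bias. The following is a minimal sketch on toy Gaussian fields; the numbers and the noise model are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy smoothed matter density contrast sampled on grid cells
delta_m = rng.normal(0.0, 0.1, size=10_000)

# Linear bias model: delta_g = b * delta_m, plus a small stochastic term
b_true = 1.5
delta_g = b_true * delta_m + rng.normal(0.0, 0.01, size=delta_m.size)

# Estimate bias from the ratio of cross- to auto-variance,
# b_hat = <delta_g delta_m> / <delta_m delta_m>
b_hat = np.mean(delta_g * delta_m) / np.mean(delta_m * delta_m)
```

In practice such estimators are applied scale by scale (e.g. on smoothed fields or correlation functions), which is where the scale, mass and redshift dependence studied in the thesis enters.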
Another technique to study galaxy bias uses weak gravitational lensing to measure mass directly in observations. Weak gravitational lensing is the field that studies the weak image distortions of galaxies due to the light deflections produced by a foreground mass distribution. These distortions can then be used to infer the total (baryonic and dark) mass distribution at large scales. We develop and study a new method to measure galaxy bias from the combination of weak lensing and galaxy density fields. The method consists of reconstructing the weak lensing maps from the distribution of the foreground galaxies. Bias is then measured from the correlations between the reconstructed and real weak lensing fields. We test the different systematics of the method and the regimes where it is consistent with other methods of measuring linear bias.

Cosmological models of the early and late Universe with bradyon and tachyon fields

Student: N. Myrzakulov

Foreign adviser of this thesis, defended at Eurasian National University, Astana.

Producing simulated catalogues for next generation galaxy surveys

Student: Izard, A.
Supervised by: Pablo Fosalba Vela; Martin Crocce

The objective of this thesis is to deepen the study of the model of large-scale structure formation in the universe using observations from the new galaxy surveys. To this end, the thesis project will involve developing analytical and numerical tools that allow, first, the observables to be modelled with a level of complexity and detail matched to the properties of future data and, second, the optimal exploitation of those data to obtain high-precision constraints on the basic cosmological parameters. In particular, the activity is expected to focus on areas that are currently among the most active in observational cosmology and that can potentially provide the most information about the process of structure formation: the abundance of galaxy clusters, the distribution (or clustering) of galaxies, and the weak gravitational lensing produced by the large-scale structures of the universe. The thesis programme will be carried out in the context of the active participation of the host group, the Extragalactic Astrophysics and Cosmology group, in large galaxy surveys such as the Dark Energy Survey (DES), the Physics of the Accelerating Universe (PAU) survey and the EUCLID space mission.

Current and future galaxy surveys will be able to map the large-scale structure of the Universe with unprecedented detail and measure cosmological parameters with exquisite precision. In order to develop the science cases and the analysis pipelines, an accurate modelling of non-linear gravitational evolution is necessary. This thesis presents a methodology for producing accurate mock catalogues 2-3 orders of magnitude faster than conventional N-body methods, incorporating past light cone effects. First, we present the optimization of a quasi N-body method in the trade-off between accuracy and computational cost. We studied the impact of variations in the code parameter space on the accuracy of observables such as the halo abundance and distribution and matter clustering. We propose optimal parameter configurations for achieving high accuracy as compared to exact N-body simulations, and we explore different calibration techniques to match two-point halo clustering statistics even better. The next step is mimicking the geometry of real astrophysical observations, in which distant objects are seen in the past light cone. We introduce ICE-COLA, a simulation code developed for this thesis that implements the production of light cone catalogues on the fly. The user can generate three different kinds of data products. The first contains the full information of the phase-space matter distribution, while the others store high-level data catalogues ready to use for modelling galaxy surveys. This enables large compression factors of ∼ 2 orders of magnitude in the data volume to be stored. In particular, the code can generate halo catalogues in the light cone and pixelated two-dimensional projected matter density maps in spherical concentric shells around the observer. Using ICE-COLA we produce large light cone simulations and perform an extensive validation of the catalogues.
We introduce a novel methodology to model weak gravitational lensing with an approximate method, and we show that we can resolve most of the scales probed by current weak lensing experiments. Finally, we extend the results to halo mock catalogues with weak lensing quantities, a key step towards modelling galaxy clustering and weak lensing observables consistently within a quasi N-body approach.
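The geometrical core of light cone output is the assignment of objects to concentric comoving-distance shells around the observer. A minimal sketch of that bookkeeping, assuming a flat ΛCDM background (the cosmological parameter values and shell binning below are illustrative, not those of ICE-COLA):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, om=0.25):
    """Comoving distance in Mpc/h for a flat LCDM cosmology,
    via trapezoidal integration of c dz / H(z). om is illustrative."""
    zs = np.linspace(0.0, z, 2048)
    inv_e = 1.0 / np.sqrt(om * (1.0 + zs) ** 3 + (1.0 - om))
    integral = np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    return integral * C_KM_S / 100.0  # c/H0 = 2997.9 Mpc/h

def shell_of(r, shell_edges):
    """Index of the concentric light cone shell containing comoving
    distance r, or -1 if r lies outside the outermost shell."""
    i = np.searchsorted(shell_edges, r) - 1
    return int(i) if 0 <= i < len(shell_edges) - 1 else -1

# Ten equal-width shells around the observer out to z = 1
r_max = comoving_distance(1.0)
edges = np.linspace(0.0, r_max, 11)
idx = shell_of(0.55 * r_max, edges)
```

An on-the-fly code evaluates this kind of test as the particles are evolved, writing out only the crossings of each shell, which is what enables the large compression factors quoted above.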

Observation and interpretation of type IIb supernova explosions

Student: Antonia Morales Garoffolo
Supervised by: Nancy Elias de la Rosa; Jordi Isern Vilaboy

Supernovae (SNe) play an important role in many fields of modern physics, from cosmology to nuclear physics. In particular, by returning to the interstellar medium heavy elements synthesised both over the star's lifetime and during the explosion itself (explosive nucleosynthesis), SNe are principal contributors to the chemical evolution of galaxies. The amount and composition of this material depend on the physics of the explosion and on the structure of the star before it explodes. There are two main classes of SNe: thermonuclear (SNe Ia, arising from the explosion of a white dwarf (WD) through mass accretion from a companion star; recent studies have revealed significant diversity among these objects, once considered highly homogeneous), and core-collapse (CC-SNe, heterogeneous owing to the different configurations of the progenitor star at the moment of explosion, with several types such as II-P, II-L, IIn, IIb, Ib and Ic), which result from the core collapse of a massive star with M(ZAMS) > 8 solar masses.
Recently, the Institut de Ciencies de l'Espai (ICE/IEEC-CSIC) has helped in the construction and management of the Montsec Observatory (OAdM), located south of the Pyrenees. This observatory hosts an 80 cm telescope that will be controlled remotely from the institute. The basic strategy of this project is thus to create a programme to follow the evolution of nearby SNe from the Montsec observatory, in order to run a coordinated study of the physics of the different supernova types and of their contribution to chemical enrichment. In parallel, suitable software and models will be developed to properly interpret the data.
In a complementary way, active work will be carried out within two large international collaborations for the study of supernovae using telescopes of the European Southern Observatory (ESO): the ESO New Technology Telescope (NTT) and Telescopio Nazionale Galileo (TNG) long-term programme, and PESSTO, the Public ESO Spectroscopic Survey for Transient Objects. Both collaborations will merge into one next year and comprise numerous, mainly European, institutes. These projects focus on obtaining high-quality data over a wide wavelength range, to be compared with theoretical models developed by institutes within the collaboration.

Core-collapse supernova (CC-SN) explosions represent the final demise of massive stars. Among the various types, there is a group of relatively infrequent CC-SNe termed type IIb, which appear to be hybrids between normal type II SNe (those characterised by H emission) and type Ib SNe (those that lack H features in their spectra but exhibit prominent He I lines). The nature of the stellar progenitors leading to type IIb SNe is currently unknown, although two channels are contemplated: single massive stars that have lost part of their outer envelope as a consequence of stellar winds, and massive stars that shed mass by Roche-lobe overflow to a companion. The latter is in fact the favoured scenario for most of the objects observed up to now. In the majority of cases, when there are no direct progenitor detections, some hints about type IIb SN progenitors (e.g., initial mass) can be derived indirectly from the objects' light curves (LCs) and spectra. Motivated by the relatively few well-sampled observational datasets that exist to date for type IIb SNe and the open questions about their progenitors, we carried out extensive observations (mainly in the optical domain) of the young type IIb SNe 2011fu and 2013df. Both SNe are particularly interesting because they show a first LC peak caused by shock breakout, followed by a secondary 56Ni-decay-powered maximum. The analysis of the data for SNe 2011fu and 2013df points to precursors that seem to have been stars with large radii (of the order of 100 R⊙), with low-mass hydrogen envelopes (tenths of a solar mass) and relatively low initial masses (12-18 M⊙), which could have formed part of interacting binary systems. The nature of a third SN IIb candidate, OGLE-2013-SN-100, proved to be enigmatic.
OGLE-2013-SN-100 shows a first peak in its LC and other characteristics somewhat similar to those of type IIb SNe. However, after a deeper analysis, we conclude that OGLE-2013-SN-100 is likely not a SN of type IIb. We provide an alternative possible explanation for this object, which implies a combination of a SN explosion and interaction of its ejecta with circumstellar material. SNe 2011fu and 2013df were included in a larger sample of type IIb SNe to carry out a comparative study of their observables and environments. Regarding the host galaxies, 90% of the objects are located in giant (r < -18 mag) hosts. In addition, the SNe are about equally split between spiral galaxies of low and high star formation rate. Concerning the SN ultraviolet (UV), optical and near-infrared (NIR) LCs, we find a dispersion in both shape and brightness. In particular, a few objects show a sharply declining early phase in the UV and double-peaked optical-NIR LCs. However, in some cases the absence of a first LC peak may simply reflect the lack of early observations. In addition, we found dispersion in the evolution of the colour indices of the SNe, which makes the colour comparison method unsuitable for estimating the extinction towards a type IIb SN. In the optical domain, the study of the (secondary) peak brightness in the R band shows that low-luminosity events could be uncommon, and the average brightness of the sample is ~ -17.5 mag. As for the spectral properties, the SNe that show an early spike in their LCs exhibit blue, shallow-lined early-time spectra and arise from extended progenitors (R ~ 100 R⊙). Additionally, while there is an overall resemblance of the measured ejecta velocities, there is also dispersion in the equivalent widths, nebular line luminosities and ratios among the objects, which could indicate differences in the ionisation state of the ejecta and in mixing. All in all, we find heterogeneity in the studied observables of the sample of type IIb SNe, which reflects the variety of their explosion parameters and progenitor properties.

Thermal Diagnostics Experiments for LISA Pathfinder

Student: Ferran Gibert Gutiérrez

The LISA Pathfinder project is an ESA/NASA technology demonstrator mission that must test the critical instrumentation required for a future space-borne gravitational wave observatory based on the LISA design. The satellite, to be launched by the end of 2015, carries two free-floating test masses and an interferometer that measures the relative distance between them. The main objective of the satellite is to demonstrate that the residual acceleration noise between the masses is lower than 3e-14 m/s^2/sqrt(Hz) in the 1-30 mHz band. To achieve such a high sensitivity, the instrument is provided with an accurate control system that senses and actuates on any of the 18 degrees of freedom of the system composed of the two test masses and the spacecraft, without interfering with the scientific measurements. The whole instrument is called the LISA Technology Package (LTP). At such low frequencies, the system is exposed to a broad list of external perturbations that eventually limit the sensitivity of the instrument. Amongst them, temperature fluctuations at different spots of the satellite can end up distorting the motion of the masses and the interferometer readouts through different mechanisms. In order to measure such fluctuations and to characterise their contribution to the system sensitivity, the satellite is equipped with a thermal diagnostics subsystem composed of a series of heaters and high-precision temperature sensors. Three different kinds of thermal perturbation mechanism are to be studied with this subsystem: (1) thermal effects inducing direct forces and torques on the test masses due to the presence of temperature gradients, (2) thermo-elastic distortion due to temperature fluctuations in the structure hosting the test masses and the interferometer, and (3) thermo-optical distortion of two optical parts located outside the ultra-stable optical bench. This thesis focuses on the design of the experiments aimed at studying the first two mechanisms.
These experiments essentially consist of injecting a series of heat loads near each of the thermally sensitive locations in order to stress the associated thermal mechanism. Such an induced perturbation is visible with high SNR both in the optical measurements and in the nearby temperature sensors, and makes it possible to derive coupling coefficients for each of the effects or, at least, to bound their contribution to the acceleration noise. The analysis of the impact of forces and torques on the test masses has followed two approaches: first, a simulator environment has been designed and implemented to estimate the impact of any kind of heat signal applied close to the test masses and, second, a test campaign has been carried out by means of an LTP test-mass replica installed in a torsion pendulum facility. Regarding the simulator, a state-space model has been developed, including a thermal model of the whole spacecraft and a specific design for each of the mechanisms that generate forces and torques from temperature gradients: the radiometer effect, the radiation pressure effect and asymmetric outgassing. This model has been integrated into a general simulator of the whole LTP, which makes it possible to simulate the entire chain between the heater activation and the final impact on the closed-loop performance of the LTP. In parallel, the experimental campaign in a torsion pendulum facility at the University of Trento has made it possible to characterise the impact of each of the effects in different scenarios of absolute temperature and pressure. On the other hand, the analysis of thermo-elastic noise in the LTP is based on the results obtained during a spacecraft Thermal Vacuum test campaign.
In this test, a series of heater activations in the suspension struts that attach the LTP core assembly to the satellite structure made it possible to bound the impact of temperature fluctuations at these locations and to characterise the main mechanical distortion mode associated with them.
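Of the force mechanisms listed above, the radiometer effect admits a simple order-of-magnitude estimate: in the free-molecular regime, residual gas at pressure p between surfaces held at slightly different temperatures pushes the test mass with a force of order F ≈ (A p / 4)(ΔT/T). The numbers below are purely illustrative assumptions, not LPF requirements or measured values.

```python
# Order-of-magnitude sketch of the radiometer-effect force on a test
# mass: F ~ (A * p / 4) * (dT / T) in the free-molecular regime.
# All numerical values are illustrative assumptions.
A  = (46e-3) ** 2   # test-mass face area, m^2 (~46 mm cube face)
p  = 1e-5           # residual gas pressure, Pa
T  = 293.0          # mean temperature, K
dT = 1e-3           # temperature difference across the mass, K

F = A * p / 4.0 * (dT / T)   # radiometric force, N
m = 1.96                     # assumed test-mass mass, kg
a = F / m                    # induced acceleration, m/s^2
```

Even with these modest numbers the induced acceleration lands near the 1e-14 m/s^2 scale of the mission requirement, which is why millikelvin-level temperature stability and dedicated diagnostics are needed.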

Design and Assessment of a low-frequency magnetic measurement system for eLISA

Student: Ignacio Mateos
Supervised by: José Alberto Lobo Gutiérrez; Juan Ramos (UPC)

Thesis related to LISA Pathfinder.

The primary purpose of this thesis is the design, development and validation of a system capable of measuring magnetic fields with low noise at sub-millihertz frequencies. Such an instrument is conceived as part of a space mission concept for a gravitational-wave observatory called eLISA (evolved Laser Interferometer Space Antenna). In addition, the work of this thesis is also well suited for use in magnetically sensitive fundamental physics experiments requiring long integration times, such as high-precision tests of the weak equivalence principle. Within this context, the baseline design of the instrument is also foreseen to monitor the environmental magnetic field in a proposed mission concept involving space atom-interferometric measurements, known as STE-QUEST (Space-Time Explorer and Quantum Equivalence Principle Space Test). Different magnetic sensing technologies (fluxgate, anisotropic magnetoresistance and atomic magnetometers), together with dedicated electronic noise reduction techniques, are studied in order to assess whether they can be used for space missions with demanding low-frequency requirements. Moreover, these space missions require careful control of the local magnetic environment generated by the spacecraft. The reason is that the main on-board instrument can only operate successfully and achieve its performance if the magnetic environment, including that generated by the spacecraft itself, is sufficiently benign. Therefore, this work also involves the investigation of the magnetic characteristics of the magnetometer and its possible impact on the scientific experiment. Finally, another potential problem is the accuracy with which the magnetic field in the region of interest can be estimated from the magnetometer readings. A robust interpolation method based on a new magnetometer array configuration is presented in this work. Although other topics are covered, the objectives mentioned here are the main issues considered in this thesis.
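Qualifying a sensor chain at these frequencies amounts to estimating its amplitude spectral density (ASD) in the sub-millihertz band from a long time series. A minimal sketch of such an estimate on simulated white sensor noise follows; the sampling rate, duration and noise level are illustrative, not the instrument's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated magnetometer output: white noise of known spectral density
fs = 1.0                  # sampling rate, Hz
n = 2 ** 17               # ~1.5 days of data
asd_true = 1e-9           # amplitude spectral density, T/sqrt(Hz)
x = rng.normal(0.0, asd_true * np.sqrt(fs / 2.0), size=n)

# One-sided periodogram -> power spectral density
X = np.fft.rfft(x)
psd = 2.0 * np.abs(X) ** 2 / (fs * n)
f = np.fft.rfftfreq(n, d=1.0 / fs)

# Average PSD in the sub-millihertz band, then convert to an ASD
band = (f > 1e-4) & (f < 1e-3)
asd_band = np.sqrt(np.mean(psd[band]))
```

Real analyses would use windowed, averaged estimates (e.g. Welch's method) and much longer stretches of data, since a single frequency bin at 0.1 mHz already requires hours of continuous measurement.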

Cosmology with Galaxy Clustering

Student: Kai Hoffmann

One of the fundamental goals of Cosmology is to understand how the Universe evolved from initial density fluctuations to the large-scale structure of galaxies which is observed today as the cosmic web. Cosmological models can predict the properties of this structure for a given decomposition of the Universe's energy content into matter and the component driving the accelerated cosmic expansion. While the major part of matter is believed to consist of an unknown and invisible type of particle, the observed galaxies are assumed to be biased tracers of the total matter distribution. A detailed understanding of galaxy bias is necessary for constraining cosmological models by comparing their predictions to observations. In this thesis galaxy bias will be studied using observational and simulated data. As a first step, different methods for measuring the bias will be investigated. For this purpose the bias will be derived from the MICE Simulation and the Millennium Simulation using two- and three-point correlation functions. These results will be compared to direct determinations of the bias from the density contrast. The analysis will be performed for different scales, mass ranges and redshifts. Furthermore, the three-point correlation function will be used to test the local model for bias. Deviations of the measurements from the local model predictions will be studied for different cluster shapes. As a second step, the dependence of galaxy bias on galaxy properties such as spectral type, color, morphology, magnitude and spatial environment will be investigated. For this purpose mock galaxy catalogs will be used, including semi-analytic models built on the Millennium Simulation and halo occupation models from the MICE simulation. The properties of the mock galaxies will be compared to observational data such as PAU and SDSS. These studies will provide a basis for further investigations of galaxy bias derived from weak lensing observables and redshift space distortions.

For constraining cosmological models via the growth of large-scale matter fluctuations it is important to understand how the observed galaxies trace the full matter density field. The relation between the density fields of matter and galaxies is often approximated by a second-order expansion of a so-called bias function. The freedom of the parameters in the bias function weakens cosmological constraints from observations. In this thesis we study two methods for determining the bias parameters independently of the growth. Our analysis is based on the matter field from the large MICE Grand Challenge simulation. Haloes, identified in this simulation, are associated with galaxies. The first method is to measure the bias parameters directly from third-order statistics of the halo and matter distributions. The second method is to predict them from the abundance of haloes as a function of halo mass (hereafter referred to as the mass function). Our bias estimations from third-order statistics are based on three-point auto- and cross-correlations of the halo and matter density fields in three-dimensional configuration space. Using three-point auto-correlations and a local quadratic bias model we find a ∼20% overestimation of the linear bias parameter with respect to the reference from two-point correlations. This deviation can originate from ignoring non-local and higher-order contributions to the bias function, as well as from systematics in the measurements. The effect of such inaccuracies in the bias estimations on growth measurements is comparable with the errors in our measurements coming from sampling variance and noise. We also present a new method for measuring the growth which does not require a model for the dark matter three-point correlation. Results from both approaches are in good agreement with predictions.
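The local quadratic bias model referred to above relates the halo and matter density contrasts as, schematically,

```latex
\delta_h(\mathbf{x}) \;\simeq\; b_1\,\delta(\mathbf{x})
  \;+\; \frac{b_2}{2}\left[\delta^2(\mathbf{x}) - \langle\delta^2\rangle\right],
```

where b_1 is the linear bias measured from two-point statistics, while b_2 first enters at the three-point level; this is why three-point auto- and cross-correlations give access to the quadratic terms.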
By combining three-point auto- and cross-correlations one can either measure the linear bias without being affected by quadratic (local or non-local) terms in the bias function, or one can isolate such terms and compare them to predictions. Our linear bias measurements from such combinations are in very good agreement with the reference linear bias. The comparison of the non-local contributions with predictions reveals a strong scale dependence of the measurements, with significant deviations from the predictions even at very large scales. Our second approach for obtaining the bias parameters is based on predictions derived from the mass function via the peak-background split. We find significant 5-10% deviations between these predictions and the reference from two-point clustering. These deviations can only partly be explained by systematics affecting the bias predictions, coming from the halo mass function binning, the mass function error estimation and the mass function parameterisation from which the bias predictions are derived. Studying the mass function we find unifying relations between different mass function parameterisations. Furthermore, we find that the standard jack-knife method overestimates the mass function error covariance in the low-mass range. We explain these deviations and present a new, improved covariance estimator.
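For reference, in the simplest (Press-Schechter) case the peak-background split prediction mentioned above takes the closed form

```latex
b(M) \;=\; 1 + \frac{\nu^{2} - 1}{\delta_c},
\qquad \nu \equiv \frac{\delta_c}{\sigma(M)}, \qquad \delta_c \simeq 1.686,
```

with σ(M) the rms linear fluctuation on the mass scale M; more accurate mass function parameterisations modify this expression accordingly, which is where the binning and parameterisation systematics discussed above enter.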

Bayesian data analysis for LISA Pathfinder. Techniques applied to system identification experiments.

Student: Nikolaos Karnesis
Supervised by: Carlos Sopuerta; Miquel Nofrarias Serra

LISA is a joint mission between the European Space Agency (ESA) and the US National Aeronautics and Space Administration (NASA) that will become the first space-based Gravitational-Wave (GW) detector. LISA is a constellation of three spacecraft that will access GW signals at frequencies of 1 mHz and below, around five orders of magnitude below the kHz band where Earth-based detectors such as VIRGO and LIGO operate. LISA will open a new window on the observation of the Universe and is expected to provide revolutionary discoveries in Astrophysics, Cosmology, and Fundamental Physics. Due to the technological complexity of LISA, ESA approved a precursor mission, LISA Pathfinder (LPF), to test the readiness of the main LISA technology. The scientific working principle of LISA is the detection of tiny relative displacements, induced by passing GWs, between pairs of proof masses in nominally geodesic motion, or free fall. LPF consists of a single spacecraft hosting two proof masses in nominal free fall, whose motions are monitored by means of a Mach-Zehnder laser interferometer. LPF is expected to be launched around 2012, and its ultimate objective is to measure the noise in the motion of the proof masses and to understand its physical origin. Many sources of noise have been identified (thermal, magnetic, particles of cosmic origin, etc.), and modelling them properly requires careful planning of the measurement sequence, plus the use of suitable analysis tools to process the various data channels. The research work proposed for this PhD project consists of the following three points:
1. To develop Data Analysis Tools for the LTPDA software tool, both for the data analysis during mission operations and for the scientific analyses that will be carried out by our research group, and to participate in the Mock Data Analysis challenges organized by the LPF community.
2. To study how to develop a LISA noise model from the outcome of the LPF mission.
3. To develop Data Analysis Tools for LISA, which consist of the detection of GW signals and the estimation of the physical parameters of the associated sources, and to participate in the Mock Data Analysis challenges organized by the LISA scientific community (LISC).
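Characterising the proof-mass noise comes down to spectral estimation of the measured data channels, which is one of the core tasks of the LTPDA toolbox. As a minimal illustration of the idea (this is not LTPDA code; the sampling rate and noise level below are invented), a Welch-style averaged periodogram recovers the power spectral density of a simulated white-noise channel:

```python
import numpy as np

def welch_psd(x, fs, nperseg):
    """Averaged-periodogram (Welch) PSD estimate with a Hann window.

    Illustrative stand-in for the spectral estimators in LTPDA:
    50% overlapping segments, one-sided density in units^2/Hz.
    """
    step = nperseg // 2
    win = np.hanning(nperseg)
    norm = fs * (win ** 2).sum()          # density normalisation
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * (s - s.mean()))) ** 2 / norm for s in segs]
    psd = np.mean(psds, axis=0)
    psd[1:-1] *= 2                        # fold negative frequencies
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

# Simulated proof-mass channel: white noise of known level
fs = 10.0                                 # Hz (hypothetical sampling rate)
rng = np.random.default_rng(0)
sigma = 1e-9
x = sigma * rng.standard_normal(100_000)
f, p = welch_psd(x, fs, nperseg=1024)
# White noise of variance sigma^2 has one-sided PSD sigma^2 / (fs/2)
print(np.median(p) / (sigma ** 2 / (fs / 2)))   # ≈ 1
```

In a real analysis the interesting question is the shape of the spectrum, not its white-noise level, but the estimator is the same.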

The eLISA concept design consists of a constellation of three spacecraft forming a triangle in the sky. From a Sun-centered orbit, it will constantly monitor the distance oscillations between the test bodies enclosed in the different spacecraft. Its principal goal is to detect the oscillations caused by passing gravitational waves. The technical complexity of this design was the reason for ESA and NASA to approve the LISA Pathfinder (LPF) mission, which aims at testing all the key technologies for future gravitational-wave space observatories.
The LISA Technology Package (LTP) instrument on board the LPF satellite can be considered as one eLISA arm, squeezed from one million km to approximately 30 cm, and it aims to measure the differential acceleration between two test bodies with unparalleled precision via a Mach-Zehnder interferometer. Among its objectives are: the estimation of the acceleration noise models, the derivation of an accurate dynamical model of the system in all degrees of freedom, and the estimation of the system's parameters. In this thesis, we focus on a Bayesian analysis framework to set up analysis strategies to process the planned system identification experiments.

We first model the system using different approximations, and then we develop and apply Markov Chain Monte Carlo (MCMC) algorithms to simulated data sets. We report the accuracy of the parameters over the planned system identification experiments, which can be divided into two categories: the x-axis system identification experiments, performed along the sensitive axis defined by the line joining the two test masses; and the so-called cross-talk experiments, where different degrees of freedom of the test bodies are excited. The various cross-coupling physical effects that produce signal leakage into the sensitive differential interferometer channel are then identified and estimated. In addition, the pipeline designed for on-line data analysis during operations is presented.
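As a toy illustration of MCMC parameter estimation for a system identification experiment (the model below — a known injected signal scaled by an unknown gain plus an unknown offset — is invented for the example and is not the LTP dynamical model), a plain Metropolis-Hastings sampler recovers the parameters from simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical injection experiment: a sinusoidal guidance signal is
# applied and the response is scaled by a gain G with an offset b.
t = np.linspace(0.0, 100.0, 1000)
inj = np.sin(2 * np.pi * 0.05 * t)        # injected guidance signal
G_true, b_true, sigma = 1.7, 0.3, 0.2     # gain, offset, noise level
y = G_true * inj + b_true + sigma * rng.standard_normal(t.size)

def log_like(theta):
    """Gaussian log-likelihood (up to a constant) of the residuals."""
    G, b = theta
    r = y - (G * inj + b)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Metropolis-Hastings with an isotropic Gaussian proposal
theta = np.array([1.0, 0.0])              # deliberately wrong start
lp = log_like(theta)
step = np.array([0.02, 0.02])
chain = []
for _ in range(20_000):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_like(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])            # discard burn-in

G_hat, b_hat = chain.mean(axis=0)
print(G_hat, b_hat)                        # close to 1.7 and 0.3
```

The real analysis differs in scale (many coupled degrees of freedom, coloured noise, tuned proposals), but the accept/reject core is the same.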

Finally, we also investigate possible model selection problems in LPF data analysis, and we apply the reversible jump MCMC algorithm to simulated data sets. Different applications to the x-axis and cross-talk experiments are considered, where the efficiency of the developed tools is demonstrated. We also show how the model selection results feed back into the design of the experiments themselves. All of the above work is integrated into the dedicated LTP data analysis toolbox, LTPDA.
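Reversible jump MCMC lets a single chain move between models of different dimensionality. The underlying question — do the data justify an extra cross-coupling parameter? — can be sketched far more simply with an evidence proxy such as the Bayesian Information Criterion (BIC); all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated channel: a weak linear "cross-coupling" trend on top of noise
t = np.linspace(0.0, 10.0, 200)
y = 0.25 * t + rng.standard_normal(t.size)

def bic(y, model_cols):
    """BIC of a linear least-squares model; lower values are preferred."""
    X = np.column_stack(model_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)   # fit term + complexity penalty

bic0 = bic(y, [np.ones_like(t)])          # M0: noise-only (constant)
bic1 = bic(y, [np.ones_like(t), t])       # M1: constant + coupling term
print(bic1 < bic0)                        # True: data favour the coupling
```

A reversible jump sampler answers the same question without the Gaussian approximations behind the BIC, at the cost of designing valid trans-dimensional moves.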

Stellar activity in exoplanet hosts

Estudiante: Enrique Herrero Casas

Most of the efforts on the search and characterization of Earth-like exoplanets are currently focused on low-mass stars. Some important properties related to the structure and processes of this type of star are still unknown, so a careful characterization is essential as one of the next steps in exoplanet science. The characterization of stellar activity in low-mass stars was carried out through several techniques that allowed us both to model and to simulate the relationships between the observational data and the stellar properties. Several empirical relations for low-mass stars make it possible to find correlations between certain activity indicators and the rotation period. These have permitted us to generate synthetic samples of stars with stochastic distributions of stellar and geometric properties, allowing us to estimate the inclination of the rotation axis from the distribution in the activity-vsini diagram. The methodology was applied to a sample of 1200 observed low-mass stars, and the best candidates for a targeted transit search were selected.

Spot modelling techniques make it possible to obtain physical information about the stellar surface from photometric and spectroscopic time series. In this work we analyse Kepler photometry of LHS 6343 A, an M dwarf eclipsed by a brown dwarf companion every 12.718 days, which shows photometric oscillations with the same periodicity and a phase lag of 100° with respect to the eclipses. Accurate modelling of the Kepler data allowed us to explain these oscillations by the presence of active regions appearing at a fixed longitude, thus suggesting a possible magnetic connection between the two components. On the other hand, we also studied an alternative explanation for the photometric oscillations of LHS 6343 A in terms of the Doppler beaming effect, showing that this could be the main cause of the observed oscillations.
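The Doppler beaming signal invoked for LHS 6343 A scales with the radial-velocity semi-amplitude of the star: to first order dF/F = (3 − α) v_r/c, where α is the spectral index of the star in the observed band (α = −1 recovers the familiar bolometric factor of 4). A back-of-the-envelope sketch — the semi-amplitude below is a placeholder, not the measured value:

```python
C_KMS = 299_792.458  # speed of light in km/s

def beaming_amplitude(K_kms, alpha=-1.0):
    """Fractional photometric semi-amplitude of Doppler beaming.

    dF/F = (3 - alpha) * K / c, with K the radial-velocity semi-amplitude
    of the star and alpha its spectral index in the observed bandpass.
    """
    return (3.0 - alpha) * K_kms / C_KMS

# Hypothetical orbit: K chosen for illustration only
amp_ppm = 1e6 * beaming_amplitude(K_kms=9.5)
print(round(amp_ppm))   # ~130 ppm, well within Kepler's photometric reach
```

For a star pulled around by a brown dwarf the semi-amplitude is km/s rather than m/s, which is why a purely photometric mission like Kepler can see the effect at all.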
Stellar activity effects are responsible for the noise observed at different amplitudes and timescales in time series data. Such noise represents one of the main limitations for exoplanetary science. In order to characterize it, we designed a methodology to simulate the photosphere of an active rotating star through the integration of small surface elements built from Phoenix atmosphere models. This allows us to characterize the signal produced by activity and to study its relationship with the stellar properties, as well as the possible effects produced on exoplanet measurements. The methodology allowed us to present several strategies to correct or reduce the effects of spots on the photometry of exoplanet transits, as these may induce significant variations in the measured planetary radius. We focused on a comprehensive analysis of HD 189733, a K5 star hosting a giant planet for which simultaneous photometric (MOST) and spectroscopic (SOPHIE) data are available. An accurate surface map was obtained using the methodology above, closely reproducing the light curve and radial velocity observations. This map was then used to study the effects of activity on the exoplanet transits. We showed that the effects of spot-crossing events are significant even at mid-infrared wavelengths. Moreover, the chromatic effects of spots not occulted by the planet produce a signal with a wavelength dependence and amplitude very similar to the signature of a planetary atmosphere dominated by dust. The theoretical radial velocity curve agrees with the observations within the typical instrumental systematics of SOPHIE. From this work we conclude that correctly modelling stellar activity signals is essential for exoplanetary science, and we provide tools and strategies to characterize and reduce such effects and to extract the astrophysical information.
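One of the activity effects discussed above — spots not occulted by the planet biasing the transit depth — follows from a simple flux-normalisation argument: dark spots outside the transit chord dim the out-of-transit comparison flux, so the planet blocks a larger fraction of the observed light and the depth (hence the inferred radius) is overestimated. A sketch with invented numbers:

```python
def depth_with_unocculted_spots(true_depth, f_spot, contrast):
    """Apparent transit depth in the presence of unocculted dark spots.

    true_depth : (Rp/Rs)^2 in the absence of activity
    f_spot     : projected spot filling factor (0-1)
    contrast   : 1 - F_spot/F_phot (0 = no contrast, 1 = a black spot)

    The spots remove f_spot * contrast of the comparison flux, inflating
    the measured fractional depth by 1 / (1 - f_spot * contrast).
    """
    return true_depth / (1.0 - f_spot * contrast)

# Hypothetical hot Jupiter: 2% geometric depth, 2% spot coverage at 50% contrast
d = depth_with_unocculted_spots(0.02, f_spot=0.02, contrast=0.5)
print((d / 0.02 - 1) * 100)   # percent bias in the measured depth
```

Because the spot contrast is wavelength dependent, the bias is chromatic — which is exactly why, as noted above, it can mimic the dust-dominated transmission spectrum of a planetary atmosphere.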
Institute of Space Sciences (IEEC-CSIC)

Campus UAB, Carrer de Can Magrans, s/n
08193 Barcelona.